


Community Reputation

2 Fair

About jols

  • Birthday 07/24/2005

Profile Information

  • Gender: Male
  • Location: U.S.
  • Real name: Jols M
  • Interests: Philosophy of Science, Physics, Music

jols's Activity

  1. jols added a post in a topic Why Worry About Cognitive Penetration if Perception Is Already Computational?   

    So you accept an information processing view, but not a computational one? (Or do you mean something else by "processing"?) I don't draw much distinction between the two, and I think most people describing perceptual systems use both terms interchangeably. I also don't think of computation as necessarily having a cognitive connotation. After all, don't computers and calculators compute? (Of course, some might take a wide view of cognition and say that all computations are cognitive.) And the neural circuits in the brain, whether in the perceptual system or higher areas, are essentially performing computations (and perhaps something more). The controversial question is whether the computations going on in the perceptual system can be considered representative of conceptual, theory-like, or cognitive-like processes.

    The key question here is whether you believe this dependence merely acts to receive information or actually mediates or constructs the perceptual experience based on the received information (the light stimulus upon the retina). That is, whether the biological dependency serves to process the received information in order to produce the perceptual output/experience. Direct realists, proponents of ecological psychology, etc., believe it acts only as a receiver, which I (also) find difficult to accept. If the dependency processes the information, then epistemic questions arise as to the truthfulness or accuracy of this processing, since naturally there can be faulty processing that delivers misinformation about the environment to the agent.

    Perception is one kind of observation, where only or mostly the human senses, perhaps with the aid of eyeglasses, hearing aids, etc., are used to observe the world. Early on, philosophers considered perception the only kind of observation. Today, telescopes, microscopes, and other magnifying or amplifying devices are acceptable, but now a large part of the observational process is non-perceptual, relying on non-human faculties. And then there are observations that involve processes that do more than merely amplify; they allow us to observe things that our senses cannot perceive with any amount of amplification: electron microscopes, geiger counters, radio telescopes, etc. In such cases, the role of the senses (human perception) is very limited, and many philosophers would argue that the role of theory becomes ever more paramount to ensure the validity of the observation.
  2. jols added a post in a topic Why Worry About Cognitive Penetration if Perception Is Already Computational?   


    I don't think I disagree with anything you are saying, but I don't think you understand my question. The question here is not one of agreement or universality, but of truth or accuracy. How do we know that our more or less universal perceptions accurately represent the world? The problem with theory-dependency (or other kinds of dependency) is not just that variations exist but that they can exist. If perception is dependent on biology or theory, then by changing one's biology or theory, one's perceptual experience can change. This raises the question of what is the correct theory or biology that delivers veridical perception, regardless of whether theory or biology actually does change. The fact that we can change our theories but not our biologies (with the exception of perceptual plasticity) does not mean that biologically dependent perception is any more veridical than theory-dependent perception.

    Another way to put it: Assume that perception is in fact theory-dependent (in the cognitive sense), but that everybody in the world held the same background knowledge and theories, and therefore their theory-dependent perceptions were all in agreement. Would the epistemic problems associated with theory-dependent perception go away? No. Similarly, any universal consistency in biologically dependent perception does not necessarily shield it from potential epistemic worries.
  3. jols added a post in a topic Why Worry About Cognitive Penetration if Perception Is Already Computational?   

    Perhaps you hold a view of perception closer to direct realism or ecological models? It sounds like you don't buy the information processing view. That is fine—there are certainly other philosophers who don't either—but my original query was based on the assumption that perception can be viewed within an information processing or computation paradigm (though not necessarily a cognitive or theory-dependent one). Debating whether perception is an informational or computational process can be fun, but perhaps we should start another thread for that. I have to say, though, that for me it is very hard to deny the computational view after reading the neurological literature on perception.

    But even if one denies that perception is a computational process but grants that it is, nonetheless, a constructed or mediated process, then my query is still addressable in a limited form. In other words, one grants that perception is some sort of physical process in which the physical stimuli received at the sensory organs of the organism are subsequently processed in order to produce the perceptual experience. Perception, then, is at least biology-dependent in some significant sense. (This is as opposed to some form of direct realism or ecological model of perception, in which the information within an agent’s environment is sufficiently rich that the agent need not represent or process it further, but only pick it up and use it.)

    I'm not sure whether you deny computational perception but still accept, say, biological perception, or whether you reject any form of dependence or influence and rather subscribe to direct realism type views.

    I suppose it could. It is an interesting way of looking at it: using cognitive influences to correct for one's (faulty) biological influences, but in a passive way. We can certainly guard against such perceptual biases by means of artificial instruments. In fact, philosophers of science often point out that human perception is inherently biased, quirky, and error-prone, and therefore that true scientific knowledge can only be had with artificial means of perception, where the role of human perception is very limited—reading dials, counters, computer screens, etc. But then we step out of the domain of perception into the domain of observation, which is certainly much more susceptible to theory-dependence, though not in the cognitively penetrated sense. Artificial observing instruments are designed and operated based on scientific theories. This is explicit scientific knowledge. And such observations are theory-dependent in a very explicit scientific sense. So, although we can mitigate perceptual biases via scientific observations, we still cannot escape theory-dependency and must contend with the possibility of biases in the scientific theories themselves. Here too, we still face the specter of relativism and circular justification, though not inevitably so, as realists would argue. In a sense, we have just passed the buck.
  4. jols added a post in a topic Why Worry About Cognitive Penetration if Perception Is Already Computational?   

    Examples of non-cognitively penetrated perception that can nonetheless vary from person to person: One person's visual system may be more sensitive to certain colors than another person's. Similarly, one may be more sensitive to certain sound frequencies than others. People in such situations will see and hear the world slightly differently, regardless of any cognitive penetration. Based on these perceptual experiences alone, how do these two people decide which of their perceptual experiences is more veridical? Other examples include variations in depth and distance perception. More extreme cases may include color blindness or other abnormalities or defects. In the end, we all see the world differently, not just because of any thoughts, concepts, or knowledge that we possess, but also because our biologies are slightly different. My question is: how does the way in which our biologies influence perception differ from the ways in which our thoughts or cognition influence perception? Are such differences significant and, if so, in what ways? Can biological influences lead to relativism, skepticism, epistemic circularities, etc., in the same way that cognitive influences presumably can?
  5. jols added a post in a topic Why Worry About Cognitive Penetration if Perception Is Already Computational?   

    Sorry to reply so late. I was out of town for part of the weekend. I will try to be more prompt in the future.

    I think some of the discussion here has veered toward the debate on whether perception is theory-dependent (in the cognitively penetrated sense). While that is quite often an interesting and heated debate, and perhaps bound to occur anytime the subject of perception is brought up, in my original post I was more concerned with the low-level, non-cognitive aspects of perception and their relation to the cognitively penetrated case. (I think Tzela was trying to point this out above. Thanks!)

    I think part of the problem is terminological, as Michael suggests. What do we really mean by a “theory”? What constitutes “knowledge”? When can a system be said to be cognitive? Should knowledge only be considered in the epistemic sense, as justified, true belief? And then, how exactly do we go on and define “truth”, “justification”, and “belief”? Obviously, these are complex issues that continue to plague philosophers. How we resolve such issues will certainly have bearing on the query raised in the original post.

    If one considers cognition and knowledge akin to what goes on in the higher areas of human brains, then it may indeed be overly suggestive to use terms like knowledge and theories when talking about the goings-on in lesser systems. And even the less sophisticated term “computation” may be overly suggestive if used to describe even less sophisticated systems.

    On the other hand, one may take a rather wide view of such terms. Perhaps there are many different levels of cognition and computation. Similarly, there may be many different levels of knowledge and, even, theories—need theories only have a symbolic structure? On such a wide view, it would not be overly suggestive to use such terms to describe lesser systems; rather, by doing so, one is extending the notion of what it means to be cognitive or what a theory is, etc. Perhaps a calculator is computational but not cognitive. But what about a sophisticated computer program or robot? Perhaps these are cognitive but not conscious.

    Turning back to the issue of the goings-on in low-level perceptual systems, maybe terms such as “knowledge” and “theories” are terms of art rather than technically accurate descriptors, as Michael suggests. Though philosophers refrain from using the term “cognition” when describing low-level perceptual processes, maybe they should also refrain from using these other descriptors. Michael suggests “transformations”, along with perhaps “sensitivities” or “receptivities”. Personally, at this point in the discussion, I don’t really have a problem with that. However, many philosophers would not have any problem using the term “information process” to describe what happens in low-level perceptual systems. In fact, Raftopoulos, Fodor, and many other philosophers who deny that perception is cognitively penetrable whole-heartedly embrace and defend the information processing view of perception against non-information-processing views, such as ecological or direct realism views. Unless one in fact holds one of the latter views, I don’t think “information process” or even “computation” is overly suggestive.

    Whatever terminology we use, one thing should be clear: these physical or information processes are extremely complex. Failure to emphasize this point in my original post may be part of the confusion here. If one uses the terms “sensitivities” or “receptivities”, one should realize that they can involve extremely complex processes, especially in higher-level organisms. The sensitivities cannot be characterized as some simple set of physical reactions. Likewise, the “transformations” are not simple, but complex. Though the perceptual processes may not be as complex as the higher cognitive processes of the brain, they are still quite complex, involving an enormous number of interconnected neural networks segmented into multiple subsystems. It is not an exaggeration to call them computations. Even the retina itself does not simply receive and forward information. The connection between the photoreceptor cells and the optic nerve is not one-to-one, but involves multiple layers of networked neurons. This neural-network-style connection serves to perform image compression and enhancement (e.g., edge detection), among other processes, before sending the signal to the brain, where even more sophisticated computations occur. Moreover, these computations (or transformations, if you prefer) are not neutral. The visual system, for example, is biased toward darker contrasts and fruit-like colors. The latter bias likely evolved as an advantage for finding fruit among the forest foliage. Such biases share a striking similarity to theory-dependent observational biases. So even if perception is not cognitively penetrated, it is still biased and not theory-neutral or transformation-neutral or “whatever descriptor”-neutral.
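    To give a concrete (if drastically simplified) feel for the kind of edge-enhancing computation described above, here is a toy sketch of my own—not a model from the neuroscience literature—of a 1D “center-surround” filter, loosely analogous to the receptive fields thought to underlie retinal edge detection. The filter responds strongly where brightness changes and not at all where it is uniform:

    ```python
    # Toy illustration (my own sketch, not from the cited literature):
    # a 1D center-surround filter. The kernel (-1, 2, -1) compares each
    # point with its neighbors, so uniform regions yield 0 and edges
    # yield a strong response.

    def center_surround(signal, kernel=(-1, 2, -1)):
        """Convolve a 1D brightness signal with a Laplacian-like kernel."""
        k = len(kernel) // 2
        out = []
        for i in range(k, len(signal) - k):
            out.append(sum(w * signal[i + j - k] for j, w in enumerate(kernel)))
        return out

    # A step edge: a dark region (0) followed by a bright region (10).
    brightness = [0, 0, 0, 0, 10, 10, 10, 10]
    print(center_surround(brightness))  # responds only around the edge
    ```

    The point is not the particular kernel but that even this crudest filter already transforms the input rather than relaying it, which is why “receiver” seems like the wrong word for what the retina does.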

    As one begins to read the literature on the neurological processes in perception, one realizes that what seems a simple process in our day-to-day experience is actually a series of extremely complex computations (or at least complex physical processes) taking place within the millions upon millions of interconnected neurons of the perceptual system. The seeming simplicity and almost instantaneous, real-time awareness of our environment belies this underlying complexity. And as one starts to appreciate these complexities, one cannot help but notice the striking similarities between these processes and the processes of other intelligent systems. It is not merely metaphoric when scientists use computer terminology, such as “algorithm”, “compression”, or “predictive coding”, when talking about perceptual processes. Even words like “knowledge” or “theory” may naturally come to mind.
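    Since “predictive coding” came up: here is an equally minimal sketch of my own (an illustration of the bare idea, not a model from any of the cited papers). The system maintains a running prediction of its input and transmits only the prediction error, so redundant input becomes cheap to encode:

    ```python
    # Toy illustration (my own sketch): predictive coding in the loosest
    # sense. Transmit the error between input and a running prediction,
    # and nudge the prediction toward the input after each sample.

    def predictive_code(signal, learning_rate=0.5):
        """Return the sequence of prediction errors for a 1D signal."""
        prediction = 0.0
        errors = []
        for x in signal:
            error = x - prediction               # what the prediction got wrong
            errors.append(error)
            prediction += learning_rate * error  # correct toward the input
        return errors

    # A mostly constant signal compresses well: the errors shrink toward
    # zero until the signal actually changes.
    print(predictive_code([4, 4, 4, 4, 8, 8, 8]))
    ```

    Whether such processes deserve words like “knowledge” is exactly the terminological question above, but “algorithm” at least seems literal rather than metaphoric.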

    I hope that helps. It's late. I'll try to say more later….
  6. jols added a topic in History and Philosophy of Science   

    Why Worry About Cognitive Penetration if Perception Is Already Computational?

    I have a question about observational theory-dependency (or theory-ladenness), particularly concerning the issue of perceptual theory-dependency.

    It is quite often debated whether perception is theory-dependent by debating whether perception is cognitively penetrable. That is, whether perceptual organs and brain systems are, in a sense, informationally encapsulated from the higher, cognitive areas of the brain, where such background theories and knowledge that could presumably influence perception reside. If brain processes in the higher, cognitive areas do not interfere with those processes in the lower, perceptual areas, then perception is cognitively impenetrable. So, while perception may influence one’s thoughts, the latter cannot influence the former. Like I said, this is a frequently debated issue, but my concern is not with the debate directly but rather with a side issue. (Jerry Fodor and Paul Churchland have famously debated this issue in the past, and more recently Athanassios Raftopoulos and other philosophers have brought up the issue again. Here at the Library, Hugo discusses the issue briefly in his broader essay on Theory-ladenness, where he also explains why the latter is an important issue in philosophy. Links for the Raftopoulos articles, along with other cited references, are at the end of this post. I’d also be happy to point interested readers to any other mentioned or related authors and articles.)

    Let us assume that perception is in fact cognitively impenetrable and, thus, is not theory-dependent in the cognitive sense. Nonetheless, it is a widely held view (even by Fodor and many other proponents of cognitive impenetrability) that perception is a computational/information process in which the perceptual systems of the brain compute and process stimuli received at the sensory organs to produce the perceptual output that we ultimately experience. The perceptual system does not simply relay sensory information from the sensory organs. The computations and information processes are quite complex and involve many layers. These computations are based on certain low-level knowledge about regularities in an organism’s perceptual environment, acquired through evolutionary and developmental (e.g., child development) mechanisms. This knowledge includes principles such as local proximity (adjacent elements in a scene belong together), colinearity (straight lines remain straight across a scene), cocircularity (curved lines keep a constant radius of curvature across a scene), and numerous other principles and constraints (see Raftopoulos 2001, p. 429; Gilbert 2001, pp. 691-693; Geisler 2008). This low-level knowledge is sometimes referred to as theories. So, regardless of whether perception is cognitively penetrated and theory-dependent in the cognitive sense, it is nonetheless a computational or informational process that is dependent on some kind of low-level knowledge or theories. All of the above is pretty much accepted by Fodor and his defenders. (There are, however, those who would disagree with this computational account of perception. James Gibson and Rodney Brooks, for example, hold a non-representational or ecological model of perception. See also recent discussions on direct realism.)

    My question is why are proponents of cognitive impenetrability (and other philosophers who worry about theory-dependency) seemingly unconcerned or less concerned with the computational nature of perception and its low-level knowledge/theory-dependency?

    Consider the following passage from a paper by Athanassios Raftopoulos (2001, p. 424) describing Jerry Fodor’s views on the matter:

    “Fodor’s (Fodor, 1984) argument is that, although perception has access to these background theories [the low-level theories that perceptual computational processes are based on] and is a kind of inference, it is impregnable to (informationally encapsulated from) higher cognitive states, such as desires, beliefs, expectations, and so forth. Since relativistic theories of knowledge and holistic theories of meaning argue for the dependence of perception on these higher states, Fodor thinks that his arguments undermine these theories, while allowing the inferential and computational role of perception and its theory-ladenness.”

    First, I was unaware that arguments concerning relativistic theories of knowledge and holistic theories of meaning necessarily relied on the observer’s higher cognitive states (thoughts, beliefs, etc.). I assumed that they simply relied on the assumption that perception (and observation in general) exists within the context of other theories and knowledge, which even cognitively impenetrable perception does.

    Second, even if such arguments did only rely on the observer’s higher cognitive states (e.g., high-level theories), I don’t understand why they wouldn’t work just as well if perception was only dependent on the lower, non-cognitive states (e.g., low-level theories). For example, if two observers whose perceptual systems are wired differently claim to have two different and mutually inconsistent (or incommensurable) perceptual experiences, how do we judge which one is correct and, thus, which perceptual system (which set of low-level theories) is correct? How is this different from judging which of two high-level theories of perception or observation is correct? (I am not claiming that such judgements can’t be made, but only asking how they differ in the low-level and high-level cases. Another way to put it is that, if such judgements can be made with low-level theories, then why not with high-level theories—as indeed some philosophers claim?)

    To continue, if competing high-level theories of perception can lead to relativism (as well as skepticism, circular reasoning, etc.), then why not competing perceptual systems? In neither case is there such a thing as empirically pure knowledge that one can use to arbitrate between the different theories or different perceptual systems. Both in the cognitively penetrable and impenetrable cases, the observer is relying on perceptual information that has been processed by the brain, and thus susceptible to bias and errors, to make reliability judgements. (Perhaps one can say the cognitively impenetrable case involves less processing and less variability in processing than the cognitively penetrable case and, thus, is less susceptible to bias and errors and, hence, relativism claims, although not immune to them. And so perhaps it makes the case against relativism (and for realism) a little easier to argue.)

    To put it yet another way, even if it turned out that perception is not theory-dependent (not cognitively penetrable), are we not still left with the original epistemic problem of showing that our perceptual experiences or beliefs are true or accurate? The latter is a problem because we don’t have direct, independent access to the external world (unless one believes in direct realism). Though we may not be dependent on theories, we are still dependent on our bodies and minds, our perceptual faculties. How do we know that our perceptual faculties accurately represent the external world when our only access to that world is through our perceptual faculties? This is part of the “problem of the external world” (see Stanford Encyclopedia of Philosophy entries on “Epistemology” and “Epistemological Problems of Perception”). Attempts to resolve it have led to skepticism, idealism, and other such views, not unlike the potential consequences if it turns out that perception is in fact theory-dependent (cognitively penetrable). (I’m not claiming here that such pessimistic views are the only conclusion. It is a continuously debated issue, and realism views have also been expressed via various arguments: (abductive) inference to the best explanation, success semantics, reliabilism, etc. I was only trying to point out how the epistemic problem of perception and the issue of cognitive penetration share similar worries.)

    I will now briefly list some reasons that seem to be (or could be) expressed in the literature as to why philosophers are seemingly unconcerned or less concerned with the computational nature of perception and its low-level knowledge/theory-dependency. (As a consequence of this attitude, however, these reasons—and perhaps others—are not often discussed or defended, to the best of my knowledge.)

    (1) Some philosophers are just unaware or not fully aware of the computational nature of perception. Others hold a non-representational, ecological, or direct realism view of perception and, thus, reject that perception is a computational process. Either way, these philosophers believe that perception, even if cognitively impenetrable, delivers unmediated, direct factual information about the environment. I will put aside this group of philosophers, as we are interested in why philosophers who recognize and accept the computational nature of perception are, nonetheless, unconcerned.

    (2) One reason seems to be that the low-level theories that perceptual computations rely on appear to represent “general truths” about our world, about “general reliable regularities about the optico-spatial properties of our world” (Raftopoulos 2001, p. 429). My question is how such a conclusion is reached. Are we not relying on perceptions produced by processes that are based on these very “truths” and “reliable regularities” to judge that the processes themselves are truthful or reliable? Is there not the possibility of circularity and relativism here as well, as with high-level theories? (Again, I am not claiming that judgements can’t be made in such a way—many philosophers offer strategies for averting or dealing with circularities when it comes to high-level theories of observation. I’m asking why in the low-level case such judgements are often portrayed as being prima facie unproblematic, unlike in the high-level case.)

    (3) Another reason seems to be that low-level theories are generally fixed and hardwired, and so are not affected by our desires, beliefs, or expectations, as high-level theories can be (*). And moreover, the low-level theories are generally consistent from one human observer to another, unlike high-level theories (*). But fixedness of one’s perceptual experience and agreement (consistency) with others does not necessarily imply correctness, truthfulness, or accuracy. After all, a change in theories can sometimes be for the better, and a consensus can be mistaken. (*Recent neurological evidence on perceptual plasticity, learning, and variability may undermine these reasons to some greater or lesser extent. See, for example, Gilbert 2001.)

    Some philosophers argue that the flexibility associated with the cognitive penetrability of higher level organisms evolved as a way to serve the individual needs of different agents within their lifetime (Goldstone 2003; Macpherson 2012, pp. 32-33). Otherwise, the perceptual system would have to come ready-made to represent an enormous amount of perceptual possibilities, which would have rendered it too large and unwieldy. For example, the perceptual system may come ready-made to represent faces, but not necessarily any individual face. In this sense, we can look at the high-level theories as customizable “software” extensions of an underlying perceptual “hardware”. On such a view, it makes less sense to draw a truth bearing distinction between the two; it is not the form (hardware or software) that matters, but the content and how accurately it serves to represent the environment—but see (4) below. (It is also possible that individual perceptual needs could be served via some form of low-level perceptual learning; see Goldstone 2003 and Gilbert 2001. But then that would imply that the low-level theories are not entirely fixed or hardwired. Also, bear in mind that some proponents of cognitive impenetrability would deny that individual faces or even faces, in general, constitute perceptual content; rather, it is the low-level constituent features of the face that constitute perceptual content. See Macpherson 2012, pp. 31-34 and also the Stanford Encyclopedia of Philosophy entry on “Contents of Perception”.)

    (4) Some may consider low-level theory-dependency to be more objective and reliable than its high-level “evil” cousin. Low-level theories have evolved over millennia in the perceptual systems of biological organisms to be highly reliable. On the other hand, high-level theories are created by us imperfect human beings and can be fallible. But this seems somewhat unconvincing to me. After all, many high-level theories of perception (and observation more broadly) can be reliable, and the low-level theories can be mistaken at times, as in the case of visual and other perceptual illusions.

    (5) Last but not least, low-level theories may avoid certain problems of reference associated with high-level theories. Raftopoulos (2008, pp. 78-80; 2012, pp. 14-15) acknowledges that perception is dependent on certain low-level theories or knowledge, but adamantly argues that they are non-conceptual in nature, unlike high-level, cognitive theories. (The non-conceptual nature of perception, in general, is a debated issue, which very much depends on what we mean by “concept”, a debated issue in itself. See Tacca 2011 and the Raftopoulos 2012 reference above and also the Stanford Encyclopedia of Philosophy entries on “Contents of Perception”, Section 6, and “Nonconceptual Mental Content”, Section 4.1.)

    Now, in the non-conceptual case, just as in the conceptual case, we still face the problem of circular justification (the epistemic problem of perception): how do we know that our perceptual faculties based on these low-level theories or knowledge accurately represent the external world when our only access to that world is through these very same perceptual faculties? Typically, in the conceptual case, philosophers argue around the circularity by appealing to pragmatic or aesthetic considerations. A pragmatic argument (a la success semantics) might be that observations based on some theory or perceptual system allow an organism’s interactions with its environment (to find food, reproduce, etc.) to be more successful than observations based on some other theory or perceptual system, and therefore the more successful theory or perceptual system must be the truer one. But such pragmatic (or even aesthetic) arguments are not without their problems. Particularly, some philosophers claim that pragmatic arguments, like success semantics, are plagued with certain problems of reference (see Raftopoulos 2008, pp. 66-76). And some philosophers, like Raftopoulos—and this is the point here—claim that these problems of reference can be avoided if perception is non-conceptual rather than conceptual. (Though this is debated; see the SEP entry on “Nonconceptual Mental Content”, Section 4.1.) Then, barring other problems with pragmatic arguments, they can be used to circumvent the problem of circular justification when it comes to perception.

    Thus, given that perception is a computational process, there seems to be some legitimate benefit to denying that perception is also cognitively penetrable (a contentious move), but only if one is also willing to deny that the remaining low-level theories of perception are conceptual in nature (possibly also a contentious move). While these moves do not eliminate the original problem of circular justification associated with theory-dependent (cognitively penetrable) perception, they may make appeals to pragmatic considerations more viable, assuming that non-conceptual perception can solve certain problems of reference that conceptual perception presumably cannot. Additionally, it is possible that non-conceptual perception may help solve other problems in the philosophy of perception. Still, at least with regards to circular justification and perception, the aforementioned and only contingent benefit somehow does not seem to be worth all the fuss surrounding cognitively penetrable perception, given that perception is already a computational process.

    Perhaps some of you know of more significant reasons, but I will leave it at that. I apologize if the post is too long. I hope some of you find the query interesting and worthwhile. Maybe I’m just missing something simple, and someone can enlighten me in two lines. (I will kick myself but will be happy to lay the matter to rest.)



    Geisler, Wilson S. “Visual perception and the statistical properties of natural scenes.” Annu. Rev. Psychol. 59 (2008): 167-192.

    Gilbert, Charles D., Mariano Sigman, and Roy E. Crist. “The neural basis of perceptual learning.” Neuron 31.5 (2001): 681-697.

    Goldstone, Robert L. “Learning to perceive while perceiving to learn.” Perceptual Organization in Vision: Behavioral and Neural Perspectives (2003): 233-278.

    Macpherson, Fiona. “Cognitive penetration of colour experience: rethinking the issue in light of an indirect mechanism.” Philosophy and Phenomenological Research 84.1 (2012): 24-62.

    Raftopoulos, Athanassios. “Is perception informationally encapsulated?: The issue of the theory-ladenness of perception.” Cognitive Science 25.3 (2001): 423-451.

    Raftopoulos, Athanasios. “Perceptual systems and realism.” Synthese 164.1 (2008): 61-91.

    Raftopoulos, Athanassios. “The cognitive impenetrability of the content of early vision is a necessary and sufficient condition for purely nonconceptual content.” Philosophical Psychology ahead-of-print (2012): 1-20.

    Tacca, Michela C. “Commonalities between perception and cognition.” Frontiers in Psychology 2 (2011).

    Stanford Encyclopedia of Philosophy Entries:

    Bermúdez, José and Cahen, Arnon, “Nonconceptual Mental Content”, The Stanford Encyclopedia of Philosophy (Spring 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanfor...onconceptual/>.

    BonJour, Laurence, “Epistemological Problems of Perception”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanfor...ion-episprob/>.

    Siegel, Susanna, “The Contents of Perception”, The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.), forthcoming URL = <http://plato.stanfor...ion-contents/>.

    Steup, Matthias, “Epistemology”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanfor...epistemology/>.
  7. jols added a post in a topic Is reading a form of observation or something more?   

    Soleo, your examples nicely illustrate the problem, but what you're describing is essentially the observation / theory (interpretation) dichotomy, which is well known in the phil of science. Your example concerning "The light won't work", implicating theory X, Y and Z, is essentially the underdetermination of theory and scientific holism. What I am wondering is whether the interpretation and holism that occurs in reading is something above and beyond the interpretation and holism which philosophers of science already recognize that observations face.

    Yes, I read that symbolic and (to an extent iconic) signs are arbitrary, but aren't the indexical signs constrained by physical or causal relationships? I presume observations as signs would fall into the indexical category, though there may not be a fine line between these categories.

    By your comments at the end, it seems that the sign, though immaterial, is the primary thing, not the signifier or signified. Makes me wonder (haphazardly) whether this can also be applied to the phil of science: observation and theory are not what is primary, but rather their relationships (dependencies). In a sense holism (aka Quine's web of belief) does this. Continue to bear with me: theories and observations can signify (support) and be signified (supported) by other theories and observations, perhaps in an (infinitely or circularly) recursive manner. Such an attitude could help dissolve some of the sovereign boundaries and primacy rights between observation and theory. Like the signifier and signified, neither observation nor theory has primacy or even meaning in isolation: their value arises only through the relationships of their interconnected signs (dependencies). Just a thought.
  8. jols added a post in a topic Is reading a form of observation or something more?   

    Thanks much for the posts Heretic and Soleo. Sorry about the late reply, but I wanted to look into what you both suggested, especially Soleo's references, before responding.

    Heretic, I agree with you that reading involves interpretation. But when you say that reading involves more than observation, are you also assuming that observations too are theory-dependent interpretations (as I do)? If so, I then take it that you mean the further act of interpretation that occurs in reading is a kind of interpretation that is above and beyond any kind of interpretation that occurs at what may then be properly called the observational level. This also seems intuitively correct to me. One can engage in rich and deep interpretations when reading, but the interpretation involved in scientific observations seems (or should be) comparably austere. Still, at the same time, I'm having difficulty finding an effective criterion to distinguish the two types of interpretations.

    Soleo, thanks for the references. Although I haven't yet found that any of them are specifically asking the above question, they did help me understand some of the surrounding issues a little better. The Chandler reference was especially illuminating. Semiotics seems fascinating; I wish I could spend more time studying it. Some thoughts in relation to my original post:

    Some semioticians argue that there is no clear distinction between signifier and signified, and that either one can affect how the other is observed, conceived or realized. This is somewhat similar to the thesis in the phil of science that there is no clear distinction between observation and theory, and that theory can affect what is observed (the so-called theory-dependency of observations).

    Semiotics seems to suggest that observation is a form of reading, instead of the other way around (as suggested by my original post). In fact, in the conclusion of his intro section, Chandler states that semiotics teaches us that reality is a system of signs. In essence then (I take it) we read the world (in its many respects) and thus come to know it. All this further suggests that reading is larger than and goes beyond mere observation, for example when reading iconic or symbolic (e.g., written text) signs. (Again, this is in keeping with the intuition that reading involves more than observation.)

    But this last suggestion is complicated by some other issues in semiotics. First, the potential signifier/signified distinction failure and the ensuing ability of the signified to affect the signifier imply that reading can affect how the signifier is observed and thus in a feedback manner affect what is read. That is, reading may introduce some theory-dependency into the observation of the signifier, and thus reading is not above and beyond observation in such an instance. Second, what separates observational signs from symbolic signs--physical, causal relationship between signifier and signified versus convention, learned relationship, respectively--may not hold across the board, as many observations are based on convention and learned relationships. Finally, it was unclear to me where or whether semioticians draw a line between observable and non-observable signifiers, especially in light of the fact that what is often signified by an observable signifier can itself serve as a signifier in some further capacity.

    Clearly the question of this post requires more investigation (than I have time for). I am thinking now that perhaps some answers can be found in cognitive psychology. As is usually the case with philosophy, seemingly simple questions open up Pandora's boxes. Why do we even bother?
  9. jols added a topic in Explore   

    Is reading a form of observation or something more?
    I want to ask what may be an odd question. How much of the act of reading can be considered observation (in the scientific sense)? (I did some research online and at TGL, and haven't really found any answers.)

    For example, a dictionary may define reading as an act of interpreting (or decoding) written symbols. (Interpretive because reading depends on readers' varying linguistic, social, technical and other learned background knowledge.) From this, one may conclude that reading involves (1) observation (of symbols) and (2) some further interpretive act. But if we assume that observations too are theory-dependent interpretations, the above conclusion seems less certain. Do we observe words and sentences, or only letters and symbols, or even only, say, ink marks? Where is the line between the interpretative component of reading and observation? Can such a line be drawn?

    Perhaps another way to frame the question is to ask whether symbols are read or observed. Again, is there a distinction? Or is reading just a complex form of observation?

    The beginnings of a possible answer: perhaps the background knowledge, and the manner in which it is used, in interpreting texts versus interpreting observations may not in the end be comparable. Though I have not been able to find an effective criterion to distinguish the two.

    I am more familiar with phil of science than I am with linguistics or semiotics, so perhaps this question has been broached. If anybody can point me to any references or has thoughts on the matter, I would appreciate it. The closest thing I could find on TGL is a post by Parody of Language, Semiotics and scientific reasoning, where he suggests a similarity between signs (which include text symbols, I assume) and observation, in that both are essentially theory-laden (signs exist in an interpretive relation with the objects they signify). I don't know whether he (or other semioticians) is merely suggesting a parallel between reading and observation or a stronger relation.
  10. jols added a post in a topic Strings, falsifiers and ruling on design   

    Here is a recent radio talk show (Science Friday) featuring Smolin and Brian Greene, a string theorist, arguing their respective sides.

    It is mentioned that the new high-energy particle collider (Large Hadron Collider) to come online soon may be able to provide some experimental tests of string theory. Also, current and future short-range tests of the gravitational force may provide a test (Eöt-Wash experiments). If there are ultra-small extra dimensions, as string theory claims, then I think eventually scientists should be able to detect their effects. I don't think that Smolin holds that string theory in general is not falsifiable, but rather Susskind's anthropic version of it. I think Smolin's present concern is that all the theoretical focus should not be on the most popular (albeit very elegant) theory while we wait for empirical judgement; we should explore other theoretical alternatives.

    If the reach of current experiments is part of the difficulty in string theory being falsifiable, then another part is that string theory is still a work in progress, such that if some current experiment doesn't detect an effect, then the string theorists know they have to adjust some of their parameters; for example, the extra dimensions need to be smaller. In other words, falsification is not the only connection between theory and experiment. String theory is still being developed using empirical induction. And as the philosopher of science John Norton has recently argued, there are no universal inductive schemas.

    I would also say that there are no universal schemas for any mode of connection between theory and experience, be it induction, deduction, falsificationism, etc. To extrapolate on Norton (and also Kuhn and Feyerabend), every scientific practice determines its own schema based on its local empirical facts. And this brings us to Hugo's concern with the demarcation criteria: What is that specific relation between theory and experience which uniquely characterizes scientific practice? You figure. As with other scientific theories, ID has to make its own case. Will it? Can it?
  11. jols added a post in a topic Weak Underdetermination in Physics   

    Sorry, but I had some technical troubles which prevented all of the message from being posted, which I finally fixed (18-12-05).

    Okay, I have read the Lyre and Eynck article that Hugo referred to, which gave examples of underdetermination of theory (UDT) for present-day gravity phenomena. The authors presented three other theories (T1, T2, T3) that they argue are empirically equivalent to but theoretically distinct from Einstein's GR theory (T4). Some comments here on why I think they may not really be underdetermined for the reasons they claim.

    First, the authors point out (on p. 19) that the underdetermination is more of a "practical" UDT (in their words) rather than a strict case of UDT, since the theories are not final theories. That is, since it seems that we have many possible candidate theories for gravity, we need to do more theoretical and experimental work to find the right one--kinda like we have many M-theory candidates for quantum gravity. The essential argument here is that non-final theories are essentially subject to change, and thus two theoretically distinct theories that are empirically equivalent today may not be in the future. As an aside, this raises the interesting question of whether a theory can ever be final.

    Second, (on p. 17) the authors state that they have established that (1) all four theories are empirically equivalent (at least presently) and (2) they are incompatible (i.e., distinct). Thus they conclude UDT exist. I don't think (1) and (2) have been established. The point of the previous paragraph is a serious caveat for (1). Also, as Eli said in a previous post, "If T1 is a different theory than T2, even they share common facts, they do not share all facts. If we say, both are consistent with same facts, probably we underestimate other facts". I will give a possible instance of this in a moment.

    In regard to point (2): First, as I mentioned in the previous post, there is no consensus criterion by which to establish whether theories are theoretically equivalent or distinct. The criterion the authors use is the one originally drawn up by Quine (and Kuhn): that there should be a semantic (taxonomic) difference, whereby the theoretical terms of one theory cannot be straightforwardly translated into the theoretical terms of the other. But Sankey has argued that comparison is possible by reference, thereby circumventing semantic translation difficulties; however, even the problems with the reference approach bring us back to the non-consensus on the comparison criterion. Nonetheless, even if Quine and Kuhn's criterion is the correct approach, the semantics of language is far from firmly established enough to allow us to make systematic and rigorous judgements on whether translation ensues between theoretical terms. And this is a sticking point for the authors, as evident on p. 16 of their article:

    For distinct theories T and T' their semantics should not be translatable into each other, i.e. theoretical terms of T cannot be transformed into theoretical terms of T' (or reformulated within the context of T'), nor vice versa. Once, however, there exists a dictionary between the two theory contexts, they are clearly but two equivalent formulations. A trivial example of just switching the terms...
  12. jols added a post in a topic Weak Underdetermination in Physics   

    Yes, I think Kyle and Eli are on the right track here. See this article by Norton for further elucidation on this point. A quote from his article:

    "If it is possible to establish that two theories are observationally equivalent in argumentation compact enough to figure in a journal article, they will be sufficiently close in theoretical structure that we cannot preclude the possibility that they are merely variant formulations of the same theory. The argument will depend upon a notion of physically superfluous structure. Many observationally equivalent theories differ on additional structures that plausibly represents nothing physical. I will also introduce the converse notion of gratuitous impoverishment; some artificial examples of observationally equivalent theories are generated by improperly depriving structures in one theory of physical significance."

    I think Norton's argument can be developed even further.

    Sankey also argues that differences in empirically equivalent theories may be purely structural and semantic and not have metaphysical bearing, and he suggests that they can be compared (are commensurable) on metaphysical grounds and, perhaps, even be shown to be the same. According to Sankey, such a metaphysical comparison can be made by adopting a version of a causal theory of reference (read Wittgenstein's picture theory of language?).

    Also, it is worth adding that the above debate turns on the problem of how to determine whether two theories are really different or merely different formulations of the same theory. Magnus, in his article "Underdetermination and the Problem of Identical Rivals", points out that this is an open problem. IMHO, this problem, in turn, depends on another outstanding problem in the phil of science: what exactly is a scientific theory? A mathematical structure? (What about theories that are non-mathematical?) A semantic structure? A reference structure? Whatever may be the case, I am willing to bet that once (if) we solve these problems, it will be shown that observationally equivalent theories are also theoretically equivalent (vs theoretically distinct).

    Having said this, I do think that the underdetermination of theory (UDT) does occur, along with theory-ladenness. But it occurs because the observational data on any domain (or subdomain) of phenomena that a scientific theory purports to provide for is incomplete--e.g., observations are never 100% precise. Consider the common example in favour of UDT of trying to fit a curve to some data points, as pointed out by Rusty above. Suppose we take the measurement of some observable at three different points in time. Many different curves can be drawn through these three points, and therefore UDT follows. But why is this? For two reasons: the lack of intermediary data points and the lack of precision of each data point fail to constrain the possibility of curves. The data is incomplete. Critics of UDT will claim that the intermediary data points can easily be filled in and therefore the UDT that followed was trivial; but alas, an infinite number of data points can never all be filled in, and the precision of each data point can never be 100 percent, and therefore there will always be multiple curves that can fit the data. Witness how the orbital data of the outer planets fits "precisely" both Einstein's and Newton's theories of gravity. (Of course the level of today's precision may rule out Newton's theory, but other theories, such as MOND, can still compete with Einstein's theory.)
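    To make the point concrete, here is a minimal sketch in Python (the data points and the particular curves are illustrative choices of my own, not from the thread) showing how three exact measurements fail to fix a unique curve:

    ```python
    # A minimal sketch of the curve-fitting argument: three measurements
    # are fit exactly by a quadratic AND by a whole family of cubics that
    # agree with it at every data point but diverge everywhere in between.

    xs = [0.0, 1.0, 2.0]  # hypothetical measurement times
    ys = [0.0, 1.0, 4.0]  # hypothetical observed values

    def quadratic(x):
        """Theory A: y = x^2."""
        return x ** 2

    def cubic(x, c=5.0):
        """Theory B: adds a term that vanishes at every data point, so
        each value of c gives a distinct, empirically equivalent curve."""
        return x ** 2 + c * x * (x - 1.0) * (x - 2.0)

    # Both "theories" agree on all the observed data...
    for x, y in zip(xs, ys):
        assert abs(quadratic(x) - y) < 1e-9
        assert abs(cubic(x) - y) < 1e-9

    # ...but disagree wherever no measurement was taken.
    print(quadratic(0.5), cubic(0.5))  # prints: 0.25 2.125
    ```

    Adding finite measurement error only makes matters worse for the critic of UDT: any curve passing within the error bars of each point also fits the data, so imprecision widens the family of admissible theories rather than narrowing it.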

    If, in the above example, we assume hypothetically that only three data points exist--that is, if we were to attempt to gather additional data points, we would be empirically unable to--and the precision is 100%, then there is no sense in talking about curves to fit the data. The path of the curves between the data points is scientifically irrelevant since there is no observational evidence that corresponds to a theoretical entity of such a path. As Norton may say, the path is a superfluous structure.

    But in actuality we can gather additional points, although we haven't yet. It is this possibility that allows us to hypothesize a curve; and it is the present indeterminacy of what that possibility may yield that allows for multiple curves. The incompleteness of our observations allows space for multiple interpretations, taxonomies, semantics, theoretical structures, etc. The incompleteness would also compromise any causal theory of reference--if what I'm seeing is kinda fuzzy around the edges (b/c my observations aren't 100% precise), then I could be mistaken at times when I point to an observation. Also, Perennial's intuition that the completeness of a theory would undermine UDT is consistent with this line of reasoning. See the interview by Hugo for further information on these ideas.
  13. jols added a post in a topic Gonzalo Munévar   

    I really enjoyed the interview and all the comments posted since. I am not too familiar with Feyerabend's work, but the more I read about him, the more interested I become. I would also like to find out more about Munévar.
  14. jols added a post in a topic Is philosophy dead?   

    Maybe there is something to be said about taking philosophy *too* seriously. I've read many of the philosophical discourses concerning science, realism, epistemology, ontology, etc. Sometimes it seems that everyone is just bullshitting back and forth and getting nowhere. Whereas, in science, things are "serious": there is real empirical investigation and rigorous reasoning, i.e., the scientific method. Progress in understanding the world and technological benefits are achieved. Nonetheless, I am still interested in philosophy. But I am also very much interested in science, particularly physics.

    With that said, we can also see the counterpoint to the above viewpoints concerning phil and science. For example, Plato's allegory of the cave and shadows strikes a deep curiosity in anyone interested in understanding the nature of the world--it is part of the reason why the Matrix movie is so popular (besides the awesome fight scenes). I would guess that a majority of scientists today personally speculate on such philosophical issues even though they may not make it part of their professional career. In one sense, philosophy addresses the realm of curiosity that the scientific method is yet unable to address at any given time. Witness the scientification, if you will, of many branches of philosophy: natural philosophy became physics, philosophy of language became linguistics, philosophy of mind became psychology/cognitive science. But still, philosophy has not been completely eradicated. Epistemology and ontology remain. There will always remain a philosophical aspect to understanding (if not to science).

    Likewise, the counterpoint to the view that science is the real deal when it comes to understanding is that there are many murky aspects to its apparent steel surface: what does Newton's theory really mean (spooky action at a distance)? What is the meaning of spacetime curvature? What the hell is happening with those screwy subatomic particles? Are our scientific theories completely free of contradictions? Are theoretical and observational terms clearly defined? How do we know that so and so empirical data is not erroneous? These are the questions addressed by the Quine-Duhem thesis, the observational/theoretical distinction issue and other such issues in the philosophy of science. Truth be told, science is not completely free of bullshit (aka philosophy) either.

    The way I see it, the bottom line is that philosophy, though it may be a lot of BSing back and forth, has some genuine truth seeking merits; and science, though it may be a hard, rigorous form of inquiry, has some inescapable philosophical aspects.