
Why Worry About Cognitive Penetration if Perception Is Already Computational?



Posted

Hi,

I have a question about observational theory-dependency (or theory-ladenness), particularly concerning the issue of perceptual theory-dependency.

The question of whether perception is theory-dependent is quite often debated in terms of whether perception is cognitively penetrable: that is, whether the perceptual organs and brain systems are, in a sense, informationally encapsulated from the higher, cognitive areas of the brain, where the background theories and knowledge that could presumably influence perception reside. If brain processes in the higher, cognitive areas do not interfere with processes in the lower, perceptual areas, then perception is cognitively impenetrable. So, while perception may influence one’s thoughts, the latter cannot influence the former. As I said, this is a frequently debated issue, but my concern is not with the debate directly but rather with a side issue. (Jerry Fodor and Paul Churchland have famously debated this issue in the past, and more recently Athanassios Raftopoulos and other philosophers have taken it up again. Here at the Library, Hugo discusses the issue briefly in his broader essay on theory-ladenness, where he also explains why the latter is an important issue in philosophy. Links for the Raftopoulos articles, along with other cited references, are at the end of this post. I’d also be happy to point interested readers to any other mentioned or related authors and articles.)

Let us assume that perception is in fact cognitively impenetrable and, thus, not theory-dependent in the cognitive sense. Nonetheless, it is a widely held view (even by Fodor and many other proponents of cognitive impenetrability) that perception is a computational/informational process in which the perceptual systems of the brain compute and process stimuli received at the sensory organs to produce the perceptual output that we ultimately experience. The perceptual system does not simply relay sensory information from the sensory organs. The computations and information processes are quite complex and involve many layers. These computations are based on certain low-level knowledge about regularities in an organism’s perceptual environment, acquired through evolutionary and developmental (e.g., child development) mechanisms. This knowledge includes principles such as local proximity (adjacent elements in a scene belong together), colinearity (straight lines remain straight across a scene), cocircularity (curved lines keep a constant radius of curvature across a scene), and numerous other principles and constraints (see Raftopoulos 2001, p. 429; Gilbert 2001, pp. 691-693; Geisler 2008). This low-level knowledge is sometimes referred to as theories. So, regardless of whether perception is cognitively penetrated and theory-dependent in the cognitive sense, it is nonetheless a computational or informational process that depends on some kind of low-level knowledge or theories. All of the above is pretty much accepted by Fodor and his defenders. (There are, however, those who would disagree with this computational account of perception. James Gibson and Rodney Brooks, for example, hold a non-representational or ecological model of perception. See also recent discussions on direct realism.)
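For readers who like concrete illustrations, here is a toy sketch of what one such low-level "theory" might look like as a computation. This is entirely my own illustration, not drawn from any of the cited papers: a crude grouping rule implementing the local-proximity principle, where scene elements closer than some threshold are treated as belonging to the same object.

```python
from math import hypot

def group_by_proximity(points, threshold):
    """Group 2-D points whose pairwise distance falls below `threshold`,
    mimicking the 'local proximity' principle: adjacent elements in a
    scene are assumed to belong together."""
    groups = []  # each group is a list of points
    for p in points:
        # find every existing group that p is close to
        near = [g for g in groups
                if any(hypot(p[0] - q[0], p[1] - q[1]) < threshold for q in g)]
        # merge p with all nearby groups into one
        merged = [p]
        for g in near:
            merged.extend(g)
            groups.remove(g)
        groups.append(merged)
    return groups

# Two clusters of dots separated by a gap: the rule "sees" two objects.
dots = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
print(len(group_by_proximity(dots, threshold=2.0)))  # prints 2
```

The point of the sketch is only that such a rule is a substantive assumption about the environment (that nearby elements usually do belong together), not a neutral relay of the stimulus, which is exactly the sense in which the low-level "knowledge" is theory-like.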

My question is this: why are proponents of cognitive impenetrability (and other philosophers who worry about theory-dependency) seemingly unconcerned, or at least less concerned, with the computational nature of perception and its low-level knowledge/theory-dependency?

Consider the following passage from a paper by Athanassios Raftopoulos (2001, p. 424) describing Jerry Fodor’s views on the matter:

“Fodor’s (Fodor, 1984) argument is that, although perception has access to these background theories [the low-level theories that perceptual computational processes are based on] and is a kind of inference, it is impregnable to (informationally encapsulated from) higher cognitive states, such as desires, beliefs, expectations, and so forth. Since relativistic theories of knowledge and holistic theories of meaning argue for the dependence of perception on these higher states, Fodor thinks that his arguments undermine these theories, while allowing the inferential and computational role of perception and its theory-ladenness.”

First, I was unaware that arguments concerning relativistic theories of knowledge and holistic theories of meaning necessarily relied on the observer’s higher cognitive states (thoughts, beliefs, etc.). I assumed that they simply relied on the assumption that perception (and observation in general) exists within the context of other theories and knowledge, which even cognitively impenetrable perception does.

Second, even if such arguments did only rely on the observer’s higher cognitive states (e.g., high-level theories), I don’t understand why they wouldn’t work just as well if perception was only dependent on the lower, non-cognitive states (e.g., low-level theories). For example, if two observers whose perceptual systems are wired differently claim to have two different and mutually inconsistent (or incommensurable) perceptual experiences, how do we judge which one is correct and, thus, which perceptual system (which set of low-level theories) is correct? How is this different from judging which of two high-level theories of perception or observation is correct? (I am not claiming that such judgements can’t be made, but only asking how they differ in the low-level and high-level cases. Another way to put it is that, if such judgements can be made with low-level theories, then why not with high-level theories—as indeed some philosophers claim?)

To continue, if competing high-level theories of perception can lead to relativism (as well as skepticism, circular reasoning, etc.), then why not competing perceptual systems? In neither case is there such a thing as empirically pure knowledge that one can use to arbitrate between the different theories or different perceptual systems. Both in the cognitively penetrable and impenetrable cases, the observer is relying on perceptual information that has been processed by the brain, and thus susceptible to bias and errors, to make reliability judgements. (Perhaps one can say the cognitively impenetrable case involves less processing and less variability in processing than the cognitively penetrable case and, thus, is less susceptible to bias and errors and, hence, relativism claims, although not immune to them. And so perhaps it makes the case against relativism (and for realism) a little easier to argue.)

To put it yet another way, even if it turned out that perception is not theory-dependent (not cognitively penetrable), are we not still left with the original epistemic problem of showing that our perceptual experiences or beliefs are true or accurate? The latter is a problem because we don’t have direct, independent access to the external world (unless one believes in direct realism). Though we may not be dependent on theories, we are still dependent on our bodies and minds, our perceptual faculties. How do we know that our perceptual faculties accurately represent the external world when our only access to that world is through our perceptual faculties? This is part of the “problem of the external world” (see Stanford Encyclopedia of Philosophy entries on “Epistemology” and “Epistemological Problems of Perception”). Attempts to resolve it have led to skepticism, idealism, and other such views, not unlike the potential consequences if it turns out that perception is in fact theory-dependent (cognitively penetrable). (I’m not claiming here that such pessimistic views are the only conclusion. It is a continuously debated issue, and realist views have also been expressed via various arguments: (abductive) inference to the best explanation, success semantics, reliabilism, etc. I was only trying to point out how the epistemic problem of perception and the issue of cognitive penetration share similar worries.)

I will now briefly list some reasons that seem to be (or could be) expressed in the literature as to why philosophers are seemingly unconcerned or less concerned with the computational nature of perception and its low-level knowledge/theory-dependency. (As a consequence of this attitude, however, these reasons—and perhaps others—are not often discussed or defended, to the best of my knowledge.)

(1) Some philosophers are just unaware or not fully aware of the computational nature of perception. Others hold a non-representational, ecological, or direct realist view of perception and, thus, reject that perception is a computational process. Either way, these philosophers believe that perception, even if cognitively impenetrable, delivers unmediated, direct factual information about the environment. I will set aside this group of philosophers, as we are interested in why philosophers who recognize and accept the computational nature of perception are, nonetheless, unconcerned.

(2) One reason seems to be that the low-level theories that perceptual computations rely on appear to represent “general truths” about our world, about “general reliable regularities about the optico-spatial properties of our world” (Raftopoulos 2001, p. 429). My question is how such a conclusion is reached. Are we not relying on perceptions produced by processes that are based on these very “truths” and “reliable regularities” to judge that the processes themselves are truthful or reliable? Is there not the possibility of circularity and relativism here as well, as with high-level theories? (Again, I am not claiming that judgements can’t be made in such a way—many philosophers offer strategies for averting or dealing with circularities when it comes to high-level theories of observation. I’m asking why in the low-level case such judgements are often portrayed as being prima facie unproblematic, unlike in the high-level case.)

(3) Another reason seems to be that low-level theories are generally fixed and hardwired, and so are not affected by our desires, beliefs, or expectations, as high-level theories can be (*). And moreover, the low-level theories are generally consistent from one human observer to another, unlike high-level theories (*). But fixedness of one’s perceptual experience and agreement (consistency) with others does not necessarily imply correctness, truthfulness, or accuracy. After all, a change in theories can sometimes be for the better, and a consensus can be mistaken. (*Recent neurological evidence on perceptual plasticity, learning, and variability may undermine these reasons to some greater or lesser extent. See, for example, Gilbert 2001.)

Some philosophers argue that the flexibility associated with the cognitive penetrability of higher-level organisms evolved as a way to serve the individual needs of different agents within their lifetimes (Goldstone 2003; Macpherson 2012, pp. 32-33). Otherwise, the perceptual system would have to come ready-made to represent an enormous number of perceptual possibilities, which would have rendered it too large and unwieldy. For example, the perceptual system may come ready-made to represent faces, but not necessarily any individual face. In this sense, we can look at the high-level theories as customizable “software” extensions of an underlying perceptual “hardware”. On such a view, it makes less sense to draw a truth-bearing distinction between the two; it is not the form (hardware or software) that matters, but the content and how accurately it serves to represent the environment—but see (4) below. (It is also possible that individual perceptual needs could be served via some form of low-level perceptual learning; see Goldstone 2003 and Gilbert 2001. But then that would imply that the low-level theories are not entirely fixed or hardwired. Also, bear in mind that some proponents of cognitive impenetrability would deny that individual faces, or even faces in general, constitute perceptual content; rather, it is the low-level constituent features of the face that constitute perceptual content. See Macpherson 2012, pp. 31-34 and also the Stanford Encyclopedia of Philosophy entry on “Contents of Perception”.)

(4) Some may consider low-level theory-dependency to be more objective and reliable than its high-level “evil” cousin. Low-level theories have evolved over millennia in the perceptual systems of biological organisms to be highly reliable. On the other hand, high-level theories are created by us imperfect human beings and can be fallible. But this seems somewhat unconvincing to me. After all, many high-level theories of perception (and observation more broadly) can be reliable, and the low-level theories can be mistaken at times, as in the case of visual and other perceptual illusions.

(5) Last but not least, low-level theories may avoid certain problems of reference associated with high-level theories. Raftopoulos (2008, pp. 78-80; 2012, pp. 14-15) acknowledges that perception is dependent on certain low-level theories or knowledge, but adamantly argues that they are non-conceptual in nature, unlike high-level, cognitive theories. (The non-conceptual nature of perception, in general, is a debated issue, which very much depends on what we mean by “concept”, a debated issue in itself. See Tacca 2011 and the Raftopoulos 2012 reference above and also the Stanford Encyclopedia of Philosophy entries on “Contents of Perception”, Section 6, and “Nonconceptual Mental Content”, Section 4.1.)

Now, in the non-conceptual case, just as in the conceptual case, we still face the problem of circular justification (the epistemic problem of perception): how do we know that our perceptual faculties based on these low-level theories or knowledge accurately represent the external world when our only access to that world is through these very same perceptual faculties? Typically, in the conceptual case, philosophers argue around the circularity by appealing to pragmatic or aesthetic considerations. A pragmatic argument (a la success semantics) might be that observations based on some theory or perceptual system allow an organism’s interactions with its environment (to find food, reproduce, etc.) to be more successful than observations based on some other theory or perceptual system, and therefore the more successful theory or perceptual system must be the truer one. But such pragmatic (or even aesthetic) arguments are not without their problems. In particular, some philosophers claim that pragmatic arguments, like success semantics, are plagued with certain problems of reference (see Raftopoulos 2008, pp. 66-76). And some philosophers, like Raftopoulos—and this is the point here—claim that these problems of reference can be avoided if perception is non-conceptual rather than conceptual. (Though this is debated; see the SEP entry on “Nonconceptual Mental Content”, Section 4.1.) Then, barring other problems with pragmatic arguments, they can be used to circumvent the problem of circular justification when it comes to perception.

Thus, given that perception is a computational process, there seems to be some legitimate benefit to denying that perception is also cognitively penetrable (a contentious move), but only if one is also willing to deny that the remaining low-level theories of perception are conceptual in nature (possibly also a contentious move). While these moves do not eliminate the original problem of circular justification associated with theory-dependent (cognitively penetrable) perception, they may make appeals to pragmatic considerations more viable, assuming that non-conceptual perception can solve certain problems of reference that conceptual perception presumably cannot. Additionally, non-conceptual perception may help solve other problems in the philosophy of perception. Still, at least with regard to circular justification and perception, this merely contingent benefit somehow does not seem worth all the fuss surrounding cognitively penetrable perception, given that perception is already a computational process.

Perhaps some of you know of more significant reasons, but I will leave it at that. I apologize if the post is too long. I hope some of you find the query interesting and worthwhile. Maybe I’m just missing something simple, and someone can enlighten me in two lines. (I will kick myself but will be happy to lay the matter to rest.)

Thanks.

References:

Geisler, Wilson S. “Visual perception and the statistical properties of natural scenes.” Annu. Rev. Psychol. 59 (2008): 167-192.

http://www.annualrev...8.110405.085632

Gilbert, Charles D., Mariano Sigman, and Roy E. Crist. “The neural basis of perceptual learning.” Neuron 31.5 (2001): 681-697.

http://itb.biologie....e/gilbert01.pdf

Goldstone, Robert L. “Learning to perceive while perceiving to learn.” Perceptual Organization in Vision: Behavioral and Neural Perspectives (2003): 233-278.

http://cognitrn.psyc...fs/carnegie.pdf

Macpherson, Fiona. “Cognitive penetration of colour experience: rethinking the issue in light of an indirect mechanism.” Philosophy and Phenomenological Research 84.1 (2012): 24-62.

http://www.gla.ac.uk...a_152508_en.pdf

Raftopoulos, Athanassios. “Is perception informationally encapsulated?: The issue of the theory-ladenness of perception.” Cognitive Science 25.3 (2001): 423-451.

http://www.athanassi...ftopCS2001a.pdf

Raftopoulos, Athanassios. “Perceptual systems and realism.” Synthese 164.1 (2008): 61-91.

http://www.athanassi...ynthese2008.pdf

Raftopoulos, Athanassios. “The cognitive impenetrability of the content of early vision is a necessary and sufficient condition for purely nonconceptual content.” Philosophical Psychology ahead-of-print (2012): 1-20.

http://www.tandfonli...089.2012.729486

Tacca, Michela C. “Commonalities between perception and cognition.” Frontiers in Psychology 2 (2011).

http://www.ncbi.nlm....les/PMC3227022/

Stanford Encyclopedia of Philosophy Entries:

Bermúdez, José and Cahen, Arnon, “Nonconceptual Mental Content”, The Stanford Encyclopedia of Philosophy (Spring 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanfor...onconceptual/>.

BonJour, Laurence, “Epistemological Problems of Perception”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanfor...ion-episprob/>.

Siegel, Susanna, “The Contents of Perception”, The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.), forthcoming URL = <http://plato.stanfor...ion-contents/>.

Steup, Matthias, “Epistemology”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanfor...epistemology/>.


Posted

Jolly,

Given perception as a non-cognitive process, the tension which you seem to perceive -- uh, I mean detect or notice -- appears to come from describing non-cognitive computation as a matter of knowledge or theories. Those may well be terms of art, but maybe these terms are unnecessarily suggestive of something cognitive. Maybe there is some apparently less-cognitive way of designating that supposed computation and its development. Maybe the development can be put forth in terms of a collection of some sort of inherent sensitivities or receptivities which develop (or even atrophy) according to the environments which they encounter. In other words, the so-called low-level knowledge would not be knowledge so much as it would be a non-cognitive result of the sort of exposure to the types of stimuli for which there is some sort of inherent sensitivity. Would some such terminological modification help address the problem?

Michael


Posted (edited)

I dunno about this...

I take someone for a walk in the wilderness, and with some people, although we have the same sensory perceptions, they see threats of all kinds, and from just about anything. Where I revel in the beauty of nature, they are in a state of near terror. In other words, they take these perceptions, which are the exact same as my perceptions, and interpret them completely differently. This means that the two of us "see" things completely differently, even though objectively we are seeing the same "reality."

Yet if I can show them that their interpretation is totally false, how they analyze these perceptions changes completely.

Doesn't hurt the argument if we are both taking LSD....

Since this has actually occurred in my life experience, I am, to say the least, sceptical that it cannot occur... :tape:

Dave

Edited by Chato


Posted

I take someone for a walk in the wilderness, and with some people, although we have the same sensory perceptions, they see threats of all kinds, and from just about anything. Where I revel in the beauty of nature, they are in a state of near terror. In other words, they take these perceptions, which are the exact same as my perceptions, and interpret them completely differently. This means that the two of us "see" things completely differently, even though objectively we are seeing the same "reality."

I think the problem that’s being put forward is a bit more basic than that. If you see a mailbox, at dusk, out of the corner of your eye, and mistake it for a person, the important question isn’t whether you were right or wrong, but whether, for that one instant, you actually saw a person there. Did your mind interpret a half-completed image, or did it actively complete the image before you even became aware of it? And then the other important question is, if your mind did complete the image, are there also cases where the mind completes images but can’t be corrected by its own subsequent impressions?


Posted

Yet if I can show them that their interpretation is totally false, how they analyze these perceptions changes completely.

The analysis would be a more cognitive matter than is the perception which is apparently being regarded (even if only by definition) as strictly non-cognitive. As Raftopoulos notes in one of the papers to which jols linked: "The input systems, or perceptual modules, are domain-specific, encapsulated, mandatory, fast, hard-wired in the organism, and have a fixed neural architecture." There is room for some plasticity here despite the descriptors "hard-wired" and "fixed", and that is what I was suggesting in my previous posting.

That plasticity does not have to be knowledge- or theory- dependent to be functional; that plasticity, after all, is being regarded as non-cognitive. The supposedly non-cognitive perception is also described as being computational, but even computational might seem closer to being cognitive than is necessary to maintain perception (again, even if only by definition) as thoroughly non-cognitive. Calculators and other computers can seem to be non-cognitively computational -- until the programming for those operations by cognitive humans is taken into account. This recommends eschewing computational for some other term such as transform.

As a matter of fact, Raftopoulos does say that the "task" of the "perceptual modules ... is to transform the proximal stimuli". He then goes on to describe the transformation as a "computation which relies on some general assumptions" in order to produce the transformation, but it is not yet at all apparent to me why there needs to be this reference to computation when putting forth a non-cognitive transformation. Then again, I have read only a small part of the references which jols provided.

Michael


Posted

Yet if I can show them that their interpretation is totally false, how they analyze these perceptions changes completely.

The analysis would be a more cognitive matter than is the perception which is apparently being regarded (even if only by definition) as strictly non-cognitive. As Raftopoulos notes in one of the papers to which jols linked: "The input systems, or perceptual modules, are domain-specific, encapsulated, mandatory, fast, hard-wired in the organism, and have a fixed neural architecture." There is room for some plasticity here despite the descriptors "hard-wired" and "fixed", and that is what I was suggesting in my previous posting.

That plasticity does not have to be knowledge- or theory- dependent to be functional; that plasticity, after all, is being regarded as non-cognitive. The supposedly non-cognitive perception is also described as being computational, but even computational might seem closer to being cognitive than is necessary to maintain perception (again, even if only by definition) as thoroughly non-cognitive. Calculators and other computers can seem to be non-cognitively computational -- until the programming for those operations by cognitive humans is taken into account. This recommends eschewing computational for some other term such as transform.

As a matter of fact, Raftopoulos does say that the "task" of the "perceptual modules ... is to transform the proximal stimuli". He then goes on to describe the transformation as a "computation which relies on some general assumptions" in order to produce the transformation, but it is not yet at all apparent to me why there needs to be this reference to computation when putting forth a non-cognitive transformation. Then again, I have read only a small part of the references which jols provided.

Michael

Can you provide an example of an interpretation of perception which is fixed and not alterable?

Dave


Posted

Can you provide an example of an interpretation of perception which is fixed and not alterable?

In the context of this discussion, I expect that all interpretations would be regarded as sufficiently cognitive to allow that they were all alterable. Perceptions, on the other hand, seem to be defined (at least by some of the referenced authors) so that perceptions will be regarded as not being subject to the alterability that comes from or with cognitive activity. That is my understanding to this point. Does it matter if perceptions (as defined) are not alterable by cognitive activity? Maybe not. It might just be that interpretations are immensely - possibly even immeasurably - more important.

Michael


Posted

Can you provide an example of an interpretation of perception which is fixed and not alterable?

In the context of this discussion, I expect that all interpretations would be regarded as sufficiently cognitive to allow that they were all alterable. Perceptions, on the other hand, seem to be defined (at least by some of the referenced authors) so that perceptions will be regarded as not being subject to the alterability that comes from or with cognitive activity. That is my understanding to this point. Does it matter if perceptions (as defined) are not alterable by cognitive activity? Maybe not. It might just be that interpretations are immensely - possibly even immeasurably - more important.

Michael

No offense, but if no examples can be given, then the proposition is false.

We perceive through our cognitive interpretations of our perceptions. And these perceptions are, more or less, universal. If the perceptions are universal, yet most of us "see" things differently, then the question becomes one of importance. It's why I logged into this thread: to see if my preconceived analysis is false.

But if no examples can be given, then I walk away believing my analysis is correct. This renders the question moot.

Dave


Posted

Sorry to reply so late. I was out of town for part of the weekend. I will try to be more prompt in the future.

I think some of the discussion here has veered toward the debate on whether perception is theory-dependent (in the cognitively penetrated sense). While that is quite often an interesting and heated debate, and perhaps bound to occur anytime the subject of perception is brought up, in my original post I was more concerned with the low-level, non-cognitive aspects of perception and their relation to the cognitively penetrated case. (I think Tzela was trying to point this out above. Thanks!)

I think part of the problem is terminological, as Michael suggests. What do we really mean by a “theory”? What constitutes “knowledge”? When can a system be said to be cognitive? Should knowledge only be considered in the epistemic sense, as justified true belief? And then, how exactly do we go on to define “truth”, “justification”, and “belief”? Obviously, these are complex issues that continue to plague philosophers. How we resolve such issues will certainly have a bearing on the query raised in the original post.

If one considers cognition and knowledge akin to what goes on in the higher areas of human brains, then it may indeed be overly suggestive to use terms like knowledge and theories when talking about the goings-on in lesser systems. And even the less sophisticated term “computation” may be overly suggestive if used to describe even less sophisticated systems.

On the other hand, one may take a rather wide view of such terms. Perhaps there are many different levels of cognition and computation. Similarly, there may be many different levels of knowledge and, even, theories—need theories only have a symbolic structure? On such a wide view, it would not be overly suggestive to use such terms to describe lesser systems; rather, by doing so, one is extending the notion of what it means to be cognitive, or what a theory is, etc. Perhaps a calculator is computational but not cognitive. But what about a sophisticated computer program or robot? Perhaps these are cognitive but not conscious.

Turning back to the issue of the goings-on in low-level perceptual systems, maybe terms such as “knowledge” and “theories” are more terms of art than technically accurate descriptors, as Michael suggests. Though philosophers refrain from using the term “cognition” when describing low-level perceptual processes, maybe they should also refrain from using these other descriptors. Michael suggests “transformations”, along with perhaps “sensitivities” or “receptivities”. Personally, at this point in the discussion, I don’t really have a problem with that. However, many philosophers would not have any problem using the term “information process” to describe what happens in low-level perceptual systems. In fact, Raftopoulos, Fodor, and many other philosophers who deny that perception is cognitively penetrable wholeheartedly embrace and defend the information-processing view of perception against non-information-processing views, such as ecological or direct realist views. Unless one in fact holds one of the latter views, I don’t think “information process” or even “computation” is overly suggestive.

Maybe there is some apparently less-cognitive way of designating that supposed computation and its development. Maybe the development can be put forth in terms of a collection of some sort of inherent sensitivities or receptivities which develop (or even atrophy) according to the environments which they encounter. In other words, the so-called low-level knowledge would not be knowledge so much as it would be a non-cognitive result of the sort of exposure to the types of stimuli for which there is some sort of inherent sensitivity. Would some such terminological modification help address the problem?

As a matter of fact, Raftopoulos does say that the "task" of the "perceptual modules ... is to transform the proximal stimuli". He then goes on to describe that transformation as a "computation which relies on some general assumptions", but it is not yet at all apparent to me why there needs to be this reference to computation when putting forth a non-cognitive transformation. Then again, I have read only a small part of the references which jols provided.

Whatever terminology we use, one thing should be clear: these physical or information processes are extremely complex. Failure to emphasize this point in my original post may be part of the confusion here. If one uses the terms “sensitivities” or “receptivities”, one should realize that they can involve extremely complex processes, especially in higher-level organisms. The sensitivities cannot be characterized as some simple set of physical reactions. Likewise, the “transformations” are not simple but complex. Though the perceptual processes may not be as complex as the higher cognitive processes of the brain, they are still quite complex. They involve an enormous number of interconnected neural networks segmented into multiple subsystems. It is not an exaggeration to call them computations. Even the retina itself does not simply receive and forward information. The connection between the photoreceptor cells and the optic nerve is not one-to-one but involves multiple layers of networked neurons. This neural-network style of connection serves to perform image compression and enhancement (e.g., edge detection), among other processes, before sending the signal to the brain, where far more sophisticated computations occur. Moreover, these computations (or transformations, if you prefer) are not neutral. The visual system, for example, is biased toward darker contrasts and fruit-like colors. The latter likely evolved as an evolutionary advantage to find fruit among the forest foliage. Such biases share a striking similarity to theory-dependent observational biases. So even if perception is not cognitively penetrated, it is still biased and not theory-neutral or transformation-neutral or “whatever descriptor”-neutral.
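To give a concrete (and deliberately simplified) sketch of the kind of edge-enhancing computation mentioned above, here is a toy one-dimensional center-surround filter. It is only loosely analogous to the excitatory-center/inhibitory-surround organization of retinal ganglion cells; the numbers and the 1-D "image" are invented for illustration:

```python
# Toy illustration (not a model of actual retinal circuitry): a 1-D
# center-surround filter. Ganglion-cell-like units respond strongly
# where intensity changes and weakly where it is flat.

def center_surround(signal):
    """Excitatory center minus the average of the inhibitory surround."""
    out = []
    for i in range(1, len(signal) - 1):
        center = signal[i]
        surround = (signal[i - 1] + signal[i + 1]) / 2
        out.append(center - surround)
    return out

# A flat region followed by a sharp edge:
intensity = [1, 1, 1, 1, 5, 5, 5, 5]
print(center_surround(intensity))  # [0.0, 0.0, -2.0, 2.0, 0.0, 0.0]
```

The flat regions are suppressed and only the edge survives, which is one sense in which the retina can be said to "compress" the image before it ever reaches the brain.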

As one begins to read the literature on the neurological processes in perception, one realizes that what seems a simple process in our day-to-day experience is actually a series of extremely complex computations (or at least complex physical processes) taking place within the millions upon millions of interconnected neurons of the perceptual system. The seeming simplicity and almost instantaneous, real-time awareness of our environment belies this underlying complexity. And as one starts to appreciate these complexities, one cannot help but notice the striking similarities between these processes and the processes of other intelligent systems. It is not merely metaphoric when scientists use computer terminology, such as “algorithm”, “compression”, “predictive coding”, etc. when talking about perceptual processes. Even words like “knowledge” or “theory” may naturally come to mind.
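As an illustration of why a term like "predictive coding" is apt, here is a toy delta-encoding sketch: only the error between a prediction and the actual signal is transmitted. Actual predictive-coding models in neuroscience are far more elaborate; this just shows the core trick in miniature:

```python
# Toy sketch of the idea behind "predictive coding": transmit only the
# residual between a prediction and the actual signal. An illustrative
# analogy, not a model of any specific neural circuit.

def encode(signal):
    """Predict each sample as the previous one; keep only the residuals."""
    residuals, prediction = [], 0
    for sample in signal:
        residuals.append(sample - prediction)
        prediction = sample
    return residuals

def decode(residuals):
    """Reconstruct the signal by accumulating the residuals."""
    signal, value = [], 0
    for r in residuals:
        value += r
        signal.append(value)
    return signal

smooth = [10, 11, 11, 12, 12, 13]
encoded = encode(smooth)
print(encoded)                    # [10, 1, 0, 1, 0, 1]
assert decode(encoded) == smooth  # nothing is lost in the round trip
```

Smooth signals, which natural scenes mostly are, compress into small residuals; only surprises cost much to transmit.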

I hope that helps. It's late. I'll try to say more later….


Posted

My problem with your presentation is that you are discussing real material processes without giving a concrete example.

Dave


Posted

It is not merely metaphoric when scientists use computer terminology, such as “algorithm”, “compression”, “predictive coding”, etc. when talking about perceptual processes.

Those terms actually strike me as being metaphors, as being purely (rather than merely) metaphoric. I would say that these metaphors succeed by effecting focus upon an extensively regular dynamism. The sorts of dynamism of interest with regards to perception can be understood as occurring within determinate conditions, conditions which can operate much like cognitive biases, and the visual system example is worth considering as to whether non-cognitive processes are necessarily impenetrable:

The visual system, for example, is biased toward darker contrasts and fruit-like colors. The latter likely evolved as an evolutionary advantage to find fruit among the forest foliage. Such biases share a striking similarity to theory-dependent observational biases. So even if perception is not cognitively penetrated, it is still biased and not theory-neutral or transformation-neutral or “whatever descriptor”-neutral.

We might say that - by definition - we are not able to penetrate the visual system of perception -- for instance, our visual-perception blindness to x-ray and infrared frequencies. However, we can just as well say that via cognition we become aware of such non-cognitive biases and mitigate them by devising means of transforming those frequencies so that we no longer have that visual-perception blindness. Would this serve as an example of cognitive penetration of non-cognitive perception and bias?


Posted

My problem with your presentation is that you are discussing real material processes without giving a concrete example.

Dave

Examples of non-cognitively penetrated perception that, nonetheless, can vary from person to person: one person's visual system may be more sensitive to certain colors than another's. Similarly, one person may be more sensitive to certain sound frequencies than another. People in such situations will see and hear the world slightly differently, regardless of any cognitive penetration. Based on these perceptual experiences alone, how do these two people decide which of their perceptual experiences is more veridical? Other examples include variations in depth and distance perception. More extreme cases may include color blindness or other abnormalities or defects. In the end, we all see the world differently, not just because of any thoughts, concepts, or knowledge that we possess, but also because our biologies are slightly different. My question is: how does the way in which our biologies influence perception differ from the ways in which our thoughts or cognition influence perception? Are such differences significant and, if so, in what ways? Can biological influences lead to relativism, skepticism, epistemic circularities, etc., in the same way that cognitive influences presumably can?
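The point about differing sensitivities can be made concrete with a toy model. The stimulus levels and the per-band sensitivity "gains" below are entirely made up; the point is only that identical input plus slightly different biology yields different perceptual output:

```python
# Hypothetical illustration: two observers receive the identical stimulus,
# but different (invented) sensitivity profiles yield different "experiences".

stimulus = {"red": 0.6, "green": 0.8, "blue": 0.4}  # the same light for both

sensitivity_a = {"red": 1.0, "green": 1.0, "blue": 1.0}
sensitivity_b = {"red": 0.7, "green": 1.0, "blue": 1.2}  # weaker red response

def perceive(stimulus, sensitivity):
    """An observer's 'experience': the stimulus filtered through
    that observer's own sensitivity profile."""
    return {band: round(level * sensitivity[band], 2)
            for band, level in stimulus.items()}

print(perceive(stimulus, sensitivity_a))  # {'red': 0.6, 'green': 0.8, 'blue': 0.4}
print(perceive(stimulus, sensitivity_b))  # {'red': 0.42, 'green': 0.8, 'blue': 0.48}
# Same world, same input, two different perceptual outputs: nothing in the
# outputs alone says which of the two is the more veridical.
```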


Posted (edited)

It is not merely metaphoric when scientists use computer terminology, such as “algorithm”, “compression”, “predictive coding”, etc. when talking about perceptual processes.

Those terms actually strike me as being metaphors, as being purely (rather than merely) metaphoric. I would say that these metaphors succeed by effecting focus upon an extensively regular dynamism. The sorts of dynamism of interest with regards to perception can be understood as occurring within determinate conditions, conditions which can operate much like cognitive biases, and the visual system example is worth considering as to whether non-cognitive processes are necessarily impenetrable:

Perhaps you hold a view of perception closer to direct realism or ecological models? It sounds like you don't buy the information processing view. That is fine—there are certainly other philosophers who don't either—but my original query was based on the assumption that perception can be viewed within an information processing or computation paradigm (though not necessarily a cognitive or theory-dependent one). Debating whether perception is an informational or computational process can be fun, but perhaps we should start another thread for that. But I have to say that, for me, it is very hard to deny the computational view after reading the various neurological literature on perception.

But even if one denies that perception is a computational process yet grants that it is, nonetheless, a constructed or mediated process, my query is still addressable in a limited form. In other words, one grants that perception is some sort of physical process in which the physical stimuli received at the sensory organs of the organism are subsequently processed in order to produce the perceptual experience. Perception, then, is at least biology-dependent in some significant sense. (This is as opposed to some form of direct realism or ecological model of perception, in which the information within an agent’s environment is sufficiently rich that the agent need not represent or process it further, but only pick it up and use it.)

I'm not sure whether you deny computational perception but still accept, say, biological perception, or whether you reject any form of dependence or influence and rather subscribe to direct realism type views.

The visual system, for example, is biased toward darker contrasts and fruit-like colors. The latter likely evolved as an evolutionary advantage to find fruit among the forest foliage. Such biases share a striking similarity to theory-dependent observational biases. So even if perception is not cognitively penetrated, it is still biased and not theory-neutral or transformation-neutral or “whatever descriptor”-neutral.

We might say that - by definition - we are not able to penetrate the visual system of perception -- for instance, our visual-perception blindness to x-ray and infrared frequencies. However, we can just as well say that via cognition we become aware of such non-cognitive biases and mitigate them by devising means of transforming those frequencies so that we no longer have that visual-perception blindness. Would this serve as an example of cognitive penetration of non-cognitive perception and bias?

I suppose it could. It is an interesting way of looking at it: using cognitive influences to correct for one's (faulty) biological influences, but in a passive way. We can certainly mitigate such perceptual biases by means of artificial instruments. In fact, philosophers of science often point out that human perception is inherently biased, quirky, and error-prone, and therefore, that true scientific knowledge can only be had with artificial means of perception, where the role of human perception is very limited—reading dials, counters, computer screens, etc. But then we step out of the domain of perception to the domain of observation, which is certainly much more susceptible to theory-dependence, but not in the cognitively penetrated sense. Artificial observing instruments are designed and operated based on scientific theories. This is explicit scientific knowledge. And such observations are theory-dependent in a very explicit scientific sense. So, although we can mitigate perceptual biases via scientific observations, we still cannot escape theory-dependency and must contend with the possibility of biases within the scientific theories themselves. Here too, we still face the specter of relativism and circular justification, though not inevitably so, as realists would argue. In a sense, we just passed the buck.

Edited by jols


Posted

My problem with your presentation is that you are discussing real material processes without giving a concrete example.

Dave

Examples of non-cognitively penetrated perception that, nonetheless, can vary from person to person: one person's visual system may be more sensitive to certain colors than another's. Similarly, one person may be more sensitive to certain sound frequencies than another. People in such situations will see and hear the world slightly differently, regardless of any cognitive penetration. Based on these perceptual experiences alone, how do these two people decide which of their perceptual experiences is more veridical? Other examples include variations in depth and distance perception. More extreme cases may include color blindness or other abnormalities or defects. In the end, we all see the world differently, not just because of any thoughts, concepts, or knowledge that we possess, but also because our biologies are slightly different. My question is: how does the way in which our biologies influence perception differ from the ways in which our thoughts or cognition influence perception? Are such differences significant and, if so, in what ways? Can biological influences lead to relativism, skepticism, epistemic circularities, etc., in the same way that cognitive influences presumably can?

I understand what you are saying above. After all, a blind person perceives the world in a completely different context.

But none of us lives in isolation. Someone who is color blind knows they are color blind.

So, in a previous response I said "And these perceptions, are more or less, universal."

You can put an emphasis around the words, "more or less."

But I mean that. On the level of dealing with average people, the problem of reaching different conclusions about what is perceived is a cognitive, not a perceptual, problem.

Thus, we can communicate, understand, share, and indeed even learn with those who are both blind and deaf. Witness Helen Keller.

Since I've devoted quite a bit of thought to the matter, I am still wondering if there are examples where the above is not true. In other words, within the broad range of human perception, are there conclusions, that are locked out of cognitive analysis?

Dave


Posted

Dave,

I don't think I disagree with anything you are saying, but I don't think you understand my question. The question here is not one of agreement or universality, but of truth or accuracy. How do we know that our more or less universal perceptions accurately represent the world? The problem with theory-dependency (or other kinds of dependency) is not just that variations exist but that they can exist. If perception is dependent on biology or theory, then by changing one's biology or theory, one's perceptual experience can change. This raises the question of what is the correct theory or biology that delivers veridical perception, regardless of whether theory or biology actually does change. The fact that we have the ability to change our theories and not our biologies (with the exception of perceptual plasticity) does not mean that biological-dependent perception is any more veridical than theory-dependent perception.

Another way to put it: Assume that perception is in fact theory-dependent (in the cognitive sense), but that everybody in the world held the same background knowledge and theories, and therefore their theory-dependent perceptions were all in agreement. Does this mean that the epistemic problems associated with theory-dependent perception go away? No. Similarly, any universal consistency in biological-dependent perception does not necessarily shield it from any potential epistemic worries.


Posted

It sounds like you don't buy the information processing view.

There is certainly processing which occurs. After all, what is processing if not some form of transformation - the latter being a term towards which I have indicated a willing acceptance?

I have to say that, for me, it is very hard to deny the computational view after reading the various neurological literature on perception.

I am not as familiar with that literature; maybe there is a reason to accept computational rather than transformational. However, computation is a type of transformation, and, on the face of it, computation - outside of its use as a term of art, which is just to say in ordinary language - suggests (to me) the involvement of a somewhat cognitive process. On the other hand, transformation seems not to necessarily depend upon or have a similar cognitive process. This is all to say that if perception is to be regarded as non-cognitive, then we take better account of any bias that might be introduced by our cognitive delving into the non-cognitive by doing the best we can to avoid terminology (language being a product of cognition) which seems to sneak in some cognitive aspect. In other words, we do semantic analysis because essentially none of the terms used are necessary terms.

if one den[ie]s that perception is a computational process, but ... grants that perception is some sort of physical process in which the physical stimuli received at the sensory organs of the organism is subsequently processed in order to produce the perceptual experience. Perception, then, is at least biology dependent in some significant sense.

I'm not sure whether you deny computational perception but still accept, say, biological perception, or whether you reject any form of dependence or influence and rather subscribe to direct realism type views.

I think it would be ridiculous to insist that there is never any biological dependence.

philosophers of science often point out that human perception is inherently biased, quirky, and error-prone, and therefore, that true scientific knowledge can only be had with artificial means of perception, where the role of human perception is very limited—reading dials, counters, computer screens, etc.

True scientific knowledge can only be had with artificial means of perception, where the role of human perception is very limited? No. Scientific awareness can be enhanced by those "artificial means". Furthermore, science is much more dependent upon human cognition than it is dependent on human non-cognitive perception. I would say that the proper scientific concern with human non-cognitive processes regards the manner in which those non-cognitive processes bias (or affect) cognition.

But then we step out of the domain of perception to the domain of observation, which is certainly much more susceptible to theory-dependence, but not in the cognitively penetrated sense.

When I get a chance, I will look at some of the references you provided in order to glean the intended definitional distinction between perception and observation. Or maybe you could save me the time? In any event, on the face of it, observation would seem to be more cognitive than is perception, but I have no immediate sense of how (at least some) more cognitive observations necessarily could not be cognitively penetrated.

And such observations are theory-dependent in a very explicit scientific sense. So, although we can mitigate perceptual biases via scientific observations, we still cannot escape theory-dependency and must contend with the possibility of biases within the scientific theories themselves. Here too, we still face the specter of relativism and circular justification, though not inevitably so, as realists would argue. In a sense, we just passed the buck.

Now, this last part seems correct enough. But, then, have we not moved pretty far off from cognitively non-penetrable perception? Not that there is anything wrong with that.


Posted

Dave,

I don't think I disagree with anything you are saying, but I don't think you understand my question. The question here is not one of agreement or universality, but of truth or accuracy. How do we know that our more or less universal perceptions accurately represent the world? The problem with theory-dependency (or other kinds of dependency) is not just that variations exist but that they can exist. If perception is dependent on biology or theory, then by changing one's biology or theory, one's perceptual experience can change. This raises the question of what is the correct theory or biology that delivers veridical perception, regardless of whether theory or biology actually does change. The fact that we have the ability to change our theories and not our biologies (with the exception of perceptual plasticity) does not mean that biological-dependent perception is any more veridical than theory-dependent perception.

No human, or for that matter any other species, "accurately represents the world." What we perceive is not what some other creature perceives. My dogs "see" the world through their olfactory sense, whereas we see it visually. What I mean is the shared human interpretation of reality.

The example I usually give to illustrate this is the walk in the woods, where one person "sees" dangers and threats, and another sees the beauty of nature. Both persons receive the same visual feedback, but they arrive at two different visions of reality.

Yet I have changed the cognitive interpretation of the other person so that we then see, more or less, the same reality.

Another way to put it: Assume that perception is in fact theory-dependent (in the cognitive sense), but that everybody in the world held the same background knowledge and theories, and therefore their theory-dependent perceptions were all in agreement. Does this mean that the epistemic problems associated with theory-dependent perception go away? No. Similarly, any universal consistency in biological-dependent perception does not necessarily shield it from any potential epistemic worries.

This is raising the bar on cognition, not perception. Let us say that my companion in my theoretical "walk in the woods" is just as aware as I am that there are no dangers. Yet this person, driven by greed, looks upon whatever they come across as an exploitative opportunity.

"Look at all this timber. I can get $1,000 bucks a tree from this wood."

So the above person, even though one aspect of their mentality is aware enough to recognize a non-threatening reality, comes to a cognitive view completely different than mine. And indeed, one can make an argument that Nature exists to be exploited.

The difference in our conclusions will not allow this person to see the unity of humanity and nature. Indeed, they might argue, there is no such thing. In other words, while neither one of us lives in fear of the forest, we take away different world views. Mine of peace, his of money...

Changing this person's view of the world is far more difficult than changing the view of the person in my first example. In the first case, ignorance is the cause of the fear; in the second, a world view of "what's in it for me" is far more difficult to change, because perceiving the forest is not the problem.

Dave


Posted

It sounds like you don't buy the information processing view.

There is certainly processing which occurs. After all, what is processing if not some form of transformation - the latter being a term towards which I have indicated a willing acceptance?

I have to say that, for me, it is very hard to deny the computational view after reading the various neurological literature on perception.

I am not as familiar with that literature; maybe there is a reason to accept computational rather than transformational. However, computation is a type of transformation, and, on the face of it, computation - outside of its use as a term of art, which is just to say in ordinary language - suggests (to me) the involvement of a somewhat cognitive process. On the other hand, transformation seems not to necessarily depend upon or have a similar cognitive process. This is all to say that if perception is to be regarded as non-cognitive, then we take better account of any bias that might be introduced by our cognitive delving into the non-cognitive by doing the best we can to avoid terminology (language being a product of cognition) which seems to sneak in some cognitive aspect. In other words, we do semantic analysis because essentially none of the terms used are necessary terms.

So you accept an information processing view, but not a computational one? (Or do you mean something else by "processing"?) I don't draw much of a distinction between the two, and I think most people describing perceptual systems use the terms interchangeably. I also don't think of computation as necessarily having a cognitive connotation. After all, don't computers and calculators compute? (Of course, some might take a wide view of cognition and say that all computations are cognitive.) And the neural circuits in the brain, whether in the perceptual system or in higher areas, are essentially performing computations (and perhaps something more). The controversial question is whether the computations going on in the perceptual system can be considered representative of conceptual, theory-like, or cognitive-like processes.
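To make the minimal sense of "compute" concrete, here is a toy artificial neuron: a weighted sum passed through a threshold. This illustrates only the bare notion of neural computation; it is not a claim about actual neural circuitry, and the weights and threshold are arbitrary:

```python
# A minimal artificial "neuron": a weighted sum of inputs passed through
# a threshold, illustrating the bare sense in which neural circuits can
# be said to compute. The weights and threshold here are arbitrary.

def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Two input signals: one excitatory connection, one inhibitory connection.
print(neuron([1, 1], [0.8, -0.4], 0.3))  # 1: net excitation clears the threshold
print(neuron([0, 1], [0.8, -0.4], 0.3))  # 0: inhibition alone cannot fire it
```

Whether aggregates of such units ever amount to anything conceptual or theory-like is, of course, exactly the controversial question.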

if one den[ie]s that perception is a computational process, but ... grants that perception is some sort of physical process in which the physical stimuli received at the sensory organs of the organism is subsequently processed in order to produce the perceptual experience. Perception, then, is at least biology dependent in some significant sense.

I'm not sure whether you deny computational perception but still accept, say, biological perception, or whether you reject any form of dependence or influence and rather subscribe to direct realism type views.

I think it would be ridiculous to insist that there is never any biological dependence.

The key thing here is whether you believe this dependence merely acts to receive information or actually mediates or constructs the perceptual experience based on the received information (light stimulus upon the retina). That is, whether the biological dependency serves to process the received information in order to produce the perceptual output/experience. Direct realists, proponents of ecological-psychology views, etc., believe it acts only as a receiver, which I (also) find difficult to accept. If the dependency processes the information, then epistemic questions are raised as to the truthfulness or accuracy of this processing, since naturally there can be faulty processing that delivers misinformation about the environment to the agent.

But then we step out of the domain of perception to the domain of observation, which is certainly much more susceptible to theory-dependence, but not in the cognitively penetrated sense.

When I get a chance, I will look at some of the references you provided in order to glean the intended definitional distinction between perception and observation. Or maybe you could save me the time? In any event, on the face of it, observation would seem to be more cognitive than is perception, but I have no immediate sense of how (at least some) more cognitive observations necessarily could not be cognitively penetrated.

Perception is one kind of observation, where only or mostly the human senses, perhaps with the aid of eyeglasses, hearing aids, etc., are used to observe the world. Early on, philosophers considered perception to be the only kind of observation. Today, telescopes, microscopes, and other magnifying or amplifying devices are acceptable, but now a large part of the observational process is non-perceptual, relying on non-human faculties. And then there are observations that involve processes that do more than merely amplify; they allow us to observe things that our senses cannot perceive with any amount of amplification: electron microscopes, Geiger counters, radio telescopes, etc. In such cases, the role of the senses (human perception) is very limited, and many philosophers would argue that the role of theory becomes ever more paramount to ensure the validity of the observation.


Posted

So you accept an information processing view, but not a computational one? (Or do you mean something else by "processing"?)

I do not see what is made at all clearer by adding the term information. I mean, so long as we are restricting ourselves to non-cognitive transformations/processes, why would we want to introduce a term like information, which is most certainly (in ordinary language) associated with knowledge-communication/transmission/reception? It is not as if there is non-cognitive knowledge out there in the world which we are non-cognitively processing. Basically, I just find information in this context to be an unnecessary metaphor.

I find that semantic analysis is often a very important part of the scientific process. Semantic analysis is one way - in some circumstances, it is the very best way - to lay out the relevant alternative possibilities. By considering the words we use, we take account of the biases which define/limit - or which we might be introducing via - our cognitive processes.

I also don't think of computation as necessarily having a cognitive connotation. After all, don't computers and calculators compute?

Below is the explanation I previously gave regarding why it is that I think computation is readily associated with cognition:

The supposedly non-cognitive perception is also described as being computational, but even computational might seem closer to being cognitive than is necessary to maintain perception (again, even if only by definition) as thoroughly non-cognitive. Calculators and other computers can seem to be non-cognitively computational -- until the programming for those operations by cognitive humans is taken into account. This recommends eschewing computational for some other term such as transform.

The key thing here is whether you believe this dependence merely acts to receive information or actually mediates or constructs the perceptual experience based on the received information (light stimulus upon the retina). That is, whether the biological dependency serves to process the received information in order to produce the perceptual output/experience.

I regard the biological as effecting a transformation; I do not think that the biological is an always and everywhere, perfectly efficient non-transforming conduit. This is to say that all transformations are to be regarded as resulting in some bias.

If the dependency processes the information, then epistemic questions are raised as to the truthfulness or accuracy of this processing, since naturally there can be faulty processing that delivers misinformation about the environment to the agent.

Even utterly fault-free filtering would introduce questions about accuracy or completeness. The ultimate point, as you previously noted, is that bias can be effected.

Perception is one kind of observation, where only or mostly the human senses, perhaps with the aid of eyeglasses, hearing aids, etc., are used to observe the world. Early on, philosophers considered perception to be the only kind of observation. Today, telescopes, microscopes, and other magnifying or amplifying devices are acceptable, but now a large part of the observational process is non-perceptual, relying on non-human faculties. And then there are observations that involve processes that do more than merely amplify; they allow us to observe things that our senses cannot perceive with any amount of amplification: electron microscopes, Geiger counters, radio telescopes, etc. In such cases, the role of the senses (human perception) is very limited, and many philosophers would argue that the role of theory becomes ever more paramount to ensure the validity of the observation.

We cognitively penetrate (or at least affect) the non-cognitive when we produce and then use such instruments. We could say that in this way we mitigate already extant biases. Alternatively, we could say that, even with such mitigation, we still have biases, or that we add on different biases which we then transform cognitively, if only to renew what is essentially an often iterative process.

Of course, none of this is conceptually problematic for science, inasmuch as (properly circumspect) scientists proceed always believing/knowing that there is more which could be taken into account; they should believe/know that there are always some things which they are not currently taking into account, and that it is entirely a matter of judgment - not purely objective fact - when they conclude that the things not currently taken into account are of no particular or likely importance. Science is as much a focus on problems as it is a search for solutions, and most solutions end up producing awareness either of problem-persistence despite mitigating modifications, or of unanticipated new problems.

