The Galilean Library is supported by Nobility Studios.
inductivism 'in new clothes'
Posted 15 February 2005 - 05:36 PM
[I admit that, at least in part, I agree with them: neither the mere possibility of Quinean underdetermination (strong underdetermination) nor the problem of theory-ladenness seems insurmountable. Not even Goodman's new riddle of induction. As of now, the possibility of finding a 'theory of everything' without free parameters is still wide open, in spite of Kuhn and Feyerabend.
Yet, at least currently, I would argue that an evolution of Popper's methodological falsificationism is the best we have (leaving aside the problem of the empirical basis, which is a further, even stronger, argument for fallibilism). My view goes even beyond Lakatos' methodology, for his gives no indication of which programmes to pursue with priority (a 'ladder of preferences') in the short term, in periods of 'crisis', or when several very distinct programmes explain more or less the same facts (of course that 'ladder of preferences' is entirely orientative). But it also takes the inductivist tradition into account (I am also a 'reliabilist'; I think we should include reliabilism in Lakatos' methodology).
Better said, a mixture of inductivism with falsificationism is the best approach in my view. The mere fact that a general inductive method cannot be justified, not yet at least, does not amount to saying the same for all particular cases. So, in my opinion, there are plenty of reasons to [re]try the inductivist path, at least partially; we have seen some important developments there.]
Posted 15 February 2005 - 07:36 PM
What kind of "evolution of Popper's methodological falsificationism" do you have in mind? I think that strong underdetermination is already considered a non-problem (i will write a short essay or post explaining why soon) but the issue of theory-ladenness is a bolder conjecture on your part (again, i plan to write something on this score when there's less rugby to play). Interestingly enough, one philosopher who claimed to have been able to defeat theory-ladenness was Feyerabend - which is usually a good test to see if a critic has actually read him.
I think the "revival" of inductivism you note is due more to it never having gone away. There is something we might call "natural" about induction: it seems straightforwardly to be how we arrive at our beliefs about most things and we are thus unlikely to give it up without a considerable fight. Although Popper came along and declared that science was a deductive endeavour, his various forms of falsificationism were defeated quickly. I suppose my experience is exactly the converse to yours: time after time i see people asserting that scientific theories are characterised (usually solely) by falsifiability, which is a sure sign they haven't read Popper and seen the development of his own thought or the (contemporaneous) objections he tried and failed to deal with. However, i am probably basing this more on non-specialist sites since i'm aiming to help laymen here (at least in part).
What kind of "mixture" of falsificationism with inductivism should we aim at? What would such a methodology look like?
Posted 17 February 2005 - 11:50 AM
The problem of theory-ladenness certainly exists, yet I would argue that it is not enough to undermine the existence of so-called 'objective truth'. It is true that the positivist view of a purely neutral observational system is now untenable, yet this does not undermine the rational justification for granting a fallible epistemological privilege to a minimal scientific method (and to the theories and statements indicated by it).
Besides, the road toward truth (at least approximate truth) is wide open (though of course the first task of science is empirical adequacy). Even more, why shouldn't we be on 'the right way' now? Merely pointing out that we cannot be sure or very confident of that is certainly not enough.
My minimal methodology is basically that of Lakatos, but taking the 'reliabilist' traditions into account as well. Basically, scientists should pursue, with priority, programmes which are theoretically and empirically progressing (Lakatos avoided advising this), without necessarily respecting Lakatos' negative heuristic.
If several programmes at a certain level of priority (see below) are in this situation at a certain time, the first choice should be the one that is ontologically simplest, preferably one also capable of unifying facts previously thought unconnected. Of course, if a programme situated at a lower level of preference (for example, one contradicting some statements in the body of previously accepted knowledge) proves to be progressive and more fecund (possibly simpler and capable of unification), it should be chosen as the first-choice programme.
Here is a sketch of my proposal; frankly speaking, I have never tried to develop the idea, and it is probably old. If at a certain time there exist unsolved puzzles and apparent anomalies (it seems that very few theories can escape this problem; if there are many, we are dealing with a crisis), our first task is to try to solve them. Even so, nothing impedes us from trying other paths. Yet I would argue that we can still define a minimal methodology based also on reliabilism.
The (sketchy) 'ladder of preferences' I propose is:
1. Try first (as the first-choice programme) to develop the existing programme (the older paradigm) without postulating auxiliary hypotheses ('reduce' the anomalies and the existing unsolved 'puzzles' to the existing paradigm).
2. Try programmes proposing new hypotheses (including postulates and new theoretical constructs), even if they seem ad hoc initially, that are fully compatible/coherent with all previous knowledge. (The 'infinite bubbles' inflationary theory qualifies here, for it additionally explains, tentatively, the so-called 'fine tuning' of the universe; the neutrino hypothesis is another example, coherent with naturalism and the law of conservation of energy.)
3. Try programmes which are only partially coherent with the set [previous knowledge + the basic assumptions of science (e.g. naturalism, realism)], but which retain all the basic assumptions of science and recover in the limit (without loss) all the laws/'entities'/phenomenological theories inferred using Mill's methods (the most stable part of scientific knowledge); that is, which retain most of the theoretical constructs (at most with a minor change of assigned attributes) that appeared in a series of increasingly successful previous paradigms (some postulates/theoretical constructs could disappear altogether, even if they appeared in a succession of paradigms, if they do not belong to the above-mentioned categories).
4. Try programmes which contradict previous knowledge, even that inferred using Mill's methods; that is, renouncing even constructs which, for example, are common sense (e.g. in Newton's time, giving up the claim that the Sun rotates around the Earth).
5. Try programmes contradicting more basic assumptions of science (for example, the conservation of energy or the uniformity of nature), or even some of the most basic assumptions of science (e.g. naturalism or realism).
A scientist is not compelled to work at any particular level (the ladder could, however, be used orientatively to apportion the effort). Thus scientists may try to develop whatever programme they please, yet the 'ladder' indicates a clear system of preferences when several programmes are more or less equal (not necessarily absolutely equal).
Progressive programmes at higher levels of priority, if found, have privilege (a fallible epistemological privilege) over progressive programmes at lower levels if they are more or less equal on empirical grounds (thus such a programme at level 1 has an openly fallible epistemological privilege over programmes at level 4, for example).
At all these levels the programmes should have explanatory power (at least for some unsolved puzzles and anomalies, as well as for the facts explained by the previous paradigm) and should prove theoretically and empirically progressive in the long term (offering explanations for all the facts explained by the previous paradigm and making new, novel, testable predictions, some 'confirmed' experimentally). Not all these requirements are compelling, though of course it is preferable that a programme meet them all; see for example the 'infinite bubbles' inflationary universe, which is provisionally acceptable even though, basically, we cannot reach other 'bubbles'.
If (looking 'horizontally' at the same level of priority) several programmes have the same explanatory surplus, being more or less equally supported empirically, then we can extend things further by preferring, as the first choice, the programmes having the smallest number of theoretical constructs, preferably ones also capable of unification. The 'confirmation' of novel predictions is an asset which takes precedence over merely explaining known facts.
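The ladder, together with the horizontal tie-breakers, amounts to a lexicographic preference ordering. Purely as an illustration (the field names, scores, and the `Programme` structure below are my own invention, not part of any proposal in this thread), it could be sketched like this:

```python
from dataclasses import dataclass

@dataclass
class Programme:
    """Illustrative stand-in for a research programme (all fields hypothetical)."""
    name: str
    ladder_level: int           # 1 (develop existing paradigm) .. 5 (reject basic assumptions)
    progressive: bool           # theoretically and empirically progressing?
    confirmed_novel_preds: int  # novel predictions 'confirmed' experimentally
    n_constructs: int           # number of theoretical constructs (ontological simplicity)

def preference_key(p: Programme):
    # Lexicographic order: progressive programmes come first, then lower
    # ladder levels; ties are broken by confirmed novel predictions (which
    # take precedence over merely explaining known facts), then by parsimony.
    return (not p.progressive, p.ladder_level, -p.confirmed_novel_preds, p.n_constructs)

def first_choice(programmes):
    """Return the programme granted the (fallible) epistemological privilege."""
    return min(programmes, key=preference_key)
```

On this sketch the privilege stays openly fallible: the ordering only ranks the candidates available at a given time, and nothing stops a lower-level programme from winning later once it becomes progressive.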
Take as an example the situation in consciousness research. We still lack a detailed theory of mind there, especially one accounting for so-called subjective experiences (the hard problem), and there are several proposals with explanatory power that are basically equally supported empirically, if we do not take reliabilism into account (due also to the 'tacking' problem).
Certainly we cannot put all of these on the same level of priority. First, as is actually done, we should pursue the existing emergentist computational programme, which is still progressive and has the greatest coherence with previous knowledge (other parts of science included). This in no way means that, for example, the dualistic approaches (in spite of Dennett) or the 'qualia as a fundamental feature of the universe' programmes should be abandoned, but certainly, currently, we are rationally entitled to grant a fallible epistemological privilege to the emergentist programme.
All this whilst remaining fully open to recognizing later that, for example, a dualistic approach has become more successful; or, at least, to accepting that something more than the neural network of the brain is implied by subjective experiences, should the current approach become stagnant or degenerative in the long run, even in the absence of better alternatives.
Returning to generalities: in my view, in some particular cases scientists are even entitled to believe that some of the very successful theories are at least approximately true. Here 'belief' does not involve being sure of that, or having sufficient reasons (implying no way back); fallibilism is retained at all times. Yet, certainly, it implies more than merely preferring those theories to all existing alternatives.
To conclude: a minimal method based on most of falsificationism, but which takes reliabilism and Bayesianism into account, is our best 'tool' so far for making sense of observed realities. I am aware that some people will not agree (very few, anyway), yet there are currently no good reasons to deny at least the existence of a minimal method. Underdetermination, theory-ladenness, or the fact that experiments can only be interpreted are not enough. Not yet, at least.
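Since Bayesianism is mentioned only in passing here, a toy numerical update (all the numbers are invented, purely for illustration) may help fix the idea that a 'confirmed' novel prediction, being improbable on background knowledge alone, supports a theory more strongly than the accommodation of an already known fact:

```python
def posterior(prior, likelihood, evidence_prob):
    """Bayes' theorem: P(T|E) = P(E|T) * P(T) / P(E)."""
    return likelihood * prior / evidence_prob

prior = 0.3  # hypothetical initial credence in the theory

# A novel prediction: improbable on background knowledge alone (P(E) low),
# so its confirmation raises the posterior sharply.
novel = posterior(prior, likelihood=0.95, evidence_prob=0.4)

# An already known fact: expected anyway (P(E) high), so 'explaining' it
# moves the posterior only slightly.
known = posterior(prior, likelihood=0.95, evidence_prob=0.9)
```

On this standard Bayesian reading, the precedence the post gives to novel predictions over mere accommodation falls out of the low prior probability of the novel evidence.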
Besides, denying the existence of a minimal method is harmful, for it leaves the doorway wide open to pure relativism. Why should only scientists be entitled to set standards in their field, a relativist would rightly ask. Basically everyone would be as entitled as they are to propose standards, so long as there is explanatory power, as in the case of religions, for example. Happily, the agreement we observe in the scientific community (of course total agreement is impossible) cannot be attributed only to social factors; we have very good reasons to think that something more, something 'objective', is involved here...
Posted 17 February 2005 - 08:44 PM
The move to include a degree of fallibilism (implicit in Lakatos, i think) is quite subtle: the obvious reply to Lakatos' distinction between progressive and degenerating research programmes was to point to historical examples that had degenerated for significant periods of time and yet still proven methodologically useful eventually. Like the methodological falsificationist, you award an epistemological privilege in a strictly fallibilist fashion, which avoids this historical defeater.
As i explained in my essay, however, it's hard to see how this can ever be in error. If a degenerating programme proves to be successful even where it appears to violate parsimony relative to others or is less ostensibly fecund, we can say that this kind of situation is a risk taken by any fallibilist stance. Nevertheless, it leaves us unable to answer the question "when should a scientist shift his/her efforts to just such a degenerating programme?" We have no methodological guidance to give because we have fallibilistically opted for the progressive programme(s). In the absence of scientists continuing to work on degenerating programmes, we can of course expect no progress in these areas and hence no challenge to the fallible privilege granted. Conversely, the success of a degenerating programme only demonstrates fallibility, which we have already granted. You can see the difficulty here, i hope.
You talk of privileges and "first choice programmes" but the main objection to theoretical pluralism is that resources are limited and that in practice we cannot proliferate theories. Even where we have several choices, your ladder of preferences seems to me to fail to exclude any conceivable theory. While i think it's fair to say that your ordering reflects the way most people proceed in solving any problem (not just developing and evaluating scientific theories), it is hard to see how we can grant any privilege to the actual ordering except in an arbitrary fashion and by begging the question. For example, there is - ceteris paribus - no reason to suppose that continuing to work on an extant programme is more likely to succeed than one involving the rejection of what you call "basic assumptions", unless we presuppose that the former possesses a high degree of verisimilitude. Even if we do, of course, our fallibilist stance can never be wrong - nor can it exclude the occasional (or systematic) reversal of your ladder.
I think this doesn't follow. The denial of a minimal method need not imply that all methodologies are equal. For example, we could say that instead there are certain themata, together with metaphysical and methodological principles that form a non-prescriptive list, much like yours but without any hierarchy. We cannot infer that these characterise science or that they will continue to prove fruitful in future as they apparently have thus far (at various times and in various measures), of course, but we can suggest that ceteris paribus clauses give us enough reason to employ them. (That is, we should prefer the simplest theory other things being equal; where other things are not equal, we can try to work on reducing the influence of other factors, and so on.) This is a kind of methodological verisimilitude, i suppose.
Posted 18 February 2005 - 08:04 PM
Posted 18 February 2005 - 08:52 PM
In general, the point of what i suggested is that a non-prescriptive list makes no demands of scientists (i.e. the Popperian duty to reject a falsified theory) but instead gives some criteria, some or all of which we say characterise a scientific or proto-scientific programme. If none of them are seen in a proposed programme then the result should not be that it must be dismissed as unscientific but rather we should question why anyone is interested in it at all. Regardless of the problems with all demarcation criteria, if a theory makes no novel predictions, is internally inconsistent, unverifiable in principle, unfalsifiable in principle, unparsimonious, offers no possibilities of unification, and so on, then who would be advocating it?
Hopefully the subtlety of the distinction i'm making is clear...
Posted 19 February 2005 - 03:41 PM
Shouldn't we think, even here, in the context given by the actual state of affairs? This could safely count as a justification for adopting a fallible hierarchical system, whilst of course retaining fallibilism. In other words, there is more 'positive' justification, though by no means decisive, for preferring a weak form of reliabilism in the form of a hierarchical system, now at least.
The theory-ladenness problem is far from looking decisive: there are very few examples of true underdetermination, with clearly distinct formalisms in existence, and anyway it seems implausible that two such programmes would remain equal irrespective of the data (or that there are several very distinct candidates for 'theory of everything' status).
Anyway, 'reliabilism' is rather a standard in modern science. In the field of QM, for example, where we currently have a genuine underdetermination, all the successful field theories beyond the standard formalism of QM are framed around the Copenhagen Interpretation and its further enhancements, though there are other, empirically equal, interpretations of the standard formalism.
The tendency is to maintain it during the process of further theory devising, focusing primarily on it, as a matter of fact, exactly because it has the greatest coherence with the other still-accepted theories (General Relativity, SR more exactly, is one of them).
New methodological principles dependent on context, adding to the existing ones, are certainly allowable, but from what we observe now they only add 'new layers' to previously accepted knowledge. For example, Chalmers' 'first person' method does not seem to undermine the status of the emergentist hypothesis of mind; on the contrary, it might even reinforce it. Not to mention the fact that it is, in the limit, only a version of the usual 'third person' method (the usual intersubjectivism applied in the cognitive sciences).
Secondly, doesn't your view fail to take into account the coherence criterion, including coherence among the different sciences, as a good indicator for choosing between different programmes, even for adopting a programme which had been degenerative for a long time before? For example, Prout's programme was degenerative for a long time, but when the atomic hypothesis became mature enough scientists understood why this had happened, and subsequently it became progressive.
Again, there are currently more reasons to prefer such very coherent programmes; the 'negative' arguments put forward so far are not enough, in my opinion, any more than the arguments brought forward by those who claim that reductionism is impossible (in the philosophy of mind).
Posted 19 February 2005 - 05:18 PM
Posted 21 February 2005 - 02:34 PM
To be clear, I do not deny that progressive programmes highly coherent with previous knowledge and with other parts of science might seem progressive only on the available evidence. Yet the concept of rationality (based on empirical adequacy and coherence) is historically situated, dependent on the existing arguments; even so-called sufficient reasons are so. While I cannot say that we have sufficient reasons to pursue, of necessity, progressive programmes having great coherence with previous knowledge (this is why scientists are free to pursue whatever programme they wish), I'd argue that we have more reasons, currently at least, to place greater confidence in them, if progressive, than in programmes only partially coherent with existing knowledge, when the two are more or less equal on empirical adequacy.
This implies more than a mere non-prescriptive list, which in my view involves a much greater degree of subjectivism (basically leaving the impression that, notwithstanding our list, the programmes stand on an equal footing). Think only of the atomic hypothesis. If, for example, nontrivial changes in modern science were at the level seen in the history of science, or if simpler programmes proved systematically inferior in the long run to more complex ones, then I'd say that a non-prescriptive list would be better adapted. But this does not happen, for the moment at least.
The situation is more or less the same as with the problem of realism vs idealism. I'd argue that, currently at least, we just have more reasons to grant a fallible epistemological privilege to realism: this in no case implies renouncing falsificationism (a fallible epistemological privilege does not involve being sure that realism is true, or that it is more probably true), but it certainly implies more than merely preferring realism to idealism for all our practical purposes. It is fully justifiable to accept realism as the first-choice programme, granting it a fallible epistemological privilege based on the non-empirical reasons we have 'pro' realism (valid in the long run).
Otherwise, if we propose merely a non-prescriptive list, it could be understood that idealism and realism stand basically on the same level of privilege, because we lack sufficient reasons to make a clear difference between them (basically once and for all). This approach is mistaken in my view for, as I've underlined, justification is temporally dependent, being defined as a function of all the arguments 'pro' and 'con' at a certain period of time (preferably stable over long periods). It seems to me that the actual situation in science is best modeled by a minimal hierarchical system; even the history of science can be recovered in the limit from it. This in no way involves too strong a commitment; it only means that a unified minimal scientific method is our best 'tool' so far for making sense of the observed facts.
Posted 21 February 2005 - 02:39 PM
The overestimation of social factors seems to me a mistake. This is not to imply that a sociological dimension is not involved (well known, for example, in the case of the Copenhagen Interpretation vs the pilot wave interpretation of QM), yet everything finally reduces to fruitfulness. For example, even if the pilot wave interpretation had won the 'battle' in 1927, if it had not proved fruitful in the long term, scientists would certainly have switched back to the Copenhagen Interpretation as the first choice when trying to extend the limits of science (the physical meaning of the existing quantum field theories is compatible with Bohr's view; there are no known alternatives based around Bohm's interpretation, in spite of continuous research in this direction too).
To be sure, no one says that the pilot wave interpretation is wrong (I am a supporter of it, for example), but certainly we can keep a degree of objectivity based on the existing arguments, which recommend copenhagenism as the first choice for the moment. Indeed, resources are limited, but the hierarchical model can easily cope with that by indicating in which direction (orientatively) the bulk of the effort should be directed.
The idea is that, even if we are now on a 'wrong branch', there are currently more reasons to pursue with priority the path opened by Bohr's interpretation. This in no way means that there is no way back; indeed, there is no good reason now to think that 'wrong ways' can remain progressive and coherent in the long term.
I am not so concerned with when to shift effort toward other programmes, or that other good programmes will not be found; there are always plenty of mavericks around, and anyway scientists are entitled to explore other paths as secondary choices; there are no limits here. Besides, we may be on the 'right way' toward truth even though we do not have sufficient reasons to grant such a programme a perpetual epistemological privilege (maybe we never will); why not? What I see as really wrong is a too strong commitment to prevalent views, which could genuinely block alternative research (scientists fear following unpopular paths which could hamper their careers), something we currently witness in many parts of science but which the hierarchical system I propose can easily avoid.
Posted 21 February 2005 - 02:44 PM
Posted 21 February 2005 - 03:48 PM
Posted 21 February 2005 - 08:45 PM
I think i understand you much better now. When you say "justification is temporally dependent, being defined as a function of all arguments 'pro' and 'con' at a certain period of time (preferably stable over long periods)", i can see where you think i differ, insofar as i might seem to be demanding justification of a stronger kind. I don't see that this is the case, though; if anything, the flaw in my approach lies in having "too much" concern for making a mistake. That is, i'm worried about Type I errors (rejecting a true hypothesis) while you're more interested in avoiding Type II (failing to reject a false hypothesis) and therefore proceeding with the fallible justification we have for a progressing programme. As you also said, this may come from my studies in the history of science: therein we find a multiplicity of examples of "Type I errors" (not exactly, but you see what i mean, i'm sure), especially if we retrospectively apply various philosophies of science in order to see what might have happened. The alternative, of course, is to accept that we may make these errors but to press on with the justification we have to hand. I think there is something to be said for both.
What do you mean by everything reducing to fecundity in the end? Here again i think we have the same separation: we cannot infer from short-term fruitfulness (given the lengths of time some themata and theories have lasted, "short-term" is underdetermined) that it will last, or that another programme may not prove equally fecund if given the chance. It seems that you want to say "indeed, but we have to go with what we have at a particular time" whereas i'm worried that we might be wasting time or - more importantly - restricting the resources given to another programme. This is the point: we can grant a fallible privilege to a programme on the basis of its temporally dependent justification, but how can we be sure that another programme - if given a share of the funding, laboratory time, manpower, etc - could not do as well, if not better? By making this fallible decision we affect the future prospects of both programmes, such that the comparison of fecundity is not really a fair one. Perhaps the continued existence of mavericks goes some way to addressing this issue, but i'm not sure that it's enough.
(Note that the edits function is disabled currently while we work on a technical problem. Hopefully i can set it back again soon.)
Posted 24 February 2005 - 02:41 PM
I think there are good reasons to argue that, even in the short term, scientists do have a clear system of preferences, if we look at the situation in the modern sciences. Scientists usually prefer to push further programmes that have proved successful previously, or to recover them in the limit without any ontological loss. For example, cosmologists do not treat very seriously the prospect of finding alternatives to standard inflationary theory, not yet, or of discarding the so-called 'false vacuum' (this form of Higgs field is a prediction of the Standard Model, but it is still possible that it does not exist in reality). Indeed, this is basically the only real contender we have here so far, in spite of the fact that scientists have a weaker degree of confidence in it than in other, much more complete and prolific, programmes in physics (GR, or the standard formalism of QM, for example).
The problem of quarks is again very interesting. In the 1960s scientists inferred the existence of a 'jungle' of (strongly interacting) 'elementary' particles, something puzzling, for they expected nature to be very simple at its most basic level. First, basically no one tried programmes challenging the existence of at least some of those alleged elementary particles, even though they had been inferred only indirectly; the experiments (and their accepted interpretations) 'confirming' them were considered crucial.
Secondly, many leading scientists, most notably Murray Gell-Mann (the 'inventor' of quarks, after all), initially considered quarks mere mathematical devices, preferring to pursue, as the first choice, programmes which did not introduce a new ontological level. This in spite of the obvious simplicity of the quark model. For example, Gell-Mann's first-choice programme at the time saved the elementary-particle status of the strongly interacting particles discovered in the 1960s at the expense of having to propose no fewer than 30 new elementary particles!
It was the surprising discovery (the interpretation of an experiment, considered crucial) in 1974 of a new particle (J/psi) which finally made the scientific community look much more carefully at the quark model (which is not to say that no scientists had worked on that programme previously). This in spite of the fact that initially no fewer than 5 different interpretations of the results of that experiment were offered which did not involve the introduction of a new ontological layer. Indeed, the J/psi particle could be interpreted most naturally as a bound state of a fourth quark, called 'charm', and its antiparticle (the 'charm' quark was first proposed by Glashow et al. when interpreting the results of a series of other collisions, but at the time that interpretation was not considered 'crucial').
But what really convinced the scientific community to accept the quark model was the discovery, in the following years, of a series of particles which fit perfectly into the quark model opened by the J/psi particle (later, further quarks were inferred). The conclusion is that, basically as soon as the quark model became more successful than its existing alternatives, the scientific community switched to it as the first-choice programme deserving to be pursued further (all the other programmes were either stagnant or very unstable, having to be changed very often and needing the introduction of auxiliary assumptions).
I dare say that even the now classic geocentrism vs heliocentrism problem can be retroactively reduced to the same pattern. Indeed, what really justified the adoption of the heliocentric model was the great success of classical mechanics (for the invention of theories there is no limit; so it is conceivable to renounce even the claim that the Sun rotates around the Earth, or the Ptolemaic assumption that celestial bodies move only in circles). Though Kepler's laws cannot strictly be deduced from classical mechanics (we need auxiliary hypotheses for a strict deduction), they are anyway recovered in the limit. Before that, a geocentric system certainly should still have been considered the first-choice programme (Galileo's observations with the telescope could still reasonably be challenged by pointing to the lack of a theory explaining the telescope's functioning).
Returning to the situation in modern science, we could rightly ask: are we now on the 'right way' toward truth? Well, the Standard Model, the best we currently have, has no fewer than 23 free parameters and some unsolved puzzles; besides, it falls well short of the degree of unification scientists dream of. Basically, there are no good reasons now to think that it is approximately true, that is, that we are on the right track toward truth (an interesting question here is whether we should consider Newtonian mechanics approximately true; following Popper, I'd say no).
Yet neither are there good reasons to think that we are not on the 'right track'. Basically there is no real contender to the Standard Model, and there is no good reason to expect a non-cumulative change of paradigm in the future (given the record of modern science). The best approach I see is to grant a fallible epistemological privilege to the Standard Model and to a unified minimal scientific method, the best 'tool' so far for making sense of the observed realities (though, clearly, far from the strict algorithmic method still dreamed of by some inductivists).
Finally, to use an allegory, have we to reject something by fear of being a 'Trojan Horse' (based on the ancient history/greek mithology when such facts were common) or should we accept it (based on modern history) whilst taking all the precautions which can keep us free of all possible dangers (in the case of science always retaining fallibilism when granting a fallible epistemological privilege to a minimal scientific method)?
If we also bear in mind that there is basically no sufficient reason to think that 'wrong ways' could remain progressive and highly coherent in the long run, we need fear only a limited detour (in any case strongly balanced by the mavericks still working on alternatives, or by scientists pursuing other programmes as secondary choices well before the main view runs into trouble; the 'detour' might be very limited indeed, possibly non-existent).
PS: I would really need the edit feature; usually I do not spend much time composing my posts, and I have the bad habit of editing them later if I notice mistakes.
Posted 24 February 2005 - 06:13 PM
What about string theory? Isn't that considered a worthy "real contender"?
Posted 25 February 2005 - 09:30 AM
The development of quarks via investigation of the four forces is indeed quite fascinating. In his study, Pickering explained just what you say; viz., that particle physics went from a picture of nature as simple, with everything ultimately consisting in an arrangement of three basic building blocks (protons, neutrons and electrons), to a "bloated" ontology wherein the number of fundamental particles spiralled upwards. Many of the experiments "confirming" their existence did so only indirectly (as you again point out) by inferring them via extant theoretical assumptions (much like Pauli positing the neutrino in order to save the classical conservation laws). Thus we find a "simple" model with some anomalies and limitations replaced by one with a more complex ontology but fewer (apparent) difficulties. We can see why Bohr was opposed to the application of Ockham's Razor, at least.
Although resources were shifted in later years, it is here we see the rise of "big science": teams of physicists working with bubble chambers, particle accelerators and complicated detection apparatus. Philosophy of science also shifted insofar as it began to look at a third area alongside theory and experiment: the instrument. What happens to the traditional requirement of scientific method that any test be repeatable when the equipment required to do so is such that it may take years to design, plan and construct a single experiment? More importantly, perhaps, there are the lists of potential factors that may influence a result, such that a decision must be taken as to whether the experiment has confirmed or disconfirmed a theory or instead been distorted by these other factors. In large-scale experiments with limited possibilities for repetition, this becomes an important strand of the investigation.
The issue of geocentrism vs. heliocentrism (and geostaticism vs. geokineticism) retains some of these features but not all. For example, it is hard to see anything analogous to the theological and political aspects; although the early quantum theorists struggled with thematic issues like continuity vs. discontinuity that we could characterise as fundamental, there does not seem to have been any similar concern at work within the community of quark theorists. I think it is a mistake to say that the success of classical mechanics proved the eventual vindication of heliocentrism, since there were again more complex matters at work (Newton's anti-trinitarianism, for example, perhaps contributing to the slowness of the Church's removal of Galileo's works from the Index).
It seems that the current (and recent) situation in physics is better disposed to explain the attraction of instrumentalism than anything else. It is difficult to fit the development and practice of science over the past hundred years, say, into any theoretical model, or to single out any potential demarcation criterion as influential. I think the decision to stick with a programme in spite of unresolved problems and anomalies is indicative of the role of tenacity rather than the lack of any contender (even where this may be an accurate description of the situation).
Although we may suppose that there is "no sufficient reason to think that 'wrong ways' could remain progressive", I think an argument to that effect would be harder to defend. The concern I raised previously was the very real possibility that a 'true way' could remain regressive for a non-specific period of time sufficient to bring about its rejection. I'm not sure what you mean about Trojan Horses.
Posted 28 February 2005 - 01:31 PM
I think we currently have the justification needed for such a stance, though of course it cannot be considered sufficient in the sense of there being no way back. No one will laugh at us in the future even if we now pursue with priority programmes that are in reality 'dead ends'; a careful study producing the 'picture' of the whole historical context, with all the arguments pro and con existing at this time, will convince any rational person that there is nothing to laugh about.
As I've already pointed out, what is really damaging is dogmatism based on current 'evidence' (there is no way back, we are sure that we are on the right way toward truth, no serious non-cumulative paradigm shift is possible). But a minimal methodology of the type I proposed is always fallible regarding the possibility of serious non-cumulative shifts appearing (involving much more than the mere disappearance of some minor attributes of some unobservables, theoretical constructs). By 'weighing' all the arguments pro and con such a stance, I'd argue that currently there are more reasons not to fear a definitive loss of a 'good way'; that is, such a scepticism is too strong now.
Of course my view is not infallible. Actually, if we look with more attention, does there exist (now at least) a clear 'winner'? I really doubt it; from what I see, some inductivists (among them many scientists, not only philosophers of science) still believe that there exists a justification (which we have yet to find) for a universal, algorithmic, inductive method taking us to the truth. Finally, I think this is a good thing for the philosophers of science; otherwise, who would bother to think about these problems?
Posted 01 March 2005 - 01:52 PM
Of course the premise of inductivists is that today's science is much more carefully conducted, much more reliable, and far less likely to give false results. I find this approach reasonable, at least partially justified. For example, it is clear, in my view, that a methodology which takes into account both quantitative and qualitative aspects is superior to one which takes into account only quantitative aspects (the claim that a modern universal inductive method can lead us to the truth is not yet justified, however).
If we accept knowledge as justified true belief then, looking retroactively, we should concede that many paradigms before 1900 did not really deserve to be labelled knowledge (objective knowledge). I find this approach unsatisfactory, for at the time there was justification in their favour: they were deservedly the first-choice programmes of their day. In my view they fully deserved to be considered knowledge at the time (though fully accepted as fallible), even if we judge them in the light of modern science.
Thus my proposal is to define knowledge as merely justified belief, so as to take into account both subjective and historical contexts. If we take into account only the so-called 'objective' arguments (implying inter-subjectivity) and purely logical arguments, leaving aside the purely subjective ones, we can define 'objective' knowledge as justified belief for which we have the most 'pro' and the fewest 'con' arguments, and which is also stable over a long enough term.
Sure, if we look retroactively there were plenty of problems with those hypotheses (phlogiston etc.), but at the time they were the most empirically adequate and there were no crucial 'con' arguments (or, better said, as today, a few anomalies do not really put pressure on a research programme). Geocentrism, phlogiston and so on fully deserved a fallible epistemological privilege at the time of their 'glory' (that is, the status of provisional scientific knowledge), even if we look at them through the prism of current science.
By the way, this is a weak point even for the non-prescriptive list approach. What is knowledge (scientific knowledge) if the programmes on the list are supposed, basically, to be on an equal footing?
Posted 01 March 2005 - 03:24 PM
On the other hand, the notion that we should not call knowledge what we retrospectively see as errors is too heavy a burden, since that sort of knowledge seems to require certainty, which is impossible. Should many of our current theories turn out to be false, should we declare them devoid of knowledge? But how could we get knowledge without the proliferation of theories? That is, a sphere of dialogue or conflict seems essential to getting at whatever knowledge we're looking for.
This is also in agreement with my post on Feyerabend on values: if "knowledge" is defined in such a sense that it accords with the various values of science, we can have "knowledge" even if it turns out to be false.
Take, for example, the "bloated ontology" that Hugo referred to. What this can be a case of is the recognition that what we previously took to be comprehensive was not; this overarching value, this striving for comprehensiveness, allows one to correct previous accounts. It may conflict with another value such as simplicity, but this is good, in that it prevents dogmatic adherence to particular systems because of their perceived "simplicity." In essence, these values will oscillate in significance and, hopefully, prevent the ruling out of certain theories merely because they do not strictly adhere to a particular one.
Nevertheless, it seems the reasoning behind errors is more important than the errors themselves, so it is in and of itself valuable: for example, what Aristotle thought of weight, or the arguments against heliocentrism. That these episodes complicate empiricism is, to my mind, a valuable lesson learned.
Posted 01 March 2005 - 03:37 PM
Although this issue of what constitutes (scientific) knowledge is an interesting one, which we could pursue here, I think this objection is little more than the worst kind of anachronism. A scientist (or anyone) can only work with the conceptual tools at his or her disposal (or else develop new ones) and justify theory choice by using them. The notion that the "knowledge" arrived at by early scientists was hasty only demonstrates an ignorance of the complex ways they arrived at it.