
Community Reputation

173 Excellent

About Peter

  • Birthday 05/17/1959


Profile Information

  • Gender: Male
  • Location: Up from Pluck the Crow Point

Recent Profile Visitors

36,673 profile views

Peter's Activity

  1. Peter added a post in a topic History   

    One reason someone might want to learn history would be a dissatisfaction with the present state of human affairs. If the past is different from the present, then the future can be too. Similarly, if you're dissatisfied with nature, you might want to study science. While you may not be able to defy nature, science can at least tell you what limits are really set by nature and what are only contingent on your present circumstances - nuclear reactors don't defy the laws of nature, but they were still inconceivable 100 years ago. It could be said that dissatisfaction is the engine of history and of science, since someone completely satisfied with their lot would feel little inclination to change things or to torment themselves with endless wondering about how or why things came to be so or what might be instead.

    One question that arises from the above (assuming we still have an appetite for tormenting ourselves with such questions) is whether there can be a state of human affairs where no dissatisfactions of this kind arise: a condition in which science has reached perfection and history comes to an end. If we feel uneasy with that idea (perhaps only because what is equilibrium to one person is not necessarily equilibrium to another), then perhaps we should ask: what value does the study of history bring beyond that of entertaining us with stories?
    • 0
  2. Peter added a post in a topic History   

    "Upgrades your Bullshit Detector to version 2.0 when people start talking about controversial politics" 'kin A!
    • 0
  3. Peter added a post in a topic Philosophy: its influence or influences   

    If that's the case, one might wonder how philosophy ever got a reputation for being anything more than a crock of shit. Maybe it's good shit that smells even shittier than all the other bad shit. Or maybe philosophers are the people who, when they talk shit, REALLY talk shit.

    And then there's trolling. Surely we have to recognise that Socrates was one of the greatest forum trollers of all time. 2500 years later, the threads are still running and Socrates himself is long, long gone.

    How the gods must be laughing at us.
    • 1
  4. Peter added a post in a topic A Shameless Worship of Heroes   

    Surely this dilemma (if that's what it is) only persists if you think that greatness is objective. I prefer a subjective view of greatness: if experiencing the work keeps alive in me a belief that I may one day still achieve something great, then the work is great to me. The personal characteristics of the artist/author only matter to the extent that they affect that. And of course any possibility of greatness from me is just as subjective. If I thought nazism was great but found that reading Heidegger did not inspire belief that I could go on to greater heights of nazism, then of course I would have no further use for his books even if all I hear from others is that he was the greatest c20 philosopher.
    • 0
  5. Peter added a post in a topic A decade of TGL   

    I've only been here for about half the life of TGL and have never been a prolific contributor, but I'd like to record my thanks to the many people here who've helped me learn something - whether they knew it or not. Particular mention to Paul, of course, for staying with his ten-year mission, to Scotty for keeping the dilithium furnaces stoked, and other members of the crew including Michael, davidm and that mysterious heretic character. Here's to many more years of high quality (even if only low volume) discussion.
    • 4
  6. Peter added a post in a topic The Concept of Decline of the West   

    Re your comparison of history with "the sciences" - isn't science itself a set of historical accounts of observations made and rationalizations therefrom (theories)? Of course, the theories work. Or if they don't, we replace them with new ones. But when we say the theories work, we mean we have technologies that work and whose working is explained by scientific theory. The theory gives us confidence to try out untested variations on technology; confidence coming from the fact that the theory says the variations will work. If they do work, we tell ourselves that the theory is correct and if they don't, we look around for a new theory. Either way, it's technology that works (or not). Scientific theory is the "myth" that drives us to try out one type of imaginable innovation rather than another. In that sense, scientific theories are what drives history.
    • 3
  7. Peter added a blog entry in Nature is not a Book   

    Peer Review “Randomness” – A Case for Deliberation
    I’ve been reading about the NIPS Experiment. Calm down at the back there. NIPS stands for Neural Information Processing Systems. It’s all very serious and you can read about the experiment here and here.
    In essence, the experiment aimed to examine the process by which papers are accepted or rejected by peer review committees for conference presentation. Obviously, it’s all to do with scientific quality and the scientific community is built around a common understanding of what that means. Or is it?
    The NIPS experimenters split their panel of conference peer reviewers into two committees. Most of the papers went to one committee or the other for review, but 10% of them (166 papers) were reviewed by both committees without the members knowing which papers they were. It was then possible to see how similar the two committees were in their evaluation of those papers. A full write-up of the results is still to come, apparently, but Eric Price has revealed the essence.
    The committees disagreed in their evaluation of 43 of the 166 papers. Naïvely, you might think that’s not too bad. They disagreed on 25.9% of cases, so they must have agreed on 74.1%. However, Eric Price points out that the committees were tasked with a 22.5% acceptance rate, which means that the number of disagreements (43) was larger than the number of acceptances each committee was expected to make (about 37). Since every disagreement is a paper accepted by one committee and rejected by the other, roughly half of the 43 were accepted by any given committee, and that is more than half of that committee’s 37 or so acceptances. In other words, most (more than half) of the papers accepted by either committee were rejected by the other.
    Price considers a theoretical model which treats the peer review process as a combination of “certain” and “random” components. He assumes that there will be some papers that every reviewer agrees should be accepted (acceptance is certain) and some that everyone agrees should be rejected (rejection is certain). For the rest, Price’s model assumes that committee members make their decision by (metaphorically at least) flipping a coin. This is the random component, and the level of randomness in peer review is the proportion of papers that get this treatment. The divergence in reviewing committees’ decisions seen in the NIPS experiment implies that there is quite a lot of this coin-flipping randomness in peer review; perhaps more than most people thought. (A rough sketch of this arithmetic, and of Price’s model, appears at the end of this entry.)
    Is this “randomness” in reviewers’ assessments a cause for concern? Price points out that “consistency is not the only goal” and, indeed, it can arise for reasons that are not necessarily welcome. For instance, unanimously accepted papers may simply be feeling the benefit of appearing under the name of well-connected authors that reviewers favour for reputational reasons. Conversely, papers that reviewers unanimously reject may just be suffering the penalty of pursuing unfashionable research topics that reviewers see as a drain on funding for more popular topics. It may well be that it is precisely in the “random middle” – between the certain acceptances and certain rejections – that we see peer review at its best.
    But how can it be any good if it’s random? The truth is, it’s pretty implausible that it really [i]is[/i] random. I don’t see much reason to believe that peer reviewers actually flip coins and, as humans are not good random number generators, it seems unlikely that conceptual flipping of imaginary coins would produce genuinely random results. What really goes on in this middle zone is not random at all. Rather, it’s a process of deliberation where each reviewer considers a variety of factors and makes a decision on the basis of balancing those factors. Even having made the decision, the reviewer probably still feels a fair degree of uncertainty as to whether it was the right one.
    Because reviewers are usually allowed to decide for themselves which factors to consider in their deliberations, there is a good deal of variation between reviewers as to which factors they consider. Putting it more formally, the [i]weight[/i] they give to each factor is not prescribed. What’s more, there’s no guarantee that even individual reviewers will attach the same weight every time: the same reviewer could reach different conclusions about the same paper considered under different circumstances.
    In short, the degree of “randomness” seen in the NIPS experiment undermines one of the cornerstone assumptions of the peer review process – that reviewers share a coherent common notion of what qualities to value in a paper. Instead, it suggests that the criteria that reviewers use in practice are quite divergent. If this is the case, it is hard to see how peer review could possibly be “fair”. Certainly, steps such as making reviewers’ comments and identities open to authors would seem to miss the point. What is more in order is a dialogue over the criteria used to evaluate research in the first place and whether traditional peer review has any useful role to play in this.
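    Since the argument above turns on a bit of arithmetic, here is a minimal Python sketch (mine, not something from the NIPS write-up) that re-derives the numbers quoted in this entry and simulates a Price-style “certain plus coin-flip” model of peer review. The 166 dual-reviewed papers, 43 disagreements and 22.5% target acceptance rate come from the entry itself; the certain-accept/coin-flip split used in the simulation is an illustrative assumption, chosen only because it happens to reproduce roughly 26% disagreement, not a figure reported by the experimenters.
[code]
import random

DUAL_REVIEWED = 166    # papers sent to both committees
DISAGREEMENTS = 43     # papers on which the two committees' decisions differed
ACCEPT_RATE = 0.225    # acceptance rate each committee was asked to hit

accepts_per_committee = ACCEPT_RATE * DUAL_REVIEWED  # roughly 37 papers
print(f"disagreement rate: {DISAGREEMENTS / DUAL_REVIEWED:.1%}")       # ~25.9%
print(f"expected accepts per committee: {accepts_per_committee:.1f}")  # ~37.4

# Every disagreement is an accept-by-one / reject-by-the-other split, so by
# symmetry roughly half of the 43 (about 21 or 22) were accepted by any given
# committee, which is more than half of that committee's ~37 accepts.
print(f"accepted by one committee, rejected by the other: ~{DISAGREEMENTS / 2:.0f}")

# Price-style model: some papers are certain accepts, some are certain rejects,
# and the rest get a (metaphorical) coin flip from each committee independently.
# The split below (5% certain accepts, 67% coin-flip papers) is an assumption
# for illustration; it happens to yield a disagreement rate of about 26%.
def simulate(n=DUAL_REVIEWED, certain_accept=0.05, coin_flip=0.67, trials=5_000):
    """Average fraction of papers on which two independent committees disagree."""
    # Acceptance probability for a coin-flip paper, chosen so that the overall
    # acceptance rate still works out to ACCEPT_RATE.
    p_accept = (ACCEPT_RATE - certain_accept) / coin_flip
    disagree = 0
    for _ in range(trials):
        for _ in range(n):
            r = random.random()
            if r < certain_accept or r >= certain_accept + coin_flip:
                continue  # certain accept or certain reject: the committees agree
            a = random.random() < p_accept  # committee A's "coin flip"
            b = random.random() < p_accept  # committee B's "coin flip"
            disagree += (a != b)
    return disagree / (trials * n)

print(f"simulated disagreement rate: {simulate():.1%}")  # in the region of 26%
[/code]
    Other splits reproduce the same figure, but under this model no split with fewer than about half of the papers in the coin-flip zone can produce a 26% disagreement rate, which is the sense in which the experiment suggests “quite a lot” of coin-flipping.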

    • 0
  8. Peter added a post in a topic What books are you reading now?   

    Interesting list. Not that I've read any of them! I'd be particularly interested to hear what you think of Rheinberger's Epistemic Things, either now or in due course.
    • 1
  9. Peter added a blog entry in Nature is not a Book   

    Another Kind of Scientific Literature

    This is a lawsuit filed by Wayne State University cancer researcher Fazlul Sarkar making claims of defamation against the authors of anonymous posts published on the PubPeer online journal club website. It’s quite a read. Ivan Oransky has published a commentary here. Meanwhile, PubPeer has been sent a notice of subpoena to produce evidence that would allow the authors of the PubPeer comments to be identified.
    The lawsuit claims that the comments, which largely concern the origins of gel images appearing in papers from Sarkar’s lab, effectively accuse Sarkar of research misconduct. Sarkar claims damages related to losses he suffered resulting from the decision by the University of Mississippi to rescind a very lucrative job offer it had made to him.
    As of this writing, Sarkar has never been found responsible for research misconduct. However, it’s hard to understand Mississippi’s decision unless the people there thought there was substance to the damaging implications of the PubPeer comments. Clearly, those comments have to be taken seriously as part of the literature of science whose effect (and function) is to blunt the confidence readers have in certain peer-reviewed papers.
    Equally, as one intended function of Sarkar’s lawsuit must be to resharpen that confidence and thereby influence what people actually believe about science, it must also be regarded as another kind of scientific literature.

    • 0
  10. Peter added a blog entry in Nature is not a Book   

    Not Open Naming

    I just saw this tweet from Brian Glanz.
    ‘We need to defend “#openscience” from misappropriation’ implies that while the term “openscience” may stand for openness, use of the term itself is not open. If it can be misappropriated, then it has been or can be ‘properly’ appropriated elsewhere. In effect, Glanz implies that the term “openscience” is itself proprietary.
    Now I don’t suppose he wants anyone to think that it ‘belongs’ to some person or organization, but rather that when we see it used, we should reasonably be able to expect it to stand for certain things – a particular idea of open science. Of course, that idea has to come from some person or persons in particular and to have currency, it has to be an idea that is accepted within a particular community. Once they become accustomed to using it in a particular way, they may feel aggrieved when they find the term being used by others in contrary ways. Particularly so when that use appears to be an attempt by those others to gain credibility for themselves through using the term in a way that associates them with the currency afforded to the term by the community that established its use first.
    It may well be that this is what was intended by the people behind the website that Glanz cites. Frustratingly, the site seems to be offline as I type this, but earlier viewings revealed it as a showcase for various ‘alternative’ science viewpoints. The only one with which I can claim any familiarity is that of Rupert Sheldrake. Sheldrake is a reminder that even among those with the trappings of “proper” scientists (Sheldrake has a PhD in biochemistry from Cambridge University and has published many research papers in plant biochemistry) there lurks a certain dissatisfaction with scientific materialism. While I wouldn’t put money on Sheldrake and his ilk knocking the materialist axioms of science off their cultural pedestal, I equally doubt that such heretical attitudes are going to disappear. Science as we know it (open or otherwise) is a product of our culture in our historical era. It reflects our preferences and prejudices at least as much as it reflects nature. Some day, humanity will abandon science, either because interest in material reality wanes to the point of insignificance or because new and presently unsuspected ways of relating to it are discovered.
    Glanz’s own Open Science Federation site characterizes open science as “proper science” that is “by anybody and for everybody”[*]. Evidently, the anybody has to subscribe to somebody‘s idea of what is proper. The question is: who is that somebody? If it’s the same as the everybody, then there may (probably will) be disparate ideas of what is ‘proper’ science. Who adjudicates in any disagreement over that and from where do they get their authority?
    By invoking the need to “defend the good name of science from pseudoscience”, Glanz has implied that the Open Science Federation represents just such an adjudicator. But why? If everything is to be open, everybody will have the information they need to decide for themselves what they should believe. Every hypothesis is grist to the mill. Only by investing time and effort investigating it can you know it’s not right. To be sure, there’ll be cranks who keep coming back with the same old discredited or unsubstantiated stuff, but even then, being reminded of some “crazy” idea in a new context may be the spark that sets someone’s imagination off in a fruitful direction.
    Shoring up the boundaries between “scientific” knowledge or discussion and knowledge or discussion generally is not a fruitful way forward for open science.

    • 0
  11. Peter added a blog entry in Nature is not a Book   

    Lunar Mission Two: The Search for More Money
    An ambitious project to launch a crowd-funded lunar mission was announced today. A British company, Lunar Missions Ltd., intends to send a probe to the south pole of the moon in 2024. Its mission will include drilling a borehole at least 20 metres into the lunar surface. It is hoped that it will collect lunar rock samples that have lain undisturbed by solar radiation or meteorite impact since the moon formed some 4.5 billion years ago. This may help us understand how the moon and earth were formed and shed light on the practicality of a permanent manned lunar base.
    Perhaps more remarkable than this scientific mission is the funding for the project, which is expected to come from voluntary public subscriptions. Lunar Missions’ initial funding round is being run as a Kickstarter crowdfunding campaign that the company hopes will yield $950,000 (£600,000) in a month. At that point “we will know if the project can move forward”, says Lunar Missions’ press release. The initial funding will allow the company to establish a management team to take the project to the next stage, which will involve further rounds of crowdfunding. To attract pledges, the company offers each subscriber their own “digital memory box” in a time capsule to be buried in the moon as part of the lunar mission. Lunar Missions hopes that 1% of the global population who can afford to will eventually support the project, yielding revenues of £3 billion ($4.6 billion).
    The Lunar Mission One lander will have to be designed during the project, but it is suggested that the launch vehicle could be a SpaceX Falcon 9. Given that subscribers will be able to send their DNA to the moon as strands of hair, the payload is likely to include two or three kilograms of human hair.
    While the lunar mission itself is clearly still a tad speculative, Lunar Missions also intend to use pledged funding to develop an educational project. Billed as “one of the most exciting and ambitious academic undertakings in history”, this will be a digital record of life on earth as submitted by the public. Presumably that will come cheap.
    Most important of all, Lunar Missions have the media angle covered with Brian Cox and Angela Lamont on board and a glitzy CGI video of what the spacecraft might look like once they’ve got round to designing it.
    Lunar Mission One is a fascinating and very ambitious idea. It will be interesting to see how far they get. If an entire space mission really can be financed without government or corporate backing, it raises the question of why any other area of scientific research would consider such support necessary.

    • 0
  12. Peter added a blog entry in Nature is not a Book   

    Resolving the Tensions in ‘Open Science’
    When some subject attracts controversy, there is more to it than mere disagreement. Disagreement need not lead to controversy if the disagreeing parties understand and have learned to live with each other’s point of view. Controversy arises when there is some unresolved tension to be worked out.
    The subject of ‘open science’ still attracts controversy because there is no settled coexistence of ‘open’ and ‘closed’ models of science. There is disagreement over just what the “open” in open science should be taken to mean and over what type or degree of openness is best for science. Those who are enthusiastic about greater openness tend to focus on themes of transparency, accountability, fairness in getting research published and, of course, “free” access to data. Those who still feel skeptical about open science tend to focus on the need to maintain standards of quality and reliability. Because the open science debate largely remains one that is conducted by science professionals for science professionals, tension arises over the extent to which the opening up of science should be allowed to disrupt the established norms of professionalised scientific practice.
    One area where the effects of this tension can be seen is in attitudes to the opening of peer review of research reports. A recent high-profile retraction of scientific papers, which apparently drove one of the researchers involved to suicide, led to calls to open up the processes of peer review[*], but the editor of the journal concerned said that, while this had been considered, “the disadvantages — which include potential misinterpretations and the desire of many referees to keep their comments confidential — have prevented the journal from embracing this”[*]. Clearly, there are conflicting motivations here. Regardless of the effects on overall research quality, a major barrier to opening up peer review is the perceived desire of referees to preserve the established norm of anonymity.
    In practice, peer review is a process of negotiation between the authors of a proposed research report, the editors of the journal to which it has been submitted, and reviewers selected on the basis that they are well-informed representatives of the eventual audience for the report. Authors want to get their report published in a journal with a ‘brand’ reputation that attracts the right sort of reader (people who’ll cite the paper, basically). Editors want papers that will reinforce the journal’s reputation for bringing out quality publications of interest to its readership.
    Peer review is widely identified as a cornerstone of quality assurance in institutional science, yet most people readily admit that it has very obvious faults. Review is entrusted to a small number of individuals whose competence and trustworthiness are judged only subjectively by the editors. While reviewers are supposedly chosen on the basis that they possess a strong understanding of what quality means in relation to the relevant field of research and have a commitment to seeing it maintained, they may have other motives as well, such as getting to see new research results before everyone else or even seeking to influence what results others get to see. Another effect of institutional peer review is that acceptance of a paper for publication itself signals to readers that the work described is worthy of their attention and that the conclusions drawn by the authors are respectable. Individual readers are free to take contrary views, of course, but by doing so, they risk marking themselves as outsiders or even cranks if it’s not evident that many others feel the same way. Even when a post-publication debate takes place on the significance of a paper, there is not usually any mechanism for making the content of the debate a necessary part of reading the paper itself. The interpretations negotiated during the peer review process and set out in the published paper remain the ‘official’ position unless it turns out that the paper contains errors or misdemeanours serious enough to warrant retraction.
    No doubt, there are circumstances where complete retraction is appropriate, but in many cases a discussion of what seems wrong and what remains good about the research report might be quite possible. There are plenty of reasons to believe that far more papers are in need of this kind of evaluation than are ever retracted[*]. There is at least one online forum (PubPeer) that tries to provide this kind of facility. But it is notable that the people who make PubPeer say they have collectively decided to remain anonymous in order to avoid “circumstances in which involvement with the site might produce negative effects on their scientific careers”[*]. Clearly, there is real tension over the idea of open peer review where just anyone can criticise a research report and be identified for doing so.
    Perhaps this tension will only resolve itself when an ‘open’ model of science abandons the idea of authoritative research statements as represented by the ‘scientific paper’ altogether and instead sees results only as stimulus to imagination that engenders debate and motivates further research action.

    • 0
  13. Peter added a post in a topic Bullying of sexual/gender minorities   

    Mmm... one of the best things you can pick up on a long train journey. With mayonnaise and a pickled gherkin, obviously. And only one at a time.
    Oh.. hang on... that's a BLT...
    • 1
  14. Peter added a post in a topic A question for PP   


    Is it particularly important to you to be able to use the word "willing" in a sense that refers specifically to will that is not free?

    If you equate this unfree willing to being "open to doing something" (which in my view does not in itself rule out freedom in the becoming open to doing), why not just call it that and avoid the confusion that some of the rest of us obviously experience when you speak of people being willing to do things while saying they have no free will?
    • 0
  15. Peter added a post in a topic A question for PP   

    I suppose your perception of magic results from the supposition of a "desired effect", but while the person-agent making the signal may indeed signal "desire", this need be of no importance in our assessment of the effect itself. This is the point at which Darwinian principles can be referred to. If the signal is of a type that is transmitted with some regularity and is regularly followed by certain processes ("effects") that do not occur so regularly in the absence of a signal of that kind, and the effects themselves are then regularly followed by circumstances in which more signals of that kind are made (and are much less regular when the "effect" is absent), then we can say that the signal contributes to its own "survival". The signal tends to beget circumstances which themselves tend to beget more occurrences of that type of signal. Saying that the effects are "desired" is a kind of shorthand for that.

    Of course, you or I may subjectively feel something that we call "desire". This "what it is like to feel desire", as distinct from a mere predisposition to the making of more signals, is the mystery for me.
    • 0