
When we turn to scientific experts, what do we expect from them?

For some, the preference will be for the Absolute Truth of universal natural laws that unavoidably govern everything we do. Such a preference naturally entails commitment to a material reality that is independent of human affairs. It also entails commitment to that reality being amenable to empirical investigation and, moreover, to such investigation being the good and proper aim of science. Such commitments engender certain expectations of science and hence of scientific experts. These expectations I will here refer to as realist expectations of science.

What other types of expectation are possible? People of a pragmatic frame of mind may feel that the realist view is too stringent to be useful. They may point to the perpetual imperfection of scientific knowledge; the fact that even the most firmly established of scientific principles may need revision in the light of future discoveries. They may say that we can never be sure that any of the “natural laws” proposed by science really are natural laws but that they can nevertheless often provide useful solutions to technical problems because they allow us to make predictions with a fair degree of confidence. Being pragmatic, they may say that this is the real value of science and should be the proper basis of the expectations we place upon scientific expertise. Under this view, the job of science is to provide instruments with which we may make good bets, based on experience, on the likely outcome of any chosen course of action. Accordingly, the expectations thereby placed on scientific expertise may be characterised as “instrumentalist”. Note that the holder of such views need not necessarily deny the existence of natural laws or science’s ability to discover them. It is the expectation placed upon science (and hence the basis for deciding how much to spend on it) that is being deemed instrumentalist here.

Another type of critic may yet remain dissatisfied. They may be concerned that even if a given theory does give us confidence in certain types of prediction, it may not be the best theory. Its predictions may be less accurate than those of another or it may be able to make predictions only in a narrower range of circumstances than another.

What might experts themselves make of all this? One example is to be seen in this article by Daniel Sarewitz. Sarewitz is a scientific expert. He is invited to sit on consensus committees that are asked to “condense the knowledge of many experts into a single point of view that can settle disputes and aid policy-making”.

Clearly, the sponsors of such committees place a premium on consensus and therefore favour the realist or at least instrumentalist view of science. Sarewitz, however, distrusts consensus, saying:

“the discussions that craft expert consensus, however, have more in common with politics than science”

Indeed, he apparently sees the search for consensus as unscientific:

“the very idea that science best expresses its authority through consensus statements is at odds with a vibrant scientific enterprise. Consensus is for textbooks; real science depends for its progress on continual challenges to the current state of always-imperfect knowledge”

Here we see what is apparently an advocacy of the idealist view of science. Most striking is the contrast drawn between the certainty (implied consensus) of the textbooks on the one hand and “real science” on the other. Real science is taken to be inherently in the domain of controversy. Sarewitz concludes by proposing a role for science in policy making that is quite distinct from the consensus-seeking norm of current practice:

“science would provide better value to politics if it articulated the broadest set of plausible interpretations, options and perspectives, imagined by the best experts, rather than forcing convergence to an allegedly unified voice”

“Unlike a pallid consensus, a vigorous disagreement between experts would provide decision-makers with well-reasoned alternatives that inform and enrich discussions as a controversy evolves, keeping ideas in play and options open”

Should the job of experts be to ensure the broadest range of considerations in public deliberation? Science is then not the impartial arbitrator, but simply one way of safeguarding the impartiality of those whose deliberations do make the (political) arbitration.

Source

It is no doubt a good thing that we hear of scientific fraud, academic plagiarism and medical malpractice in the name of research, but we would do well to remember that these are just sophisticated names for age-old theft and assault. Can we really hope for better science if we can't hope for better people?

View the full article

It is a mistake to think that artworks that borrow scientific terms or imagery are inspired by science. Rather, it is the non-scientific or even anti-scientific mythologies that grow up around science that provide the substrate on which new artworks grow.

Source

Following on from my earlier post, I want to look at the concept of the expert in some more detail. The ultimate question to ask about experts is about the benefits the rest of us receive by referring to them, but to put that in context, and because Michael Pearl actually asked the question, I shall first say a few things about what experts are.

Wikipedia is probably as good an indicator as any of what is generally understood by ‘expert’. There, we read:

“An expert is someone widely recognized as a reliable source of technique or skill whose faculty for judging or deciding rightly, justly, or wisely is accorded authority and status by their peers or the public in a specific well-distinguished domain.”

The essential features of an expert are therefore, first, the possession of a facility for sound judgement in some specified field of concern and, second, a recognition of this by others. The expert is a distinguished person and it might be pertinent to ask how someone acquires, achieves, or is imbued with such a distinction.

The English word ‘expert’ appears to derive from the same Latin root as ‘experience’ [»]. In that, there is an implication that whatever expert authority is accorded to any individual is dependent upon that individual’s experience. Certainly, none of us is born in evident possession of any special knowledge or skill or any powers of judgement. These things appear with growing experience, and any distinguished powers that appear seem to be largely related to the types of experience lived by the individual concerned; in particular, to the extent to which that individual’s experience includes practice of the skill concerned. It is true that there are innate differences between individuals in their mental and physical capabilities and it follows that there will be innate differences in their ability to have certain types of experience and in the extent to which any given level of experience leaves them permanently imbued with soundness of judgement or any particular skill. A concert pianist, for example, might be said to demonstrate levels of practical expertise that are forever unattainable by most of us. However, in these discussions, I wish to focus on scientific experts and it is not at all clear that scientific ability is innate. Certainly, if there are among us a special class of human beings born with an innate predisposition to scientific distinction, our systems for training scientists would not seem to be very efficient in recognising it. Thus, while innate differences may play some part in determining who develops powers of judgement worthy of an expert, experience (including education and training) is probably the really decisive factor. In summary, it may be fair to say that experts (scientific experts at any rate) are made, not born.

An implication of this is that while the value we place on another’s expertise is based on their experience as it differs from our own, it is not experience that need forever remain outside our own. Given the time and opportunity, we could acquire equivalent experience ourselves and therefore presumably also the powers of judgement that come with it. From this, it follows that the expert’s soundness of judgement is not something that we absolutely cannot acquire for ourselves. Rather, it is something for which we refer to another because we would rather use our own time for other things. In effect, the use of expert opinion is a way of saving time and effort. In this context, it becomes clear that in any relationship in which one party seeks the opinion of another as an expert, it is the party seeking the opinion, not the expert, who is the instigator. The expert is the subordinate and should provide opinions that serve the other party’s interests as defined by that other party. It is not the place of the expert to presume or define what that other party’s interests are.

To be continued ...

Source

On finally facing the Enquiry Centre, I saw that it consisted of three parts, representing successive stages in its evolution. The oldest section exhibited the brutalist style typical of many British university buildings of the 1960s and early ’70s. McKean and Walker describe it as “a muscular medieval fortress”, and its concrete buttresses and small windows…


View the full article

Timb Hoswell's "The Blake Feyerabend Hypothesis" is an intriguing work that makes a case for taking William Blake seriously as a seminal figure in the philosophy of knowledge and suggests an interesting synthesis with the philosophy of Paul Feyerabend.

Source

I’m grateful to the fine folks at the Bubble Chamber for the pointer to this video of Susan Haack’s seminar “Six Signs of Scientism” at the University of Western Ontario:

The audio quality is not good, but her arguments appear to be largely the same as those she makes in this paper (thanks to “Adult Child” for that pointer).

Professor Haack has plenty of interesting and thought-provoking things to say. These include a nice discussion of demarcation and a quick history of the word “scientism”, which was not originally pejorative. Like most people today, though, Haack does use the word pejoratively and her definition of scientism is a good one:

“a kind of over-enthusiastic and uncritically deferential attitude towards science”

However, it’s a little different from the one I tend to use:

“the idea that scientific progress requires the existence of current scientific institutions”

Then there’s the definition offered up at Wikipedia:

“the idea that natural science is the most authoritative worldview or aspect of human education, and that it is superior to all other interpretations of life”

Maybe the differences are ultimately more linguistic than semantic, but if one feels the need to be pejorative, it’s probably a good idea to know what one is trying to be pejorative about. Would it be the presumed superiority of enquiry over other occupations; the presumed superiority of one method of enquiry over others; the presumption that enquiry is only valid if conducted by individuals with particular qualifications; or the presumption that those qualified to conduct valid enquiry should be treated as inherently more valuable than those who are not?

What do you think?

Source

This started as a comment on something davidm wrote in one of his comments on Hugo Holbling's blog post Doubt and Disunity, but it rambled too much so I just put it here.

Your comments are welcome!

I don’t know why we should think of the consensus in question as a political rather than a scientific consensus. I’m not sure how meaningful the distinction is anyway, given that we know that politics (and other cultural factors) play a role in science. But, to take a simplified example, if ten scientists get together and investigate whether the solar system is geocentric or heliocentric, and then at the end of their inquiry report that they have achieved a consensus that the solar system is heliocentric, how is that not a scientific consensus? Note also that these scientists would not be claiming that the solar system is heliocentric because they say so; they would be claiming that they have achieved a consensus on heliocentrism because that’s just what the best available evidence shows is most likely to be true.

If you took a group of scientists today and asked them to "investigate whether the solar system is geocentric or heliocentric", they might well tell you that the heliocentric/geocentric debate is outmoded. Modern cosmology places neither the sun nor the earth at the centre of the universe; moreover, in a proper analysis of orbital motion, each body orbits the common centre of mass rather than one orbiting the other.
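To make the centre-of-mass point concrete, here is a back-of-the-envelope check (a minimal sketch in Python; the masses and distances are approximate textbook values I am assuming for illustration):

```python
# Where does the Sun-Jupiter centre of mass lie? Approximate values; the
# point is the principle, not the precision.
M_SUN = 1.989e30      # kg
M_JUPITER = 1.898e27  # kg
A_JUPITER = 7.785e11  # m, mean Sun-Jupiter separation
R_SUN = 6.957e8       # m, solar radius

# Distance of the barycentre from the Sun's centre: r = d * m2 / (m1 + m2)
r_bary = A_JUPITER * M_JUPITER / (M_SUN + M_JUPITER)
print(f"Barycentre: {r_bary:.2e} m from the Sun's centre")
print(f"That is {r_bary / R_SUN:.2f} solar radii, i.e. just outside the Sun")
```

On these numbers, the sun and Jupiter both orbit a point lying just outside the sun's surface, which is the sense in which not even the sun is strictly 'central'.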

Nevertheless, the stuff of astronomical science is still essentially what it always has been: observations about the position, brightness and shape of points or bodies of light in the sky at certain times when viewed from certain places. Nobody observes or measures "heliocentrism". Galileo's observations of the heavens still broadly stand today, but heliocentrism is no longer a useful or interesting doctrine.

The coining of a word like heliocentrism (an "-ism") is a call to closure; an implication that we now know all we need to know and that no further investigation is required. It was part of the drive to resolve the question: is the Church of Rome the final authority on the physical constitution of the world or isn't it?

Now, what does any of that have to do with "global warming" or the subject of disunity and doubt in science?

Just as 16/17th century astronomers didn't measure or detect "heliocentrism", so today's climate scientists don't measure or detect "global warming". They measure the temperature, or the percentage of carbon dioxide in the air, or the amount of ice on earth's surface and other specific parameters at certain places at certain times. They try to identify trends or patterns in the data. They attempt to formulate models that describe those trends. The models, in turn, allow predictions that can drive further research. But those predictions can also be used to rationalize certain courses of action outside of scientific investigation. The political debate revolves around whose proposed actions (or abstention therefrom) should prevail and become policy.

Someone may discern an upward trend in temperatures over a certain period of time and decide that "global warming" is a good name for it. As a shorthand way of referring to such a trend, global warming is still a scientifically useful concept because it indicates further paths of investigation. However, just because it is a shorthand for the trend in historical data, it doesn't tell us anything about the future. It may precipitate or provide rationale for certain types of belief about the future, but it does not itself tell us anything.
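For what it is worth, "discerning an upward trend" is usually nothing more exotic than fitting a line to historical measurements. A minimal sketch with synthetic data (the numbers are invented for illustration and are not a real climate record):

```python
import numpy as np

# Fit a linear trend to an invented series of annual temperatures.
rng = np.random.default_rng(0)
years = np.arange(1960, 2011)
temps = 14.0 + 0.015 * (years - 1960) + rng.normal(0.0, 0.1, years.size)

slope, intercept = np.polyfit(years, temps, 1)
print(f"Fitted trend: {10 * slope:+.3f} degrees C per decade")
```

The fitted slope summarises the historical data; any statement about coming decades is a separate act of extrapolation resting on further assumptions.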

Nevertheless, if we believe, for this or any other reason, that global climate is likely to change in economically damaging ways in the coming decades and we want to do something about it, we need a rationale for the actions we propose. "Global warming" is only a strong rationale if it is a "fact". And those who are most motivated to instigate specific courses of action rationalized by global warming are most motivated to state that global warming is a fact. Of course, declaring that global warming is a fact in order to rationalize a certain type of action is to call for closure to some extent - "We already know enough, let's get on and do something about it!" Invoking the unity or consensus among scientists helps rationalize that call. Of course, it is one thing for scientists to be in a state of consensus about (1) the basic observations on which the conclusion of global warming is based; another for that consensus to extend to (2) the trends that may be discerned in those data; yet another for scientists to agree about (3) how much reliance we may place in using those trends to make projections into the future; and then quite another again for them to be (4) in a state of consensus about what the broader economic consequences may be if such projections turn out to be correct.

It matters a great deal if scientists are not in broad agreement about (1). If the veracity of the data is in doubt, then hypotheses and conjectures cannot be supported (or refuted) by them. On the other hand, consensus among climate scientists with regard to (4) hardly matters since they are no more qualified in that regard than anyone else.

Scientists have an all-too-human tendency to not only report to the rest of us on what observations they have made of the world, but also to play at being 'masters of reality' with a monopoly on the interpretation of those observations. If we do not pay proper attention to the very different role of speculation in those two activities then there will be confusion about the importance and meaning of consensus among scientists.

The Brain and Behavior Blog Contest at science3point0.com drew my attention to this post on NeuroDojo which compares the way original research papers, press releases and textbooks treat the same research topic.

It seems there’s an assumption running through this: that the research paper is primary and that the press release and textbooks are there to support it.

However, one can just as easily say that each type of document comes from its own genre and serves its own distinct purposes: the paper serves to justify receipt of the grant that funded the research and promotes the professional reputations of the authors among other professional researchers, improving the peer review chances of their next grant application; the press release serves to grab a few seconds of attention from the general public and underpins public sympathy for funding of this type of research; a textbook is a saleable product serving the needs of the undergrad student market that also underpins the academic reputation of its authors.

Seen that way, Spruston’s PR policy is sensible rather than hype. Textbook simplifications are also prudent: students with exams to pass want to get to grips with their subject quickly and not be distracted by exceptions.

It’s also worth noting that the process of simplification seen in textbooks starts in research papers. There will almost always be experimental details that for one reason or another get simplified in (or even omitted from) the paper because the authors judge them to be insignificant. Chances are, there are other investigators who have seen evidence of the phenomenon described by Sheffield et al., but who were looking for something else and didn’t consider it worth writing up.

I will certainly be posting more on this in my ‘Ecology of Literatures’ series.

Source

Judith Curry posts this pointer to the National Academies Press book “On Being a Scientist: A Guide to Responsible Conduct in Research”.

It’s not clear to me why anyone thinks scientists would be in special need of ethical instruction. Surely, the common virtues of honesty, responsible use of resources and respectful attitudes towards one’s colleagues, as would be desirable in any walk of life, should guide scientists well enough? All the more remarkable that this book seems to be directed at postgrads. Undergraduates aren’t in need of ethical guidance? Or is it that those who move on into postgraduate research careers encounter norms of behaviour that leave something to be desired as examples of ethics?

Coincidentally, Praj has just posted a quote from Evelyn Fox Keller:

“Science is first and foremost a domain of opportunism”

Might that be a clue?

Source

Responses to SAPE

By Peter,

Science as a Public Enterprise (SAPE) is an initiative of the Royal Society, which intends it to “ask how scientific information should be managed to support innovative and productive research that reflects public values”. The matter is being overseen by a “high-level working group”, which has issued a call for evidence. The preferred route of response is by way of this form of set questions.

I’m not sure I’ve got that much in the way of material evidence to submit, but I couldn’t resist framing a few replies to their questions. In the spirit of openness, I’m publishing my draft replies here before submitting them to the RS. I welcome comments, criticisms and questions.

SAPE: What ethical and legal principles should govern access to research results and data? How can ethics and law assist in simultaneously protecting and promoting both public and private interests?

Research results and data are essentially documented accounts of certain experiences. Experiences in themselves are necessarily private, but documents containing accounts of them can be sold, shared or published. Arguably, the exchange of information through the distribution of such documents is the essence of science (as opposed to anecdotal knowledge), but that doesn’t automatically mean there’s an obligation to exchange on terms dictated by others.

Conventionally (in law), such documents are ‘works’ and any copyright or other rights as may exist in them belong to the authors, or in the case of works made for hire, to the author’s employer. One may sometimes hear the idea that research data can be owned being described as “absurd” or “preposterous”, but it is simply a matter of acknowledging that the carrying out of research has a cost and that the party who bears that cost has some priority in deciding how the results should be used. Only rather rarely nowadays is that cost really borne by the scientists carrying out the research as they are usually compensated by being paid for their time.

In line with that, research conducted by scientists in industry is generally regarded as works made for hire without problems. With academic scientists, the situation is more complex since some research funding (possibly the majority nowadays) is made in support of a particular research project. As such, it could be argued that the research belongs to the party who paid for it and that it is a work made for hire. However, some academic research is still funded on the basis of supporting individual researchers, a department or institute and the ownership of this would need to be the subject of local agreement.

The choice of who has access to research results and on what terms should ultimately be that of the owner of the rights in them, although provision should be made to legally oblige the owner to allow access or prohibit the owner from allowing access in certain cases where the public interest would otherwise be damaged. Under such circumstances, the owner should be entitled to just compensation for any consequential losses.

SAPE: How should principles apply to publicly-funded research conducted in the public interest?

The rights in any research entirely funded by the public should be automatically assigned to the public. The decision on whether to publish or otherwise distribute then lies ultimately with the public and any decision made on the public’s behalf has to be made in the public interest by accountable representatives. It follows that any assignment back into private hands should itself be made in the public interest.

SAPE: How should principles apply to privately-funded research involving data collected about or from individuals and/or organisations (e.g. clinical trials)?

The key thing here is “data collected about or from individuals and/or organisations” regardless of whether the research was privately or publicly funded. Information about legal persons that is not already publicly available should be used for research only with the explicit consent of each individual concerned and with an express statement of the limits to use. Ethical review should be made of the process used to gain such consent. Publication or other distribution of documents containing such information can then only occur within the limits set by the subjects’ consent.

SAPE: How should principles apply to research that is entirely privately-funded but with possible public implications?

If publication of the research would allow members of the public to avoid otherwise likely positive harm, then obligatory publication should be possible, subject to the proprietors of the research receiving just compensation for any consequential losses. Simply asserting that publication would benefit the public interest should not be sufficient to force an obligation to publish.

SAPE: How should principles apply to research or communication of data that involves the promotion of the public interest but which might have implications for the privacy interests of citizens?

I’m not sure this is the kind of question that can be answered in general terms. Clearly “the public interest” does not uniformly match up against everyone’s individual interests. Conflicts can only be decided on a case-by-case basis. Obviously, there has to be a right to petition for redress for anyone who feels “the public interest” is being interpreted in a way that infringes their private interests.

SAPE: What activities are currently under way that could improve the sharing and communication of scientific information?

“Scientific information” can mean two things: information in the data that scientists collect or produce; and information in knowing what scientists say the data mean. With regard to the first, the so-called “open data” initiative stands to make the biggest difference by advocating the indiscriminate online publication of “all” data (as opposed to only data selected to illustrate points made in scientific papers). There would appear to be a strong case that this will allow better use of data though there are problems around the costs of digitising certain types of information and standardization of data formats. There are also issues around how a commitment to open data conflicts with the career interests of professionalized scientists, or could encourage overenthusiastic publication of data without the owner’s consent. Overall, developing working open data initiatives would appear to be an area deserving of funding support in its own right.

SAPE: How do/should new media, including the blogosphere, change how scientists conduct and communicate their research?

The ease with which information can be published online means that freely available, informal, non-peer-reviewed publications (such as blogs) containing scientific data and/or discussion as to what the data mean could proliferate and eventually displace traditional scientific journals as the preferred source of scientific information. This might make scientific information more easily available to those not working within industry or academia. The withering authority of the peer-reviewed journals might also result in less readiness to take the integrity of published data on trust and lesser readiness by individuals to fall in with ‘leading’ opinions with regard to data interpretation. Science could come to be seen less as a quest for truth – a set of limits within which all things must operate, and more as a search for pointers to what might be possible – a set of priorities for the next step in one’s own on-going investigations.

SAPE: What additional challenges are there in making data usable by scientists in the same field, scientists in other fields, ‘citizen scientists’ and the general public?

The general problem associated with making data suitable for use by others than those who generated them (“cross-use”), is that data on their own are meaningless. One must also know how they were produced, what they represent. Information relating to this is sometimes called “metadata”.

Cross-use of data by scientists in the same field can be relatively unproblematic in this technical sense because there is usually an implicit shared understanding of the experimental and observational fashions in the field. Only brief summary details of what the data represent are necessary for understanding. Of course, cross-use by scientists in the same field presents the greatest political/economic problems in professionalized science because it is in this case that the scientists from whom the data originated face the greatest risk of losing priority over its interpretation.

The problems that metadata quality poses for cross-use of data become more acute if we wish to allow cross-use by scientists in other fields or those working outside professionalized science. The quantity of necessary metadata can become substantial. It will be necessary to develop standardized formats not only for shared data but also for metadata, to allow commensurability between data sets.
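Purely for illustration, a hypothetical minimal metadata record might look like the following (the field names below are my own assumptions and do not correspond to any existing standard):

```python
# A made-up metadata record accompanying a shared data set. Without this
# contextual information, the numbers in the data set are meaningless.
metadata = {
    "title": "Hourly surface air temperature",
    "variable": "air_temperature",
    "units": "degrees Celsius",
    "instrument": "screen-mounted platinum resistance thermometer",
    "location": {"latitude": 51.48, "longitude": -0.45, "elevation_m": 25},
    "period": {"start": "1995-01-01", "end": "2010-12-31"},
    "processing": "raw readings; no homogenisation applied",
    "custodian": "originating laboratory",
}
```

Even a record as simple as this already raises the standardization problem: two laboratories will not spontaneously choose the same field names, units or conventions.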

SAPE: What might be the benefits of more widespread sharing of data for the productivity and efficiency of scientific research?

In principle, making data available can contribute by reducing the number of occasions on which investigations are carried out into something that is already known and by increasing the number of occasions in which a given data set is used to answer different questions from those for which it was originally compiled. It does not, however, do this on its own. Before the availability of shared data can make a difference, one has to understand what range of possible data types might be helpful in advancing one’s own project, know how to look for them and know how to use the data once found.

SAPE: What might be the benefits of more widespread sharing of data for new sorts of science?

By encouraging the re-examination of existing data sets from new perspectives, data sharing could help the development of scientific investigation along lines not previously envisaged. Data sharing also encourages a “new sort of science” in another sense: it separates the collection of data from the theoretical interpretation of data. Traditionally, possibly as a legacy of the self-funded “gentleman scientist” era, many types of science have been carried out in a way that implicitly understands the collection of data and their interpretation to be the work of one person or a small group of people all working together (the “authors”). Of course, this vision of science has, since the mid-twentieth century, been in competition with “big science” where research projects are managed efforts carried out by organized division of labour between large numbers of research workers. This probably began with the Manhattan Project. More recent examples might be the human genome project and the Large Hadron Collider project. More recently still, we have seen the rise of what might be called “stakeholder science” where scientific debates have taken place outside professionalized science, sometimes with direct participation of groups and individuals from outside professionalized science. Examples might be climate science and “global warming”, or the debate around the science of autism and the use of the MMR vaccine. Data sharing may encourage and accelerate the growth of these “styles” of science. A long term effect might be that professionalized scientists are seen less as “expert witnesses” and more as “knowledge workers” and that scientific problems are not seen as self-contained but rather as part of broader political or economic problems.

SAPE: What might be the benefits of more widespread sharing of data for public policy?

Data sharing might help crystallize ideas about what specific investigations need to be made to answer questions relating to policy formulation. It might be easier to discover if such investigations have already been made. However, without a warranty as to the authenticity of the data (problematic in a “free” sharing ethos), it would be unwise to take such data on trust. Specific audited studies would still have to be commissioned in most cases.

SAPE: What might be the benefits of more widespread sharing of data for other social benefits?

As above, it might help crystallize ideas about what specific investigations need to be made, but without some kind of trusted third party to act as guarantor of data authenticity, it might be unwise to take data on trust.

SAPE: What might be the benefits of more widespread sharing of data for innovation and economic growth?

As above.

SAPE: What might be the benefits of more widespread sharing of data for public trust in the processes of science?

Initially, the availability of data additional to that used to illustrate the points made in research papers and even so-called “negative” data could improve public trust. In the longer term, a proliferation of data from unverified sources could have the opposite effect.

SAPE: How should concerns about privacy, security and intellectual property be balanced against the proposed benefits of openness?

The desire for privacy, security and ability to earn a return for making one’s original ideas available to others are natural human concerns. Irresponsible data sharing could easily lead to transgressions of those concerns. Everything depends on the data sharers behaving responsibly. This, in turn, depends on them understanding first that being a scientist does not in any way diminish one’s common ethical obligations as a citizen, and second, on understanding to whom (as well as to the public generally) they owe specific obligations in any particular case. This would include parties to whom the data relate directly and parties who funded the research either expressly or by way of funding the supporting infrastructure. Ethical frameworks of this kind already exist for research that involves studies of human subjects (such as clinical trials) and suitable components of these could be extended to cover other types of research.

SAPE: What should be expected and/or required of scientists (in companies, universities or elsewhere), research funders, regulators, scientific publishers, research institutions, international organisations and other bodies?

Following the above, subscription to an ethical framework that sets out guidelines for understanding who all the stakeholders are (individuals and collectives) in a given research project and what obligations one owes to each of them.

SAPE: Other comments?

Science as a public enterprise can achieve and should aim to achieve more than just opening up access to scientific information. Science is presently understood largely as a professionalized activity that produces data in accordance with self-referring criteria of evaluation with funding from well-financed organizations (business corporations, governments, charities). Whether the funding is public or private, the scope of scientific investigations is formulated in response to standardized or homogenised concepts of market demand or public interest. In this model, science is effectively closed to the disparate minority interests of individuals or small collectives. At present, such minorities have to pursue their interests without the benefits of scientific investigation or have to reformulate their interests to align them with the homogenized or standardized ‘administrative’ interests of the organizations. A task for open science is to lower the financial and cultural entry barriers to instigating scientific research projects in response to disparate minority interests. Some of the requirements would be:

  • To help “lay” people articulate their interests in terms that reveal what types of scientific information could inform them. Informal learning initiatives such as Peer to Peer University might help with this.
  • To find which parts of those investigations have already been performed or are in common with the interest requirements of other parties.
  • To encourage instigation of short-lived, low overhead “collective experimentation” networks to carry out research projects with component parts distributed across multiple sites.
  • To encourage more specialized scientists to become “citizen scientists” prepared to engage with disparate citizen interests rather than see them as funding opportunities for scientific interests.

Source

When some subject attracts controversy, there is more to it than mere disagreement. Disagreement need not lead to controversy if the disagreeing parties understand and have learned to live with each other’s point of view. Controversy arises when there is some unresolved tension to be worked out.

The subject of ‘open science’ still attracts controversy because there is no settled coexistence of ‘open’ and ‘closed’ models of science. There is disagreement over just what the “open” in open science should be taken to mean and over what type or degree of openness is the best for science. Those who are enthusiastic about greater openness tend to focus on themes of transparency, accountability, fairness in getting research published and, of course, “free” access to data. Those who still feel skeptical about open science tend to focus on the need to maintain standards of quality and reliability. Because the open science debate largely remains one that is conducted by science professionals for science professionals, tension arises over the extent to which the opening up of science should be allowed to disrupt the established norms of professionalised scientific practice.

One area where the effects of this tension can be seen is in attitudes to the opening of peer review of research reports. A recent high-profile retraction of scientific papers that apparently drove one of the researchers involved to suicide led to calls to open up the processes of peer review[*], but the editor of the journal concerned said that, while this had been considered, “the disadvantages — which include potential misinterpretations and the desire of many referees to keep their comments confidential — have prevented the journal from embracing this”[*]. Clearly, there are conflicting motivations here. Regardless of the effects on overall research quality, a major barrier to opening up peer review is the perceived desire of referees to preserve the established norm of anonymity.

In practice, peer review is a process of negotiation between the authors of a proposed research report, editors of the journal to which it has been submitted and reviewers selected on the basis that they are well-informed representatives of the eventual audience for the report. Authors want to get their report published in a journal with a ‘brand’ reputation that attracts the right sort of reader (people who’ll cite the paper, basically). Editors want papers that will reinforce the journal’s reputation for bringing out quality publications of interest to its readership.

Although peer review is widely identified as a cornerstone of quality assurance in institutional science, most people readily admit that it has very obvious faults. Review is entrusted to a small number of individuals whose competence and trustworthiness are judged only subjectively by the editors. While reviewers are supposedly chosen on the basis that they possess a strong understanding of what quality means in relation to the relevant field of research and have a commitment to seeing it maintained, they may have other motives as well, such as getting to see new research results before everyone else or even seeking to influence what results others get to see. Another effect of institutional peer review is that acceptance of a paper for publication itself signals to readers that the work described is worthy of their attention and that the conclusions drawn by the authors are respectable. Individual readers are free to take contrary views, of course, but by doing so, they risk marking themselves as outsiders or even cranks if it’s not evident that many others feel the same way. Even when a post-publication debate takes place on the significance of a paper, there is not usually any mechanism for making the content of the debate a necessary part of reading the paper itself. The interpretations negotiated during the peer review process and set out in the published paper remain the ‘official’ position unless it turns out that the paper contains errors or misdemeanours serious enough to warrant retraction of the paper.

No doubt, there are circumstances where complete retraction is appropriate, but in many cases a discussion of what seems wrong and what remains good about the research report might be quite possible. There are plenty of reasons to believe that far more papers are in need of this kind of evaluation than are ever retracted [*]. There is at least one online forum (PubPeer) that tries to provide this kind of facility. But it is notable that the people who make PubPeer say they have collectively decided to remain anonymous in order to avoid “circumstances in which involvement with the site might produce negative effects on their scientific careers”[*]. Clearly, there is real tension over the idea of open peer review where just anyone can criticise a research report and be identified for doing so.

Perhaps this tension will only resolve itself when an ‘open’ model of science abandons the idea of authoritative research statements as represented by the ‘scientific paper’ altogether and instead sees results only as stimulus to imagination that engenders debate and motivates further research action.

Source

In an earlier article, I briefly mentioned the two basic points of view from which funding for scientific research can be justified: funding for specific scientists on the grounds that one wants a vibrant research culture; or funding for specific research projects on the grounds that one wants answers to scientific questions. I was therefore pleased to discover this draft paper: Incentives and Creativity: Evidence from the Academic Life Sciences by Pierre Azoulay, Joshua Graff Zivin and Gustavo Manso in which the authors compare the effects of these funding policies on scientists’ behaviour. The study compared scientists funded by the Howard Hughes Medical Institute (HHMI) with a comparable group of “Early Career Prize Winners” (ECPWs) whose research was supported by National Institutes of Health (NIH) funding. HHMI funds “Investigators” as individuals. In the words of the HHMI website: “HHMI urges its researchers to take risks, to explore unproven avenues, and to embrace the unknown—even if it means uncertainty or the chance of failure.” According to the paper, HHMI is tolerant of failure in the early stages and provides its recipients with access to peer group feedback on their research throughout. The ECPW control group was chosen as having similar overall research accomplishments to those in the HHMI group prior to their receiving HHMI support, but who subsequently went on to perform research with NIH funding. In contrast to HHMI, NIH awards grant funding in support of specific research project proposals and researchers who fail to achieve project goals are unlikely to have their grants renewed.

The authors found that HHMI-funded scientists were more strongly represented than those in the ECPW control group as authors of both the most highly cited publications and of the most rarely cited. In other words, HHMI-funded scientists had both more successes and more failures than the control group. Moreover, there was a greater proliferation of keywords associated with the publications of the HHMI-funded researchers after their appointment than in controls. Altogether, these observations are taken as an indication that because the consequences of short-term failure were ameliorated for them, HHMI-funded scientists were more willing to take the risks associated with a more exploratory and serendipitous approach than their project-funded counterparts. To put it another way, HHMI scientists were able to be more “creative”. That is, they were freed from the constraints of having to answer questions contractually agreed at the outset and allowed to answer questions of their own choosing; preferably, one presumes, questions that no-one else had before thought of asking.
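The pattern of more successes and more failures is what one would expect if exploratory funding simply raises the variance of research outcomes. A toy simulation (entirely my own sketch, not the authors' analysis):

```python
import numpy as np

# Two populations of researchers draw paper "outcomes" from distributions
# with the same mean but different spread. Higher variance alone produces
# more entries at BOTH tails: more hits and more flops.
rng = np.random.default_rng(1)
exploratory = rng.normal(0.0, 2.0, 100_000)    # HHMI-style, higher risk
project_bound = rng.normal(0.0, 1.0, 100_000)  # NIH-style, safer

pooled = np.concatenate([exploratory, project_bound])
top, bottom = np.quantile(pooled, [0.99, 0.01])

for name, sample in (("exploratory", exploratory), ("project-bound", project_bound)):
    print(f"{name:>13}: {np.mean(sample > top):.2%} in top 1%, "
          f"{np.mean(sample < bottom):.2%} in bottom 1%")
```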

I suspect that a lot of scientists will enjoy hearing this, which might be taken as some kind of justification of the “Haldane Principle”. However, the authors stress that their study should not be taken as a criticism of the NIH or of project-oriented funding. They point out the difficulties in making investigator-oriented funding work on a larger scale and the need for ready political accountability in decisions involving the distribution of public funding. Analogous constraints apply in corporate R&D where accountability to shareholders has to be maintained.

There are interesting implications here for the criteria (“metrics”) used to evaluate individual scientists’ performance in academia and in industry. In academia, there are grumblings about the limitations of using publications and “impact factors” as measures of a researcher’s worth. In industry, there are alternating waves of enthusiasm for either “blue skies” or nose-to-the-grindstone project-managed research. However, consideration of successful and unsuccessful attempts at innovation (see for instance Why Innovation Fails by Carl Franklin or How Breakthroughs Happen by Andrew Hargadon) suggests that no matter how good they are as scientists, researchers who give birth to innovation that actually changes how people live have their thinking realistically attuned to the needs of subsequent commercial development even as they think up those previously unthought-of questions. The best researchers give their best when they’re embedded in a culture that gives them incentives to be exploratory that are also tied in to the business aims of the institution in which they work.

Source

Redefining Science

By Peter,

On his Labcoat Life blog, Khalil A. Cassimally considers the problems of defining science. He considers in turn theories backed by evidence, the search for truth, finding new and unexpected things, the scientific method and finally settles for the British Science Council’s definition of “the pursuit of knowledge and understanding of the natural and social world following a systematic methodology based on evidence”.

However, there’s another side to science that none of this addresses: the existence of scientific institutions. Science can be seen as a kind of knowledge or a way of acquiring knowledge, but the word is also used in connection with various institutions: universities, funding organizations, journals, learned societies, professional bodies. These support the work of professional scientists in various ways and also set norms of scientists’ behaviour. It could be interesting to consider the extent to which the value anyone places on the work of professional scientists depends on their association with these institutions. For instance, are research results published in Nature seen differently than they would be if the same results were published simply on the researcher’s personal self-hosted website? If so, in what way and why?

If we accept that science lends itself to both types of understanding (a kind of knowledge; a kind of institution), then we should consider the relationship between them. The simplest kind of relationship would be one of correspondence such that knowledge originating from a scientific institution is necessarily deemed to be scientific and vice versa. If that is not the case, then either significant amounts of scientific knowledge originate from outside the scientific institutions, or significant amounts of knowledge originating from the scientific institutions are not scientific. In the first case, we would then have to ask why we would give special attention to the knowledge originating from scientific institutions and in the second, we would have to ask why we would regard knowledge originating from scientific institutions as being especially reliable.

Of course, it’s possible to admit that the correspondence between kinds of knowledge and kinds of institution is not perfect and that both of the situations referred to above occur to some extent. Defenders of the institutions may say that they perform some supportive or supplementary function; that we get better quality science or better value science by virtue of the institutions being there. To take my example above of journals vs. website publishing, it may be said that work published in Nature is more worthy of attention than work published on a personal website because research published in Nature has been peer-reviewed. However, the reviewers generally have to base their opinions entirely on what they see in the submitted manuscript. Their ability to decide the scientific strength of the work described is not really any more than the general readership would be able to decide for themselves if the manuscript was published anyway. We don’t need peer-reviewed journals to help us decide what is scientifically valid and what isn’t, because we are just as well able to decide this as the reviewers on the strength of what we can see – the manuscript. For a prestigious journal, the number of scientifically valid manuscripts received will generally exceed the number that can be published. The decision on what to publish is an editorial one based on what scientifically valid work is furthermore deemed to be important or worthy of our attention.

The effects of peer review are not limited to determining what gets published in journals. Peer review of one kind or another runs through most scientific institutions. It determines not only what gets published but what research proposals are deemed worthy of funding when the number of scientifically valid proposals exceeds the scope of available funds; who gets a job when several scientifically qualified candidates are available; and which scientists receive honours and which don’t. Clearly, whatever the value of scientific institutions in safeguarding proper standards of scientific knowledge, another important effect they have is to promulgate a sense of what is important within the body of scientific knowledge. This leads to the questions of who makes those choices and on what basis.

Source

Contrary to Steven Pinker's recent attempt to rehabilitate "scientism", I argue that the word should stand for a persistent belief that the trustworthiness of institutionalised science is a matter of fact rather than something that needs to be subject to continuous empirical re-evaluation.

Source

I’ve been reading about the NIPS Experiment. Calm down at the back there. NIPS stands for Neural Information Processing Systems. It’s all very serious and you can read about the experiment here (http://inverseprobability.com/2014/12/16/the-nips-experiment/) and here (http://mrtz.org/blog/the-nips-experiment/).
In essence, the experiment aimed to examine the process by which papers are accepted or rejected by peer review committees for conference presentation. Obviously, it’s all to do with scientific quality and the scientific community is built around a common understanding of what that means. Or is it?
The NIPS experimenters split their panel of conference peer reviewers into two committees. Most of the papers went to one committee or the other for review, but 10% of them (166 papers) were reviewed by both committees without the members knowing which papers they were. It was then possible to see how similar the two committees were in their evaluation of those papers. A full write-up of the results is still to come, apparently, but Eric Price has revealed the essence (http://mrtz.org/blog/the-nips-experiment/).
The committees disagreed in their evaluation of 43 of the 166 papers. Naïvely, you might think that’s not too bad. They disagreed on 25.9% of cases, so they must have agreed on 74.1%. However, Eric Price points out that the committees were tasked with a 22.5% acceptance rate which means that the number of disagreements was larger than the number of acceptances each committee was expected to make. This means that most (more than half) of the papers accepted by either committee were rejected by the other.
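A rough reconstruction of that arithmetic (my own sketch of the reasoning, not Price's code):

```python
# With 166 jointly reviewed papers, a 22.5% acceptance target and 43
# disagreements (papers accepted by exactly one committee):
papers = 166
disagreements = 43
accepts_each = 0.225 * papers  # ~37.4 accepted papers per committee

# accepts_each = accepted_by_both + accepted_by_this_committee_alone, and
# disagreements = sum of the two "alone" counts, hence:
accepted_by_both = (2 * accepts_each - disagreements) / 2

print(f"accepted per committee: ~{accepts_each:.1f}")
print(f"accepted by both:       ~{accepted_by_both:.1f}")
print(f"share of one committee's accepts rejected by the other: "
      f"{1 - accepted_by_both / accepts_each:.0%}")
```

On those figures, roughly 16 of each committee's 37 or so accepted papers were also accepted by the other, so about 58% were not.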
Price considers a theoretical model which treats the peer review process as a combination of “certain” and “random” components. He assumes that there will be some papers that every reviewer agrees should be accepted (acceptance is certain) and some that everyone agrees should be rejected (rejection is certain). For the rest, Price’s model assumes that committee members make their decision by (metaphorically at least) flipping a coin. This is the random component and the level of randomness in peer review is the proportion of papers that get this treatment. The divergence in reviewing committees’ decisions seen in the NIPS experiment implies that there is quite a lot of this coin-flipping randomness in peer review; perhaps more than most people thought.
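One extreme parameterisation of that model reproduces both observed numbers (again, my own sketch, with parameters fitted by hand): no sure-fire accepts at all, roughly 47% sure-fire rejects, and roughly 53% of papers in the random middle, each accepted independently by each committee with probability of about 0.42.

```python
import numpy as np

# Simulate Price's certain/random model with hand-fitted parameters.
rng = np.random.default_rng(2)
n = 1_000_000
maybe = rng.random(n) < 0.53                 # papers in the "random middle"
accept_a = maybe & (rng.random(n) < 0.4244)  # committee A's biased coin
accept_b = maybe & (rng.random(n) < 0.4244)  # committee B's biased coin

print(f"acceptance rate:   {accept_a.mean():.1%}")                # ~22.5%
print(f"disagreement rate: {(accept_a != accept_b).mean():.1%}")  # ~25.9%
```

If anything like this parameterisation holds, over half of the submissions were in the coin-flipping zone.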
Is this “randomness” in reviewers’ assessments a cause for concern? Price points out that “consistency is not the only goal” and, indeed, it can arise for reasons that are not necessarily welcome. For instance, unanimously accepted papers may simply be feeling the benefit of appearing under the name of well-connected authors that reviewers favour for reputational reasons. Conversely, papers that reviewers unanimously reject may just be suffering the penalty of pursuing unfashionable research topics that reviewers see as a drain on funding for more popular topics. It may well be that it is precisely in the “random middle” – between the certain acceptances and certain rejections – that we see peer review at its best.
But how can it be any good if it’s random? The truth is, it’s pretty implausible that it really is random. I don’t see much reason to believe that peer reviewers actually flip coins and, as humans are not good random number generators (http://scienceblogs.com/cognitivedaily/2007/02/05/is-17-the-most-random-number/), it seems unlikely that conceptual flipping of imaginary coins would produce genuinely random results. What really goes on in this middle zone is not random at all. Rather, it’s a process of deliberation where each reviewer considers a variety of factors and makes a decision on the basis of balancing those factors. Even having made the decision, the reviewer probably still feels a fair degree of uncertainty as to whether it was the right one.
Because reviewers are usually allowed to decide for themselves which factors to consider in their deliberations, there is a good deal of variation between reviewers as to which factors they consider. Putting it more formally, the weight they give to each factor is not prescribed. What’s more, there’s no guarantee that even individual reviewers will attach the same weights every time: the same reviewer could reach different conclusions about the same paper considered under different circumstances.
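To see how unprescribed weights produce divergent verdicts, consider a toy model (all numbers invented for illustration):

```python
# Two reviewers score the same paper on the same factors but weight the
# factors differently, and so reach opposite decisions.
factors = {"novelty": 0.9, "rigour": 0.4, "clarity": 0.6, "topicality": 0.3}

weights_a = {"novelty": 0.5, "rigour": 0.2, "clarity": 0.2, "topicality": 0.1}
weights_b = {"novelty": 0.1, "rigour": 0.5, "clarity": 0.1, "topicality": 0.3}

def verdict(weights, threshold=0.5):
    score = sum(weights[f] * factors[f] for f in factors)
    return score, ("accept" if score >= threshold else "reject")

for name, w in (("Reviewer A", weights_a), ("Reviewer B", weights_b)):
    score, decision = verdict(w)
    print(f"{name}: score {score:.2f} -> {decision}")
```

Reviewer A accepts and Reviewer B rejects the very same paper, without either behaving randomly.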
In short, the degree of “randomness” seen in the NIPS experiment undermines one of the cornerstone assumptions of the peer review process – that reviewers share a coherent common notion of what qualities to value in a paper. Instead, it suggests that the criteria that reviewers use in practice are quite divergent. If this is the case, it is hard to see how peer review could possibly be “fair”. Certainly, steps such as making reviewers’ comments and identities open to authors would seem to miss the point. What is more in order is a dialogue over the criteria used to evaluate research in the first place and whether traditional peer review has any useful role to play in this.

Source