
In his personal blog, TGL member Praj asks if scientific thinking is like reading comprehension. He gives the example of a paragraph about American football. Being British and not a follower of the game, but nevertheless (I believe) fully literate in the English language, I can confirm that understanding the example is not merely a matter of general English comprehension. Some of the jargon terms such as "receiver", "training camp", "running back", "season", "backup" and "offensive line" may evoke some kind of sense to any reader of English, but one really has to know about football to fully understand them in this piece. Likewise, some of the slang terms of American reportage, such as "get off the ground", "getting on track" or "banged up" have literal or conventional meanings in general English usage, but are used metaphorically here.

Praj worries about whether or not "scientific thinking" necessarily requires scientific knowledge, particularly in relation to "the issue" of global warming. I think you need to start by being clear about what you mean by the issue of global warming. I can see at least five distinct issues here:

(1) the issue of how much reliance we can place on data that indicate global climate has changed rapidly in recent decades;

(2) the issue of how much reliance we can place on climate models that predict the implications for future climate change;

(3) the issue of how much confidence we can have in the efficacy of any policy designed to reverse or adapt to such predicted climate change;

(4) the issue of how well we think we understand the economic and political consequences of such policies or their failure;

(5) the issue of how people in whose name such policies are made value the presumed benefits of the policy as opposed to the presumed risks of any alternative.

I'd say that issue (1) is essentially scientific. Knowing what the measurements actually represent, and the practical limitations of the measurement techniques used, requires specialist technical knowledge. When that is drawn from a shared pool of individual experiences of related technical knowledge, you could call it "scientific thinking".

Issue (5), on the other hand, isn't scientific at all. Any individual or group of individuals is equally entitled to make a judgement against what it autonomously sees as its own interests.

Issues (2), (3) and (4) make a gradual transition between those two positions.

For the policy maker, the problem is to assess the need for a policy, taking into account (1) and (2); to consider all possibilities for a technical response (2, 3 and 4); and then to prioritize those for policy adoption (4 and 5).

Taking all those various (and even contradictory) interests into account to come up with a solution that is sufficiently acceptable to a sufficient number of parties to stand a chance of actually working is the policy maker's job, and I don't envy them it.

In May 2010, Science magazine published an article by Philip Kitcher, in which he reviewed a selection of books relevant to the science and politics of global warming. These included books by climate scientists expressing their frustration at the reluctance of successive American governments to take up strong policies on climate change. This reluctance is taken to be one example of a series of cases (health effects of tobacco smoke being another) in which eminent scientists pushed American government policy away from the path indicated by scientific consensus by casting doubt on the evidence on which that consensus was based. This they did, apparently, without themselves being active as researchers in the relevant field.

Kitcher starts his article by presenting contrasting views on the relative value of free and open debate on the one hand and reliance on expert opinion on the other in guiding democratic decision making. In favour of open debate is the view that truth alone will withstand questioning and criticism and that open debate can therefore be relied upon to indicate the ‘correct’ decision, given enough time. A frequent criticism of that view, however, is that in the real world decisions have to be made urgently and the time available for debate is limited. Under such circumstances, unscrupulous parties may express endless trivial or frivolous doubts about any proposal they dislike to ensure that their own proposal is the only one that still looks strong when the time comes to decide. Open debate then becomes an open door to the ethic of might is right. Limiting the debate to those judged to possess comprehensive and impartial knowledge and understanding of the relevant facts is seen as a way of obtaining a much more controlled debate that can lead to the best possible decision in time-limited circumstances.

Kitcher doesn’t directly express his support of one view over the other and sometimes it’s not altogether clear whether his statements are his own views or what he takes to be those of the authors of the books he’s reviewing. Nevertheless, one might reasonably come away with the feeling that his bias is toward reliance on expert opinion. Here are a few quotes:

“genuine democratic participation in the issues can only begin when citizens are in a position to understand what kinds of policies promote their interests. To achieve that requires a far clearer and unmistakable communication of the consensus views of climate scientists” (p2)

“Serious democracy requires reliance on expert opinion.” (p2)

“messages have been distorted and suppressed because of the short-term interests of economic and political agents” (p2)

“They have used their stature in whatever areas of science they originally distinguished themselves to pose as experts who express an "alternative view" to the genuinely expert conclusions that seem problematic to the industries that support them or that threaten the ideological directions in which their political allies hope to lead.” (p2)

“It is an absurd fantasy to believe that citizens who have scant backgrounds in the pertinent field can make responsible decisions about complex technical matters” (p3)

On reading these, a number of questions occurred to me:

  • Why would citizens have difficulty understanding their own interests and the policies that promote them?
  • Why would understanding one’s own interests depend on understanding a consensus among scientists?
  • What is “serious” democracy (as opposed to mere democracy) and why would it have to rely on expert opinion?
  • Is there any reason why calling interests “short term” would make them any less worthy of respect?
  • Is there any reason why the interests of “economic and political agents” would be any less worthy of respect than those of any other social entity?
  • Why would the pursuit of citizens’ interests necessarily have to boil down to technical matters?
  • How, except through special pleading, would it be established that climate scientists’ messages are more objective or more relevant to the common good?
  • How would the public tell real experts from those who merely pose as experts?

In articles to follow this one, I want to look at the role of experts in democratically accountable decision making. The decision making issues around climate change make for a particularly interesting context in which to have this discussion because the stakeholders include practically everyone and their interests are therefore highly disparate; the scientific evidence, although extensive, is still contentious; the need to act quickly may be acute; and the potential for useful unilateral action by any individual actor is extremely limited. The problem therefore refuses to stay within the norms of any established political system or subculture.

This started as a comment on something davidm wrote in one of his comments on Hugo Holbling's blog post Doubt and Disunity, but it rambled too much so I just put it here.

Your comments are welcome!

I don’t know why we should think of the consensus in question as a political rather than a scientific consensus. I’m not sure how meaningful the distinction is anyway, given that we know that politics (and other cultural factors) play a role in science. But, to take a simplified example, if ten scientists get together and investigate whether the solar system is geocentric or heliocentric, and then at the end of their inquiry report that they have achieved a consensus that the solar system is heliocentric, how is that not a scientific consensus? Note also that these scientists would not be claiming that the solar system is heliocentric because they say so; they would be claiming that they have achieved a consensus on heliocentrism because that’s just what the best available evidence shows is most likely to be true.

If you took a group of scientists today and asked them to "investigate whether the solar system is geocentric or heliocentric", they might well tell you that the heliocentric/geocentric debate is outmoded. Modern cosmology places neither the sun nor the earth at the centre of the universe and, moreover, in a proper analysis of orbital motion each body orbits the common centre of mass, not one the other.

Nevertheless, the stuff of astronomical science is still essentially what it always has been: observations about the position, brightness and shape of points or bodies of light in the sky at certain times when viewed from certain places. Nobody observes or measures "heliocentrism". Galileo's observations of the heavens still broadly stand today, but heliocentrism is no longer a useful or interesting doctrine.

The coining of a word like heliocentrism (an "-ism") is a call to closure; an implication that we now know all we need to know and that no further investigation is required. It was part of the drive to resolve the question, is the Church of Rome the final authority on the physical constitution of the world or isn't it?

Now, what does any of that have to do with "global warming" or the subject of disunity and doubt in science?

Just as 16/17th century astronomers didn't measure or detect "heliocentrism", so today's climate scientists don't measure or detect "global warming". They measure the temperature, or the percentage of carbon dioxide in the air, or the amount of ice on earth's surface and other specific parameters at certain places at certain times. They try to identify trends or patterns in the data. They attempt to formulate models that describe those trends. The models, in turn, allow predictions that can drive further research. But those predictions can also be used to rationalize certain courses of action outside of scientific investigation. The political debate revolves around whose proposed actions (or abstention therefrom) should prevail and become policy.

Someone may discern an upward trend in temperatures over a certain period of time and decide that "global warming" is a good name for it. As a shorthand way of referring to such a trend, global warming is still a scientifically useful concept because it indicates further paths of investigation. However, precisely because it is a shorthand for the trend in historical data, it doesn't tell us anything about the future. It may precipitate, or provide a rationale for, certain types of belief about the future, but in itself it tells us nothing.

Nevertheless, if we believe, for this or any other reason, that global climate is likely to change in economically damaging ways in the coming decades and we want to do something about it, we need a rationale for the actions we propose. "Global warming" is only a strong rationale if it is a "fact". And those who are most motivated to instigate specific courses of action rationalized by global warming are most motivated to state that global warming is a fact. Of course, declaring that global warming is a fact in order to rationalize a certain type of action is to call for closure to some extent - "We already know enough, let's get on and do something about it!" Invoking the unity or consensus among scientists helps rationalize that call. But it is one thing for scientists to be in a state of consensus about (1) the basic observations on which the conclusion of global warming is based; another for that consensus to extend to (2) the trends that may be discerned in those data; yet another for scientists to agree about (3) how much reliance we may place in using those trends to make projections into the future; and then quite another again for them to be (4) in a state of consensus about what the broader economic consequences may be if such projections turn out to be correct.

It matters a great deal if scientists are not in broad agreement about (1). If the veracity of the data is in doubt, then hypotheses and conjectures cannot be supported (or refuted) by them. On the other hand, consensus among climate scientists with regard to (4) hardly matters since they are no more qualified in that regard than anyone else.

Scientists have an all-too-human tendency to not only report to the rest of us on what observations they have made of the world, but also to play at being 'masters of reality' with a monopoly on the interpretation of those observations. If we do not pay proper attention to the very different role of speculation in those two activities then there will be confusion about the importance and meaning of consensus among scientists.

I’m grateful to the fine folks at the Bubble Chamber for the pointer to this video of Susan Haack’s seminar “Six Signs of Scientism” at the University of Western Ontario:

The audio quality is not good, but her arguments appear to be largely the same as those she makes in this paper (thanks to “Adult Child” for that pointer).

Professor Haack has plenty of interesting and thought-provoking things to say. These include a nice discussion of demarcation and a quick history of the word “scientism”, which was not originally pejorative. Like most people today, though, Haack does use the word pejoratively, and her definition of scientism is a good one:

“a kind of over-enthusiastic and uncritically deferential attitude towards science”

However, it’s a little different from the one I tend to use:

“the idea that scientific progress requires the existence of current scientific institutions”

Then there’s the definition offered up at Wikipedia:

“the idea that natural science is the most authoritative worldview or aspect of human education, and that it is superior to all other interpretations of life”

Maybe the differences are ultimately more linguistic than semantic, but if one feels the need to be pejorative, it’s probably a good idea to know what one is trying to be pejorative about. Would it be the presumed superiority of enquiry over other occupations; the presumed superiority of one method of enquiry over others; the presumption that enquiry is only valid if conducted by individuals with particular qualifications; or the presumption that those qualified to conduct valid enquiry should be treated as inherently more valuable than those who are not?

What do you think?


In his continuing discussion of basic, applied and “Jeffersonian” science, Praj Kulkarni says:

“… scientists push a narrative that the people in power clearly, and thankfully, don’t believe.”

“[discussions about the value of basic vs. applied science] are not really about changing funding patterns. They’re more about changing scientists’ priorities and the culture of academia.”

I largely agree with that. A key issue here is that “science” is often understood differently by those who pay for it and those who do it.

From the point of view of those who pay for it, the primary aim of funding basic scientific research in academia is to provide education and training. Scientific research in academic settings is mainly a way of training the next generation of researchers needed by industry. Replenishing the supply of academics will use up only a small proportion of academically trained researchers. Using universities to train researchers for industry is effectively an outsourcing of researcher training and is a good solution only so long as it’s more cost effective than each individual business running its own researcher training programs. Focusing the research on ‘basic’ science keeps the training neutral, not biased to the specific needs of one employer over another.

In academia itself, the view persists that the primary aim of academia is to provide a protected environment in which academic scientists can carry out research in basic science (or, more accurately, oversee low-paid postgrads and post-docs carrying out research). Scientific education, as I know it anyway (UK; biomedical), does little to dispel that, and some private sector grant funding organizations (e.g. HHMI in the US, Wellcome Trust in the UK) positively encourage it by providing funding for specific scientists rather than projects. In fact, it may be surprising how much this attitude survives even among scientists employed in industry.

The process of becoming a scientist involves not only the accumulation of specialist knowledge and technical skills, but also induction into a scientistic culture to which the PhD is a rite of initiation. Scientists are encouraged to see themselves as a professional elite, by which I mean that the cultural codes around someone being identified as a “scientist” direct people to evaluate that person on that scientistic basis rather than on the basis of that individual’s personal characteristics as evident right there and then.


I’ve just read Praj’s post “Scientists as a special interest” in which he considers the question asked by Matthew Nisbet: “as a matter of social responsibility, do scientists have an obligation to accept that reductions in scientific spending are necessary to preserve social programs?”

Surely, as long as we see this question as one of priority of science over social programs or vice versa, we’re on the path straight back to the “egghead” vs. “anti-intellectual” brickbats that have a lot to do with selling newspapers (or online attention) and very little to do with intelligent discussion. The arguments that can be made for one type of spending over the other become blurred. Social programs are often predicated on quasi-scientific propositions as to the economic benefits consequent on certain courses of social action, while scientific research programs are sometimes justified on the sociologically unexamined premise that they will make previously unimagined (and therefore nebulously defined) solutions to economic problems available at some unspecified time in the distant future. Science and social action are both special cases, but each special in its own way. Each justifies itself by seeking to persuade that some bright future awaits us if only we spend on this thing now.

Put like that, it’s just a gamble, of course, and naturally there are ways of managing such projects: breaking down into small discrete steps each of which can be evaluated over a short term so that the possibility of support withdrawal is always there if things go awry. Given that such techniques are so well established, one wonders why separate science and social programs even exist. What reason, apart from entrenched vested interests, is there not to allow individual proposals in either arena to compete for the same common pool of funding?

That said, perhaps there is a basis for making science (as opposed to technology R&D) a special case. While technology R&D seeks working and affordable solutions to specific technical problems, the job of ‘pure’ science might be taken to be describing the observations made in some specific circumstances, but in a theoretical context that abstracts their meaning and allows one to divine relevance within them for as broad a range of circumstances as possible. In this way, commonalities between observations from diverse circumstances between which no relationship was previously perceived become visible, and new, unsuspected areas for technological development become evident. Seen that way, it is a strength, not a weakness, of pure science that its consequences are unforeseeable. But being unforeseeable, its consequences cannot be used as justification. Could it be that our commitment to science is not so much a sign of commitment to making things better, but only a commitment to unforeseeable change?


  1. Back in the 1970s Alan Chalmers published what was to become a popular introduction to the philosophy of science called What is this Thing called Science? The title implied an assumption that science is a ‘thing’, distinct from non-science, from pseudo-science, from bad science and from anti-science.
  2. The idea that science is a definite thing, and therefore the question that presupposes it, probably seems quite natural to most people. Certainly, that’s how we hear the word being used both by scientists and non-specialists.
  3. Being concerned with the philosophy of science, Chalmers’ book had to take as its subject that use of the word “science” that is amenable to philosophic investigation. For most philosophers of science, that has largely taken the form of a concern with knowledge and the claims made for science as the way to especially reliable, authoritative or just plain good old true knowledge. The philosophical problem has largely been understood as one of epistemological demarcation: can we come up with an understanding of “science” that explains how we can recognise or produce such superior knowledge?
  4. Progressing from a realisation that mere correspondence to the facts isn’t good enough (facts are all in the past or present, but science only really becomes interesting when it talks of the future), the focus has been on scientific theories and how they can be distinguished from things that may look like theories but aren’t scientific.
  5. Karl Popper’s principle of falsifiability almost inevitably sits at the centre of things here. Based on the acceptance that no finite amount of corroboration can finally establish the truth of a theory and also the surmise that a single contradictory observation would finally establish falsehood, it seemed like a good answer for a while. It is still rated by some as the best there is, but it turns out that one can rarely, if ever, say with absolute certainty that a given observation conclusively refutes a given theory. Certainly not if the theory is at all an interesting one that makes bold predictions.
  6. Indeed, scientists’ response to observations that might be taken to refute a favoured theory was often to investigate auxiliary theories that would allow them to discount the apparent refutation. Further, scientists often maintained allegiance to apparently unfalsifiable theories because those theories nevertheless provided a fecund conceptual framework for further experimental investigations.
  7. At the same time, some theories produced by enterprises not generally described as “scientific” nevertheless met the suggested epistemological criteria.
  8. Thus, knowledge that was judged epistemologically scientific did not necessarily correspond that well to the theories of what is colloquially called “science”.
  9. Chalmers concluded his book by saying (second edition) “the question that constitutes the title of this book is a misleading and presumptuous one”. He doubted that a general characterization of science can be established or defended. Other philosophers have variously opined that the demarcation problem is intractable (to philosophy at any rate) or that it is a non-problem.
  10. So much, then, for the philosophy of science and epistemological demarcation.
  11. But what if we continue to think that What is this Thing called Science? really is an interesting question, even if not one that philosophers can answer.
  12. Another approach to the question comes from sociology. For sociology, the uses of the words “science”, “scientist” or “scientific” may be taken as normative and miscorrelation between the attempted demarcation and colloquial use of the words cannot occur. On the other hand, sociology, strictly understood, does not (indeed cannot) say anything about the veracity of any claims that science produces a special kind of knowledge. It can only tell us what those claims are and how they come to be made. This is true even of the so-called “strong program” sociology of scientific knowledge which asserts not only that society determines who gets to be called a scientist and how these people relate to others and to each other, but also the choice and manner of expressing the knowledge they produce.
  13. While sociology may tell us who actually values science, what makes them do it and how that valuation may manifest itself, it cannot tell us why we should value science.
  14. For that reason, the sociological approach has perhaps even less to commend it than the philosophical to someone intent on knowing What is this Thing called Science?
  15. As for scientists themselves, while it is not uncommon for them to have some appreciation of the epistemological philosophy of science (usually a caricature of Popper) and to make any number of informal homespun conjectures as to the sociology of science, neither the philosophy nor the sociology of science as a formal academic discipline seems to be of much use to them.
  16. Is there a way of approaching the question What is this Thing called Science? that keeps on the right side of epistemology and of the colloquial use of the word “science” and addresses the issue in terms that are at least acknowledged by scientists as being relevant to how they actually work?
  17. One problem is that science comprises a wide range of disciplines, each with its own sociology and standards of epistemology. However, scientists of all disciplines frequently mention “the literature”. The literature is the common shared resource to which all scientists contribute and to which all refer when they wish to know what their peers are up to. To be sure, each scientific discipline has its own literature, but equally, the literature is the medium through which scientists of different disciplines often first become aware of each other’s ideas.
  18. “The literature” therefore is a tangible quantity, through which each scientific discipline defines itself but which also provides cohesion to the entire enterprise of science. Moreover, the structure of the literature (what one might call its external structure) reflects the sociology of science while analysis of the “internal” structure may be expected to reveal the epistemology. When we refer to scientific experts, we are looking not only for their first-hand knowledge of experiment and observation, but also a comprehensive command of the literature in the field.
  19. In the posts to follow this one, I intend to look at the structure of the scientific literature as a way into answering the question What is this Thing called Science?



By Peter,

Seeing how people in the past imagined their future that is now our present is always entertaining and sometimes informative. The author of the Found0bjects blog has posted a set of illustrations taken from ‘Drugs’ published as a volume of … Continue reading →


When some subject attracts controversy, there is more to it than mere disagreement. Disagreement need not lead to controversy if the disagreeing parties understand and have learned to live with each other’s point of view. Controversy arises when there is some unresolved tension to be worked out.

The subject of ‘open science’ still attracts controversy because there is no settled coexistence of ‘open’ and ‘closed’ models of science. There is disagreement over just what the “open” in open science should be taken to mean and over what type or degree of openness is the best for science. Those who are enthusiastic about greater openness tend to focus on themes of transparency, accountability, fairness in getting research published and, of course, “free” access to data. Those who still feel skeptical about open science tend to focus on the need to maintain standards of quality and reliability. Because the open science debate largely remains one that is conducted by science professionals for science professionals, tension arises over the extent to which the opening up of science should be allowed to disrupt the established norms of professionalised scientific practice.

One area where the effects of this tension can be seen is in attitudes to the opening of peer review of research reports. A recent high-profile retraction of scientific papers, which apparently drove one of the researchers involved to suicide, led to calls to open up the processes of peer review[*], but the editor of the journal concerned said that, while this had been considered, “the disadvantages — which include potential misinterpretations and the desire of many referees to keep their comments confidential — have prevented the journal from embracing this”[*]. Clearly, there are conflicting motivations here. Regardless of the effects on overall research quality, a major barrier to opening up peer review is the perceived desire of referees to preserve the established norm of anonymity.

In practice, peer review is a process of negotiation between the authors of a proposed research report, editors of the journal to which it has been submitted and reviewers selected on the basis that they are well-informed representatives of the eventual audience for the report. Authors want to get their report published in a journal with a ‘brand’ reputation that attracts the right sort of reader (people who’ll cite the paper, basically). Editors want papers that will reinforce the journal’s reputation for bringing out quality publications of interest to its readership.

Although peer review is widely identified as a cornerstone of quality assurance in institutional science, most people readily admit that it has very obvious faults. Review is entrusted to a small number of individuals whose competence and trustworthiness are judged only subjectively by the editors. While reviewers are supposedly chosen on the basis that they possess a strong understanding of what quality means in relation to the relevant field of research and have a commitment to seeing it maintained, they may have other motives as well, such as getting to see new research results before everyone else or even seeking to influence what results others get to see. Another effect of institutional peer review is that acceptance of a paper for publication itself signals to readers that the work described is worthy of their attention and that the conclusions drawn by the authors are respectable. Individual readers are free to take contrary views, of course, but by doing so, they risk marking themselves as outsiders or even cranks if it’s not evident that many others feel the same way. Even when a post-publication debate takes place on the significance of a paper, there is not usually any mechanism for making the content of the debate a necessary part of reading the paper itself. The interpretations negotiated during the peer review process and set out in the published paper remain the ‘official’ position unless it turns out that the paper contains errors or misdemeanours serious enough to warrant retraction of the paper.

No doubt, there are circumstances where complete retraction is appropriate, but in many cases a discussion of what seems wrong and what remains good about the research report might be quite possible. There are plenty of reasons to believe that far more papers are in need of this kind of evaluation than are ever retracted [*]. There is at least one online forum (PubPeer) that tries to provide this kind of facility. But it is notable that the people who make PubPeer say they have collectively decided to remain anonymous in order to avoid “circumstances in which involvement with the site might produce negative effects on their scientific careers”[*]. Clearly, there is real tension over the idea of open peer review where just anyone can criticise a research report and be identified for doing so.

Perhaps this tension will only resolve itself when an ‘open’ model of science abandons altogether the idea of authoritative research statements as represented by the ‘scientific paper’ and instead sees results only as a stimulus to imagination that engenders debate and motivates further research action.


Following on from my earlier post, I want to look at the concept of the expert in some more detail. The ultimate question to ask about experts concerns the benefits the rest of us receive by referring to them, but to put that in context, and because Michael Pearl actually asked the question, I shall first say a few things about what experts are.

Wikipedia is probably as good an indicator as any of what is generally understood by ‘expert’. There, we read:

“An expert is someone widely recognized as a reliable source of technique or skill whose faculty for judging or deciding rightly, justly, or wisely is accorded authority and status by their peers or the public in a specific well-distinguished domain.”

The essential features of an expert are therefore, first, the possession of a faculty for sound judgement in some specified field of concern and, second, a recognition of this by others. The expert is a distinguished person and it might be pertinent to ask how someone acquires, achieves or is imbued with such a distinction.

The English word ‘expert’ appears to derive from the same Latin root as ‘experience’ [»]. In that, there is an implication that whatever expert authority is accorded to any individual is dependent upon that individual’s experience. Certainly, none of us is born evidently in possession of any special knowledge, skill or powers of judgement. These things appear with growing experience, and any distinguished powers that emerge seem to be largely related to the types of experience lived by the individual concerned, in particular the extent to which that individual’s experience includes practice of the skill in question. It is true that there are innate differences between individuals in their mental and physical capabilities, and it follows that there will be innate differences in their ability to have certain types of experience and in the extent to which any given level of experience leaves them permanently imbued with soundness of judgement or any particular skill. A concert pianist, for example, might be said to demonstrate levels of practical expertise that are forever unattainable by most of us. However, in these discussions I wish to focus on scientific experts, and it is not at all clear that scientific ability is innate. Certainly, if there are among us a special class of human beings born with an innate predisposition to scientific distinction, our systems for training scientists would not seem to be very efficient in recognising it. Thus, while innate differences may play some part in determining who develops powers of judgement worthy of an expert, experience (including education and training) is probably the really decisive factor. In summary, it may be fair to say that experts (scientific experts at any rate) are made, not born.

An implication of this is that while the value we place on another’s expertise is based on their experience as it differs from our own, it is not experience that must necessarily remain forever beyond our reach. Given the time and opportunity, we could acquire equivalent experience ourselves and therefore presumably also the powers of judgement that come with it. From this, it follows that the expert’s soundness of judgement is not something that we absolutely cannot acquire for ourselves. Rather, it is something for which we refer to another because we would rather use our own time for other things. In effect, the use of expert opinion is a way of saving time and effort. In this context, it becomes clear that in any relationship in which one party seeks the opinion of another as an expert, it is the party seeking the opinion, not the expert, who is the instigator. The expert is the subordinate and should provide opinions that serve the other party’s interests as defined by that other party. It is not the place of the expert to presume or define what that other party’s interests are.

To be continued ...


What does it profit us to discover some truth if we have no practical use for it? Conversely, if we derive practical benefits from some piece of knowledge, why would we worry as to its truth?

In this article, Boaz Miller discusses knowledge and practical interests and considers arguments for the assertion that “whether a person knows a certain claim depends not only on the truth of the claim and the evidence she possesses to support this claim, but also on facts about her practical interests and social values”.

One thing I do have difficulty with here (and maybe this is because I haven’t read the books that Boaz cites in his post) is the relationship between knowing and believing. Boaz considers the argument that “if believing a certain claim gives you sufficient reason to act on your belief, then this belief is knowledge”. Does this mean that knowing something involves acting as though it is true, possibly in a way that involves bearing a cost if one turns out to be mistaken, whereas believing means merely asserting that something is true without putting it to the test in a way that might incur a cost if one turns out to be wrong?

Either way, I think these considerations are important in the way we talk about and use scientific knowledge. Even if one believes that science can tell us some ultimate and objective truths about the world, one has to concede that many theories held today will eventually turn out to be wrong in some way (even if only in the fine details). Nevertheless, one frequently has to decide on actions to be taken right now and the theories one has right now are all one has to go on. Therefore, one will act as though one believes the theories to be true, even if one also believes that they will most likely turn out not to be strictly true at some later date. The level of confidence one has in doing that will depend on how much information one has about the extent to which the theories involved have been tested (and the extent to which one believes that information, of course). If one believes the theory to be good enough for one’s immediate practical needs, then one can proceed to act as though it is true, regardless of whether one believes it to contain some fundamental objective truth about the world or not.


Judith Curry posts this pointer to the National Academies Press book “On Being a Scientist: A Guide to Responsible Conduct in Research“.

It’s not clear to me why anyone thinks scientists would be in special need of ethical instruction. Surely, the common virtues of honesty, responsible use of resources and respectful attitudes towards one’s colleagues, as would be desirable in any walk of life, should guide scientists well enough? All the more remarkable that this book seems to be directed at postgrads. Are undergraduates not in need of ethical guidance? Or is it that those who move on into postgraduate research careers encounter norms of behaviour that leave something to be desired as examples of ethics?

Coincidentally, Praj has just posted a quote from Evelyn Fox Keller:

“Science is first and foremost a domain of opportunism”

Might that be a clue?


The Brain and Behavior Blog Contest drew my attention to this post on NeuroDojo, which compares the way original research papers, press releases and textbooks treat the same research topic.

It seems there’s an assumption running through this: that the research paper is primary and that the press release and text books are there to support it.

However, one can just as easily say that each type of document comes from its own genre and serves its own distinct purposes: the paper serves to justify receipt of the grant that funded the research and promotes the professional reputations of the authors among other professional researchers, improving the peer review chances of their next grant application; the press release serves to grab a few seconds of attention from the general public and underpins public sympathy for funding of this type of research; a textbook is a saleable product serving the needs of the undergrad student market that also underpins the academic reputation of its authors.

Seen that way, Spruston’s PR policy is sensible rather than hype. Textbook simplifications are also prudent: students with exams to pass want to get to grips with their subject quickly and not be distracted by exceptions.

It’s also worth noting that the process of simplification seen in textbooks starts in research papers. There will almost always be experimental details that for one reason or another get simplified in (or even omitted from) the paper because the authors judge them to be insignificant. Chances are, there are other investigators who have seen evidence of the phenomenon described by Sheffield et al., but who were looking for something else and didn’t consider it worth writing up.

I will certainly be posting more on this in my ‘Ecology of Literatures‘ series.


Over at Research Cycle Research Daniel Mietchen has posted some interesting comments on the opening of science. Obviously, a lot of the contention around open data stems from tensions inherent in professionalized science itself. Very few scientists would be content with (or could earn a good living by) merely collecting or producing data. They want to interpret the data too; to say what the data mean. And beyond that, it’s nice to be right; to be the one to have the last word on what the data mean.

Now, one can always say “relax – truth will out!” But that takes time. In fact, it may take forever for “the” truth to come out. Even the best scientific theory can expect to find itself one day again under scrutiny when someone decides that what had traditionally been written off as experimental noise is actually a sign of a real and important effect or when it has to be reconciled with a newer, fashionable theory originating from another field.

Meanwhile, the world won’t wait. People want to know what the data mean right now. So, there’s demand (a market, if you will) for a proposed version of what truth could be and the scientist wants to be the one seen to be doing the proposing. In view of that, keeping one’s data ‘closed’ is prudent. It stops others from coming out with their own versions of what the data mean. Likewise, publishing one’s own account in a ‘good’ journal that gets people’s attention, easily trumps considerations of open access. If not, the ready availability of DIY web publishing would have put the journals out of business by now.

My point here is that perhaps the very idea of “scientist” as a vocation or something that offers a distinct career path is bound up with ‘closed’ notions of science. That while the opening of science may well be good for science understood as the rapid and reliable development of understanding of how the world works, it is not necessarily at all good for science when understood as a way of earning a living. Open science poses an ethical question for anyone who identifies him- or herself as a “scientist”: which is more important to you – developing reliable knowledge in the best way possible even if that means having to do something else as well in order to earn a living, or building a career that depends on being perceived as an expert to whom others defer?


Responses to SAPE

By Peter,

Science as a Public Enterprise (SAPE) is an initiative of the Royal Society, which intends it to “ask how scientific information should be managed to support innovative and productive research that reflects public values”. The matter is being overseen by a “high-level working group” who have issued a call for evidence. The preferred route of response is by way of this form of set questions.

I’m not sure I’ve got that much in the way of material evidence to submit, but I couldn’t resist framing a few replies to their questions. In the spirit of openness, I’m publishing my draft replies here before submitting them to the RS. I welcome comments, criticisms and questions.

SAPE: What ethical and legal principles should govern access to research results and data? How can ethics and law assist in simultaneously protecting and promoting both public and private interests?

Research results and data are essentially documented accounts of certain experiences. Experiences in themselves are necessarily private, but documents containing accounts of them can be sold, shared or published. Arguably, the exchange of information through the distribution of such documents is the essence of science (as opposed to anecdotal knowledge), but that doesn’t automatically mean there’s an obligation to exchange on terms dictated by others.

Conventionally (in law), such documents are ‘works’ and any copyright or other rights as may exist in them belong to the authors, or in the case of works made for hire, to the author’s employer. One may sometimes hear the idea that research data can be owned being described as “absurd” or “preposterous”, but it is simply a matter of acknowledging that the carrying out of research has a cost and that the party who bears that cost has some priority in deciding how the results should be used. Only rather rarely nowadays is that cost really borne by the scientists carrying out the research as they are usually compensated by being paid for their time.

In line with that, research conducted by scientists in industry is generally regarded as works made for hire without problems. With academic scientists, the situation is more complex since some research funding (possibly the majority nowadays) is made in support of a particular research project. As such, it could be argued that the research belongs to the party who paid for it and that it is a work made for hire. However, some academic research is still funded on the basis of supporting individual researchers, a department or institute and the ownership of this would need to be the subject of local agreement.

The choice of who has access to research results and on what terms should ultimately be that of the owner of the rights in them, although provision should be made to legally oblige the owner to allow access or prohibit the owner from allowing access in certain cases where the public interest would otherwise be damaged. Under such circumstances, the owner should be entitled to just compensation for any consequential losses.

SAPE: How should principles apply to publicly-funded research conducted in the public interest?

The rights in any research entirely funded by the public should be automatically assigned to the public. The decision on whether to publish or otherwise distribute then lies ultimately with the public, and any decision made on the public’s behalf has to be made in the public interest by accountable representatives. It follows that any assignment back into private hands should itself be made in the public interest.

SAPE: How should principles apply to privately-funded research involving data collected about or from individuals and/or organisations (e.g. clinical trials)?

The key thing here is “data collected about or from individuals and/or organisations”, regardless of whether the research was privately or publicly funded. Information about legal persons that is not already publicly available should be used for research only with the explicit consent of each individual concerned and with an express statement of the limits on use. Ethical review should be made of the process used to gain such consent. Publication or other distribution of documents containing such information can then occur only within the limits set by the subjects’ consent.

SAPE: How should principles apply to research that is entirely privately-funded but with possible public implications?

If publication of the research would allow members of the public to avoid otherwise likely positive harm, then obligatory publication should be possible, subject to the proprietors of the research receiving just compensation for any consequential losses. Simply asserting that publication would benefit the public interest should not be sufficient to force publication.

SAPE: How should principles apply to research or communication of data that involves the promotion of the public interest but which might have implications for the privacy interests of citizens?

I’m not sure this is the kind of question that can be answered in general terms. Clearly, “the public interest” does not uniformly match up with everyone’s individual interests. Conflicts can only be decided on a case-by-case basis. Obviously, there has to be a right to petition for redress for anyone who feels “the public interest” is being interpreted in a way that infringes their private interests.

SAPE: What activities are currently under way that could improve the sharing and communication of scientific information?

“Scientific information” can mean two things: information in the data that scientists collect or produce; and information in knowing what scientists say the data mean. With regard to the first, the so-called “open data” initiative stands to make the biggest difference by advocating the indiscriminate online publication of “all” data (as opposed to only data selected to illustrate points made in scientific papers). There would appear to be a strong case that this will allow better use of data though there are problems around the costs of digitising certain types of information and standardization of data formats. There are also issues around how a commitment to open data conflicts with the career interests of professionalized scientists, or could encourage overenthusiastic publication of data without the owner’s consent. Overall, developing working open data initiatives would appear to be an area deserving of funding support in its own right.

SAPE: How do/should new media, including the blogosphere, change how scientists conduct and communicate their research?

The ease with which information can be published online means that freely available, informal, non-peer-reviewed publications (such as blogs) containing scientific data and/or discussion as to what the data mean could proliferate and eventually displace traditional scientific journals as the preferred source of scientific information. This might make scientific information more easily available to those not working within industry or academia. The withering authority of the peer-reviewed journals might also result in less readiness to take the integrity of published data on trust and less readiness by individuals to fall in with ‘leading’ opinions with regard to data interpretation. Science could come to be seen less as a quest for truth – a set of limits within which all things must operate – and more as a search for pointers to what might be possible – a set of priorities for the next step in one’s own ongoing investigations.

SAPE: What additional challenges are there in making data usable by scientists in the same field, scientists in other fields, ‘citizen scientists’ and the general public?

The general problem associated with making data suitable for use by others than those who generated them (“cross-use”), is that data on their own are meaningless. One must also know how they were produced, what they represent. Information relating to this is sometimes called “metadata”.

Cross-use of data by scientists in the same field can be relatively unproblematic in this technical sense because there is usually an implicit shared understanding of the experimental and observational fashions in the field. Only brief summary details of what the data represent are necessary for understanding. Of course, cross-use by scientists in the same field presents the greatest political/economic problems in professionalized science, because it is in this case that the scientists from whom the data originated face the greatest risk of losing priority over their interpretation.

The problems that metadata quality poses for cross-use become more acute if we wish to allow cross-use by scientists in other fields or by those working outside professionalized science. The quantity of necessary metadata can become substantial. It will be necessary to develop standardized formats not only for shared data but also for metadata, to allow commensurability between data sets.
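The point that data on their own are meaningless without metadata can be sketched in code. The following is a minimal, hypothetical illustration: the field names and the crude usability check are invented for this example, not drawn from any real metadata standard.

```python
# Raw data alone: numbers with no recoverable meaning for anyone
# other than the original investigators.
raw_data = [21.4, 21.9, 22.3, 21.7]

# The same data wrapped with the kind of metadata a scientist in
# another field (or a 'citizen scientist') would need for cross-use.
# All field names here are hypothetical.
shared_record = {
    "data": raw_data,
    "metadata": {
        "quantity": "sea surface temperature",
        "units": "degrees Celsius",
        "instrument": "moored buoy thermistor",
        "sampling_interval": "daily mean",
        "location": {"lat": 50.1, "lon": -5.3},
    },
}

def is_cross_usable(record):
    """Crude check: cross-use requires at least units and provenance."""
    meta = record.get("metadata", {})
    return "units" in meta and "instrument" in meta

print(is_cross_usable({"data": raw_data}))  # bare data: False
print(is_cross_usable(shared_record))       # annotated data: True
```

A standardized metadata format would, in effect, fix the set of required fields in advance, so that a record produced in one field could be evaluated and reused in another without negotiation between the parties.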

SAPE: What might be the benefits of more widespread sharing of data for the productivity and efficiency of scientific research?

In principle, making data available can contribute by reducing the number of occasions on which investigations are carried out into something that is already known and by increasing the number of occasions on which a given data set is used to answer different questions from those for which it was originally compiled. It does not, however, do this on its own. Before the availability of shared data can make a difference, one has to understand what range of possible data types might be helpful in advancing one’s own project, know how to look for them and know how to use the data once found.

SAPE: What might be the benefits of more widespread sharing of data for new sorts of science?

By encouraging the re-examination of existing data sets from new perspectives, data sharing could help the development of scientific investigation along lines not previously envisaged. Data sharing also encourages a “new sort of science” in another sense: it separates the collection of data from the theoretical interpretation of data. Traditionally, possibly as a legacy of the self-funded “gentleman scientist” era, many types of science have been carried out in a way that implicitly understands the collection of data and their interpretation to be the work of one person or a small group of people all working together (the “authors”). Of course, this vision of science has, since the mid-twentieth century, been in competition with “big science” where research projects are managed efforts carried out by organized division of labour between large numbers of research workers. This probably began with the Manhattan project. More recent examples might be the human genome project and the Large Hadron Collider project. More recently still, we have seen the rise of what might be called “stakeholder science” where scientific debates have taken place outside professionalized science, sometimes with direct participation of groups and individuals from outside professionalized science. Examples might be that of climate science and “global warming” or the debate around the science of autism and use of the MMR vaccine. Data sharing may encourage and accelerate the growth of these “styles” of science. A long term effect might be that professionalized scientists are seen less as “expert witnesses” and more as “knowledge workers” and that scientific problems are not seen as self-contained but rather as part of broader political or economic problems.

SAPE: What might be the benefits of more widespread sharing of data for public policy?

Data sharing might help crystallize ideas about what specific investigations need to be made to answer questions relating to policy formulation. It might be easier to discover if such investigations have already been made. However, without a warranty as to the authenticity of the data (problematic in a “free” sharing ethos), it would be unwise to take such data on trust. Specific audited studies would still have to be commissioned in most cases.

SAPE: What might be the benefits of more widespread sharing of data for other social benefits?

As above, it might help crystallize ideas about what specific investigations need to be made, but without some kind of trusted third party to act as guarantor of data authenticity, it might be unwise to take data on trust.

SAPE: What might be the benefits of more widespread sharing of data for innovation and economic growth?

As above.

SAPE: What might be the benefits of more widespread sharing of data for public trust in the processes of science?

Initially, the availability of data additional to that used to illustrate the points made in research papers and even so-called “negative” data could improve public trust. In the longer term, a proliferation of data from unverified sources could have the opposite effect.

SAPE: How should concerns about privacy, security and intellectual property be balanced against the proposed benefits of openness?

The desire for privacy, security and ability to earn a return for making one’s original ideas available to others are natural human concerns. Irresponsible data sharing could easily lead to transgressions of those concerns. Everything depends on the data sharers behaving responsibly. This, in turn, depends on them understanding first that being a scientist does not in any way diminish one’s common ethical obligations as a citizen, and second, on understanding to whom (as well as to the public generally) they owe specific obligations in any particular case. This would include parties to whom the data relate directly and parties who funded the research either expressly or by way of funding the supporting infrastructure. Ethical frameworks of this kind already exist for research that involves studies of human subjects (such as clinical trials) and suitable components of these could be extended to cover other types of research.

SAPE: What should be expected and/or required of scientists (in companies, universities or elsewhere), research funders, regulators, scientific publishers, research institutions, international organisations and other bodies?

Following the above, subscription to an ethical framework that sets out guidelines for understanding who all the stakeholders are (individuals and collectives) in a given research project and what obligations one owes to each of them.

SAPE: Other comments?

Science as a public enterprise can achieve and should aim to achieve more than just opening up access to scientific information. Science is presently understood largely as a professionalized activity that produces data in accordance with self-referring criteria of evaluation with funding from well-financed organizations (business corporations, governments, charities). Whether the funding is public or private, the scope of scientific investigations is formulated in response to standardized or homogenised concepts of market demand or public interest. In this model, science is effectively closed to the disparate minority interests of individuals or small collectives. At present, such minorities have to pursue their interests without the benefits of scientific investigation or have to reformulate their interests to align them with the homogenized or standardized ‘administrative’ interests of the organizations. A task for open science is to lower the financial and cultural entry barriers to instigating scientific research projects in response to disparate minority interests. Some of the requirements would be:

  • To help “lay” people articulate their interests in terms that reveal what types of scientific information could inform them. Informal learning initiatives such as Peer to Peer University might help with this.
  • To find which parts of those investigations have already been performed or are in common with the interest requirements of other parties.
  • To encourage instigation of short-lived, low overhead “collective experimentation” networks to carry out research projects with component parts distributed across multiple sites.
  • To encourage more specialized scientists to become “citizen scientists” prepared to engage with disparate citizen interests rather than see them as funding opportunities for scientific interests.


In an earlier article, I briefly mentioned the two basic points of view from which funding for scientific research can be justified: funding for specific scientists on the grounds that one wants a vibrant research culture; or funding for specific research projects on the grounds that one wants answers to scientific questions. I was therefore pleased to discover this draft paper, Incentives and Creativity: Evidence from the Academic Life Sciences by Pierre Azoulay, Joshua Graff Zivin and Gustavo Manso, in which the authors compare the effects of these funding policies on scientists’ behaviour. The study compared scientists funded by the Howard Hughes Medical Institute (HHMI) with a comparable group of “Early Career Prize Winners” (ECPWs) whose research was supported by National Institutes of Health (NIH) funding. HHMI funds “Investigators” as individuals. In the words of the HHMI website: “HHMI urges its researchers to take risks, to explore unproven avenues, and to embrace the unknown—even if it means uncertainty or the chance of failure.” According to the paper, HHMI is tolerant of failure in the early stages and provides its recipients with access to peer group feedback on their research throughout. The ECPW control group was chosen as having similar overall research accomplishments to those in the HHMI group prior to the HHMI group receiving their HHMI support, and who subsequently went on to perform research with NIH funding. In contrast to HHMI, NIH awards grant funding in support of specific research project proposals, and researchers who fail to achieve project goals are unlikely to have their grants renewed.

The authors found that HHMI-funded scientists were more strongly represented than those in the ECPW control group as authors of both the most highly cited publications and of the most rarely cited. In other words, HHMI-funded scientists had both more successes and more failures than the control group. Moreover, there was a greater proliferation of keywords associated with the publications of the HHMI-funded researchers after their appointment than in controls. Altogether, these observations are taken as an indication that because the consequences of short-term failure were ameliorated for them, HHMI-funded scientists were more willing to take the risks associated with a more exploratory and serendipitous approach than their project-funded counterparts. To put it another way, HHMI scientists were able to be more “creative”. That is, they were freed from the constraints of having to answer questions contractually agreed at the outset and allowed to answer questions of their own choosing; preferably, one presumes, questions that no-one else had before thought of asking.

I suspect that a lot of scientists will enjoy hearing this, which might be taken as some kind of justification of the “Haldane Principle”. However, the authors stress that their study should not be taken as a criticism of the NIH or of project-oriented funding. They point out the difficulties in making investigator-oriented funding work on a larger scale and the need for ready political accountability in decisions involving the distribution of public funding. Analogous constraints apply in corporate R&D, where accountability to shareholders has to be maintained.

There are interesting implications here for the criteria (“metrics”) used to evaluate individual scientists’ performance in academia and in industry. In academia, there are grumblings about the limitations of using publications and “impact factors” as measures of a researcher’s worth. In industry, there are alternating waves of enthusiasm for either “blue skies” or nose-to-the-grindstone project-managed research. However, consideration of successful and unsuccessful attempts at innovation (see for instance Why Innovation Fails by Carl Franklin or How Breakthroughs Happen by Andrew Hargadon) suggests that no matter how good they are as scientists, researchers who give birth to innovation that actually changes how people live have their thinking realistically attuned to the needs of subsequent commercial development even as they think up those previously unthought-of questions. The best researchers give their best when they are embedded in a culture that gives them incentives to be exploratory that are also tied to the business aims of the institution in which they work.


Redefining Science

By Peter,

On his Labcoat Life blog, Khalil A. Cassimally considers the problems of defining science. He considers in turn theories backed by evidence, the search for truth, finding new and unexpected things, the scientific method and finally settles for the British Science Council’s definition of “the pursuit of knowledge and understanding of the natural and social world following a systematic methodology based on evidence”.

However, there’s another side to science that none of this addresses: the existence of scientific institutions. Science can be seen as a kind of knowledge or a way of acquiring knowledge, but the word is also used in connection with various institutions: universities, funding organizations, journals, learned societies, professional bodies. These support the work of professional scientists in various ways and also set norms of scientists’ behaviour. It could be interesting to consider the extent to which the value anyone places on the work of professional scientists depends on their association with these institutions. For instance, are research results published in Nature seen differently than they would be if the same results were published simply on the researcher’s personal self-hosted website? If so, in what way and why?

If we accept that science lends itself to both types of understanding (a kind of knowledge; a kind of institution), then we should consider the relationship between them. The simplest kind of relationship would be one of correspondence such that knowledge originating from a scientific institution is necessarily deemed to be scientific and vice versa. If that is not the case, then either significant amounts of scientific knowledge originate from outside the scientific institutions, or significant amounts of knowledge originating from the scientific institutions are not scientific. In the first case, we would then have to ask why we would give special attention to the knowledge originating from scientific institutions and in the second, we would have to ask why we would regard knowledge originating from scientific institutions as being especially reliable.

Of course, it’s possible to admit that the correspondence between kinds of knowledge and kinds of institution is not perfect and that both of the situations referred to above occur to some extent. Defenders of the institutions may say that they perform some supportive or supplementary function; that we get better quality science or better value science by virtue of the institutions being there. To take my example above of journals vs. website publishing, it may be said that work published in Nature is more worthy of attention than work published on a personal website because research published in Nature has been peer-reviewed. However, the reviewers generally have to base their opinions entirely on what they see in the submitted manuscript. Their ability to judge the scientific strength of the work described is therefore no greater than the general readership’s would be if the manuscript were simply published anyway. In other words, we don’t need peer-reviewed journals to help us decide what is scientifically valid and what isn’t; we can decide that for ourselves on the strength of what the reviewers themselves see – the manuscript. For a prestigious journal, the number of scientifically valid manuscripts received will generally exceed the number that can be published. The decision on what to publish is an editorial one, based on which scientifically valid work is furthermore deemed to be important or worthy of our attention.

The effects of peer review are not limited to determining what gets published in journals. Peer review of one kind or another runs through most scientific institutions. It determines not only what gets published but what research proposals are deemed worthy of funding when the number of scientifically valid proposals exceeds the scope of available funds; who gets a job when several scientifically qualified candidates are available; and which scientists receive honours and which don’t. Clearly, whatever the value of scientific institutions in safeguarding proper standards of scientific knowledge, another important effect they have is to promulgate a sense of what is important within the body of scientific knowledge. This raises the questions of who makes those choices, and on what basis.


“Science is about sharing and should be accessible to everybody”

… so says the caption at the start of this video made by OpenScienceSummit. That statement is not necessarily meant to be a comprehensive description of science, of course, but it does presumably reflect the priorities of those who advocate Open Science. As such, it is striking that it omits to mention the idea that science is about producing reliable, empirically tested knowledge that confers practical benefits. Possibly, that is seen as something that can be taken for granted. It is the social nature of science that still has to be argued for.

While it is certainly possible for someone working in isolation to produce empirically tested knowledge that confers practical benefits, it is also fairly obvious that sharing of ideas and observations allows for a greater diversity of hypotheses to consider and a greater range of experience to test them against. Likewise, a greater diversity of perspectives and ingenuity can only result in greater overall practical benefit being derived from any given expression of scientific knowledge. As soon as there is any sense of competitive urgency or ambition about the scope of production of empirically tested knowledge that confers practical benefits, it is advantageous to work socially.

How can Open Science encourage or optimise the benefits of this social aspect of science?

Science, as we understand it today, is largely produced by professional scientists with specific kinds of education and training, working with equipment and facilities not usually found outside the world of professionalized science. To a large degree, the principal influences on their choice of research problem and the principal audience for their reporting of results come from within that world. If we are interested in allowing those from outside the world of professionalized science to realise the greatest overall practical benefit from scientific research, we need to see to it that the choice of research problems and the reporting of results are done in ways that take into account the perspectives of people from outside that world. One question for advocates of Open Science is therefore whether Open Science helps achieve that aim.

The term Open Science has been used in connection with a variety of concerns and its overall meaning arises as a summary of those various contexts. The short Wikipedia entry on Open Science describes it as “the umbrella term of the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge”.

A few readily-found examples of initiatives branding themselves as Open Science include: the OpenScience Project, “dedicated to writing and releasing free and Open Source scientific software”; the Open Science Grid, which “advances science through open distributed computing”; and the Open Science Directory, “a global search tool for all open access and special programs journal titles”. A blog post at the Open Science Project answers the question “What, exactly, is Open Science?” with: “transparency in experimental methodology, observation, and collection of data; Public availability and re-usability of scientific data; Public accessibility and transparency of scientific communication; Using web-based tools to facilitate scientific collaboration”. Open Science Summit answers the question “What is Open Science?” with “Science in the 21st century using distributed innovation to address humanity’s greatest challenges”.

The Open Science Federation aims “to improve science communications” with the participation of “open source computer scientists and citizen scientists, science writers, journalists, and educators, and makers of and advocates for Open Data, Open Access, and Open Notebooks”. The federation’s own contribution appears mainly to involve encouraging the use of blogs and online social networking media. From examples like these, it is possible to identify several specific areas of concern to Open Science advocates. Altogether, I have identified the following specific topics, each of which seems to have a significant following among Open Science advocates:

Open Data

Readiness to make one’s data available to others is fundamental to good scientific practice. It bolsters confidence in one’s conclusions and allows alternative interpretations to be developed. The Open Data principle attempts to consolidate this into express obligations: first, to make all data (including so-called “negative” data from studies or experiments that seemed to lead nowhere) available and, second, to make it available on terms that allow others to reinterpret and re-use it freely. This provides the possibility of allowing as many uses and interpretations of a given dataset as possible. There are limitations, of course. Some datasets will be proprietary and there is not always a clear boundary between “data” and anecdotal accounts of observations. More seriously, the effective and efficient use of datasets requires standardization of comprehensive metadata and some standardization in the formatting of datasets themselves. These requirements are well advanced in some fields of science, but much less so in others, where significant investment in standardization would be required to realise the benefits of Open Data even if willingness to make data available is well established. Establishing standards for all types of data is a significant undertaking and may even confound the openness of scientific enquiry by placing constraints on the type of data to be collected. At the same time, such standards assist in establishing commensurability between datasets collected in differing contexts, which could help diversify scientific enquiry. Until those considerations are addressed, professionalized scientists are likely to choose problems for study in much the same way whether they have a view to making their data open or not.

Open Source

For some advocates of Open Science, the term is largely synonymous with an insistence on making scientific software Open Source. In fields of science where very large datasets are generated, data analysis may rely on specialist software. Publishing the source code of such software allows the manipulations it carries out to be properly understood by everyone with an interest in knowing, helps the discovery and elimination of coding errors and could accelerate the development of new or improved software to extend and diversify possible analysis. It is really just one aspect of the long-established scientific principle of disclosing one’s methods. While the term “Open Source” need not imply any more than disclosure of the source code, it is often accompanied by an expectation of freedom to use as well. This ensures that the above benefits are realised as broadly as possible. However, the Open Source principle does not in itself do anything to open up choice of what datasets are desirable in the first place.

Open Access

Allowing free access to and distribution of scientific literature helps make the results of research more widely available. Open Access journals or, failing that, do-it-yourself web publishing must by now be within the reach of just about any professionalized scientist. Nevertheless, for many (most?), the priority remains to publish in a ‘good’ journal even if that means libraries and individuals will be charged hefty subscription fees. This priority is professional, not scientific: a ‘good’ journal will attract readers and enhance one’s CV better than any of the open options. Much is said by Open Access advocates of how this professionalism limits access to science by those who could use it from outside the circles of professionalized science. Another possible effect of this ‘professionalist channelling’ of scientific publication into ‘good’ journals, much less discussed, lies in the uniformity it imposes on perceptions of the value of topics to be researched. Researchers gear their research priorities to what will get published in a ‘good’ journal. This effect will be pretty much the same whether the ‘good’ journals are open access or not.

Open Proposals

In principle, the idea of opening up the drafting of research proposals presents an excellent opportunity for “lay” people to participate in deciding the direction of scientific research. It changes the relationship between publicly-funded professional scientists and “society” from one in which the existence of a class of professionalized scientists is seen as a public good in its own right to one in which the public good stems entirely from the extent to which professional scientists bring their specialist knowledge and expertise to bear on problems selected by the public. This does represent quite a shift from the way things are generally done at present, of course. Not least, it requires a change in the way that the world of professionalized science sees itself. Even scientists who are apparently committed to “the re-use and re-purposing … of scientific knowledge through collaboration between the scientific community and the wider society” go on to represent that collaboration like this. The research world is portrayed as something separate, even remote, from “society”. Research provides society with information through education and publishing and is itself influenced by society through “policy”, volunteer work and “citizen science” (more about that below). Behind this type of representation is a tacit assumption of research (professionalized science) leading an autonomous existence, almost as though science were itself a natural phenomenon outside the bounds of human society. In contrast to that, there is the view of science as a set of customs followed by a set of people in pursuit of relating to others and making a living for themselves within society.


In his book Reinventing Discovery, Michael Nielsen features the Polymath Project, initiated by Cambridge University mathematician Tim Gowers, as an example of crowdsourcing in science. Gowers used the reach of online social networking to swiftly form a virtual community of people interested in collaborating on developing a mathematical proof. This community was informal and non-professional. Amateurs could join in on practically the same basis as professionals. Individuals contributed only as much as they wanted to. A critical contribution might come from anyone at any time. They had a solution in only a few weeks.

Presumably, at each step of the way, the size of the community was enough to ensure that someone would already be thinking along the right lines to suggest the next step. One wonders whether an entirely professional collaboration, probably of fewer people and united as much by considerations of professional or expert status as by their interest in a specific mathematical problem, might have been keener to preserve orthodoxy in their thinking and would have taken longer.

A deprofessionalized crowdsourcing approach of the type exemplified by Polymath might also be used in experimental science to choose specific hypotheses for investigation, to design experiments that properly test a chosen hypothesis, or to evaluate hypotheses against data. Potentially, then, crowdsourcing could open science up to participation by non-professionals and, by that token, to some extent, direction by non-professionals. However, one has to question how far this could progress before running into difficulties. To what extent does Polymath’s success stem from it having been instigated by an individual of Gowers’ status? His reputation meant that a lot of people were already ‘watching’ (i.e. reading his blog) when he first started it. It also gave a kind of credibility to his selection of problem to work on. The collaboration formed quickly because a lot of people were aware of it as soon as it was announced and because Gowers’ involvement gave them confidence that the project would go somewhere. Gowers’ choice of problem seems to have been made on the basis that there was academic mileage in it. In other words, his career priorities would be served by it. That wasn’t necessarily true of other participants. Indeed, for non-academic participants there was little to gain other than the amusement value of participating itself. To be sure, Gowers gave up the kind of exclusivity of authorship he might have had from a more conventional way of working, but he retained “authorship” of the choice of problem to be tackled in the first place. To what extent can we expect a similar crowdsourcing approach to work for just anyone who has a problem they feel unable to solve themselves? Further difficulties become visible when one considers what might happen when the proposed problem is being solved in pursuit of some further practical purpose. 
How might a crowdsourcing collaboration fare when working on a problem connected with potential responses to global climate change or with mass vaccination proposals?

Science Communication/Public Understanding of Science

The world of professionalized science consists largely of networks of people who have undergone extensive formal science education and training and who talk to one another using specialized language. It’s difficult for an outsider who wants informed answers to specific questions to just dip into the primary scientific literature and get what they want. This is not only because there is a specialized vocabulary to learn, but also because the professional scientific literature follows the research priorities of professional scientists. If the questioner does not frame his or her question in relation to those priorities, it will be hard to relate what is found in the literature to the question, even if the vocabulary has been mastered. Science Communication and Public Understanding of Science are attempts to bridge this gap by training scientists and journalists to explain science in ‘ordinary’ terms. These initiatives could, in principle, foster a general understanding of science that could help members of the “lay” public articulate their interests into proposals for research. In practice, however, much of what is produced under these headings at present is either aimed at persuading the public that the projects of professionalized science are aligned with their interests and therefore worthy of public funding support, or at showing how existing scientific knowledge can inform government policy decisions. If Science Communication and Public Understanding of Science are limited to communicating the research priorities of professional scientists to the public and understanding how science can inform the decision-making of professional politicians, then they ‘open’ science only by providing a window through which the public can gaze as an essentially passive audience. They don’t open the way to direct public involvement in driving local research priorities.

Volunteer Science/Citizen Science

The term “Citizen Science” is used to mean different things by different people. To some, it means professional scientists recruiting members of the public to assist with data collection or data analysis. In some online research projects this has involved large numbers of informal volunteer researchers. Accordingly, I would prefer to call it Volunteer Science. While Volunteer Science certainly allows the public to get involved in research, it is rather like crowdsourcing in that most such projects so far seem to rely on direction by professionals. It remains to be seen whether online networks of ‘lay’ people who have a common civic interest or problem thought to be amenable to scientific investigation can recruit professional scientists.

Such engagement of professional scientists with those outside the world of professionalized science is described by Jack Stilgoe in Citizen Scientists – Reconnecting Science with Civil Society. Stilgoe’s Citizen Scientists are “people who intertwine their work and their citizenship, doing science differently, working with different people, drawing new connections and helping to redefine what it means to be a scientist”. The Citizen Scientist is motivated by a sense of engagement with civic interests that not only permeates, but actually drives his or her research interests. Research priorities are chosen not on the basis of what maintains one’s reputation and status within a professionalized science community linked by a professional interest in science, but rather on the basis of how they contribute to the needs and ambitions of a civic community linked by place or civic tradition.

How might Open Science contribute to the advance of Citizen Science?

Opening Up Open Science: The Possibility of Civic Research

We have seen that the concept of Open Science encapsulates a variety of initiatives, each of which encourages more openness, closeness or collaboration between the various parties involved in the scientific enterprise. Advocates of Open Science generally argue that the benefit of these developments is that they will accelerate the advance of science. In Reinventing Discovery, Michael Nielsen looks forward to “a new era of networked science that speeds up discovery” and assures us that this “will deepen our understanding of how the universe works and help us address our most critical human problems”. Inherent in that is the idea that science can and will (eventually) answer every question. Maybe so. But who decides which questions we tackle first? As I’ve tried to argue above, most of the initiatives of Open Science leave that question open to the status quo. In effect, that means professional scientists acting within the culture of professionalized science itself in conjunction with government or other organizations that sponsor them. There is a presumption that these are effective at deciding what “our most critical human problems” are and then translating them into the most appropriate courses of action for scientists.

Many civic organizations, associations, networks and individuals perceive issues or problems in relating their own particular interests to those of other members of society. While resolution of such issues is ultimately political, progress towards resolution may be advanced in some cases by some kind of scientific investigation. Such investigations we may call ‘civic research’. Groupings that might instigate such research could include NGOs, patient advocacy groups, consumer rights groups, local residents’ associations concerned about environmental contamination or pollution, farmers concerned with land stewardship and others. Because the concerns of such civic groups are ultimately political, because the types of scientific investigation they want often do not align well with the priorities of professionalized scientists and because such groups often do not have sufficient funds to engage the services of scientists on a professional contract basis, working with them is often not attractive for professionalized scientists. For engagement with such groups to become attractive, scientists have to be personally motivated by the political objectives, not just the ambition to pursue a scientific career. The scientist is a committed political actor whose contribution to the project happens to take the form of scientific knowledge. The science is as overtly political as the aims of the group. Nevertheless, to be effective in that role, to bring to it the benefits of as broad a range of scientific experience and understanding as it can use, scientists need to be connected to and to draw upon the world of Open Science. Although that world is not aligned with any specific civic commitment, it can inform countless committed initiatives. Its openness also allows it to draw upon, integrate and grow from the submitted experience of countless scientists individually committed to overtly political programs of civic research.

Is the possibility of civic research an alternative to professionalized science? On the basis of the above, it seems that if Open Science can open up the prioritization of research problems to be addressed, it could sustain a lot more civic research than currently takes place. Moreover, if Open Science can create standards in dataset and metadata format, then the results of civic research projects could be more readily integrated into the Open Science knowledge base itself. Civic research could grow, be sustained by and eventually sustain the scientific knowledge base without the need for professionalized science. I intend to look more closely at this question in future posts.


George Monbiot has just published an article on the very high subscription rates charged by certain publishers of ‘high impact’ scientific journals (see “The Lairds of Learning” on George Monbiot’s own website or the Guardian here). He does not hesitate to brand commercial publishers of academic journals as “the most ruthless capitalists in the Western world” and suggests that “the racket they run is most urgently in need of referral to the competition authorities”. That might be the case if they really were running a monopoly or cartel, but are they?

Professor David Colquhoun drops in to comment on Monbiot’s article at the Guardian, saying:

“I see no reason to have academic journals at all … We can publish our own papers on the web, and open the comments. It would cost next to nothing.”

So where’s the monopoly? All scientists and other academics have to do is put their papers on the web.

George Monbiot says he wants governments to “work with researchers to cut out the middleman altogether, creating … a single global archive of academic literature and data. Peer-review would be overseen by an independent body. It could be funded by the library budgets”, but why? Colquhoun’s suggestion could be up and running by tea time. No need for government meddling! It is already completely within the power of the great majority of academics from now on to make the results of their research freely and widely available by self-publishing on the web. So why don’t more of them do it?

Monbiot has an answer:

“The reason is that the big publishers have rounded up the journals with the highest academic impact factors, in which publication is essential for researchers trying to secure grants and advance their careers.”

Is that it? The making of “coherent democratic decisions”, the “tax on education”, “a stifling of the public mind” and the apparent contravention of the Universal Declaration of Human Rights that George Monbiot points to are all trumped by academics’ need to secure grants and advance their careers?

The peachy business that the commercial publishers of academic journals enjoy wasn’t really engineered by them. They are just taking advantage of a situation that academics find themselves in: the need to publish in ‘high impact’ journals.

On the face of it, it seems ridiculous that the value of someone’s research should be based on which journals they publish in. Essentially, going by journal ‘impact factors’ is just a way of judging the papers without having to actually read them. It’s like judging a man by the clubs he’s joined rather than by getting to know the man himself. Nevertheless (we are told), this is the basis on which academic careers are built today.

So how did this situation arise? Is it what academic researchers chose for themselves? If not, how is it that they, of all people, have not managed to offer their paymasters (that is, ultimately, you and me) a coherent alternative account of why the public money that supports academic research should indeed be spent that way, rather than on better public healthcare, better schools or better social services?

George Monbiot wants to go after the publishing houses, but in the light of David Colquhoun’s observations, they’re an irrelevance. The real issue here, I would suggest, is the inability or unwillingness of academic researchers to set criteria for evaluating their own research that don’t just sound like self-interest and that they actually want to live by themselves. Any number of claims that academic research is culturally enriching, or improves education or reduces suffering or makes for better public policy decision making may be true, but the choices of publication route still actually made by many researchers suggest that their university careers are more important to them than any of those things. Could it be that the publishing houses’ profits are just one consequence of publicly-funded science being largely carried out by people who don’t themselves believe the claims made for its value as a public good?



By Peter,

Nature is not a Book (NINAB) is a blog about science, but it’s not a science blog. It’s possible that you may learn something about science here, but I would think it unlikely that you will learn any actual science.

When we use the word “science”, we may use it to stand for two quite distinct things. First, it may stand for a certain standard or quality of knowledge about the natural world; a knowledge that is deemed to go beyond anecdotes of the here and now. Second, it may stand for a set of practices and the institutions that encourage their repetition across society and through time.

The relationships of necessity and sufficiency that may or may not exist between these two aspects are the principal subject matter of NINAB. In particular, NINAB takes the production of a scientific literature to be a central characteristic of science that links the two aspects and therefore mediates relations between them. The literature is one characteristic of science that remains recognisable across practically all scientific disciplines and is also the medium of linkage between them. As such, it supposedly constitutes the most complete account of nature available. This written “book of nature” mirrors the traditional philosophical and theological metaphor in which we can supposedly read nature like a book: as a story with a beginning, an ending and a plot that links them. The book of nature constituted by scientific literature, while continuously developing, is at any point in time inevitably partial in scope and shaped by the conventions and figures of speech it employs in its explanations. NINAB aims to investigate the extent to which (if at all) the path of its development is dictated by nature itself (or at least by the way that we must encounter it) rather than being a contingency of human history or an artefact of the institutionalised practices employed in its writing. The title of this blog is intended to express scepticism that the superabundance of nature may be adequately represented through the metaphor of a book.

Although it calls itself a blog, NINAB is not concerned with maintaining a high profile on anyone’s “blogging community”. It exists as a place for its author to develop a certain way of writing. Individual posts are part-sketches for a projected future complete written work. They are offered publicly in the hope that reader responses will be helpful in making that happen.


In the first article in this series, I examined the shortcomings of both philosophy (epistemology) and sociology in answering the question What is this Thing Called Science? I proposed that an examination of science-as-literature could be a basis for answering the question, since not only is the importance of “the literature” acknowledged by scientists themselves, but its structure is likely to reflect both the epistemology and sociology of science.

The Wikipedia article on ‘Scientific literature’ says:

“Scientific literature comprises scientific publications that report original empirical and theoretical work in the natural and social sciences, and within a scientific field is often abbreviated as the literature. … Scientific research on original work initially published in scientific journals is called primary literature.”

Many professional scientists would probably agree that accounts of original empirical investigations published in scientific journals (what I will here call Research Reports) constitute the “primary” scientific literature. The assumption is that since the main task of science is to collect and document empirical observations of the world and since Research Reports are where such observations are first reported publicly, then every other kind of scientific literature must necessarily derive from Research Reports.

Some might argue that records of raw experimental data (instrument readouts, lab notebooks and so on) have a stronger claim on primacy since it is from those that the papers published in scientific journals are composed. While it has not generally been the custom to publish or widely distribute raw experimental data (and one might on that basis question whether such records qualify as “literature”), to do so would be entirely consistent with the ideal of science as a collective cultural body of empirical knowledge. Indeed, advocates of open science often argue for the publication of raw experimental data to be made routine on that basis. Although raw data records lack the commentary found in the traditional scientific paper, they still necessarily embody choices as to what data were collected and at what point the quantity of data was deemed sufficient to merit compilation of a report. Those choices, in turn, may be made in the context of preconceived notions of what arguments will be made in an eventual published paper. For this reason, I would argue that records of raw experimental data are themselves Research Reports albeit of a different style from the traditional research paper.

Another more obvious reason to question the supposed primacy of Research Reports is that the production of Research Reports requires investments of time and effort or, to put it another way, funding. Very few scientists today are self-funded. They have to persuade others that there is benefit to be had in making funding available to researchers. Those ultimately holding the purse strings are not usually scientists, though they may delegate the choice of which particular research projects to fund to people who are. Preceding the production of data records and of journal papers or other Research Reports derived from them, therefore, there is a need to produce Research Proposals. Such documents deal with scientific issues and, if not always published, are certainly written to be read by people other than their authors. As such, Research Proposals form part of the literature of science and arguably have a stronger claim on primacy than do Research Reports. This is not only because successful Research Proposals are necessary for the production of research results and therefore of Research Reports, but also because Research Proposals are arguably the most important channel of science communication between professionalised scientists and those who fund them. Unlike Research Reports, which are written largely for consumption by other scientists, Research Proposals must ultimately persuade people outside the circles of professionalised science, and their success in doing so determines whether further science can be produced at all. While the focus of Research Proposals is science, they necessarily, and sometimes quite openly, also rely on rhetoric and appeals to political concerns. The effectiveness of these in turn can be preconditioned by interventions of science communication in the broader public sphere.
For instance, when we consult a professional scientist as a source of expert opinion, we may expect that much of the expert language used will have been rehearsed in Research Proposals and that the opinions put forward are made with an eye on how they are likely to set expectations that will help the success of future requests.

Arguably therefore, while Research Reports are probably the type of scientific document most closely studied by professional scientists themselves, the proposition that they represent the “primary” form of scientific literature is subjective. This can be seen more clearly still if we consider the position of someone unfamiliar with a particular field of investigation wanting to gain a reliable overview quickly. Most Research Reports emanate from those fields of research where the most speculative empirical enquiry is still active. They can present a fragmented and sometimes contradictory view of their field. There is therefore a market for Research Reviews. At their best, these are articles that survey existing Research Reports in a putative field to reveal emerging consensus, to delineate controversies and to point out ways of resolving them. Research Reviews therefore add cohesion to a given field of science and help make it more readily comprehensible. In some areas of research, standards of production of Research Reviews have been formalised (for instance, Cochrane reviews in medicine). Elsewhere, their production is more ad hoc, sometimes being driven by little more than the need for scientists without research funding to maintain a publication rate that will facilitate the success of future Research Proposals. In any case, Research Reviews are generally written by professional scientists for professional scientists, but they can provide a good way into a particular field of science for outsiders.

Yet another example of how the supposed primacy of Research Reports is subjective may be seen in the case of Teaching Texts. Old scientists wear out and die; they have to be replaced with new blood. Scientific education serves this market and to do so, it produces various types of scientific literature that can collectively be referred to as Teaching Texts. Most typically sold as textbooks, scientific Teaching Texts can be seen as a further development from the Research Review, for they are essentially reviews. The difference is that they typically cover a broader field (sometimes very broad, like ‘Physics’, ‘Chemistry’ or ‘Biology’), tend to focus on areas of strong consensus and are written to be accessible to an audience who are not (yet) professional scientists.

My purpose here has not been to show that any other type of scientific literature enjoys primacy over Research Reports, but rather that the perception of any supposed primacy of Research Reports is subjective, reflecting the concerns of the professionalised scientist working in a particular field of research. As soon as we look at the need for science as a cultural activity to sustain itself, whether by soliciting funds, by recruiting new scientists into its ranks or by allowing existing scientists to reach consensus on the scope and meaning of an emerging research field, we see that other types of scientific literature start to look more “primary”.

As a bit of a joke, it is possible to represent graphically the relationships between the various genres of scientific literature discussed above, as components of a “scientistic organism”, dubbed Amoeba scientisticus, featuring an “empirical” nucleus and “theoretical” cytoplasm.


A. scientisticus feeds on money and “breathes” people (they are inhaled scientistically naïve and exhaled scientistically skilled). The nucleus is where experimental data are produced. Research Reports form from these and migrate toward the nuclear membrane. They are eventually expelled from the nucleus (published) into the cytoplasm. Here, they react with each other and with existing Research Reviews, leading to the formation of new Research Reviews. Some of these migrate back into the nucleus, catalysing the further formation of data and Research Reports. On the way, while still in the cytoplasm, they may react further with each other and with emerging Research Reports. Such reactions may lead to the formation of Research Proposals or of Teaching Texts. These, in turn, allow the organism to extend pseudopodia (funding pseudopodia from Research Proposals, recruiting pseudopodia from Teaching Texts) that can engulf money and people respectively. Once internalised, funds and personnel are carried to the nucleus, where they are used to produce more experimental data. Thus, the various genres of scientific literature work in concert to drive the “metabolism” and sustain the life of the scientistic organism. In future articles I intend to show how further genres of scientific literature are used to position the organism so that it can maximise its exposure to the money and people it needs to perpetuate itself.

It might be objected that my account here denigrates science by reducing it to a socio-economic phenomenon and ignoring its ability to produce valuable knowledge. However, the account here is an attempt to describe science in the most general terms. The types and value of knowledge produced by science vary widely between fields of enquiry and are therefore not suitable as the focus of a general account of science. Instead, I have focused on the function by which science produces various types of literature since, as I see it, this is much more consistent over the entire range of enquiries that we call “science”. Characterising science as a socio-economic phenomenon reminds us that ultimately “science” is, like any human endeavour, just a matter of “people doing stuff”. That in doing so they produce knowledge is effectively a way of saying that they don’t just do the same stuff over and over again; what they do gradually changes over time.


When we turn to scientific experts, what do we expect from them?

For some, the preference will be for the Absolute Truth of universal natural laws that unavoidably govern everything we do. Such a preference naturally entails commitment to a material reality that is independent of human affairs. It also entails commitment to that reality being amenable to empirical investigation and, moreover, to such investigation being the good and proper aim of science. Such commitments engender certain expectations of science and hence of scientific experts. These expectations I will here refer to as realist expectations of science.

What other types of expectation are possible? People of a pragmatic frame of mind may feel that the realist view is too stringent to be useful. They may point to the perpetual imperfection of scientific knowledge; the fact that even the most firmly established of scientific principles may need revision in the light of future discoveries. They may say that we can never be sure that any of the “natural laws” proposed by science really are natural laws, but that they can nevertheless often provide useful solutions to technical problems because they allow us to make predictions with a fair degree of confidence. Being pragmatic, they may say that this is the real value of science and should be the proper basis of the expectations we place upon scientific expertise. Under this view, the job of science is to provide instruments with which we may make good bets, based on experience, on the likely outcome of any chosen course of action. Accordingly, the expectations thereby placed on scientific expertise may be characterised as “instrumentalist”. Note that the holder of such views need not necessarily deny the existence of natural laws or science’s ability to discover them. It is the expectation placed upon science (and hence the basis for deciding how much to spend on it) that is being deemed instrumentalist here.

Another type of critic may yet remain dissatisfied. They may be concerned that even if a given theory does give us confidence in certain types of prediction, it may not be the best theory. Its predictions may be less accurate than those of another or it may be able to make predictions only in a narrower range of circumstances than another.

What might experts themselves make of all this? One example is to be seen in this article by Daniel Sarewitz. Sarewitz is a scientific expert. He is invited to sit on consensus committees that are asked to “condense the knowledge of many experts into a single point of view that can settle disputes and aid policy-making”.

Clearly, the sponsors of such committees place a premium on consensus and therefore favour the realist or at least instrumentalist view of science. Sarewitz, however, distrusts consensus, saying:

“the discussions that craft expert consensus, however, have more in common with politics than science”

Indeed, he apparently sees the search for consensus as unscientific:

“the very idea that science best expresses its authority through consensus statements is at odds with a vibrant scientific enterprise. Consensus is for textbooks; real science depends for its progress on continual challenges to the current state of always-imperfect knowledge”

Here we see what is apparently an advocacy of the idealist view of science. Most striking is the contrast drawn between the certainty (implied by consensus) of the textbooks on the one hand and “real science” on the other. Real science is taken to be inherently in the domain of controversy. Sarewitz concludes by proposing a role for science in policy making that is quite distinct from the consensus-seeking norm of current practice:

“science would provide better value to politics if it articulated the broadest set of plausible interpretations, options and perspectives, imagined by the best experts, rather than forcing convergence to an allegedly unified voice”

“Unlike a pallid consensus, a vigorous disagreement between experts would provide decision-makers with well-reasoned alternatives that inform and enrich discussions as a controversy evolves, keeping ideas in play and options open”

Should the job of experts be to ensure that the broadest range of considerations enters public deliberation? Science would then be not the impartial arbitrator, but simply one way of safeguarding the impartiality of those whose deliberations do make the (political) arbitration.


‘PeculiarPhilosopher’, a participant at the Galilean Library (TGL), drew my attention to this excerpt (pp. 88-100) of an article by anarchist Bob Black in which he berates Noam Chomsky for claiming to be an anarchist while not really being one.

Black’s main gripe with Chomsky is that Chomsky is a leftist and as such adheres to moral and ideological values that are barriers to anarchism. Some of the discussion at TGL focused on the contrast between Chomsky’s leftist pragmatism and Black’s – what is it? – idealism? However, I think the difference between Chomsky’s anarchism and Black’s lies more fundamentally in the type of anarchy to which they are supposed to lead. Witness their polarised attitudes to democracy. Chomsky seems to see anarchy as the ultimate realisation of democracy: an anarchist society would be a perfectly democratic one. Democratic institutions, according to Chomsky, provide starting points where people can work within the state to “build the institutions of a future society” that would “place decision-making in the hands of working people and communities”[1]. He sees the attempts of authoritarian government and the PR/advertising industries to influence and indoctrinate people as an undermining of democracy.

Bob Black, on the other hand, seems to see these things as facets of democracy itself[2]. For him, democracy is just another device used by the state to make the people more susceptible to the propaganda of powerful elites and is something to be superseded by anarchy: “anarchism should be the threat to democracy”[3].

In Chomsky’s anarchy, there will still be professors at MIT publishing books and papers in academic journals, but in solidarity with “working people”. In Black’s, there might be none of these things. People, “working” or otherwise, will each be living their own anarchy, freed from their statist addictions, which include (in addition to opportunities to answer a few multiple-choice questions distilled from the sanitised, prepackaged “issues” on which the elites have deigned to consult them) financial and medical care safety nets and MIT professorships.

Just as it is incumbent upon Chomsky to persuade us that there are effective strategies for working with the state to bring about its democratic dissolution, so it is incumbent upon Black to persuade us that just dropping out of the state to live personal anarchy right now will not simply lead to some dog-eat-dog hell.

Black chides Chomsky [3] for failing to acknowledge that by far the greater part of human (pre)history took place before the advent of states and that in those days everyone lived in anarchic societies. That we’re here today to know that proves that they worked, I suppose. But what do we know about those anarchic societies? While Steven Pinker used his survey of the evidence[4] to support a rather whiggish account of history, I’ve not heard that his data are too badly flawed. He would contend that, overall, a decidedly greater proportion of people in prehistoric anarchic societies died violently at the hands of others than is the case in subsequent state societies. The state may support elites, but it also provides means of negotiation between competing interests, hence it’s less violent. Of course, Pinker’s evidence is statistical. There may have been very nonviolent anarchic societies in the mix. But if Black cares about that, it’s incumbent upon him to show us what characteristics anarchic societies must possess to avoid becoming endemically violent. Yet to do that without heading toward the moderated anarchy he so despises in Chomsky’s account could be a tall order.

While I instinctively find Black’s fauve anarchism more appealing than Chomsky’s enlightenment version, I still need persuading that it could ever persist for any significant length of time. Unfortunately, I think the evidence of history is that in the great majority of cases where an anarchic society has been in head-to-head competition with a state, the state has prevailed. That is why, today, most people live in states and why such anarchic societies as remain mostly survive at the pleasure of some state or other.

By historicising the argument for anarchism, Black attempts to cast the state as an aberration and anarchy as the ‘natural’ condition of human society. However, the state is really just a human invention – a technology – introduced as a way to make human societies more prosperous. So far, it has been rather successful in doing that. Its development is now an empirical part of the trajectory of human evolution. The state may (probably will) eventually disappear, of course, but it will not necessarily give way to anarchy. More likely, it will evolve into something else that we can presently scarcely imagine and for which we will no longer find “state” a useful name. Chomsky’s form of anarchism is perhaps more likely to have an effect on that than Black’s.