
About this blog

Entries in this blog


By Peter,

Nature is not a Book (NINAB) is a blog about science, but it’s not a science blog. It’s possible that you may learn something about science here, but I would think it unlikely that you will learn any actual science.

When we use the word “science”, we may use it to stand for two quite distinct things. First, it may stand for a certain standard or quality of knowledge about the natural world; a knowledge that is deemed to go beyond anecdotes of the here and now. Second, it may stand for a set of practices and the institutions that encourage their repetition across society and through time.

The relationships of necessity and sufficiency that may or may not exist between these two aspects are the principal subject matter of NINAB. In particular, NINAB takes the production of a scientific literature to be a central characteristic of science that links the two aspects and therefore mediates relations between them. The literature is one characteristic of science that remains recognisable across practically all scientific disciplines and is also the medium of linkage between them. As such, it supposedly constitutes the most complete account of nature available.

This written “book of nature” mirrors the traditional philosophical and theological metaphor in which nature can supposedly be read like a book - as a story with a beginning, an ending and a plot that links them. The book of nature constituted by the scientific literature, while continuously developing, is at any point in time inevitably partial in scope and shaped by the conventions and figures of speech it employs in its explanations. NINAB aims to investigate the extent to which (if at all) the path of its development is dictated by nature itself (or at least by the way we must encounter it), rather than being a contingency of human history or an artefact of the institutionalised practices employed in its writing. The title of this blog expresses scepticism that the superabundance of nature can be adequately represented through the metaphor of a book.

Although it calls itself a blog, NINAB is not concerned with maintaining a high profile on anyone’s “blogging community”. It exists as a place for its author to develop a certain way of writing. Individual posts are part-sketches for a projected future complete written work. They are offered publicly in the hope that reader responses will be helpful in making that happen.


Redefining Science

By Peter,

On his Labcoat Life blog, Khalil A. Cassimally considers the problems of defining science. He considers in turn theories backed by evidence, the search for truth, finding new and unexpected things, the scientific method and finally settles for the British Science Council’s definition of “the pursuit of knowledge and understanding of the natural and social world following a systematic methodology based on evidence”.

However, there’s another side to science that none of this addresses: the existence of scientific institutions. Science can be seen as a kind of knowledge or a way of acquiring knowledge, but the word is also used in connection with various institutions: universities, funding organizations, journals, learned societies, professional bodies. These support the work of professional scientists in various ways and also set norms for scientists’ behaviour. It could be interesting to consider the extent to which the value anyone places on the work of professional scientists depends on their association with these institutions. For instance, are research results published in Nature seen differently than they would be if the same results were published simply on the researcher’s personal self-hosted website? If so, in what way and why?

If we accept that science lends itself to both types of understanding (a kind of knowledge; a kind of institution), then we should consider the relationship between them. The simplest kind of relationship would be one of correspondence such that knowledge originating from a scientific institution is necessarily deemed to be scientific and vice versa. If that is not the case, then either significant amounts of scientific knowledge originate from outside the scientific institutions, or significant amounts of knowledge originating from the scientific institutions are not scientific. In the first case, we would then have to ask why we would give special attention to the knowledge originating from scientific institutions and in the second, we would have to ask why we would regard knowledge originating from scientific institutions as being especially reliable.

Of course, it’s possible to admit that the correspondence between kinds of knowledge and kinds of institution is not perfect and that both of the situations referred to above occur to some extent. Defenders of the institutions may say that they perform some supportive or supplementary function; that we get better quality science or better value science by virtue of the institutions being there. To take my example above of journals vs. website publishing, it may be said that work published in Nature is more worthy of attention than work published on a personal website because research published in Nature has been peer-reviewed. However, the reviewers generally have to base their opinions entirely on what they see in the submitted manuscript. Their ability to judge the scientific strength of the work described is not really any greater than what the general readership could judge for themselves if the manuscript were simply published. We don’t need peer-reviewed journals to help us decide what is scientifically valid and what isn’t, because we are just as able as the reviewers to decide this on the strength of what we can see – the manuscript. For a prestigious journal, the number of scientifically valid manuscripts received will generally exceed the number that can be published. The decision on what to publish is an editorial one based on what scientifically valid work is furthermore deemed to be important or worthy of our attention.

The effects of peer review are not limited to determining what gets published in journals. Peer review of one kind or another runs through most scientific institutions. It determines not only what gets published but what research proposals are deemed worthy of funding when the number of scientifically valid proposals exceeds the scope of available funds; who gets a job when several scientifically qualified candidates are available; and which scientists receive honours and which don’t. Clearly, whatever the value of scientific institutions in safeguarding proper standards of scientific knowledge, another important effect they have is to promulgate a sense of what is important within the body of scientific knowledge. This leads to the questions of who makes those choices and on what basis.


In an earlier article, I briefly mentioned the two basic points of view from which funding for scientific research can be justified: funding for specific scientists on the grounds that one wants a vibrant research culture; or funding for specific research projects on the grounds that one wants answers to scientific questions. I was therefore pleased to discover this draft paper: Incentives and Creativity: Evidence from the Academic Life Sciences by Pierre Azoulay, Joshua Graff Zivin and Gustavo Manso in which the authors compare the effects of these funding policies on scientists’ behaviour. The study compared scientists funded by the Howard Hughes Medical Institute (HHMI) with a comparable group of “Early Career Prize Winners” (ECPWs) whose research was supported by National Institutes of Health (NIH) funding. HHMI funds “Investigators” as individuals. In the words of the HHMI website: “HHMI urges its researchers to take risks, to explore unproven avenues, and to embrace the unknown—even if it means uncertainty or the chance of failure.” According to the paper, HHMI is tolerant of failure in the early stages and provides its recipients with access to peer group feedback on their research throughout. The ECPW control group was chosen as having similar overall research accomplishments to those in the HHMI group prior to their receiving HHMI support, but who subsequently went on to perform research with NIH funding. In contrast to HHMI, NIH awards grant funding in support of specific research project proposals and researchers who fail to achieve project goals are unlikely to have their grants renewed.

The authors found that HHMI-funded scientists were more strongly represented than those in the ECPW control group as authors of both the most highly cited publications and of the most rarely cited. In other words, HHMI-funded scientists had both more successes and more failures than the control group. Moreover, there was a greater proliferation of keywords associated with the publications of the HHMI-funded researchers after their appointment than in controls. Altogether, these observations are taken as an indication that because the consequences of short-term failure were ameliorated for them, HHMI-funded scientists were more willing to take the risks associated with a more exploratory and serendipitous approach than their project-funded counterparts. To put it another way, HHMI scientists were able to be more “creative”. That is, they were freed from the constraints of having to answer questions contractually agreed at the outset and allowed to answer questions of their own choosing; preferably, one presumes, questions that no-one else had before thought of asking.

I suspect that a lot of scientists will enjoy hearing this, which might be taken as some kind of justification of the “Haldane Principle”. However, the authors stress that their study should not be taken as a criticism of the NIH or of project-oriented funding. They point out the difficulties in making investigator-oriented funding work on a larger scale and the need for ready political accountability in decisions involving the distribution of public funding. Analogous constraints apply in corporate R&D where accountability to shareholders has to be maintained.

There are interesting implications here for the criteria (“metrics”) used to evaluate individual scientists’ performance in academia and in industry. In academia, there are grumblings about the limitations of using publications and “impact factors” as measures of a researcher’s worth. In industry, there are alternating waves of enthusiasm for either “blue skies” or nose-to-the-grindstone project-managed research. However, consideration of successful and unsuccessful attempts at innovation (see for instance Why Innovation Fails by Carl Franklin or How Breakthroughs Happen by Andrew Hargadon) suggests that, no matter how good they are as scientists, researchers who give birth to innovation that actually changes how people live have their thinking realistically attuned to the needs of subsequent commercial development even as they think up those previously unthought-of questions. The best researchers give their best when they’re embedded in a culture that gives them incentives to be exploratory that are also tied to the business aims of the institution in which they work.


Responses to SAPE

By Peter,

Science as a Public Enterprise (SAPE) is an initiative of the Royal Society, which intends it to “ask how scientific information should be managed to support innovative and productive research that reflects public values”. The matter is being overseen by a “high-level working group” who have issued a call for evidence. The preferred route of response is by way of this form of set questions.

I’m not sure I’ve got that much in the way of material evidence to submit, but I couldn’t resist framing a few replies to their questions. In the spirit of openness, I’m publishing my draft replies here before submitting them to the RS. I welcome comments, criticisms and questions.

SAPE: What ethical and legal principles should govern access to research results and data? How can ethics and law assist in simultaneously protecting and promoting both public and private interests?

Research results and data are essentially documented accounts of certain experiences. Experiences in themselves are necessarily private, but documents containing accounts of them can be sold, shared or published. Arguably, the exchange of information through the distribution of such documents is the essence of science (as opposed to anecdotal knowledge), but that doesn’t automatically mean there’s an obligation to exchange on terms dictated by others.

Conventionally (in law), such documents are ‘works’ and any copyright or other rights as may exist in them belong to the authors or, in the case of works made for hire, to the authors’ employer. One may sometimes hear the idea that research data can be owned being described as “absurd” or “preposterous”, but it is simply a matter of acknowledging that the carrying out of research has a cost and that the party who bears that cost has some priority in deciding how the results should be used. Only rather rarely nowadays is that cost really borne by the scientists carrying out the research, as they are usually compensated by being paid for their time.

In line with that, research conducted by scientists in industry is generally regarded without difficulty as work made for hire. With academic scientists, the situation is more complex since some research funding (possibly the majority nowadays) is made in support of a particular research project. As such, it could be argued that the research belongs to the party who paid for it and that it is a work made for hire. However, some academic research is still funded on the basis of supporting individual researchers, a department or institute, and the ownership of this would need to be the subject of local agreement.

The choice of who has access to research results and on what terms should ultimately be that of the owner of the rights in them, although provision should be made to legally oblige the owner to allow access or prohibit the owner from allowing access in certain cases where the public interest would otherwise be damaged. Under such circumstances, the owner should be entitled to just compensation for any consequential losses.

SAPE: How should principles apply to publicly-funded research conducted in the public interest?

The rights in any research entirely funded by the public should be automatically assigned to the public. The decision on whether to publish or otherwise distribute then lies ultimately with the public and any decision made on the public’s behalf has to be made in the public interest by accountable representatives. It follows that any assignment back into private hands should itself be made in the public interest.

SAPE: How should principles apply to privately-funded research involving data collected about or from individuals and/or organisations (e.g. clinical trials)?

The key thing here is “data collected about or from individuals and/or organisations” regardless of whether the research was privately or publicly funded. Information about legal persons that is not already publicly available should be used for research only with the explicit consent of each individual concerned and with an express statement of the limits of use. Ethical review should be made of the process used to gain such consent. Publication or other distribution of documents containing such information can then only occur within the limits set by the subjects’ consent.

SAPE: How should principles apply to research that is entirely privately-funded but with possible public implications?

If publication of the research would allow members of the public to avoid otherwise likely positive harm, then obligatory publication should be possible subject to the proprietors of the research receiving just compensation for any consequential losses. Simply asserting that publication would benefit the public interest should not be sufficient to compel publication.

SAPE: How should principles apply to research or communication of data that involves the promotion of the public interest but which might have implications for the privacy interests of citizens?

I’m not sure this is the kind of question that can be answered in general terms. Clearly “the public interest” does not uniformly match up against everyone’s individual interests. Conflicts can only be decided on a case-by-case basis. Obviously, there has to be a right to petition for redress for anyone who feels “the public interest” is being interpreted in a way that infringes their private interests.

SAPE: What activities are currently under way that could improve the sharing and communication of scientific information?

“Scientific information” can mean two things: information in the data that scientists collect or produce; and information in knowing what scientists say the data mean. With regard to the first, the so-called “open data” initiative stands to make the biggest difference by advocating the indiscriminate online publication of “all” data (as opposed to only data selected to illustrate points made in scientific papers). There would appear to be a strong case that this will allow better use of data, though there are problems around the costs of digitising certain types of information and the standardization of data formats. There are also issues around how a commitment to open data conflicts with the career interests of professionalized scientists, or could encourage overenthusiastic publication of data without the owner’s consent. Overall, developing working open data initiatives would appear to be an area deserving of funding support in its own right.

SAPE: How do/should new media, including the blogosphere, change how scientists conduct and communicate their research?

The ease with which information can be published online means that freely available, informal, non-peer-reviewed publications (such as blogs) containing scientific data and/or discussion as to what the data mean could proliferate and eventually displace traditional scientific journals as the preferred source of scientific information. This might make scientific information more easily available to those not working within industry or academia. The withering authority of the peer-reviewed journals might also result in less readiness to take the integrity of published data on trust and less readiness among individuals to fall in with ‘leading’ opinions with regard to data interpretation. Science could come to be seen less as a quest for truth – a set of limits within which all things must operate – and more as a search for pointers to what might be possible – a set of priorities for the next step in one’s own on-going investigations.

SAPE: What additional challenges are there in making data usable by scientists in the same field, scientists in other fields, ‘citizen scientists’ and the general public?

The general problem associated with making data suitable for use by others than those who generated them (“cross-use”) is that data on their own are meaningless. One must also know how they were produced and what they represent. Information relating to this is sometimes called “metadata”.

Cross-use of data by scientists in the same field can be relatively unproblematic in this technical sense because there is usually an implicit shared understanding of the experimental and observational fashions in the field. Only brief summary details of what the data represent are necessary for understanding. Of course, cross-use by scientists in the same field presents the greatest political/economic problems in professionalized science because it is in this case that the scientists from whom the data originated face the greatest risk of losing priority over their interpretation.

The problems of the quality of available metadata in allowing cross-use of data become more acute if we wish to allow cross-use by scientists in other fields or those working outside professionalized science. The quantity of necessary metadata can become substantial. It will be necessary not only to develop standardized formats for shared data, but also for metadata, to allow commensurability between data sets.
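To make the point concrete, here is a minimal sketch (in Python, with entirely hypothetical field names and example values) of what a standardized metadata record accompanying a shared data set might look like. The point is only that the record captures how the data were produced and what each element represents, in a format other parties can parse:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class MetadataRecord:
    """Hypothetical minimal metadata record accompanying a shared data set."""
    title: str                    # human-readable name of the data set
    producer: str                 # who generated the data
    method: str                   # how the data were produced
    units: dict                   # what each column represents, with units
    collected: str                # ISO 8601 date of collection
    keywords: list = field(default_factory=list)  # aids discovery by others

# Example record for an imagined field survey
record = MetadataRecord(
    title="Stream temperature survey",
    producer="Example Lab",
    method="Handheld digital thermometer, single reading per site",
    units={"site_id": "identifier", "temp": "degrees Celsius"},
    collected="2011-06-01",
    keywords=["hydrology", "temperature"],
)

# Serialise to a standard interchange format so that scientists in other
# fields (or outside professionalized science) can interpret the data
print(json.dumps(asdict(record), indent=2))
```

A real standard would of course need far richer fields (instrument calibration, sampling protocol, provenance), which is precisely why the quantity of necessary metadata grows as the intended audience widens.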

SAPE: What might be the benefits of more widespread sharing of data for the productivity and efficiency of scientific research?

In principle, making data available can contribute by reducing the number of occasions on which investigations are carried out into something that is already known and by increasing the number of occasions in which a given data set is used to answer different questions from those for which it was originally compiled. It does not, however, do this on its own. Before the availability of shared data can make a difference, one has to understand what range of possible data types might be helpful in advancing one’s own project, know how to look for them and know how to use the data once found.

SAPE: What might be the benefits of more widespread sharing of data for new sorts of science?

By encouraging the re-examination of existing data sets from new perspectives, data sharing could help the development of scientific investigation along lines not previously envisaged. Data sharing also encourages a “new sort of science” in another sense: it separates the collection of data from the theoretical interpretation of data. Traditionally, possibly as a legacy of the self-funded “gentleman scientist” era, many types of science have been carried out in a way that implicitly understands the collection of data and their interpretation to be the work of one person or a small group of people all working together (the “authors”). Of course, this vision of science has, since the mid-twentieth century, been in competition with “big science” where research projects are managed efforts carried out by organized division of labour between large numbers of research workers. This probably began with the Manhattan Project. More recent examples might be the Human Genome Project and the Large Hadron Collider project. More recently still, we have seen the rise of what might be called “stakeholder science” where scientific debates have taken place outside professionalized science, sometimes with direct participation of groups and individuals from outside professionalized science. Examples might be that of climate science and “global warming” or the debate around the science of autism and use of the MMR vaccine. Data sharing may encourage and accelerate the growth of these “styles” of science. A long-term effect might be that professionalized scientists are seen less as “expert witnesses” and more as “knowledge workers” and that scientific problems are not seen as self-contained but rather as part of broader political or economic problems.

SAPE: What might be the benefits of more widespread sharing of data for public policy?

Data sharing might help crystallize ideas about what specific investigations need to be made to answer questions relating to policy formulation. It might be easier to discover if such investigations have already been made. However, without a warranty as to the authenticity of the data (problematic in a “free” sharing ethos), it would be unwise to take such data on trust. Specific audited studies would still have to be commissioned in most cases.

SAPE: What might be the benefits of more widespread sharing of data for other social benefits?

As above, it might help crystallize ideas about what specific investigations need to be made, but without some kind of trusted third party to act as guarantor of data authenticity, it might be unwise to take data on trust.

SAPE: What might be the benefits of more widespread sharing of data for innovation and economic growth?

As above.

SAPE: What might be the benefits of more widespread sharing of data for public trust in the processes of science?

Initially, the availability of data additional to that used to illustrate the points made in research papers and even so-called “negative” data could improve public trust. In the longer term, a proliferation of data from unverified sources could have the opposite effect.

SAPE: How should concerns about privacy, security and intellectual property be balanced against the proposed benefits of openness?

The desire for privacy, security and ability to earn a return for making one’s original ideas available to others are natural human concerns. Irresponsible data sharing could easily lead to transgressions of those concerns. Everything depends on the data sharers behaving responsibly. This, in turn, depends on them understanding first that being a scientist does not in any way diminish one’s common ethical obligations as a citizen, and second, on understanding to whom (as well as to the public generally) they owe specific obligations in any particular case. This would include parties to whom the data relate directly and parties who funded the research either expressly or by way of funding the supporting infrastructure. Ethical frameworks of this kind already exist for research that involves studies of human subjects (such as clinical trials) and suitable components of these could be extended to cover other types of research.

SAPE: What should be expected and/or required of scientists (in companies, universities or elsewhere), research funders, regulators, scientific publishers, research institutions, international organisations and other bodies?

Following the above: subscription to an ethical framework that sets out guidelines for understanding who all the stakeholders are (individuals and collectives) in a given research project, and what obligations one owes to each of them.

SAPE: Other comments?

Science as a public enterprise can achieve and should aim to achieve more than just opening up access to scientific information. Science is presently understood largely as a professionalized activity that produces data in accordance with self-referring criteria of evaluation with funding from well-financed organizations (business corporations, governments, charities). Whether the funding is public or private, the scope of scientific investigations is formulated in response to standardized or homogenised concepts of market demand or public interest. In this model, science is effectively closed to the disparate minority interests of individuals or small collectives. At present, such minorities have to pursue their interests without the benefits of scientific investigation or have to reformulate their interests to align them with the homogenized or standardized ‘administrative’ interests of the organizations. A task for open science is to lower the financial and cultural entry barriers to instigating scientific research projects in response to disparate minority interests. Some of the requirements would be:

  • To help “lay” people articulate their interests in terms that reveal what types of scientific information could inform them. Informal learning initiatives such as Peer to Peer University might help with this.
  • To find which parts of those investigations have already been performed or are in common with the interest requirements of other parties.
  • To encourage instigation of short-lived, low overhead “collective experimentation” networks to carry out research projects with component parts distributed across multiple sites.
  • To encourage more specialized scientists to become “citizen scientists” prepared to engage with disparate citizen interests rather than see them as funding opportunities for scientific interests.


Over at Research Cycle Research Daniel Mietchen has posted some interesting comments on the opening of science. Obviously, a lot of the contention around open data stems from tensions inherent in professionalized science itself. Very few scientists would be content with (or could earn a good living by) merely collecting or producing data. They want to interpret the data too; to say what the data mean. And beyond that, it’s nice to be right; to be the one to have the last word on what the data mean.

Now, one can always say “relax – truth will out!” But that takes time. In fact, it may take forever for “the” truth to come out. Even the best scientific theory can expect to find itself one day again under scrutiny when someone decides that what had traditionally been written off as experimental noise is actually a sign of a real and important effect or when it has to be reconciled with a newer, fashionable theory originating from another field.

Meanwhile, the world won’t wait. People want to know what the data mean right now. So, there’s demand (a market, if you will) for a proposed version of what truth could be and the scientist wants to be the one seen to be doing the proposing. In view of that, keeping one’s data ‘closed’ is prudent. It stops others from coming out with their own versions of what the data mean. Likewise, publishing one’s own account in a ‘good’ journal that gets people’s attention, easily trumps considerations of open access. If not, the ready availability of DIY web publishing would have put the journals out of business by now.

My point here is that perhaps the very idea of “scientist” as a vocation or something that offers a distinct career path is bound up with ‘closed’ notions of science. That while the opening of science may well be good for science understood as the rapid and reliable development of understanding of how the world works, it is not necessarily at all good for science when understood as a way of earning a living. Open science poses an ethical question for anyone who identifies him- or herself as a “scientist”: which is more important to you – developing reliable knowledge in the best way possible even if that means having to do something else as well in order to earn a living, or building a career that depends on being perceived as an expert to whom others defer?


The Brain and Behavior Blog Contest drew my attention to this post on NeuroDojo which compares the way original research papers, press releases and textbooks treat the same research topic.

It seems there’s an assumption running through this: that the research paper is primary and that the press release and textbooks are there to support it.

However, one can just as easily say that each type of document comes from its own genre and serves its own distinct purposes: the paper serves to justify receipt of the grant that funded the research and promotes the professional reputations of the authors among other professional researchers, improving the peer review chances of their next grant application; the press release serves to grab a few seconds of attention from the general public and underpins public sympathy for funding of this type of research; a textbook is a saleable product serving the needs of the undergrad student market that also underpins the academic reputation of its authors.

Seen that way, Spruston’s PR policy is sensible rather than hype. Textbook simplifications are also prudent: students with exams to pass want to get to grips with their subject quickly and not be distracted by exceptions.

It’s also worth noting that the process of simplification seen in textbooks starts in research papers. There will almost always be experimental details that for one reason or another get simplified in (or even omitted from) the paper because the authors judge them to be insignificant. Chances are, there are other investigators who have seen evidence of the phenomenon described by Sheffield et al., but who were looking for something else and didn’t consider it worth writing up.

I will certainly be posting more on this in my ‘Ecology of Literatures’ series.


  1. Back in the 1970s Alan Chalmers published what was to become a popular introduction to the philosophy of science called What is this Thing called Science? The title implied an assumption that science is a ‘thing’, distinct from non-science, from pseudo-science, from bad science and from anti-science.
  2. The idea that science is a definite thing, and therefore the question it prompts, probably seems quite natural to most people. Certainly, that’s how we hear the word being used both by scientists and non-specialists.
  3. Being concerned with the philosophy of science, Chalmers’ book had to take as its subject that use of the word “science” that is amenable to philosophic investigation. For most philosophers of science, that has largely taken the form of a concern with knowledge and the claims made for science as the way to especially reliable, authoritative or just plain good old true knowledge. The philosophical problem has largely been understood as one of epistemological demarcation: can we come up with an understanding of “science” that explains how we can recognise or produce such superior knowledge?
  4. Progressing from a realisation that mere correspondence to the facts isn’t good enough (facts are all in the past or present, but science only really becomes interesting when it talks of the future), the focus has been on scientific theories and how they can be distinguished from things that may look like theories but aren’t scientific.
  5. Karl Popper’s principle of falsifiability almost inevitably sits at the centre of things here. Based on the acceptance that no finite amount of corroboration can finally establish the truth of a theory and also the surmise that a single contradictory observation would finally establish falsehood, it seemed like a good answer for a while. It is still rated by some as the best there is, but it turns out that one can rarely, if ever, say with absolute certainty that a given observation conclusively refutes a given theory. Certainly not if the theory is at all an interesting one that makes bold predictions.
  6. Indeed, scientists’ response to observations that might be taken to refute a favoured theory was often to investigate auxiliary theories that would allow them to discount the apparent refutation. Further, scientists often maintained allegiance to apparently unfalsifiable theories because those theories nevertheless provided a fecund conceptual framework for further experimental investigations.
  7. At the same time, some theories produced by enterprises not generally described as “scientific” nevertheless met the suggested epistemological criteria.
  8. Thus, knowledge that was judged epistemologically scientific did not necessarily correspond that well to the theories of what is colloquially called “science”.
  9. Chalmers concluded his book by saying (second edition) “the question that constitutes the title of this book is a misleading and presumptuous one”. He doubted that a general characterization of science can be established or defended. Other philosophers have variously opined that the demarcation problem is intractable (to philosophy at any rate) or that it is a non-problem.
  10. So much, then, for the philosophy of science and epistemological demarcation.
  11. But what if we continue to think that What is this Thing called Science? really is an interesting question, even if not one that philosophers can answer?
  12. Another approach to the question comes from sociology. For sociology, the uses of the words “science”, “scientist” or “scientific” may be taken as normative, so that miscorrelation between the attempted demarcation and the colloquial use of the words cannot occur. On the other hand, sociology, strictly understood, does not (indeed cannot) say anything about the veracity of any claims that science produces a special kind of knowledge. It can only tell us what those claims are and how they come to be made. This is true even of the so-called “strong program” sociology of scientific knowledge, which asserts not only that society determines who gets to be called a scientist and how these people relate to others and to each other, but also the choice and manner of expressing the knowledge they produce.
  13. While sociology may tell us who actually values science, what makes them do it and how that valuation may manifest itself, it cannot tell us why we should value science.
  14. For that reason, the sociological approach has perhaps even less to commend it than the philosophical to someone intent on knowing What is this Thing called Science?
  15. As for scientists themselves, while it is not uncommon for them to have some appreciation of the epistemological philosophy of science (usually a caricature of Popper) and to make any number of informal homespun conjectures as to the sociology of science, neither the philosophy nor the sociology of science as a formal academic discipline seems to be of much use to them.
  16. Is there a way of approaching the question What is this Thing called Science? that keeps on the right side of epistemology and of the colloquial use of the word “science” and addresses the issue in terms that are at least acknowledged by scientists as being relevant to how they actually work?
  17. One problem is that science comprises a wide range of disciplines, each with its own sociology and standards of epistemology. However, scientists of all disciplines frequently mention “the literature”. The literature is the common shared resource to which all scientists contribute and to which all refer when they wish to know what their peers are up to. To be sure, each scientific discipline has its own literature, but equally, the literature is the medium through which scientists of different disciplines often first become aware of each other’s ideas.
  18. “The literature” therefore is a tangible quantity, through which each scientific discipline defines itself but which also provides cohesion to the entire enterprise of science. Moreover, the structure of the literature (what one might call its external structure) reflects the sociology of science while analysis of the “internal” structure may be expected to reveal the epistemology. When we refer to scientific experts, we are looking not only for their first-hand knowledge of experiment and observation, but also a comprehensive command of the literature in the field.
  19. In the posts to follow this one, I intend to look at the structure of the scientific literature as a way into answering the question What is this Thing called Science?


Judith Curry posts this pointer to the National Academies Press book “On Being a Scientist: A Guide to Responsible Conduct in Research”.

It’s not clear to me why anyone thinks scientists would be in special need of ethical instruction. Surely, the common virtues of honesty, responsible use of resources and respectful attitudes towards one’s colleagues, as would be desirable in any walk of life, should guide scientists well enough? All the more remarkable that this book seems to be directed at postgrads. Undergraduates aren’t in need of ethical guidance? Or is it that those who move on into postgraduate research careers encounter norms of behavior that leave something to be desired as examples of ethics?

Coincidentally, Praj has just posted a quote from Evelyn Fox Keller:

“Science is first and foremost a domain of opportunism”

Might that be a clue?


I’m grateful to the fine folks at the Bubble Chamber for the pointer to this video of Susan Haack’s seminar “Six Signs of Scientism” at the University of Western Ontario:

The audio quality is not good, but her arguments appear to be largely the same as those she makes in this paper (thanks to “Adult Child” for that pointer).

Professor Haack has plenty of interesting and thought-provoking things to say. These include a nice discussion of demarcation and a quick history of the word “scientism”, which was not originally pejorative. Like most people today, though, Haack does use the word pejoratively and her definition of scientism is a good one:

“a kind of over-enthusiastic and uncritically deferential attitude towards science”

However, it’s a little different from the one I tend to use:

“the idea that scientific progress requires the existence of current scientific institutions”

Then there’s the definition offered up at Wikipedia:

“the idea that natural science is the most authoritative worldview or aspect of human education, and that it is superior to all other interpretations of life”

Maybe the differences are ultimately more linguistic than semantic, but if one feels the need to be pejorative, it’s probably a good idea to know what one is trying to be pejorative about. Would it be the presumed superiority of enquiry over other occupations; the presumed superiority of one method of enquiry over others; the presumption that enquiry is only valid if conducted by individuals with particular qualifications; or the presumption that those qualified to conduct valid enquiry should be treated as inherently more valuable than those who are not?

What do you think?


In his continuing discussion of basic, applied and “Jeffersonian” science, Praj Kulkarni says:

“… scientists push a narrative that the people in power clearly, and thankfully, don’t believe. … [discussions about the value of basic vs. applied science] are not really about changing funding patterns. They’re more about changing scientists’ priorities and the culture of academia.”

I largely agree with that. A key issue here is that “science” is often understood differently by those who pay for it and those who do it.

From the point of view of those who pay for it, the primary aim of funding basic scientific research in academia is to provide education and training. Scientific research in academic settings is mainly a way of training the next generation of researchers needed by industry. Replenishing the supply of academics will use up only a small proportion of academically trained researchers. Using universities to train researchers for industry is effectively an outsourcing of researcher training and is a good solution only so long as it’s more cost effective than each individual business running its own researcher training programs. Focusing the research on ‘basic’ science keeps the training neutral, not biased to the specific needs of one employer over another.

In academia itself, the view persists that the primary aim of academia is to provide a protected environment in which academic scientists can carry out research in basic science (or, more accurately, oversee low-paid postgrads and post-docs carrying out research). Scientific education, as I know it anyway (UK; biomedical), does little to dispel that, and some private sector grant funding organizations (e.g. HHMI in the US, the Wellcome Trust in the UK) positively encourage it by providing funding for specific scientists rather than projects. In fact, it may be surprising how much this attitude survives even among scientists employed in industry.

The process of becoming a scientist involves not only the accumulation of specialist knowledge and technical skills, but also induction into a scientistic culture to which the PhD is a rite of initiation. Scientists are encouraged to see themselves as a professional elite, by which I mean that the cultural codes around someone being identified as a “scientist” direct people to evaluate that person on that scientistic basis rather than on the basis of that individual’s personal characteristics as evident right there and then.


I’ve just read Praj’s post “Scientists as a special interest” in which he considers the question asked by Matthew Nisbet: “as a matter of social responsibility, do scientists have an obligation to accept that reductions in scientific spending are necessary to preserve social programs?”

Surely, as long as we see this question as one of priority of science over social programs or vice versa, we’re on the path straight back to the “egghead” vs. “anti-intellectual” brickbats that have a lot to do with selling newspapers (or online attention) and very little to do with intelligent discussion. The arguments that can be made for one type of spending over the other become blurred. Social programs are often predicated on quasi-scientific propositions about the economic benefits that would follow from certain courses of social action, while scientific research programs are sometimes justified on the sociologically unexamined premise that they will make previously unimagined (and therefore nebulously defined) solutions to economic problems available at some unspecified time in the distant future. Science and social action are both special cases, but each is special in its own way. Each justifies itself by seeking to persuade us that some bright future awaits if only we spend on this thing now.

Put like that, it’s just a gamble, of course, and naturally there are ways of managing such projects: breaking down into small discrete steps each of which can be evaluated over a short term so that the possibility of support withdrawal is always there if things go awry. Given that such techniques are so well established, one wonders why separate science and social programs even exist. What reason, apart from entrenched vested interests, is there not to allow individual proposals in either arena to compete for the same common pool of funding?

That said, perhaps there is a basis for making science (as opposed to technology R&D) a special case. While technology R&D seeks working and affordable solutions to specific technical problems, the job of ‘pure’ science might be taken to be describing observations made in specific circumstances, but within a theoretical context that abstracts their meaning and allows one to divine within them relevance for as broad a range of circumstances as possible. In this way, commonalities between observations from diverse circumstances between which no relationship was previously perceived become visible and new, unsuspected areas for technological development become evident. Seen that way, it is a strength, not a weakness, of pure science that its consequences are unforeseeable. But being unforeseeable, its consequences cannot be used as justification. Could it be that our commitment to science is not so much a sign of commitment to making things better, but only a commitment to unforeseeable change?


Following on from my earlier post, I want to look at the concept of the expert in some more detail. The ultimate question to ask about experts is about the benefits the rest of us receive by referring to them, but to put that in context, and because Michael Pearl actually asked the question, I shall first say a few things about what experts are.

Wikipedia is probably as good an indicator as any of what is generally understood by ‘expert’. There, we read:

“An expert is someone widely recognized as a reliable source of technique or skill whose faculty for judging or deciding rightly, justly, or wisely is accorded authority and status by their peers or the public in a specific well-distinguished domain.”

The essential features of an expert are therefore, first, the possession of a faculty for sound judgement in some specified field of concern and, second, a recognition of this by others. The expert is a distinguished person and it might be pertinent to ask how someone acquires, achieves or is imbued with such a distinction.

The English word ‘expert’ appears to derive from the same Latin root as ‘experience’. In that, there is an implication that whatever expert authority is accorded to any individual depends upon that individual’s experience. Certainly, none of us is born evidently in possession of any special knowledge, skill or powers of judgement. These things appear with growing experience, and any distinguished powers that appear seem to be largely related to the types of experience lived by the individual concerned; in particular, the extent to which that individual’s experience includes practice of the skill concerned. It is true that there are innate differences between individuals in their mental and physical capabilities, and it follows that there will be innate differences in their ability to have certain types of experience and in the extent to which any given level of experience leaves them permanently imbued with soundness of judgement or any particular skill. A concert pianist, for example, might be said to demonstrate levels of practical expertise that are forever unattainable by most of us. However, in these discussions I wish to focus on scientific experts, and it is not at all clear that scientific ability is innate. Certainly, if there are among us a special class of human beings born with an innate predisposition to scientific distinction, our systems for training scientists would not seem to be very efficient at recognising it. Thus, while innate differences may play some part in determining who develops powers of judgement worthy of an expert, experience (including education and training) is probably the really decisive factor. In summary, it may be fair to say that experts (scientific experts at any rate) are made, not born.

An implication of this is that while the value we place on another’s expertise is based on their experience as it differs from our own, it is not experience that need forever remain outside our own. Given the time and opportunity, we could acquire equivalent experience ourselves and therefore presumably also the powers of judgement that come with it. From this, it follows that the expert’s soundness of judgement is not something that we absolutely cannot acquire for ourselves. Rather, it is something for which we refer to another because we would rather use our own time for other things. In effect, the use of expert opinion is a way of saving time and effort. In this context, it becomes clear that in any relationship in which one party seeks the opinion of another as an expert, it is the party seeking the opinion, not the expert, who is the instigator. The expert is the subordinate and should provide opinions that serve the other party’s interests as defined by that other party. It is not the place of the expert to presume or define what that other party’s interests are.

To be continued ...


What does it profit us to discover some truth if we have no practical use for it? Conversely, if we derive practical benefits from some piece of knowledge, why would we worry as to its truth?

In this article, Boaz Miller discusses knowledge and practical interests and considers arguments for the assertion that “whether a person knows a certain claim depends not only on the truth of the claim and the evidence she possesses to support this claim, but also on facts about her practical interests and social values”.

One thing I do have difficulty with here (and maybe this is because I haven’t read the books that Boaz cites in his post) is the relationship between knowing and believing. Boaz considers the argument that “if believing a certain claim gives you sufficient reason to act on your belief, then this belief is knowledge”. Does this mean that knowing something involves acting as though it is true, possibly in a way that involves bearing a cost if one turns out to be mistaken, whereas believing means merely asserting that something is true without putting it to the test in a way that might incur a cost if one turns out to be wrong?

Either way, I think these considerations are important in the way we talk about and use scientific knowledge. Even if one believes that science can tell us some ultimate and objective truths about the world, one has to concede that many theories held today will eventually turn out to be wrong in some way (even if only in the fine details). Nevertheless, one frequently has to decide on actions to be taken right now and the theories one has right now are all one has to go on. Therefore, one will act as though one believes the theories to be true, even if one also believes that they will most likely turn out not to be strictly true at some later date. The level of confidence one has in doing that will depend on how much information one has about the extent to which the theories involved have been tested (and the extent to which one believes that information, of course). If one believes the theory to be good enough for one’s immediate practical needs, then one can proceed to act as though it is true, regardless of whether one believes it to contain some fundamental objective truth about the world or not.


In May 2010, Science magazine published an article by Philip Kitcher, in which he reviewed a selection of books relevant to the science and politics of global warming. These included books by climate scientists expressing their frustration at the reluctance of successive American governments to take up strong policies on climate change. This reluctance is taken to be one example of a series of cases (health effects of tobacco smoke being another) in which eminent scientists pushed American government policy away from the path indicated by scientific consensus by casting doubt on the evidence on which that consensus was based. This they did, apparently, without themselves being active as researchers in the relevant field.

Kitcher starts his article by presenting contrasting views on the relative value of free and open debate on the one hand and reliance on expert opinion on the other in guiding democratic decision making. In favour of open debate is the view that truth alone will withstand questioning and criticism and that open debate can therefore be relied upon to indicate the ‘correct’ decision, given enough time. A frequent criticism of that view, however, is that in the real world decisions have to be made urgently and the time available for debate is limited. Under such circumstances, unscrupulous parties may express endless trivial or frivolous doubts about any proposal they dislike to ensure that their own proposal is the only one that still looks strong when the time comes to decide. Open debate then becomes an open door to the ethic of might is right. Limiting the debate to those judged to possess comprehensive and impartial knowledge and understanding of the relevant facts is seen as a way of obtaining a much more controlled debate that can lead to the best possible decision in time-limited circumstances.

Kitcher doesn’t directly express his support of one view over the other and sometimes it’s not altogether clear whether his statements are his own views or what he takes to be those of the authors of the books he’s reviewing. Nevertheless, one might reasonably come away with the feeling that his bias is toward reliance on expert opinion. Here are a few quotes:

“genuine democratic participation in the issues can only begin when citizens are in a position to understand what kinds of policies promote their interests. To achieve that requires a far clearer and unmistakable communication of the consensus views of climate scientists” (p2)

“Serious democracy requires reliance on expert opinion.” (p2)

“messages have been distorted and suppressed because of the short-term interests of economic and political agents” (p2)

“They have used their stature in whatever areas of science they originally distinguished themselves to pose as experts who express an "alternative view" to the genuinely expert conclusions that seem problematic to the industries that support them or that threaten the ideological directions in which their political allies hope to lead.” (p2)

“It is an absurd fantasy to believe that citizens who have scant backgrounds in the pertinent field can make responsible decisions about complex technical matters” (p3)

On reading these, a number of questions occurred to me:

  • Why would citizens have difficulty understanding their own interests and the policies that promote them?
  • Why would understanding one’s own interests depend on understanding a consensus among scientists?
  • What is “serious” democracy (as opposed to mere democracy) and why would it have to rely on expert opinion?
  • Is there any reason why calling interests “short term” would make them any less worthy of respect?
  • Is there any reason why the interests of “economic and political agents” would be any less worthy of respect than those of any other social entity?
  • Why would the pursuit of citizens’ interests necessarily have to boil down to technical matters?
  • How, except through special pleading, would it be established that climate scientists’ messages are more objective or more relevant to the common good?
  • How would the public tell real experts from those who merely pose as experts?

In articles to follow this one, I want to look at the role of experts in democratically accountable decision making. The decision making issues around climate change make for a particularly interesting context in which to have this discussion because the stakeholders include practically everyone and their interests are therefore highly disparate; the scientific evidence, although extensive, is still contentious; the need to act quickly may be acute; and the potential for useful unilateral action by any individual actor is extremely limited. The problem therefore refuses to stay within the norms of any established political system or subculture.

In his personal blog, TGL member Praj asks if scientific thinking is like reading comprehension. He gives the example of a paragraph about American football. Being British and not a follower of the game, but nevertheless (I believe) fully literate in the English language, I can confirm that understanding the example is not merely a matter of general English comprehension. Some of the jargon terms such as "receiver", "training camp", "running back", "season", "backup" and "offensive line" may evoke some kind of sense to any reader of English, but one really has to know about football to fully understand them in this piece. Likewise, some of the slang terms of American reportage, such as "get off the ground", "getting on track" or "banged up" have literal or conventional meanings in general English usage, but are used metaphorically here.

Praj worries about whether or not "scientific thinking" necessarily requires scientific knowledge, particularly in relation to "the issue" of global warming. I think you need to start by being clear about what you mean by the issue of global warming. I can see at least five distinct issues here:

(1) the issue of how much reliance we can place on data that indicate global climate has changed rapidly in recent decades;

(2) the issue of how much reliance we can place on climate models that predict the implications for future climate change;

(3) the issue of how much confidence we can have in the efficacy of any policy designed to reverse or adapt to such predicted climate change;

(4) the issue of how well we think we understand the economic and political consequences of such policies or their failure;

(5) the issue of how people in whose name such policies are made value the presumed benefits of the policy as opposed to the presumed risks of any alternative.

I'd say that issue (1) is essentially scientific. To know what the measurements actually represent, and to appreciate the practical limitations of the measurement techniques used, requires specialist technical knowledge. When that is drawn from a shared pool of individual experiences of related technical knowledge, then you could call it "scientific thinking".

Issue (5), on the other hand, isn't scientific at all. Any individual or group of individuals is equally entitled to make a judgement based on what it autonomously sees as its own interests.

Issues (2), (3) and (4) make a gradual transition between those two positions.

For the policy maker, the problem is to assess the need for a policy, taking into account (1) and (2); to consider all possibilities for a technical response (2, 3 and 4); and then to prioritize those for policy adoption (4 and 5).

Taking all those various (and even contradictory) interests into account to come up with a solution that is sufficiently acceptable to a sufficient number of parties to stand a chance of actually working is the policy maker's job, and I don't envy them it.

This started as a comment on something davidm wrote in one of his comments on Hugo Holbling's blog post Doubt and Disunity, but it rambled too much so I just put it here.

Your comments are welcome!

“I don’t know why we should think of the consensus in question as a political rather than a scientific consensus. I’m not sure how meaningful the distinction is anyway, given that we know that politics (and other cultural factors) play a role in science. But, to take a simplified example, if ten scientists get together and investigate whether the solar system is geocentric or heliocentric, and then at the end of their inquiry report that they have achieved a consensus that the solar system is heliocentric, how is that not a scientific consensus? Note also that these scientists would not be claiming that the solar system is heliocentric because they say so; they would be claiming that they have achieved a consensus on heliocentrism because that’s just what the best available evidence shows is most likely to be true.”

If you took a group of scientists today and asked them to "investigate whether the solar system is geocentric or heliocentric", they might well tell you that the heliocentric/geocentric debate is outmoded. Modern cosmology places neither the sun nor the earth at the centre of the universe and, moreover, in a proper analysis of orbital motion each body orbits the common centre of mass, not one the other.
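
The "common centre of mass" point can even be made quantitative with a back-of-the-envelope calculation. As a hedged illustration (the constants below are rounded textbook values, not precise ephemeris data), the barycentre of the Sun–Jupiter pair actually lies a little outside the Sun's surface, so even the sun visibly "orbits" rather than sitting still:

```python
# For two bodies a distance d apart, the centre of mass lies at
# r1 = d * m2 / (m1 + m2) from the first body.
# Rounded illustrative values:
M_SUN = 1.989e30      # mass of the Sun, kg
M_JUPITER = 1.898e27  # mass of Jupiter, kg
D = 7.785e11          # mean Sun-Jupiter separation, m
R_SUN = 6.96e8        # solar radius, m

def barycentre_offset(m1: float, m2: float, d: float) -> float:
    """Distance from body 1 to the two-body centre of mass."""
    return d * m2 / (m1 + m2)

offset = barycentre_offset(M_SUN, M_JUPITER, D)
print(f"Sun-Jupiter barycentre: {offset:.3e} m from the Sun's centre")
print(f"Lies outside the Sun itself: {offset > R_SUN}")
```

With these figures the offset comes out at roughly 7.4 × 10⁸ m, just beyond the solar radius; the point is illustrative only.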

Nevertheless, the stuff of astronomical science is still essentially what it always has been: observations about the position, brightness and shape of points or bodies of light in the sky at certain times when viewed from certain places. Nobody observes or measures "heliocentrism". Galileo's observations of the heavens still broadly stand today, but heliocentrism is no longer a useful or interesting doctrine.

The coining of a word like heliocentrism (an "-ism") is a call to closure; an implication that we now know all we need to know and that no further investigation is required. It was part of the drive to resolve the question, is the Church of Rome the final authority on the physical constitution of the world or isn't it?

Now, what does any of that have to do with "global warming" or the subject of disunity and doubt in science?

Just as 16/17th century astronomers didn't measure or detect "heliocentrism", so today's climate scientists don't measure or detect "global warming". They measure the temperature, or the percentage of carbon dioxide in the air, or the amount of ice on earth's surface and other specific parameters at certain places at certain times. They try to identify trends or patterns in the data. They attempt to formulate models that describe those trends. The models, in turn, allow predictions that can drive further research. But those predictions can also be used to rationalize certain courses of action outside of scientific investigation. The political debate revolves around whose proposed actions (or abstention therefrom) should prevail and become policy.

Someone may discern an upward trend in temperatures over a certain period of time and decide that "global warming" is a good name for it. As a shorthand way of referring to such a trend, global warming is still a scientifically useful concept because it indicates further paths of investigation. However, precisely because it is a shorthand for a trend in historical data, it doesn't tell us anything about the future. It may precipitate, or provide a rationale for, certain types of belief about the future, but in itself it tells us nothing.
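
It is worth being concrete about what "discerning a trend" amounts to in practice: typically nothing more mysterious than fitting a least-squares slope to a series of observations. A minimal sketch, using made-up anomaly figures purely for illustration (they are not real climate data):

```python
# Ordinary least-squares slope: the usual sense of "the trend" in a series.
def ols_slope(xs, ys):
    """Least-squares slope of ys regressed against xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

years = list(range(2000, 2010))
# Hypothetical temperature anomalies (deg C): noisy but drifting upward.
anomalies = [0.28, 0.35, 0.31, 0.40, 0.37, 0.44, 0.41, 0.49, 0.46, 0.52]

slope = ols_slope(years, anomalies)
print(f"trend: {slope:.4f} deg C per year")
```

The slope summarises the historical series; nothing in the arithmetic licenses extrapolation, which is exactly the point being made above.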

Nevertheless, if we believe, for this or any other reason, that global climate is likely to change in economically damaging ways in the coming decades and we want to do something about it, we need rationale for the actions we propose. "Global warming" is only a strong rationale if it is a "fact". And those who are most motivated to instigate specific courses of action rationalized by global warming are most motivated to state that global warming is a fact. Of course, declaring that global warming is a fact in order to rationalize a certain type of action, is to call for closure to some extent - "We already know enough, let's get on and do something about it!" Invoking the unity or consensus among scientists helps rationalize that call. Of course, it is one thing for scientists to be in a state of consensus about (1) the basic observations on which the conclusion of global warming is based; another for that consensus to extend to (2) the trends that may be discerned in those data; yet another for scientists to agree about (3) how much reliance we may place in using those trends to make projections into the future; and then quite another again for them to be (4) in a state of consensus about what the broader economic consequences may be if such projections turn out to be correct.

It matters a great deal if scientists are not in broad agreement about (1). If the veracity of the data is in doubt, then hypotheses and conjectures cannot be supported (or refuted) by them. On the other hand, consensus among climate scientists with regard to (4) hardly matters since they are no more qualified in that regard than anyone else.

Scientists have an all-too-human tendency to not only report to the rest of us on what observations they have made of the world, but also to play at being 'masters of reality' with a monopoly on the interpretation of those observations. If we do not pay proper attention to the very different role of speculation in those two activities then there will be confusion about the importance and meaning of consensus among scientists.