The writer who relies on those statements becomes a mouthpiece for the views of another – the perpetuator of another’s myth – rather than one who relates their own experience.
About this blog
Please comment at https://anglosaxonmonosyllable.wordpress.com/
Entries in this blog
Where writing inspires others to take a closer direct interest in the world for themselves, one might call its effect "beautiful". Once that has been achieved, whatever residual value remains in the writing might be what we could call literary beauty.
The rationale for defending atheism should be one's faith in the principle of equality before the law and the standards of behaviour that follow rationally from it.
William Heath's satirical engravings show how application of intellect simply tends to replace old problems with new ones.
Why it is misguided for open access advocates to bash paywall publishers for asserting their copyright.
Timb Hoswell's "The Blake Feyerabend Hypothesis" is an intriguing work that makes a case for taking William Blake seriously as a seminal figure in the philosophy of knowledge and suggests an interesting synthesis with the philosophy of Paul Feyerabend.
Contrary to Steven Pinker's recent attempt to rehabilitate "scientism", I argue that the word should stand for a persistent belief that the trustworthiness of institutionalised science is a matter of fact rather than something that needs to be subject to continuous empirical re-evaluation.
The title of this paper has aroused a bit of discussion in the last few days. You can read it for yourself, but in brief it claims to show: Science undergraduates or those who profess “belief” in science are more likely to condemn “date rape” than students of non-science subjects; Undergraduates who perform sentence unscrambling […]
Do scientific concepts, hypotheses or theories have any existence beyond the words that refer to them as though they are real things?
"Slow Scientists" call for unlimited time to think. But perhaps they should be careful what they wish for.
It is a mistake to think that artworks that borrow scientific terms or imagery are inspired by science. Rather, it is the non-scientific or even anti-scientific mythologies that grow up around science that provide the substrate on which new artworks grow.
Mandatory open data in publicly-funded research may well be best for the greatest possible amount of academic reworking and even the greatest overall commercial exploitation, but it's not necessarily the way for taxpayers to get the best return on their investment.
Is it not time that science metrics shift their focus from what is worthy of attention to who has a good track record of solving problems and what information in the literature can be regarded as trustworthy because successful problem solvers have successfully relied upon it?
I've added a new 'Like' button to my blog posts so you can say you like them without having to write a comment. So... get using it!
If so much scientific writing is bad, perhaps it is because the people who write it don't have a clear idea of what they're doing or why.
Claims that the capacity for self-correction is what separates science from other endeavours don't hold up when you look at some of those other endeavours.
Can science expect to get public attention only to the extent that it is promoted by louder and edgier publicity than other things?
Maja Klevanski's drawings give us cause to reflect on why we tend to see animals and faces even when there aren't any there.
Tweaking the Frascati definitions of research types to highlight the role that each type of research plays in an economic cycle of knowledge production.
... when there's nobody there to hear, does it make a sound?
We may not need faith to believe the results of scientific research, but investing in a research project is often an act of faith.
… so says the caption at the start of this video made by OpenScienceSummit. That statement is not necessarily meant to be a comprehensive description of science, of course, but it does presumably reflect the priorities of those who advocate Open Science. As such, it is striking that it omits to mention the idea that science is about producing reliable, empirically tested knowledge that confers practical benefits. Possibly, that is seen as something that can be taken for granted. It is the social nature of science that still has to be argued for.
While it is certainly possible for someone working in isolation to produce empirically tested knowledge that confers practical benefits, it is also fairly obvious that sharing of ideas and observations allows for a greater diversity of hypotheses to consider and a greater range of experience to test them against. Likewise, a greater diversity of perspectives and ingenuity can only result in greater overall practical benefit being derived from any given expression of scientific knowledge. As soon as there is any sense of competitive urgency or ambition about the scope of production of empirically tested knowledge that confers practical benefits, it is advantageous to work socially.
How can Open Science encourage or optimise the benefits of this social aspect of science?
Science, as we understand it today, is largely produced by professional scientists with specific kinds of education and training, working with equipment and facilities not usually found outside the world of professionalized science. To a large degree, the principal influences on their choice of research problem and the principal audience for their reporting of results come from within that world. If we are interested in allowing those from outside the world of professionalized science to realise the greatest overall practical benefit from scientific research, we need to see to it that the choice of research problems and the reporting of results are done in ways that take into account the perspectives of people from outside that world. One question for advocates of Open Science is therefore whether Open Science helps achieve that aim.
The term Open Science has been used in connection with a variety of concerns and its overall meaning arises as a summary of those various contexts. The short Wikipedia entry on Open Science describes it as “the umbrella term of the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge”.
A few readily-found examples of initiatives branding themselves as Open Science include: the OpenScience Project, “dedicated to writing and releasing free and Open Source scientific software”; the Open Science Grid, which “advances science through open distributed computing”; and the Open Science Directory, “a global search tool for all open access and special programs journal titles”. A blog post at the Open Science Project answers the question “What, exactly, is Open Science?” with: “transparency in experimental methodology, observation, and collection of data; Public availability and re-usability of scientific data; Public accessibility and transparency of scientific communication; Using web-based tools to facilitate scientific collaboration”. Open Science Summit answers the question “What is Open Science?” with “Science in the 21st century using distributed innovation to address humanity’s greatest challenges”.
The Open Science Federation aims “to improve science communications” with the participation of “open source computer scientists and citizen scientists, science writers, journalists, and educators, and makers of and advocates for Open Data, Open Access, and Open Notebooks”. The federation’s own contribution appears mainly to involve encouraging the use of blogs and online social networking media. From examples like these, it is possible to identify several specific areas of concern to Open Science advocates. Altogether, I have identified the following specific topics, each of which seems to have a significant following among Open Science advocates:
Readiness to make one’s data available to others is fundamental to good scientific practice. It bolsters confidence in one’s conclusions and allows alternative interpretations to be developed. The Open Data principle attempts to consolidate this into express obligations, first to make all data (including so-called “negative” data from studies or experiments that seemed to lead nowhere) available and, second, to make it available on terms that allow others to reinterpret and re-use it freely. This provides the possibility of allowing as many uses and interpretations of a given dataset as possible. There are limitations, of course. Some datasets will be proprietary and there is not always a clear boundary between “data” and anecdotal accounts of observations. More seriously, the effective and efficient use of datasets requires standardization of comprehensive metadata and some standardization in formatting of datasets themselves. These requirements are well advanced in some fields of science, but much less so in others, where significant investment in standardization would be required to realise the benefits of Open Data, even if willingness to make data available is well established. Establishing standards for all types of data is a significant undertaking and may even confound the openness of scientific enquiry by placing constraints on the type of data to be collected. At the same time, such standards assist in establishing commensurability between datasets collected in differing contexts, which could help diversify scientific enquiry. Until those considerations are addressed, professionalized scientists are likely to choose problems for study in much the same way whether they have a view to making their data open or not.
For some advocates of Open Science, the term is largely synonymous with an insistence on making scientific software Open Source. In fields of science where very large datasets are generated, data analysis may rely on specialist software. Publishing the source code of such software allows the manipulations it carries out to be properly understood by everyone with an interest in knowing, helps the discovery and elimination of coding errors and could accelerate the development of new or improved software to extend and diversify possible analysis. It is really just one aspect of the long-established scientific principle of disclosing one’s methods. While the term “Open Source” need not imply any more than disclosure of the source code, it is often accompanied by an expectation of freedom to use as well. This ensures that the above benefits are realised as broadly as possible. However, the Open Source principle does not in itself do anything to open up choice of what datasets are desirable in the first place.
Allowing free access to and distribution of scientific literature helps make the results of research more widely available. Publishing in Open Access journals or, failing that, do-it-yourself web publishing must by now be possible for just about any professionalized scientist. Nevertheless, for many (most?), the priority remains to publish in a ‘good’ journal even if that means libraries and individuals will be charged hefty subscription fees. This priority is professional, not scientific: a ‘good’ journal will attract readers and enhance one’s CV better than any of the open options. Much is said by Open Access advocates of how this professionalism limits access to science by those who could use it from outside the circles of professionalized science. Another possible effect of this ‘professionalist channelling’ of scientific publication into ‘good’ journals that is much less discussed lies in the uniformity of perception of the value of topics to be researched. Researchers gear their research priorities to what will get published in a ‘good’ journal. This effect will be pretty much the same whether the ‘good’ journals are open access or not.
In principle, the idea of opening up the drafting of research proposals presents an excellent opportunity for “lay” people to participate in deciding the direction of scientific research. It changes the relationship between publicly-funded professional scientists and “society” from one in which the existence of a class of professionalized scientists is seen as a public good in its own right to one in which the public good stems entirely from the extent to which professional scientists bring their specialist knowledge and expertise to bear on problems selected by the public. This does represent quite a shift from the way things are generally done at present, of course. Not least, it requires a change in the way that the world of professionalized science sees itself. Even scientists who are apparently committed to “the re-use and re-purposing … of scientific knowledge through collaboration between the scientific community and the wider society” go on to represent that collaboration like this. The research world is portrayed as something separate, even remote, from “society”. Research provides society with information through education and publishing and is itself influenced by society through “policy”, volunteer work and “citizen science” (more about that below). Behind this type of representation is a tacit assumption of research (professionalized science) leading an autonomous existence, almost as though science were itself a natural phenomenon outside the bounds of human society. In contrast to that, there is the view of science as a set of customs followed by a set of people in pursuit of relating to others and making a living for themselves within society.
In his book Reinventing Discovery, Michael Nielsen features the Polymath Project, initiated by Cambridge University mathematician Tim Gowers, as an example of crowdsourcing in science. Gowers used the reach of online social networking to swiftly form a virtual community of people interested in collaborating on developing a mathematical proof. This community was informal and non-professional. Amateurs could join in on practically the same basis as professionals. Individuals contributed only as much as they wanted to. A critical contribution might come from anyone at any time. They had a solution in only a few weeks.
Presumably, at each step of the way, the size of the community was enough to ensure that someone would already be thinking along the right lines to suggest the next step. One wonders whether an entirely professional collaboration, probably of fewer people and united as much by considerations of professional or expert status as by their interest in a specific mathematical problem, might have been keener to preserve orthodoxy in their thinking and would have taken longer.
A deprofessionalized crowdsourcing approach of the type exemplified by Polymath might also be used in experimental science to choose specific hypotheses for investigation, to design experiments that properly test a chosen hypothesis, or to evaluate hypotheses against data. Potentially, then, crowdsourcing could open science up to participation by non-professionals and, by that token, to some extent, direction by non-professionals. However, one has to question how far this could progress before running into difficulties. To what extent does Polymath’s success stem from it having been instigated by an individual of Gowers’ status? His reputation meant that a lot of people were already ‘watching’ (i.e. reading his blog) when he first started it. It also gave a kind of credibility to his selection of problem to work on. The collaboration formed quickly because a lot of people were aware of it as soon as it was announced and because Gowers’ involvement gave them confidence that the project would go somewhere. Gowers’ choice of problem seems to have been made on the basis that there was academic mileage in it. In other words, his career priorities would be served by it. That wasn’t necessarily true of other participants. Indeed, for non-academic participants there was little to gain other than the amusement value of participating itself. To be sure, Gowers gave up the kind of exclusivity of authorship he might have had from a more conventional way of working, but he retained “authorship” of the choice of problem to be tackled in the first place. To what extent can we expect a similar crowdsourcing approach to work for just anyone who has a problem they feel unable to solve themselves? Further difficulties become visible when one considers what might happen when the proposed problem is being solved in pursuit of some further practical purpose. 
How might a crowdsourcing collaboration go if working on a problem connected with potential responses to global climate change or mass vaccination proposals?
Science Communication/Public Understanding of Science
The world of professionalized science consists largely of networks of people who have undergone extensive formal science education and training and who talk to one another using specialized language. It’s difficult for an outsider who wants informed answers to specific questions to just dip into the primary scientific literature and get what they want. This is not only because there is a specialized vocabulary to learn, but also because the professional scientific literature follows the research priorities of professional scientists. If the questioner does not frame his or her question in relation to those priorities, it will be hard to relate what is found in the literature to the question, even if the vocabulary has been mastered. Science Communication and Public Understanding of Science are attempts to bridge this gap by training scientists and journalists to explain science in ‘ordinary’ terms. These initiatives could, in principle, foster a general understanding of science that could help members of the “lay” public articulate their interests into proposals for research. In practice, however, much of what is produced under these headings at present is either aimed at persuading the public that the projects of professionalized science are aligned with their interests and therefore worthy of public funding support, or at showing how existing scientific knowledge can inform government policy decisions. If Science Communication and Public Understanding of Science are limited to communicating the research priorities of professional scientists to the public and understanding how science can inform the decision-making of professional politicians, then they ‘open’ science only by providing a window through which the public can gaze as an essentially passive audience. They don’t open the way to direct public involvement in driving local research priorities.
Volunteer Science/Citizen Science
The term “Citizen Science” is used to mean different things by different people. To some, it means professional scientists recruiting members of the public to assist with data collection or data analysis. In some online research projects this has involved large numbers of informal volunteer researchers. Accordingly, I would prefer to call it Volunteer Science. While Volunteer Science certainly allows the public to get involved in research, it is rather like crowdsourcing in that most such projects so far seem to rely on direction by professionals. It remains to be seen whether online networks of ‘lay’ people who have a common civic interest or problem thought to be amenable to scientific investigation can recruit professional scientists.
Such engagement of professional scientists with those outside the world of professionalized science is described by Jack Stilgoe in Citizen Scientists – Reconnecting Science with Civil Society. Stilgoe’s Citizen Scientists are “people who intertwine their work and their citizenship, doing science differently, working with different people, drawing new connections and helping to redefine what it means to be a scientist”. The Citizen Scientist is motivated by a sense of engagement with civic interests that not only permeates, but actually drives his or her research interests. Research priorities are chosen not on the basis of what maintains one’s reputation and status within a professionalized science community linked by a professional interest in science, but rather on the basis of how they contribute to the needs and ambitions of a civic community linked by place or civic tradition.
How might Open Science contribute to the advance of Citizen Science?
Opening Up Open Science: The Possibility of Civic Research
We have seen that the concept of Open Science encapsulates a variety of initiatives, each of which encourages more openness, closeness or collaboration between the various parties involved in the scientific enterprise. Advocates of Open Science generally argue that the benefit of these developments is that they will accelerate the advance of science. In Reinventing Discovery, Michael Nielsen looks forward to “a new era of networked science that speeds up discovery” and assures us that this “will deepen our understanding of how the universe works and help us address our most critical human problems”. Inherent in that is the idea that science can and will (eventually) answer every question. Maybe so. But who decides which questions we tackle first? As I’ve tried to argue above, most of the initiatives of Open Science leave that question open to the status quo. In effect, that means professional scientists acting within the culture of professionalized science itself in conjunction with government or other organizations that sponsor them. There is a presumption that these are effective at deciding what “our most critical human problems” are and then translating them into the most appropriate courses of action for scientists.
Many civic organizations, associations, networks and individuals perceive issues or problems in relating their own particular interests to those of other members of society. While resolution of such issues is ultimately political, progress towards resolution may be advanced in some cases by some kind of scientific investigation. Such investigations we may call ‘civic research’. Groupings that might instigate such research could include NGOs, patient advocacy groups, consumer rights groups, local residents associations concerned about environmental contamination or pollution, farmers concerned with land stewardship and others. Because the concerns of such civic groups are ultimately political, because the types of scientific investigation they want often do not align well with the priorities of professionalized scientists and because such groups often do not have sufficient funds to engage the services of scientists on a professional contract basis, working with them is often not attractive for professionalized scientists. For engagement with such groups to become attractive, scientists have to be personally motivated by the political objectives, not just the ambition to pursue a scientific career. The scientist is a committed political actor whose contribution to the project happens to take the form of scientific knowledge. The science is as overtly political as the aims of the group. Nevertheless, to be effective in that role, to bring to it the benefits of as broad a range of scientific experience and understanding as it can use, scientists need to be connected to and to draw upon the world of Open Science. Although that world is not aligned with any specific civic commitment, it can inform countless committed initiatives. Its openness also allows it to draw upon, integrate and grow from the submitted experience of countless scientists individually committed to overtly political programs of civic research.
Is the possibility of civic research an alternative to professionalized science? On the basis of the above, it seems that if Open Science can open up the prioritization of research problems to be addressed, it could sustain a lot more civic research than currently takes place. Moreover, if Open Science can create standards in dataset and metadata format, then the results of civic research projects could be more readily integrated into the Open Science knowledge base itself. Civic research could grow, be sustained by and eventually sustain the scientific knowledge base without the need for professionalized science. I intend to look more closely at this question in future posts.
When we turn to scientific experts, what do we expect from them?
For some, the preference will be for the Absolute Truth of universal natural laws that unavoidably govern everything we do. Such a preference naturally entails commitment to a material reality that is independent of human affairs. It also entails commitment to that reality being amenable to empirical investigation and, moreover, to such investigation being the good and proper aim of science. Such commitments engender certain expectations of science and hence of scientific experts. These expectations I will here refer to as realist expectations of science.
What other types of expectation are possible? People of a pragmatic frame of mind may feel that the realist view is too stringent to be useful. They may point to the perpetual imperfection of scientific knowledge; the fact that even the most firmly established of scientific principles may need revision in the light of future discoveries. They may say that we can never be sure that any of the “natural laws” proposed by science really are natural laws but that they can nevertheless often provide useful solutions to technical problems because they allow us to make predictions with a fair degree of confidence. Being pragmatic, they may say that this is the real value of science and should be the proper basis of the expectations we place upon scientific expertise. Under this view, the job of science is to provide instruments with which we may make good bets, based on experience, on the likely outcome of any chosen course of action. Accordingly, the expectations thereby placed on scientific expertise may be characterised as “instrumentalist”. Note that the holder of such views need not necessarily deny the existence of natural laws or science’s ability to discover them. It is the expectation placed upon science (and hence the basis for deciding how much to spend on it) that is being deemed instrumentalist here.
Another type of critic may yet remain dissatisfied. They may be concerned that even if a given theory does give us confidence in certain types of prediction, it may not be the best theory. Its predictions may be less accurate than those of another or it may be able to make predictions only in a narrower range of circumstances than another.
What might experts themselves make of all this? One example is to be seen in this article by Daniel Sarewitz. Sarewitz is a scientific expert. He is invited to sit on consensus committees that are asked to “condense the knowledge of many experts into a single point of view that can settle disputes and aid policy-making”.
Clearly, the sponsors of such committees place a premium on consensus and therefore favour the realist or at least instrumentalist view of science. Sarewitz, however, distrusts consensus, saying:
Indeed, he apparently sees the search for consensus as unscientific:
Here we see what is apparently an advocacy of the idealist view of science. Most striking is the contrast drawn between the certainty (implied consensus) of the textbooks on the one hand and “real science” on the other. Real science is taken to be inherently in the domain of controversy. Sarewitz concludes by proposing a role for science in policy making that is quite distinct from the consensus-seeking norm of current practices:
Should the job of experts be to ensure the broadest range of considerations in public deliberation? Science is then not the impartial arbitrator, but simply one way of safeguarding the impartiality of those whose deliberations do make the (political) arbitration.
In the first article in this series, I examined the shortcomings of both philosophy (epistemology) and sociology in answering the question What is this Thing Called Science? I proposed that an examination of science-as-literature could be a basis for answering the question, since not only is the importance of “the literature” acknowledged by scientists themselves, but its structure is likely to reflect both the epistemology and sociology of science.
Many professional scientists would probably agree that accounts of original empirical investigations published in scientific journals (what I will here call Research Reports) constitute the “primary” scientific literature. The assumption is that since the main task of science is to collect and document empirical observations of the world and since Research Reports are where such observations are first reported publicly, then every other kind of scientific literature must necessarily derive from Research Reports.
Some might argue that records of raw experimental data (instrument readouts, lab notebooks and so on) have a stronger claim on primacy since it is from those that the papers published in scientific journals are composed. While it has not generally been the custom to publish or widely distribute raw experimental data (and one might on that basis question whether such records qualify as “literature”), to do so would be entirely consistent with the ideal of science as a collective cultural body of empirical knowledge. Indeed, advocates of open science often argue for the publication of raw experimental data to be made routine on that basis. Although raw data records lack the commentary found in the traditional scientific paper, they still necessarily embody choices as to what data were collected and at what point the quantity of data was deemed sufficient to merit compilation of a report. Those choices, in turn, may be made in the context of preconceived notions of what arguments will be made in an eventual published paper. For this reason, I would argue that records of raw experimental data are themselves Research Reports albeit of a different style from the traditional research paper.
Another more obvious reason to question the supposed primacy of Research Reports would be to point out that the production of Research Reports requires investments of time and effort or, to put it another way, funding. Very few scientists today are self-funded. They have to persuade others that those others can benefit by making funding available to researchers. Those ultimately holding the purse strings are not usually scientists, though they may delegate the choice of which particular research projects to fund to people who are. Preceding the production of data records and journal papers or other Research Reports derived from them, therefore, there is a need to produce Research Proposals. Such documents deal with scientific issues and, if not always published, are certainly written to be read by people other than their authors. As such, Research Proposals form part of the literature of science and arguably have a stronger claim on primacy than do Research Reports. This is not only because successful Research Proposals are necessary for the production of research results and therefore Research Reports, but also because Research Proposals are arguably the most important channel of science communication between professionalized scientists and those who fund them. Unlike Research Reports, which are written largely for consumption by other scientists, Research Proposals must ultimately persuade people outside the circles of professionalized science, and their success in that is closely tied to the possibility of producing further science. While the focus of Research Proposals is science, they necessarily and sometimes quite openly also rely on rhetoric and appeals to political concerns. The effectiveness of these in turn can be preconditioned by interventions of science communication in the broader public sphere.
For instance, when we consult a professional scientist as a source of expert opinion, we may expect that much of the expert language used will have been rehearsed in Research Proposals and that the opinions put forward are made with an eye on how they are likely to set expectations that will help the success of future requests.
Arguably therefore, while Research Reports are probably the type of scientific document most closely studied by professional scientists themselves, the proposition that they represent the “primary” form of scientific literature is subjective. This can be seen more clearly still if we consider the position of someone unfamiliar with a particular field of investigation wanting to quickly gain a reliable overview. Most Research Reports emanate from those fields of research where the most speculative empirical enquiry is still active. They can present a fragmented and sometimes contradictory view of their field. There is therefore a market for Research Reviews. At their best, these are articles that survey existing Research Reports in a putative field to reveal emerging consensus, and to delineate, and point out ways of resolving, any controversies in that field. Research Reviews therefore add cohesion to a given field of science and help make it more readily comprehensible. In some areas of research, standards of production of Research Reviews have been formalised (for instance, Cochrane reviews in medicine). Elsewhere, their production is more ad hoc, sometimes being driven by little more than a need for scientists without research funding to maintain a publication rate that will facilitate the success of future Research Proposals. In any case, Research Reviews are generally written by professional scientists for professional scientists, but they can provide a good way into a particular field of science for outsiders.
Yet another example of how the supposed primacy of Research Reports is subjective may be seen in the case of Teaching Texts. Old scientists wear out and die; they have to be replaced with new blood. Scientific education serves this market and, to do so, produces various types of scientific literature that can collectively be referred to as Teaching Texts. Most typically sold as textbooks, scientific Teaching Texts can be seen as a further development from the Research Review, for they are essentially reviews. The difference is that they typically cover a broader field (sometimes very broad, like ‘Physics’, ‘Chemistry’ or ‘Biology’), tend to focus on areas of strong consensus, and are written to be accessible to an audience who are not (yet) professional scientists.
My purpose here has not been to show that any other type of scientific literature enjoys primacy over Research Reports, but rather that any perceived primacy of Research Reports is relative to the concerns of the professionalised scientist working in a particular field of research. As soon as we look at the need for science as a cultural activity to sustain itself, whether by soliciting funds, by recruiting new scientists into its ranks, or by enabling existing scientists to reach consensus on the scope and meaning of an emerging research field, we see that other types of scientific literature start to look more “primary”.
As a bit of a joke, it is possible to represent graphically the relationships between the various genres of scientific literature discussed above, as components of a “scientistic organism”, dubbed Amoeba scientisticus, featuring an “empirical” nucleus and “theoretical” cytoplasm.
A. scientisticus feeds on money and “breathes” people (they are inhaled scientistically naïve and exhaled scientistically skilled). The nucleus is where experimental data are produced. Research Reports form from these and migrate toward the nuclear membrane. They are eventually expelled from the nucleus (published) into the cytoplasm. Here, they react with each other and with existing Research Reviews, leading to the formation of new Research Reviews. Some of these migrate back into the nucleus, catalysing the further formation of data and Research Reports. On the way, while still in the cytoplasm, they may react further with each other and with emerging Research Reports. Such reactions may lead to the formation of Research Proposals or of Teaching Texts. These, in turn, allow the organism to extend pseudopodia (funding pseudopodia from Research Proposals, recruiting pseudopodia from Teaching Texts) that can engulf money (funding pseudopodia) and people (recruiting pseudopodia) respectively. Once internalised, funds and personnel are carried to the nucleus, where they are used to produce more experimental data. Thus, the various genres of scientific literature work in concert to drive the “metabolism” and sustain the life of the scientistic organism. In future articles I intend to show how further genres of scientific literature are used to position the organism so that it can maximise its exposure to the money and people it needs to perpetuate itself.
It might be objected that my account here denigrates science by summarising it as a socio-economic phenomenon and ignoring its ability to produce valuable knowledge. However, the account here is an attempt to describe science in the most general terms. The types and value of knowledge produced by science vary widely between fields of inquiry and are therefore not suitable as the focus of a general account of science. Instead, I have focused on the function of science by which it produces various types of literature since, as I see it, this is much more consistent over the entire range of enquiries that we call “science”. Characterising science as a socio-economic phenomenon reminds us that ultimately “science” is, like any human endeavour, just a matter of “people doing stuff”. That in doing so they produce knowledge is effectively a way of saying that they don’t just do the same stuff over and over again. It gradually changes over time.
George Monbiot has just published an article on the very high subscription rates charged by certain publishers of ‘high impact’ scientific journals (see “The Lairds of Learning” on George Monbiot’s own website or the Guardian here). He does not hesitate to brand commercial publishers of academic journals as “the most ruthless capitalists in the Western world” and suggests that “the racket they run is most urgently in need of referral to the competition authorities”. That might be the case if they really were running a monopoly or cartel, but are they?
Professor David Colquhoun drops in to comment on Monbiot’s article at the Guardian, saying:
So where’s the monopoly? All scientists and other academics have to do is put their papers on the web.
George Monbiot says he wants governments to “work with researchers to cut out the middleman altogether, creating … a single global archive of academic literature and data. Peer-review would be overseen by an independent body. It could be funded by the library budgets”, but why? Colquhoun’s suggestion could be up and running by tea time. No need for government meddling! It is already completely within the power of the great majority of academics from now on to make the results of their research freely and widely available by self-publishing on the web. So why don’t more of them do it?
Monbiot has an answer:
Is that it? The making of “coherent democratic decisions”, the “tax on education”, “a stifling of the public mind” and the apparent contravention of the Universal Declaration of Human Rights that George Monbiot points to are all trumped by academics’ need to secure grants and advance their careers?
The peachy business that the commercial publishers of academic journals enjoy wasn’t really engineered by them. They are just taking advantage of a situation that academics find themselves in: the need to publish in ‘high impact’ journals.
On the face of it, it seems ridiculous that the value of someone’s research should be based on which journals they publish in. Essentially, going by journal ‘impact factors’ is just a way of judging the papers without having to actually read them. It’s like judging a man by the clubs he’s joined rather than by getting to know the man himself. Nevertheless (we are told), this is the basis on which academic careers are built today.
So how did this situation arise? Is it what academic researchers chose for themselves? If not, how is it that they, of all people, have not managed to convince their paymasters (that is, ultimately, you and me) that there is a coherent alternative way of explaining why the public money that supports academic research should indeed be spent that way, rather than on better public healthcare, better schools or better social services?
George Monbiot wants to go after the publishing houses, but in the light of David Colquhoun’s observations, they’re an irrelevance. The real issue here, I would suggest, is the inability or unwillingness of academic researchers to set criteria for evaluating their own research that don’t just sound like self-interest and that they actually want to live by themselves. Any number of claims that academic research is culturally enriching, or improves education, or reduces suffering, or makes for better public policy decision-making may be true, but the choices of publication route still actually made by many researchers suggest that their university careers are more important to them than any of those things. Could it be that the publishing houses’ profits are just one consequence of publicly-funded science being largely carried out by people who don’t themselves believe the claims made for its value as a public good?