Suggesting Reviewers in the Era of arXiv and Twitter

Along with many others in the evolutionary genetics community, I’ve recently converted to using arXiv as a preprint server for new papers from my lab. In so doing, I’ve confronted an unexpected ethical question concerning pre-printing and the use of social media, which I was hoping to generate some discussion about as this practice becomes more common in the scientific community. The question concerns the suggestion of reviewers for a journal submission of a paper that has previously been submitted to arXiv and then subsequently discussed on social media platforms like Twitter. Specifically put, the question is: is it ethical to suggest reviewers for a journal submission based on tweets about your arXiv preprint?

To see how this ethical issue arises, I’ll first describe my current workflow for submitting to arXiv and publicizing it on Twitter. Then, I’ll propose an alternative workflow that might be considered “gaming” the system, and discuss precedents from the pre-social-media world that might inform the resolution of this issue.

My current workflow for submission to arXiv and announcement on Twitter is as follows:

  1. submit manuscript to a journal with suggested reviewers based on personal judgement;
  2. deposit the same version of the manuscript that was submitted to journal in arXiv;
  3. wait until arXiv submission is live and then tweet links to the arXiv preprint.

From doing this a few times (as well as benefiting from additional Twitter exposure via Haldane’s Sieve), I’ve realized that there can often be fairly substantive feedback about an arXiv submission on Twitter, in the form of who (re)tweets links to it and what people say about the manuscript. It doesn’t take much thought to realize that this information could potentially be used to influence a journal submission, specifically which reviewers to suggest or oppose, via an alternative workflow:

  1. submit manuscript to arXiv;
  2. wait until arXiv submission is live and then tweet about it;
  3. monitor and assimilate feedback from Twitter;
  4. submit manuscript to journal with suggested and opposed reviewers based on Twitter activity.

The second workflow also arises incidentally under the first if your initial journal submission is rejected, since there is naturally a time lag during which it is difficult to fully ignore activity on Twitter about an arXiv submission.

Now, I want to be clear that I have not used, and do not intend to use, the second workflow (yet), since I have not fully decided whether this is an ethical approach to suggesting reviewers. Nevertheless, I lean towards the view that it is no more or less ethical than the current mechanisms of selecting suggested reviewers based on: (1) perceived allies/rivals with relevant expertise or (2) informal feedback on the work in question presented at meetings.

In the former case of using who you perceive to be for or against your work, you are relying on personal experience and subjective opinions about researchers in your field, both good and bad, to inform your choice of suggested or opposed reviewers. This is in some sense qualitatively no different from using information on Twitter prior to journal submission, except that it is based on a closed network using past information, rather than an open network using information specific to the piece of work in question. The latter case of suggesting reviewers based on feedback from meeting presentations is perhaps more similar to the matter at hand, and I suspect would be considered by most scientists to be a perfectly valid mechanism to suggest or oppose reviewers for a journal submission.

Now, of course I recognize that suggested reviewers are just that, and editors can use or ignore these suggestions as they wish, so this issue may in fact be moot. However, based on my experience, suggested reviewers are indeed frequently used by editors (if not, why would they be there?). Thus resolving whether smoking out opinions on Twitter is considered “fair play” is probably something the scientific community should consider more thoroughly in the near future, and I’d be happy to hear what other folks think about this in the comments below.

Why You Should Reject the “Rejection Improves Impact” Meme


Over the last two weeks, a meme has been making the rounds in the scientific twittersphere that goes something like “Rejection of a scientific manuscript improves its eventual impact”. This idea is based on a recent analysis of patterns of manuscript submission reported in Science by Calcagno et al., which has been actively touted in the scientific press and seems to have touched a nerve with many scientists.

Nature News reported on this article on the first day of its publication (11 Oct 2012), with the statement that “papers published after having first been rejected elsewhere receive significantly more citations on average than ones accepted on first submission” (emphasis mine). The Scientist’s piece the same day, entitled “The Benefits of Rejection”, led with the claim that “Chances are, if a researcher resubmits her work to another journal, it will be cited more often”. Science Insider led the next day with the claim that “Rejection before publication is rare, and for those who are forced to revise and resubmit, the process will boost your citation record”. Influential science media figure Ed Yong tweeted “What doesn’t kill you makes you stronger – papers get more citations if they were initially rejected”. The message from the scientific media is clear: submitting your papers to selective journals and having them rejected is ultimately worth it, since you’ll get more citations when they are published somewhere lower down the scientific publishing food chain.

I will take on faith that the primary result of Calcagno et al. that underlies this meme is sound, since it has been vetted by the highest standard of editorial and peer review at Science magazine. However, I do note that it is not possible to independently verify this result, since the raw data for this analysis were not made available at the time of publication (contravening Science’s “Making Data Maximally Available Policy”), and have not been made available even after being queried. What I want to explore here is why this meme is being so uncritically propagated in the scientific press and twittersphere.

As succinctly noted by Joe Pickrell, anyone who takes even a cursory look at the basis for this claim would see that it is at best a weak effect*, and is clearly being overblown by the media and scientists alike.

Taken at face value, the way I read this graph is that papers that are rejected and then published elsewhere have a median value of ~0.95 citations, whereas papers that are accepted at the first journal they are submitted to have a median value of ~0.90 citations. Although not explicitly stated in the figure legend or in the main text, I assume these results are on a natural log scale since, based on the font and layout, this plot was most likely made in R and the natural scale is the default in R (also, the authors refer to the natural scale in a different figure earlier in the text). Thus, the median number of citations per article that rejection may provide an author is on the order of ~0.1. Even if this result is on the log10 scale, this difference translates to a boost of less than one citation. While statistically significant, this can hardly be described as a “significant increase” in citation. Still excited?
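To make the back-of-the-envelope arithmetic explicit, here is a minimal sketch of the calculation under both scale assumptions. The median values (~0.95 and ~0.90) are my eyeballed readings of the figure discussed above, not numbers reported by the authors, so treat the output as illustrative only:

```python
import math

# Approximate median values read off the figure (eyeballed, not reported by the authors)
median_rejected = 0.95   # papers rejected then published elsewhere
median_accepted = 0.90   # papers accepted at the first journal they were submitted to

# If the axis is natural-log citations, back-transform with exp()
diff_ln = math.exp(median_rejected) - math.exp(median_accepted)

# If the axis is log10 citations, back-transform with 10**x
diff_log10 = 10 ** median_rejected - 10 ** median_accepted

print(f"Implied citation boost, natural log scale: {diff_ln:.2f}")    # ~0.13 citations
print(f"Implied citation boost, log10 scale:       {diff_log10:.2f}") # ~0.97 citations
```

Either way, the back-transformed difference works out to somewhere between roughly a tenth of a citation and one citation per article.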

More importantly, the analysis of the effects of rejection on citation is univariate and ignores most other possible confounding explanatory variables. It is easy to imagine a large number of other confounding effects that could lead to this weak difference (number of reviews obtained, choice of original and final journals, number of authors, rejection rate/citation differences among disciplines or subdisciplines, etc.). In fact, in panel B of the same figure 4, the authors show a stronger effect of changing discipline on the number of citations in resubmitted manuscripts. Why a deeper multivariate analysis was not performed to back up the headline claim that “rejection improves impact” is hard to understand from a critical perspective. [UPDATE 26/10/2012: Bala Iyengar pointed out to me a page on the author's website that discusses the effects of controlling for year and publishing journal on the citation effect, which led me to re-read the paper and supplemental materials more closely and see that these two factors are in fact controlled for in the main analysis of the paper. No other possible confounding factors are controlled for, however.]

So what is going on here? Why did Science allow such a weak effect with a relatively superficial analysis to be published in one of the supposedly most selective journals? Why are major science media outlets pushing this incredibly small boost in citations that is (possibly) associated with rejection? Likewise, why are scientists so uncritically posting links to the Nature and Scientist news pieces and repeating the “Rejection Improves Impact” meme?

I believe the answer to the first two questions is clear: Nature and Science have a vested interest in making the case that it is in the best interest of scientists to submit their most important work to (their) highly selective journals and risk having it be rejected. This gives Nature and Science first crack at selecting the best science and serves to maintain their hegemony in the scientific publishing marketplace. If this interpretation is true, it is an incredibly self-serving stance for Nature and Science to take, and one that may backfire since, on the whole, scientists are not stupid people who blindly accept nonsense. More importantly, though, using the pages of Science and Nature as a marketing campaign to convince scientists to submit their work to these journals risks their credibility as arbiters of “truth”. If Science and Nature go so far as to publish and hype weak, self-serving scientometric effects to get us to submit our work there, what’s to say they would not do the same for actual scientific results?

But why are scientists taking the bait on this one? This is more difficult to understand, but most likely has to do with the possibility that most people repeating this meme have not read the paper. Topsy records over 700 and 150 tweets to the Nature News and Scientist news pieces, respectively, but only ~10 posts to the original article in Science. Taken at face value, roughly 80-fold more scientists are reading the news about this article than reading the article itself. To be fair, this is due in part to the fact that the article is not open access and is behind a paywall, whereas the news pieces are freely available**. But this is only the proximal cause. The ultimate cause is likely that many scientists are happy to receive (uncritically, it seems) any justification, however tenuous, for continuing to play the high-impact-factor journal sweepstakes. Now we have a scientifically valid reason to take the risk of being rejected by top-tier journals, even if it doesn’t pay off. Right? Right?

The real shame in the “Rejection Improves Impact” spin is that an important take-home message of Calcagno et al. is that the vast majority of papers (>75%) are published in the first journal to which they are submitted.  As a scientific community we should continue to maintain and improve this trend, selecting the appropriate home for our work on initial submission. Justifying pipe-dreams that waste precious time based on self-serving spin that benefits the closed-access publishing industry should be firmly: Rejected.

Don’t worry, it’s probably in the best interest of Science and Nature that you believe this meme.

* To be fair, Science Insider does acknowledge that the effect is weak: “previously rejected papers had a slight bump in the number of times they were cited by other papers” (emphasis mine).

** Following a link available on the author’s website, you can access this article for free here.

References
Calcagno, V., Demoinet, E., Gollner, K., Guidi, L., Ruths, D., & de Mazancourt, C. (2012). Flows of Research Manuscripts Among Scientific Journals Reveal Hidden Submission Patterns. Science. DOI: 10.1126/science.1227833


The Cost to Science of the ENCODE Publication Embargo

The big buzz in the genomics twittersphere today is the release of over 30 publications on the human ENCODE project. This is a heroic achievement, both in terms of science and publishing, with many groundbreaking discoveries in biology and pioneering developments in publishing to be found in this set of papers. It is a triumph that all of these papers are freely available to read, and much is being said elsewhere in the blogosphere about the virtues of this project and the lessons learned from the publication of these data. I’d like to pick up here on an important point made by Daniel MacArthur in his post about the delays in the publication of these landmark papers that have arisen from the common practice of embargoing papers in genomics. To be clear, I am not talking about embargoing the use of data (which is also problematic), but embargoing the release of manuscripts that have been accepted for publication after peer review.

MacArthur writes:

Many of us in the genomics community were aware of the progress the [ENCODE] project had been making via conference presentations and hallway conversations with participants. However, many other researchers who might have benefited from early access to the ENCODE data simply weren’t aware of its existence until today’s dramatic announcement – and as a result, these people are 6-12 months behind in their analyses.

It is important to emphasize that these publication delays are by design, and are driven primarily by the journals that set the publication schedules for major genomics papers. I saw first-hand how Nature sets the agenda for major genomics papers and their associated companion papers as part of the Drosophila 12 Genomes Project. This insider’s view left a distinctly bad taste in my mouth about how much control a single journal has over some of the most important community resource papers that are published in Biology.  To give more people insight into this process, I am posting the agenda set by Nature for publication (in reverse chronological order) of the main Drosophila 12 Genomes paper, which went something like this:

7 Nov 2007: papers are published, embargo lifted on main/companion papers
28 Sept 2007: papers must be in production
21 Sept 2007: revised versions of papers received
17 Aug 2007: reviews are returned to authors
27 Jul 2007: papers are submitted

Not only was acceptance of the manuscript essentially assumed by the Nature editorial staff, but the entire timeline was spelled out in advance, with an embargo built into the process from the outset. Seeing this process unfold first-hand was shocking to me, and has made me very skeptical of the power that the major journals have to dictate terms about how we, and other journals, publish our work.

Personally, I cannot see how this embargo system serves anyone in science other than the major journals. There is no valid scientific reason that major genome papers and their companions cannot be made available as online accepted preprints, as is now standard practice in the publishing industry. As scientists, we have a duty to ensure that the science we produce is released to the general public and the community of scientists as rapidly and openly as possible. We do not have a duty to serve the agenda of a journal to increase its cachet or revenue stream. I am aware that we need to accept delays due to quality control via the peer review and publication process. But the delays due to the normal peer review process are bad enough, as ably discussed recently by Leslie Vosshall. Why on earth would we accept that journals build further unnecessary delays into the publication process?

This of course leads to the pertinent question: how harmful is this system of embargoes? Well, we can put an upper estimate on* this pretty easily from the submission/acceptance dates of the main and companion ENCODE papers (see table below; a sketch of the calculation follows the table). In general, most ENCODE papers were embargoed for a minimum of 2 months, but some were embargoed for up to nearly 7 months. Ignoring (unfairly) the direct impact that these delays may have on the careers of PhD students and post-docs involved, something on the order of 112 months of access to these important papers has been lost to all scientists by this single embargo. Put another way, up to* 10 years of access time to these papers has been collectively lost to science because of the ENCODE embargo. To the extent that these papers are crucial for understanding the human genome, and the consequences this knowledge has for human health, this decade lost to humanity is clearly unacceptable. Let us hope that the ENCODE project puts an end to the era of journal-mandated embargoes in genomics.

DOI Date Received Date Accepted Date Published Months in Review Months in Embargo
nature11247 24-Nov-11 29-May-12 05-Sep-12 6.0 3.2
nature11233 10-Dec-11 15-May-12 05-Sep-12 5.1 3.6
nature11232 15-Dec-11 15-May-12 05-Sep-12 4.9 3.6
nature11212 11-Dec-11 10-May-12 05-Sep-12 4.9 3.8
nature11245 09-Dec-11 22-May-12 05-Sep-12 5.3 3.4
nature11279 09-Dec-11 01-Jun-12 05-Sep-12 5.6 3.1
gr.134445.111 06-Nov-11 07-Feb-12 05-Sep-12 3.0 6.8
gr.134957.111 16-Nov-11 01-May-12 05-Sep-12 5.4 4.1
gr.133553.111 17-Oct-11 05-Jun-12 05-Sep-12 7.5 3.0
gr.134767.111 11-Nov-11 03-May-12 05-Sep-12 5.6 4.0
gr.136838.111 21-Dec-11 30-Apr-12 05-Sep-12 4.2 4.1
gr.127761.111 16-Jun-11 27-Mar-12 05-Sep-12 9.2 5.2
gr.136101.111 09-Dec-11 30-Apr-12 05-Sep-12 4.6 4.1
gr.134890.111 23-Nov-11 10-May-12 05-Sep-12 5.5 3.8
gr.134478.111 07-Nov-11 01-May-12 05-Sep-12 5.7 4.1
gr.135129.111 21-Nov-11 08-Jun-12 05-Sep-12 6.5 2.9
gr.127712.111 15-Jun-11 27-Mar-12 05-Sep-12 9.2 5.2
gr.136366.111 13-Dec-11 04-May-12 05-Sep-12 4.6 4.0
gr.136127.111 16-Dec-11 24-May-12 05-Sep-12 5.2 3.4
gr.135350.111 25-Nov-11 22-May-12 05-Sep-12 5.8 3.4
gr.132159.111 17-Sep-11 07-Mar-12 05-Sep-12 5.5 5.9
gr.137323.112 05-Jan-12 02-May-12 05-Sep-12 3.8 4.1
gr.139105.112 25-Mar-12 07-Jun-12 05-Sep-12 2.4 2.9
gr.136184.111 10-Dec-11 10-May-12 05-Sep-12 4.9 3.8
gb-2012-13-9-r48 21-Dec-11 08-Jun-12 05-Sep-12 5.5 2.9
gb-2012-13-9-r49 28-Mar-12 08-Jun-12 05-Sep-12 2.3 2.9
gb-2012-13-9-r50 04-Dec-11 18-Jun-12 05-Sep-12 6.4 2.5
gb-2012-13-9-r51 23-Mar-12 25-Jun-12 05-Sep-12 3.0 2.3
gb-2012-13-9-r52 09-Mar-12 25-May-12 05-Sep-12 2.5 3.3
gb-2012-13-9-r53 29-Mar-12 19-Jun-12 05-Sep-12 2.6 2.5
Min 2.3 2.3
Max 9.2 6.8
Avg 5.1 3.7
Sum 152.7 112.1
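As a rough check on these figures, here is a minimal sketch of how the review and embargo durations can be recomputed from the received/accepted/published dates. The three rows are illustrative examples copied from the table above, and months are approximated as elapsed days divided by 30.44 (an average month length), so the values may differ slightly from those reported in the table:

```python
from datetime import datetime

# Three illustrative rows copied from the table above: (DOI, received, accepted, published)
papers = [
    ("nature11247",      "24-Nov-11", "29-May-12", "05-Sep-12"),
    ("gr.127761.111",    "16-Jun-11", "27-Mar-12", "05-Sep-12"),
    ("gb-2012-13-9-r53", "29-Mar-12", "19-Jun-12", "05-Sep-12"),
]

def months_between(start, end, fmt="%d-%b-%y"):
    """Elapsed time in months, approximated as days / 30.44 (average month length)."""
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.days / 30.44

total_review = total_embargo = 0.0
for doi, received, accepted, published in papers:
    in_review = months_between(received, accepted)    # submission to acceptance
    in_embargo = months_between(accepted, published)  # acceptance to public release
    total_review += in_review
    total_embargo += in_embargo
    print(f"{doi}: {in_review:.1f} months in review, {in_embargo:.1f} months in embargo")

# Summing the embargo column over all 30 papers in the table gives ~112 months, i.e. ~9.3 years
print(f"Totals for these three rows: {total_review:.1f} review months, {total_embargo:.1f} embargo months")
```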

Footnote:

* Based on a conversation on Twitter with Chris Cole, I’ve revised this estimate to reflect the upper bound, rather than a point estimate, of time lost to science.

The Roberts/Ashburner Response

A previous post on this blog shared a helpful boilerplate response to editors for politely declining to review for non-Open Access journals, which I received originally from Michael Ashburner. During a quick phone chat today, Ashburner told me that he in fact inherited a version of this response originally from Nobel laureate Richard Roberts, co-discoverer of introns, lead author on the Open Letter to Science calling for a “Genbank” of the scientific literature, and long-time editor of Nucleic Acids Research, one of the first classical journals to move to a fully Open Access model. So to give credit where it is due, I’ve updated the title of the “Just Say No” post to make the attribution of this letter more clear. We owe both Roberts and Ashburner many thanks for paving the way to a better model of scientific communication and leading by example.

Goodbye F1000, Hello Faculty of a Million

Dr. Seuss' The Sneetches

In the children’s story The Sneetches, Dr. Seuss presents a world where certain members of society are marked by an arbitrary badge of distinction, and a canny opportunist uses this false basis of prestige for his financial gain*. What does this morality tale have to do with the scientific article recommendation service Faculty of 1000? Read on…

Currently ~3000 papers are published each day in the biosciences**. Navigating this sea of information to find articles relevant to your work is no small matter. Researchers can either sink or swim, aided by (i) machine-based technologies such as search or text-mining tools or (ii) human-based technologies such as blogs or social networking services that highlight relevant work through expert recommendation.

One of the first expert recommendation services was Faculty of 1000, a service launched in 2002 with the aim of “identifying and evaluating the most significant articles from biomedical research publications” through a peer-nominated “Faculty” of experts in various subject domains. Since the launch of F1000, several other mechanisms for expert literature recommendation have also come to the foreground, including academic social bookmarking tools like citeulike or Mendeley, the rise of Research Blogging, and new F1000-like services such as annotatr, The Third Reviewer, PaperCritic and TiNYARM.

Shortly after I started my group at the University of Manchester in 2005, I was invited to join the F1000 Faculty, which I gratefully accepted. At the time, being invited into this select club felt like a mark of distinction, and I thought it would be a good platform to voice my opinions on what work I found notable. I was under no illusion that my induction was based only on merit, since this invitation came from my former post-doc mentor Michael Ashburner. I overlooked this issue at the time: when you are invited to join the “in-club” as a junior faculty member, it is very tempting, since you think things like this will play a positive role in your career progression. [Whether being in F1000 has helped my career I can't say, but certainly it can't have hurt, and I (sheepishly) admit to using it on grant and promotion applications in the past.]

Since then, I’ve tried to contribute to F1000 when I can [PAYWALL], but since it is not a core part of my job, I’ve only contributed ~15 reviews in 5 years. My philosophy has been only to contribute reviews on articles I think are of particular note and might be missed otherwise, not to review major papers in Nature/Science that everyone is already aware of. As time has progressed and it has become harder to commit time to non-essential tasks, I’ve contributed less and less, and the F1000 staff has pestered me frequently with reminders and phone calls to submit reviews. At times the pestering has been so severe that I have considered resigning just to get them off my back. And I’ve noticed that some colleagues I have a lot of respect for have also resigned from F1000, which made me wonder if they were likewise fed up with F1000’s nagging.

This summer, a parenthetical remark that Jonathan Eisen made about quitting F1000, in a post on his Tree of Life blog, made me more aware of why their nagging was really getting to me:

I even posted a “dissent” regarding one of [Paul Hebert's] earlier papers on Faculty of 1000 (which I used to contribute to before they become non open access).

This comment made me realize that the F1000 recommendation service is just another closed-access venture for publishers to make money off a product generated for free by the goodwill and labor of academics. Like closed access journals, my University pays twice to get F1000 content — once for my labor and once for the subscription to the service. But unlike a normal closed-access journal, in the case of F1000 there is not even a primary scientific publication to justify the arrangement. So by contributing to F1000, essentially I take time away from my core research and teaching activities to allow a company to commercialize my IP and pay someone to nag me! What’s even more strange about this situation is that there is no rational open-access equivalent of literature review services like F1000. By analogy with the OA publishing of the primary literature, for “secondary” services I would pay a company to post one of my reviews on someone else’s article. (Does Research Blogging for free sound like a better option to anyone?)

Thus I’ve come to realize that it is unjustified to contribute secondary commentary to F1000 on Open Access grounds, in the same way it is unjustified to submit primary papers to closed-access journals. If I really support Open Access publishing, then to contribute to F1000 I must either be a hypocrite or make an artificial distinction between the primary and secondary literature. But this gets to the crux of the matter: to the extent that recommendation services like F1000 are crucial for researchers to make sense of the onslaught of published data, then surely these critical reviews should be Open for all, just as the primary literature should be. On the other hand, if such services are not crucial, why am I giving away my IP for free to a company to capitalize on?

Well, this question has been on my mind for a while, and I have looked into whether there is evidence that F1000 evaluations have real scientific worth in terms of highlighting good publications, which might provide a reason to keep contributing to the system. On this point the evidence is scant and mixed. An analysis by the Wellcome Trust finds a very weak correlation between F1000 evaluations and the evaluations of an internal panel of experts (driven almost entirely by a few clearly outstanding papers), with the majority of highly cited papers being missed by F1000 reviewers. An analysis by the MRC shows a ~2-fold increase in the median number of citations (from 2 to 4) for F1000-reviewed articles relative to other MRC-funded research. Likewise, an analysis of the Ecology literature shows similar trends, with marginally higher citation rates for F1000-reviewed work, but with many high-impact papers being missed. [Added 28 April 2012: Moreover, multifactorial analysis by Priem et al. on a range of altmetric measures of impact for 24,331 PLoS articles clearly shows that the "F1000 indicator did not have shared variability with any of the derived factors" and that "Mendeley bookmark counts correlate more closely to Web of Science citations counts than expert ratings of F1000".] Therefore the available evidence indicates that F1000 reviews do not capture the majority of good work being published, and that the work that is reviewed is only of marginally higher importance (in terms of citation) than unreviewed work.

So if (i) it goes against my OA principles, (ii) there is no evidence (on average) that my opinion matters quantitatively much more than anyone else’s, and (iii) there are equivalent open access systems to use, why should I continue contributing to F1000? The only answer I can come up with is that by being an F1000 reviewer, I gain a certain prestige for being in the “in club,” as well as some prestige-by-association for aligning myself with publications or scientists I perceive to be important. When stripped down like this, being a member of F1000 seems pretty close to being a Sneetch with a star, and the F1000 business model is not too different from that used by Sylvester McMonkey McBean. Realizing this has made me feel more than a bit ashamed for letting the allure of being in the old-boys club and my scientific ego trick me into something I cannot rationally justify.

So, needless to say, I have recently decided to resign from F1000. I will instead continue to contribute my tagged articles to citeulike (as I have for several years), contribute more substantial reviews to this blog via the Research Blogging portal, and push the use of other Open literature recommendation systems like PaperCritic, which has recently made its user-supplied content available under a Creative Commons license. (Thanks for listening, PaperCritic!)

By supporting these Open services rather than the closed F1000 system (and perhaps convincing others to do the same), I feel more at home among the ranks of the true crowd-sourced “Faculty of 1,000,000” that we need to help filter the onslaught of publications. And just as Sylvester McMonkey McBean’s Star-On machine provided a disruptive technology for overturning perceptions of prestige by giving everyone a star in The Sneetches, I’m hopeful that these open-access web 2.0 systems will also do some good towards democratizing personal recommendation of the scientific literature.

* Note: This post should in no way be taken as an ad hominem against F1000 or its founder Vitek Tracz, whom I respect very much as a pioneer of Open Access biomedical publishing.

** This number is an estimate based on the real figure of ~2.5K papers/day deposited in MEDLINE, extrapolated to account for the large number of non-biomedical journals that are not indexed by MEDLINE. If anyone has better data on this, please comment below.

Just Say No – The Roberts/Ashburner Response

UPDATE: see follow-up post “The Roberts/Ashburner Response” to get more of the story on the origin of this letter.

I had the pleasure of catching up with my post-doc mentor Michael Ashburner today, and among other things we discussed the ongoing development of UKPMC and the importance of open access publishing. Although I consider myself a strong open access advocate, I did not sign the PLoS open letter in 2001, since at the time I was a post-doc and not in a position fully to control where I published. Therefore I couldn’t be sure that I could abide by the manifesto 100%, and didn’t want to put my name to something I couldn’t deliver on. As it turns out this is still the case to a certain degree and (because of collaborations) my freely-available-article-index remains at a respectable 85% (33/39), but alas will never reach the coveted 100% mark.

Nevertheless, I have steadily adopted most of the policies of the open letter, particularly as my group has become more heavily involved in text-mining research over the years. This became especially true after a nasty encounter with one publisher in 2008 caused campus IT to shut down my office IP for downloading articles from a journal for which our University has a site license, which radicalized me into more of an open access evangelist. When I discussed this event with Ashburner at the time, he reminded me of the manifesto and of one of its most powerful tools for changing the landscape of scholarly publishing – refusing to review for journals/publishers who do not submit their content to PubMed Central (see the white-list of journals here).

I have dug this letter out countless times since then and used versions of it when asked to review for non-PMC journals, as it expresses the principles in plain and powerful language. I had another call to dig it out today and thought that I’d post the “Ashburner response” so others have a model to follow if they choose this path.

Enjoy!

From: “Michael Ashburner” <michael.ashburner@xxx.xxx>
Date: 30 August 2008 13:48:03 GMT+01:00
To: “Casey Bergman” <casey.bergman@xxx.xxx>
Subject: Just say No

Dear Editor,

Thank you for your invitation to review for your journal. Because it is not open access and does not provide its back content to PubMed Central, or any similar resource, I regret that I am unwilling to do this.

I would urge you to seriously reconsider both policies and would ask that you send this letter to your co-editors and publisher. In the event that you do change your policy, even to the extent of providing your back content to PubMed Central, or a similar resource, then I will be happy to review for you.

The scientific literature is at present the most significant resource available to researchers. Without access to the literature we cannot do science in any scholarly manner. Your journal refuses to embrace the idea that the purpose of the scientific literature is to communicate knowledge, not to make a profit for publishers. Without the free input of manuscripts and referees’ time your journal would not exist. By and large, the great majority of the work you publish is paid for by taxpayers. We now, either as individuals or as researchers whose grants are top-sliced, have to pay to read our own work and that of our colleagues, either personally or through our institutes’ libraries. I find that, increasingly, literature that is not available by open access is simply being ignored. Moreover, I am very aware that, increasingly, discovering information from the literature relies on some sort of computational analysis. This can only be effective if the entire content of primary research papers is freely available. Finally, by not being an open access journal you are disenfranchising both scientists who cannot afford (or whose institutions cannot afford) to pay for access and the general public.

There are now several good models for open access publication, and I would urge your journal to adopt one of these. There is an extensive literature on open access publishing, and its economic implications. I would be pleased to send you references to this literature.

Yours sincerely,

Michael Ashburner
