From the Library of Prof. William B. Provine

I just saw the sad news that Will Provine, historian of population genetics, died peacefully at his home at the age of 73. Others will no doubt write of Provine’s legacy as a scholar and orator of the highest calibre, a fervent proponent of atheism and evolution that only a preacher’s son could be. I’m moved by his death to recall my experience of having Provine as a lecturer during my undergrad days at Cornell 20 years ago, where his dramatic and entertaining style drew me fully into evolutionary biology, both as a philosophy and as a profession. I can’t say I knew Provine well, but I can say our interactions left a deep impression on me. He was incredibly kind and engaging, pulling you onto what he called the “slippery slope” where religious belief must yield to rationalism.

I vividly recall Provine giving me a hardcover copy of the compendium of Dobzhansky’s papers he co-edited on our first meeting after class (pulled from a half-full box at the ready near his desk), and discussing the then-recent death of Motoo Kimura, whom he was researching for his as-yet-unpublished history of the Neutral Theory. We met and talked about population genetics and molecular evolution several times after that, and for reasons I can’t quite recall, Provine ended up offering me keys to his personal library in the basement of Corson Hall. I’ll never forget the first time he showed me his library, with bookshelves lining what would have been a lab space, filled with various versions of classic works in Genetics, Evolution, Development and History of Science. The delight he had in showing me his shelf of various editions of the Origin of Species was only matched by the impish pleasure he had in showing me the error in chromosome segregation on the spine of the first edition of Dobzhansky’s Genetics and the Origin of Species, or how to decode the edits to the text of Fisher’s Genetical Theory of Natural Selection (see figure below).


In my first tour of his library, Provine showed me how to decode the revisions to the 1958 second edition of Fisher’s Genetical Theory of Natural Selection. Notice how the font in paragraph 2 is smaller than in paragraphs 1 and 3. Text in this font was added to the original plates prior to the second printing. Provine then handed me one of the many copies of this book he had on his shelf for me to keep, which remains one of my few prized possessions.

His reprint collection was equally impressive (inherited from Sewall Wright, from what I understand), with many copies signed, with compliments of the author, by the founders of the Modern Synthesis. In my experience, Provine’s reprint collection was surpassed in value only by the FlyBase reprint collection in the Dept of Genetics in Cambridge. I used Provine’s library to study quite often in my last year or so at Cornell, interrupting work on Alex Kondrashov’s problem sets by browsing early 20th century biology texts. Being able to immerse myself in this trove of incredible books had a lasting effect on me, and I have no doubt it was a major factor in my decision to pursue academic research in evolution and genetics. Sadly the collection is no longer physically intact, but I am very glad to know the 5,000+ items in Provine’s library have been contributed to the Cornell Library, possibly the best place for the spirit of an atheist and historian to live on.

Keeping Up with the Scientific Literature using Twitterbots: The FlyPapers Experiment

A year ago I created a simple “twitterbot” to stay on top of the Drosophila literature called FlyPapers, which tweets links to new abstracts in Pubmed and preprints in arXiv from a dedicated Twitter account (@fly_papers). While most ‘bots on Twitter post spam or creative nonsense, an increasing number of people are exploring the use of twitterbots for more productive academic purposes. For example, Rod Page set up the @evoldir twitterbot way back in 2009 as an alternative to receiving email posts to the Evoldir mailing list, and likewise Gordon McNickle developed the @EcoLog_L twitterbot for the Ecolog-L mailing list. Similar to FlyPapers, others have established twitterbots for domain-specific literature feeds, such as @BioPapers for Quantitative Biology preprints on arXiv, @EcoEvoJournals for publications in the areas of Ecology & Evolution and @PlantEcologyBot for papers on Plant Ecology. More recently, Alberto Acerbi developed the @CultEvoBot to post links to blogs and new articles on the topic of cultural evolution. (I recommend reading posts by Rod, Gordon and Alberto for further insight into how and why they established these twitterbots.) One year in, I thought I’d summarize my thoughts on the FlyPapers experiment, and make good on a promise to describe my set-up in case others are interested.

First, a few words on my motivation for creating FlyPapers. I have been receiving a daily update of all papers in the area of Drosophila in one form or another for nearly 10 years. My philosophy is that it is relatively easy to keep up on a daily basis with what is being published, but it’s virtually impossible to catch up when you let the river of information flow for too long. I first started receiving daily email updates from NCBI, which cluttered up my inbox and often got buried. Then I migrated to using RSS on Google Reader, which led to a similar problem of many unread posts accumulating that needed to be marked as “read”. Ultimately, I realized what I wanted from a personalized publication feed — a flow of links to articles that can be quickly scanned and clicked, but which requires no other action and can be ignored when I’m busy — was better suited to a Twitter client than an RSS reader. Moreover, in the spirit of “maximizing the value of your keystrokes”, it seemed that a feed that was useful for me might also be useful for others, and that Twitter was the natural medium to try sharing this feed, since many scientists are already using Twitter to post links to papers. Thus FlyPapers was born.

Setting up FlyPapers was straightforward and required no specialist know-how. I first created a dedicated Twitter account with a “catchy” name. Next, I created an account with a feed-routing service, which takes an RSS/Twitter/email feed as input and routes the output to the FlyPapers Twitter account. I then set up an RSS feed from NCBI based on a search for the term “Drosophila” and added this as a source to the route. Shortly thereafter, I added an RSS feed for preprints in arXiv using the search term “Drosophila” and added this to the same route. (Unfortunately, neither PeerJ Preprints nor bioRxiv currently has the ability to set up custom RSS feeds, and thus they are not included in the FlyPapers stream.) NCBI and arXiv only push new articles once a day, and each article is posted automatically as a distinct tweet for ease of viewing, bookmarking and sharing. The only gotcha I experienced in setting the system up was making sure, when creating the Pubmed RSS feed, to set the “number of items displayed” high enough (=100). If the number of articles posted in one RSS update exceeds the limit you set when you create the Pubmed RSS feed, Pubmed will post a URL to a Pubmed query for the entire set of papers as one RSS item, rather than post links to each individual paper. (For Gordon’s take on how he set up his twitterbots, see this thread.) [UPDATE 25/2/14: Rob Lanfear has posted detailed instructions for setting up a twitterbot using the strategy I describe above; see his comment below for more information.]
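For readers who would rather roll their own pipeline than use a routing service, the core logic is small. Below is a minimal sketch in Python, not my actual setup: the sample feed, the 140-character limit handling and the formatting choices are assumptions for illustration, and actually posting the tweets via the Twitter API is left out.

```python
import xml.etree.ElementTree as ET

TWEET_LIMIT = 140  # Twitter's per-tweet character limit

def parse_rss_items(rss_xml):
    """Extract (title, link) pairs from an RSS 2.0 feed,
    such as a daily Pubmed or arXiv search feed."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title", "").strip(), item.findtext("link", "").strip())
            for item in root.iter("item")]

def format_tweet(title, link, limit=TWEET_LIMIT):
    """Build one tweet per article, truncating the title so the link always fits."""
    room = limit - len(link) - 1  # one space between title and link
    if len(title) > room:
        title = title[:room - 1] + "…"
    return f"{title} {link}"

# A toy two-item feed standing in for the daily Pubmed/arXiv update.
SAMPLE_FEED = """<rss version="2.0"><channel>
<item><title>Gene regulation in Drosophila</title><link>http://example.org/1</link></item>
<item><title>Wing development revisited</title><link>http://example.org/2</link></item>
</channel></rss>"""

tweets = [format_tweet(t, l) for t, l in parse_rss_items(SAMPLE_FEED)]
```

A real deployment would also keep a record of article IDs already posted, so that overlapping RSS updates do not produce duplicate tweets.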

So, has the experiment worked? Personally, I am finding FlyPapers a much more convenient way to stay on top of the Drosophila literature than any previous method I have used. Apparently others are finding this feed useful as well.

One year in, FlyPapers now has 333 followers in 16 countries, which is a far bigger and wider following than I would have ever imagined. Some of the followers are researchers I am familiar with in the Drosophila world, but most are students or post-docs I don’t know, which suggests the feed is finding relevant target audiences via natural processes on Twitter. The account has now posted 3,877 tweets, or ~10-11 tweets per day on average, which gives a rough scale for the amount of research being published annually on Drosophila. Around 10% of tweeted papers are getting retweeted (n=386) or favorited (n=444) by at least one person, and the breadth of topics being favorited/retweeted spans virtually all of Drosophila biology. These facts suggest that developing a twitterbot for domain-specific literature can indeed attract substantial numbers of like-minded individuals, and that automatically tweeting links to articles enables a significant proportion of papers in a field to easily be seen, bookmarked and shared.
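For what it's worth, the per-day and engagement figures above fall straight out of the raw counts reported in this post:

```python
tweets_total = 3877  # tweets posted in the first year of FlyPapers
retweeted = 386      # tweeted papers retweeted by at least one person
favorited = 444      # tweeted papers favorited by at least one person

tweets_per_day = tweets_total / 365   # ~10-11 Drosophila papers per day
retweet_rate = retweeted / tweets_total   # ~10% of papers retweeted
favorite_rate = favorited / tweets_total  # ~11% of papers favorited
```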

Overall, I’m very pleased with the way FlyPapers is developing. I had hoped that one of the outcomes of this experiment would be to help promote Drosophila research, and this appears to be working. I had not expected it would act as a general hub for attracting Drosophila researchers who are active on Twitter, which is a nice surprise. One issue I hadn’t considered a year ago was the potential that ‘bots like FlyPapers might have to “game” Altmetrics scores. Frankly, any metric that would be so easily gamed by a primitive bot like FlyPapers probably has no real intrinsic value. However, it is true that this bot does add +1 to the Twitter count for all Drosophila papers. My thoughts on this are that any attempt to correct for the potential influence of ‘bots on Altmetrics scores should not unduly penalize the real human engagement bots can facilitate, so I’d say it is fair to -1 the original FlyPapers tweets in an Altmetrics calculation, but retain the retweets created by humans.
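To make the proposed correction concrete, here is a hypothetical sketch (the account list and tweet fields are invented for illustration) of how an Altmetrics-style count could subtract a bot's seed tweets while keeping the human retweets they generated:

```python
BOT_ACCOUNTS = {"fly_papers"}  # known literature bots (hypothetical registry)

def adjusted_tweet_count(tweets):
    """Count tweets toward an Altmetrics-style score, excluding original
    bot posts but retaining human retweets those posts generated."""
    count = 0
    for t in tweets:
        if t["account"] in BOT_ACCOUNTS and not t["is_retweet"]:
            continue  # -1 the bot's own seed tweet
        count += 1
    return count

# One paper's Twitter activity: the bot's seed tweet is excluded,
# but the two human tweets still count.
sample = [
    {"account": "fly_papers", "is_retweet": False},
    {"account": "some_human", "is_retweet": True},
    {"account": "another_human", "is_retweet": False},
]
```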

One final consequence of putting all new Drosophila literature onto Twitter that I would not have anticipated is that some tweets have been picked up by other social media outlets, including disease-advocacy accounts that quickly pushed basic research findings out to their target audience.

This final point suggests that there may be wider impacts from having more research articles automatically injected into the Twitter ecosystem. Maybe those pesky twitterbots aren’t always so bad after all.


For those interested in setting up their own scientific twitterbot, see Rob Lanfear’s excellent and easy-to-follow instructions here. Peter Carlton has also outlined another method for setting up a twitterbot here, as has Sho Iwamoto here.


Battling Administrivia Using an Intramural Question & Answer Forum

The life of a modern academic involves juggling many disparate tasks, and, like a computer that is using more memory than it physically has, swapping between those tasks leads to inefficiency and poor performance in our jobs. Personally, the time fragmentation and friction induced by transitioning from task to task seem to be among the main sources of stress in my work life. The main reason for this is that many daily tasks on my to-do list are essential but fiddly and time-consuming administrivia (placing orders, filling in forms, entering marks into a database) that prevent me from getting to the things I enjoy about being an academic: doing research, interacting with students, reading papers, etc.

I would go so far as to say that the mismatch between the desires of most academics and the reality of their jobs is the main source of academic “burnout” and low morale in what otherwise should be an awesome profession. I would also venture that administrivia is one of the major sources of the long hours we endure, since after wading through the “chaff”, we will (dammit!) put in the time on nights and weekends for the things we are most passionate about to sustain our souls. And based on the frequency of sentiments relating to this topic flowing through my Twitter feed, I’d say the negative impact of administrivia is a pervasive problem in modern academic life, not restricted to any one institute.

While it is tempting to propose ameliorating the administrivia problem by simply eliminating bureaucracy, the growth of the administrative sector in higher education makes this solution a virtual impossibility. I have ultimately become resigned to the fact that the fundamentally inefficient nature of university bureaucracy cannot be structurally reformed, and have begun to seek other solutions to make my work life better. In doing so, I believe I’ve hit on a simple solution to the administrivia problem that I’m hoping might help others as well. In fact, I’m now convinced this solution is simple and powerful enough to actually be effective.

Accepting that it cannot be fully eliminated, my view is that the key to reducing the time and morale burden of administrivia is to realize that most routine tasks in University life are just protocols that require some amount of tacit knowledge about policies or procedures. Thus, all that is needed to reduce the negative impact of administrivia to its lowest possible level is a system that places accurate and relevant protocols at one’s fingertips, so that these tasks can be completed as quickly as possible. The problem is that such protocols either don’t exist, don’t exist in written form, or exist as documents scattered across various filesystems and offices that you must expend substantial time finding. So how do we develop such protocols without generating more bureaucracy and exacerbating the problem we are attempting to solve?

My source of inspiration for ameliorating administrivia with minimal overhead comes from the positive experiences I have had using online Question and Answer (Q & A) forums based on the Stack Exchange model (principally the BioStars site for answering questions about bioinformatics). For those not familiar with such systems, the Q & A model popularized by the Stack Exchange platform (and its clones) allows questions to be asked and answers to be voted on, moderated, edited and commented on in a very intuitive and user-friendly manner. For some reason I am not able to fully explain, the engineering behind the Q & A model naturally facilitates both knowledge exchange and community building in a way that is on the whole extremely positive, and seems to prevent the worst aspects of human nature commonly found on older internet forums and commenting systems.
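To make the model concrete, here is a toy sketch of its core data structures: questions collect answers, answers collect votes, and accepted answers float to the top of the ranking. The field names are my own, not those of Stack Exchange or OSQA.

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    body: str
    upvotes: int = 0
    downvotes: int = 0
    accepted: bool = False

    @property
    def score(self):
        # Net community vote determines display order.
        return self.upvotes - self.downvotes

@dataclass
class Question:
    title: str
    answers: list = field(default_factory=list)

    def ranked_answers(self):
        """Accepted answer first, then by net vote score,
        as on Stack Exchange-style sites."""
        return sorted(self.answers, key=lambda a: (not a.accepted, -a.score))

# A hypothetical administrivia question of the kind the pilot handles.
q = Question("How do I raise a purchase order under £100?")
q.answers = [
    Answer("Email procurement.", upvotes=2),
    Answer("Use the online form; see the intranet page.", upvotes=5, accepted=True),
    Answer("Ask your PI.", upvotes=1, downvotes=3),
]
```

The voting and acceptance mechanics are what let "best practice" answers rise above guesses, which is exactly the property that makes the model useful for procedural knowledge.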

So here is my proposal for battling the impact of academic administrivia: implement an intramural, University-specific Q & A forum for academic and administrative staff to pose and answer each other’s practical questions, converting tacit knowledge stored in people’s heads, inboxes and intranets into a single knowledge-bank that can be efficiently used and re-used by others who have the same queries. The need for an “intramural” solution, and the reason this strategy cannot be applied globally as it has been for Linux administration, Poker or Biblical Hermeneutics, is that Universities (for better or worse) have their own local policies and procedures that can’t easily be shared or benefit from general worldwide input.

We have been piloting the use of the Open Source Question Answer (OSQA) platform (a clone of Stack Exchange) among a subset of our faculty for about a year, with good uptake and virtually unanimous endorsement from those who have used it. We currently enforce a real-name policy for users, have limited the system to questions of procedure only, and have encouraged users to answer their own questions after solving burdensome tasks. To make things easy to administer technically, we are using an out-of-the-box virtual machine image of OSQA provided by Bitnami. The anonymized screenshot below gives a flavor of the banal yet time-consuming queries that arise repeatedly in our institution, and that such a system makes easier to deal with. I trust colleagues at other institutions will find similar tasks frustratingly familiar.


The main reason I am posting this idea now is that I am scheduled to give a demo and presentation to my Dean and management team this week to propose rolling this system out to a wider audience. In preparation for this pitch, I’ve been trying to assemble a list of pros and cons that I am sure is incomplete and would benefit from the input of other people familiar with how Universities and Q & A platforms work.

The pros of an intramural Q & A platform for battling administrivia I’ve come up with so far include:

  • Increasing efficiency, leading to higher productivity for both academic and administrative staff;
  • Reducing the sense of frustration about bureaucratic tasks, leading to higher morale;
  • Improving sense of empowerment and community among academic and administrative staff;
  • Providing better documentation of procedures and policies;
  • Serving as an “aide memoire”;
  • Aiding the success of junior academic staff;
  • Ameliorating the effects of administrative turnover;
  • Providing a platform for people who may not speak up in staff meetings to contribute;
  • Allowing “best practice” to emerge through crowd-sourcing;
  • Identifying common problems that should be prioritized for improvement;
  • Identifying like-minded problem solvers in a big institution;
  • Integrating easily around existing IT platforms;
  • Being deployable at any scale (lab group, department, faculty, school, etc.);
  • Allowing information to be accessed 24/7, when administrative offices are closed (H/T @jdenavascues).

I confess to struggling to find true cons, but these might include (rejoinders in parentheses):

  • Security risks (can be solved with proper administration and authentication)
  • Inappropriate content (real name policy should minimize, can be solved with proper moderation);
  • Answers might be “impolitic” (real name policy should minimize, can be solved with proper moderation; H/T @DrLabRatOry)
  • Time wasting (unlikely, since the whole point is to enhance productivity);
  • Lack of uptake (even if the 90-9-1 rule applies, it is an improvement on the status quo);
  • Perceived as threat to administrative staff (far from it, this approach benefits administrative staff as much as academic staff);
  • Information could become stale (can be solved with proper moderation and periodic updating).

I’d be very interested to get feedback from others about this general strategy (especially by Tues PM 17 Sep 2013), thoughts on related efforts, or how intramural Q & A platforms could be used in other ways in an academic setting beyond battling administrivia in the comments below.


Twitter Tips for Scientific Journals

The growing influence of social media in the lives of Scientists has come to the forefront again recently with a couple of new papers that provide An Introduction to Social Media for Scientists and a more focussed discussion of The Role of Twitter in the Life Cycle of a Scientific Publication. Bringing these discussions into traditional journal article format is important for spreading the word about social media in Science outside the echo chamber of social media itself. But perhaps more importantly, in my view, is that these motivating papers reflect a desire for Scientists to participate, and urge others to participate, in shaping a new space for scientific exchange in the 21st century.

Just as Scientists themselves are adopting social media, many scientific journals/magazines are as well. However, most discussions about the role of social media in scientific exchange overlook the issue of how we Scientists believe traditional media outlets, like scientific journals, should engage in this new forum. For example in the Darling et al. paper on the The Role of Twitter in the Life Cycle of a Scientific Publication, little is said about the role of journal Twitter accounts in the life cycle of publications beyond noting:

…to encourage fruitful post-publication critique and interactions, scientific journals could appoint dedicated online tweet editors who can storify and post tweets related to their papers.

This oversight is particularly noteworthy for several reasons. First, it is a fact that many journals, and journal staff, play active roles in engaging with the scientific debate on social media and are not simply passive players in the new scientific landscape. Second, Scientists need to be aware that journals extensively monitor our discussions and activity on social media in ways that were not previously possible, and we need to consider how this affects the future of scientific publishing. Third, Scientists should see that social media represents an opportunity to establish new working relationships with journals that break down the old models that increasingly seem to harm both Science and Scientists.

In the same way that we Scientists are offering tips/advice to each other for how to participate in the new media, I feel that this conversation should also be extended to what we feel are best practices for journals to engage in the scientific process through social media. To kick this off, I’d like to list some do’s and don’ts for how I think journals should handle their presence on Twitter, based on my experiences following, watching and interacting with journals on Twitter over the last couple of years.

  • Do engage with (and have a presence on) social media. Twitter is rapidly being taken up by scientists, and is the perfect forum to quickly transmit/receive information to/from your author pool/readership. In fact, I find it a little strange these days if a journal doesn’t have a Twitter account.
  • Do establish a social media policy for your official Twitter account. Better yet, make it public, so Scientists know the scope of what we should expect from your account.
  • Don’t use information from Twitter to influence editorial or production processes, such as the acceptance/rejection of papers or choice of reviewers.  This should be an explicit part of your social media policy. Information on social media could be incorrect and by using unverified information from Twitter you could allow competitors/allies to block/promote each other’s work.
  • Don’t use a journal Twitter account as a table of contents for your journal. Email TOCs or RSS feeds exist for this purpose already.
  • Do tweet highlights from your journal or other journals. This is actually what I am looking for in a journal Twitter account, just as I am from the accounts of other Scientists.
  • Do use journal accounts to retweet unmodified comments from Scientists or other media outlets about papers in your journal. This is a good way for Scientists to find other researchers interested in a topic and know what is being said about work in your journal. But leave the original tweet intact, so we can trace it to the originator and so it doesn’t look like you have edited the sentiment to suit your interests.
  • Don’t use journal accounts to express personal opinions. I find it totally inappropriate that individuals at some journals hide behind the journal name and avatar to use journal Twitter accounts as a soapbox for their personal opinions. This is a really dangerous thing for a journal to do, since it reinforces stereotypes about the fickleness of editors who love to wield the power their journal provides them. It’s also a bad idea since the opinions of one or a few people may unintentionally be taken to represent the journal or publisher as a whole.
  • Do encourage your staff to create personal accounts and be active on social media. Editors and other journal staff should be encouraged to express personal opinions about science, tweet their own highlights, etc. This is a great way for Scientists to get to know your staff (for better or worse) and build an opinion about who is handling our work at your journal. But it should go without saying that personal opinions should be made through personal accounts, so we can follow/unfollow these people like any other member of the community and so their opinions do not leverage the imprimatur of your journal.
  • Do use journal Twitter accounts to respond to feedback/complaints/queries. Directly replying to comments from the community on Twitter is a great way to build trust in your journal.  If you can’t or don’t want to reply to a query in the open, just reply by asking the person to email your helpdesk. Either way shows good faith that you are listening to our concerns and want to engage. Ignoring comments from Scientists is bad PR and can allow issues to amplify beyond your control, with possible negative impacts on your journal (image) in the long run.
  • Don’t use journal Twitter accounts to tweet from meetings. To me this is a form of expressing personal opinion that looks like you are endorsing certain Scientists/fields/meetings or, worse yet, that you are looking to solicit them to submit their work to your journal, which smacks of desperation and favoritism. Use personal accounts instead to tweet from meetings, since after all what is reported is a personal assessment.

These are just my first thoughts on this issue (anonymised to protect the guilty), which I hope will act as a springboard for others to comment below on how they think journals should manage their presence on Twitter for the benefit of the Scientific community.

A Case for Junior/Senior Partnership Grants

Much has been made in recent years of funding crises in the US and Europe, which are the inevitable result of the Great Recession superimposed on the end of exponential growth in Science. Governments hamstrung by austerity measures or lack of political will have been forced to abandon increases in scientific funding, going so far even as to freeze funds for awarded grants in Spain (see translation here). The consequences of this stagnant period of inputs to scientific progress will be felt for many years to come, materially in terms of basic and applied discoveries, but also socially in terms of the impacts on an entire generation of scientists who are just beginning their independent careers.

Why are early stage researchers hit hardest by stagnation or decreases in funding? Simply because access to funding is not a level playing field for all scientists, and is in fact highly dependent on career stage and experience. Therefore, increased competition for resources is expected to hit younger scientists disproportionately harder relative to established researchers because of many factors, including:

  • less experience in the art of writing grants,
  • less experience in reviewing grants,
  • less experience serving on grant panels,
  • shorter scientific and management track record,
  • and a less highly developed social network.

The specific negative effect that a general increase in resource competition has on young researchers is (in my view) the best explanation for the extremely worrying downward trend in the proportion of young PIs receiving NIH grants, and the upward trend in the age at receipt of a first R01 in the USA, shown in the following diagrams from the NIH Rock Talk Blog:

Thankfully, this issue is being discussed seriously by NIH’s Deputy Director for Extramural Research, Dr. Sally Rockey, as the publication of these data attests. [I would very much welcome other funding agencies publishing similar demographic breakdowns of their funding, to address whether this is a global effect.] However, not all see these trends as worrying; some interpret them on socially-neutral demographic grounds.

To help combat the inherent age-based inequities in access to research funding, funding agencies typically ring-fence funding for early-stage researchers under a “New Investigator” type umbrella. In fact, Sally Rockey provides a link to an impressive history of initiatives the NIH has undertaken to tackle the New Investigator issue. But what is striking to me is that, despite a series of different New Investigator mechanisms being put in place, the negative impacts on early-stage researchers have only worsened over the last three decades. Thus New Investigator programmes are clearly not enough to redress this issue, and new solutions must be sought. Furthermore, ring-fencing funding for junior researchers necessarily creates an us-vs-them mentality, which can have counterproductive repercussions among different scientific cohorts. And while New Investigator programmes are widely supported in principle, trade-offs in resource allocation can lead to unstable changes in policy, as witnessed in the case of the now-defunct NERC New Investigator programme.

So, what of it? Is this post just another bemoaning the sorry state of affairs in funding for early-stage researchers? No, or at least, not only. Actually, my motivation is to constructively propose a relatively simple (naive?) mechanism to fund research projects that can address the inequities in funding across career stages, and which has the additional benefit of engendering mentorship and transfer of skills across the generations: the Junior/Senior Partnership Grant. [As with all (good) ideas, such a model has been proposed before, by the Women’s Cancer Network, but it does not appear to have been adopted by major federal funding agencies.]

The idea behind a Junior/Senior Partnership funding “scheme” is simple. Based on some criteria (years since PhD or first tenure-track position, number of successful PI awards, number of wrinkles, etc.), researchers would be classified as Junior or Senior. To be eligible for an award under such a programme, at least one Junior and one Senior PI would need to be co-applicants on a grant and have distinct contributions to the grant and project management. This simple mechanism would ensure that young PIs get a piece of the funding pie and can establish a track record, just as New Investigator schemes do. But it would also obviate the need for reform to rely on Senior scientists altruistically stepping aside to make way for their Junior colleagues, as there would be positive (financial) incentives for them to lend a hand down the generations. And by reconfiguring resource allocation from “us-vs-them” to “we’re-all-in-this-together,” Junior/Senior Partnership Grants would further provide a natural mechanism for Senior PIs to transfer expertise in grant writing and project management to their Junior colleagues in a meaningful way, rather than in the lip-service manner normally paid in most institutions. Finally, and most importantly, the knowledge transfer through such a scheme would strengthen the future expertise base in Science, which all indicators suggest is currently at risk.
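The eligibility rule at the heart of the scheme is simple enough to express as code. In this sketch the classification criterion and its 10-year cutoff are purely illustrative, not a proposal for the actual thresholds:

```python
JUNIOR_YEARS_CUTOFF = 10  # hypothetical threshold: years since PhD

def career_stage(years_since_phd):
    """Classify a PI as Junior or Senior by one possible criterion."""
    return "Junior" if years_since_phd < JUNIOR_YEARS_CUTOFF else "Senior"

def eligible_partnership(applicant_years):
    """A Junior/Senior Partnership Grant requires at least one Junior
    and at least one Senior co-applicant PI."""
    stages = {career_stage(y) for y in applicant_years}
    return {"Junior", "Senior"} <= stages

# A new PI (3 years post-PhD) paired with an established one (22 years)
# qualifies; two Senior PIs, or two Junior PIs, do not.
```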


Accelerating Your Science with arXiv and Google Scholar

As part of my recent conversion to using arXiv, I’ve been struck by how posting preprints to arXiv synergizes incredibly well with Google Scholar. I’ve tried to make some of these points on Twitter and elsewhere, but I thought I’d try to summarize here what I see as a very powerful approach to accelerating Open Science using arXiv and several features of the Google Scholar toolkit. Part of the motivation for writing this post is that I’ve tried to make this same pitch to several of my colleagues, and was hoping to be able to point them to a coherent version of this argument, which might be of use for others as well.

A couple of preliminaries. First, the main point of this post is not about trying to convince people to post preprints to arXiv. The benefits of preprinting on arXiv are manifold (early availability of results, allowing others to build on your work sooner, prepublication feedback on your manuscript, feedback from many eyes not just 2-3 reviewers, availability of manuscript in open access format, mechanism to establish scientific priority, opportunity to publicize your work in blogs/twitter, increased duration for citations) and have been ably summarized elsewhere. This post is specifically about how one can get the most out of preprinting on arXiv by using Google Scholar tools.

Second, it is important to make sure people are aware of two relatively recent developments in the Google Scholar toolkit beyond the basic Google Scholar search functionality — namely, Google Scholar Citations and Google Scholar Updates. Google Scholar Citations allows users to build a personal profile of their publications, which draws in citation data from the Google Scholar database, allowing you to “check who is citing your publications, graph citations over time, and compute several citation metrics”, which also will “appear in Google Scholar results when people search for your name.” While Google Scholar Citations has been around for a little over a year now, I often find that many scientists are either not aware that it exists or have not activated their profile yet, even though it is scarily easy to set up. Another, more recent feature available to those with active Google Scholar Citations profiles is Google Scholar Updates, a tool that can analyze “your articles (as identified in your Scholar profile), scan the entire web looking for new articles relevant to your research, and then show you the most relevant articles when you visit Scholar”. As others have commented, Google Scholar Updates provides a big step forward in sifting through the scientific literature, since it provides a tailored set of articles delivered to your browser based on your previous publication record.

With these preliminaries in mind, what I want to discuss now is how Google Scholar plays so well with preprints on arXiv to accelerate science when done in the Open. By posting preprints to arXiv and activating your Google Scholar Citations profile, you immediately gain several advantages, including the following:

  1. arXiv preprints are rapidly indexed by Google Scholar (within 1-2 days in my experience) and thus can be discovered easily by others using a standard Google Scholar search.
  2. arXiv preprints are listed in your Google Scholar profile, so when people browse your profile for your most recent papers they will find arXiv preprints at the top of the list (e.g. see Graham Coop’s Google Scholar profile here).
  3. Citations to your arXiv preprints are automatically updated in your Google Scholar profile, allowing you to see who is citing your most recent work.
  4. References included in your arXiv preprints will be indexed by Google Scholar and linked to citations in other people’s Google Scholar profiles, allowing them to find your arXiv preprint via citations to their work.
  5. Inclusion of an arXiv preprint in your Google Scholar profile allows Google Scholar Updates to provide better recommendations for what you should read, which is particularly important when you are moving into a new area of research that you have not previously published on.
  6. [Update June 14, 2013] Once Google Scholar has indexed your preprint on arXiv it will automatically generate a set of Related Articles, which you can browse to identify previously published work related to your manuscript.  This is especially useful at the preprint stage, since you can incorporate related articles you may have missed before submission or during revision.

I probably have overlooked other possible benefits of the synergy between these two technologies, since they are only dawning on me as I become more familiar with these symbiotic scholarly tools myself. What’s abundantly clear to me at this stage, though, is that embracing Open Science by using arXiv together with Google Scholar puts you at a fairly substantial competitive advantage in terms of your scholarship, in ways that are simply not possible using the classical approach to publishing in biology.

Why You Should Reject the “Rejection Improves Impact” Meme

Over the last two weeks, a meme has been making the rounds in the scientific twittersphere that goes something like “Rejection of a scientific manuscript improves its eventual impact”.  This idea is based on a recent analysis of patterns of manuscript submission reported in Science by Calcagno et al., which has been actively touted in the scientific press and seems to have touched a nerve with many scientists.

Nature News reported on this article on the first day of its publication (11 Oct 2012), with the statement that “papers published after having first been rejected elsewhere receive significantly more citations on average than ones accepted on first submission” (emphasis mine). The Scientist led its same-day piece, entitled “The Benefits of Rejection”, with the claim that “Chances are, if a researcher resubmits her work to another journal, it will be cited more often”. Science Insider led the next day with the claim that “Rejection before publication is rare, and for those who are forced to revise and resubmit, the process will boost your citation record”. Influential science media figure Ed Yong tweeted “What doesn’t kill you makes you stronger – papers get more citations if they were initially rejected”. The message from the scientific media is clear: submitting your papers to selective journals and having them rejected is ultimately worth it, since you’ll get more citations when they are published somewhere lower down the scientific publishing food chain.

I will take on faith that the primary result of Calcagno et al. that underlies this meme is sound, since it has been vetted by the highest standard of editorial and peer review at Science magazine. However, I do note that it is not possible to independently verify this result, since the raw data for this analysis were not made available at the time of publication (contravening Science’s “Making Data Maximally Available Policy“), and have not been made available even after being queried. What I want to explore here is why this meme is being so uncritically propagated in the scientific press and twittersphere.

As succinctly noted by Joe Pickrell, anyone who takes even a cursory look at the basis for this claim would see that it is at best a weak effect*, and is clearly being overblown by the media and scientists alike.

Taken at face value, the way I read this graph is that papers that are rejected then published elsewhere have a median value of ~0.95 citations, whereas papers that are accepted at the first journal they are submitted to have a median value of ~0.90 citations. Although not explicitly stated in the figure legend or in the main text, I assume these results are on a natural log scale, since, based on the font and layout, this plot was most likely made in R and the natural scale is the default in R (also, the authors refer to the natural scale in a different figure earlier in the text). Thus, the median boost in citations per article that rejection may provide an author is on the order of ~0.1 citations.  Even if this result is on the log10 scale, this difference translates to a boost of less than one citation.  While statistically significant, this can hardly be described as a “significant increase” in citation. Still excited?
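The back-of-the-envelope arithmetic here can be checked directly. A minimal sketch, assuming the ~0.95 and ~0.90 median values read off the figure are log-transformed citation counts:

```python
import math

# Median log-citation values as read off Calcagno et al. figure 4A
# (assumed approximate values, not the authors' exact numbers)
rejected, accepted = 0.95, 0.90

# If the axis is natural-log citations, back-transform with exp():
boost_ln = math.exp(rejected) - math.exp(accepted)

# If the axis is log10 citations, back-transform with powers of 10:
boost_log10 = 10 ** rejected - 10 ** accepted

print(f"natural log scale: ~{boost_ln:.2f} extra citations")   # ~0.13
print(f"log10 scale:       ~{boost_log10:.2f} extra citations") # ~0.97
```

Either way, the implied median benefit of rejection is a fraction of one citation per article.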

More importantly, the analysis of the effects of rejection on citation is univariate and ignores most other possible confounding explanatory variables.  It is easy to imagine a large number of other confounding effects that could lead to this weak difference (number of reviews obtained, choice of original and final journals, number of authors, rejection rate/citation differences among disciplines or subdisciplines, etc., etc.). In fact, in panel B of the same figure 4, the authors show a stronger effect of changing discipline on the number of citations in resubmitted manuscripts. Why a deeper multivariate analysis was not performed to back up the headline claim that “rejection improves impact” is hard to understand from a critical perspective. [UPDATE 26/10/2012: Bala Iyengar pointed out to me a page on the author’s website that discusses the effects of controlling for year and publishing journal on the citation effect, which led me to re-read the paper and supplemental materials more closely and see that these two factors are in fact controlled for in the main analysis of the paper. No other possible confounding factors are controlled for, however.]
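To see why confounding matters here, consider a purely hypothetical simulation (not the authors' data) in which rejection has no effect on citations at all, but discipline affects both rejection rate and citation rate. A univariate comparison then manufactures a "rejection boost" out of thin air, while stratifying by discipline makes it vanish:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: field 1 is both higher-citing and harder to publish in
field = rng.integers(0, 2, n)                       # 0 or 1
p_reject = np.where(field == 1, 0.6, 0.3)           # rejection rate by field
rejected = rng.random(n) < p_reject
log_cites = 0.5 + 0.4 * field + rng.normal(0, 0.5, n)  # NO rejection term

# Univariate comparison: rejected papers *appear* to be cited more,
# simply because rejection is enriched in the high-citing field
univariate = log_cites[rejected].mean() - log_cites[~rejected].mean()

# Controlling for field (averaging within-field differences) removes it
within = np.mean([log_cites[rejected & (field == f)].mean()
                  - log_cites[~rejected & (field == f)].mean()
                  for f in (0, 1)])

print(f"univariate 'rejection boost': {univariate:.3f}")  # clearly > 0
print(f"within-field boost:           {within:.3f}")      # ~ 0
```

This is exactly the kind of artifact that a multivariate analysis is meant to rule out, which is why its absence for the headline claim is notable.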

So what is going on here? Why did Science allow such a weak effect with a relatively superficial analysis to be published in one of the supposedly most selective journals? Why are major science media outlets pushing this incredibly small boost in citations that is (possibly) associated with rejection? Likewise, why are scientists so uncritically posting links to the Nature and Scientist news pieces and repeating the “Rejection Improves Impact” meme?

I believe the answer to the first two questions is clear: Nature and Science have a vested interest in making the case that it is in the best interest of scientists to submit their most important work to (their) highly selective journals and risk having it be rejected.  This gives Nature and Science first crack at selecting the best science and serves to maintain their hegemony in the scientific publishing marketplace. If this interpretation is true, it is an incredibly self-serving stance for Nature and Science to take, and one that may backfire, since, on the whole, scientists are not stupid people who blindly accept nonsense. More importantly, though, using the pages of Science and Nature as a marketing campaign to convince scientists to submit their work to these journals risks their credibility as arbiters of “truth”. If Science and Nature go so far as to publish and hype weak, self-serving scientometric effects to get us to submit our work there, what’s to say they would not do the same for actual scientific results?

But why are scientists taking the bait on this one?  This is more difficult to understand, but most likely has to do with the possibility that most people repeating this meme have not read the paper. Topsy records over 700 and 150 tweets to the Nature News and Scientist news pieces, respectively, but only ~10 posts to the original article in Science. Taken at face value, roughly 80-fold more scientists are reading the news about this article than reading the article itself. To be fair, this is due in part to the fact that the article is not open access and is behind a paywall, whereas the news pieces are freely available**. But this is only the proximal cause. The ultimate cause is likely that many scientists are happy to receive (uncritically, it seems) any justification, however tenuous, for continuing to play the high-impact-factor journal sweepstakes. Now we have a scientifically valid reason to take the risk of being rejected by top-tier journals, even if it doesn’t pay off. Right? Right?

The real shame in the “Rejection Improves Impact” spin is that an important take-home message of Calcagno et al. is that the vast majority of papers (>75%) are published in the first journal to which they are submitted.  As a scientific community we should continue to maintain and improve this trend, selecting the appropriate home for our work on initial submission. Justifying pipe-dreams that waste precious time based on self-serving spin that benefits the closed-access publishing industry should be firmly: Rejected.

Don’t worry, it’s probably in the best interest of Science and Nature that you believe this meme.

* To be fair, Science Insider does acknowledge that the effect is weak: “previously rejected papers had a slight bump in the number of times they were cited by other papers” (emphasis mine).

** Following a link available on the author’s website, you can access this article for free here.

Calcagno, V., Demoinet, E., Gollner, K., Guidi, L., Ruths, D., & de Mazancourt, C. (2012). Flows of Research Manuscripts Among Scientific Journals Reveal Hidden Submission Patterns. Science. DOI: 10.1126/science.1227833
