From the Library of Prof. William B. Provine

I just saw the sad news that Will Provine, historian of population genetics, died peacefully at his home at the age of 73. Others will no doubt write of Provine's legacy as a scholar and orator of the highest calibre, a fervent proponent of atheism and evolution in a way that only a preacher's son could be. I'm moved by his death to recall my experience of having Provine as a lecturer during my undergrad days at Cornell 20 years ago, where his dramatic and entertaining style drew me fully into evolutionary biology, both as a philosophy and as a profession. I can't say I knew Provine well, but I can say our interactions left a deep impression on me. He was incredibly kind and engaging, pulling you onto what he called the "slippery slope" where religious belief must yield to rationalism.

I vividly recall Provine giving me a hardcover copy of the compendium of Dobzhansky's papers he co-edited on our first meeting after class (pulled from a half-full box at the ready near his desk), and discussing the then-recent death of Motoo Kimura, whom he was researching for his as-yet-unpublished history of the Neutral Theory. We met and talked about population genetics and molecular evolution several times after that, and for reasons I can't quite recall, Provine ended up offering me keys to his personal library in the basement of Corson Hall. I'll never forget the first time he showed me his library, with bookshelves lining what would have been a lab space, filled with various versions of classic works in Genetics, Evolution, Development and History of Science. The delight he had in showing me his shelf of various editions of the Origin of Species was only matched by the impish pleasure he took in showing me the error in chromosome segregation on the spine of the first edition of Dobzhansky's Genetics and the Origin of Species, or how to decode the edits to the text of Fisher's Genetical Theory of Natural Selection (see figure below).


In my first tour of his library, Provine showed me how to decode the revisions to the 1958 second edition of Fisher's Genetical Theory of Natural Selection. Notice how the font in paragraph 2 is smaller than in paragraphs 1 and 3. Text in this font was added to the original plates prior to the second printing. Provine then handed me one of the many copies of this book he had on his shelf for me to keep, which remains one of my few prized possessions.

His reprint collection was equally impressive (inherited from Sewall Wright, from what I understand), with many copies signed, with compliments of the author, by the founders of the Modern Synthesis. In my experience, Provine's reprint collection was surpassed in value only by the FlyBase reprint collection in the Dept of Genetics in Cambridge. I used Provine's library to study quite often in my last year or so at Cornell, interrupting work on Alex Kondrashov's problem sets by browsing early 20th century biology texts. Being able to immerse myself in this trove of incredible books had a lasting effect on me, and I have no doubt it was a major factor in my decision to pursue academic research in evolution and genetics. Sadly, the collection is no longer physically intact, but I am very glad to know that the 5,000+ items in Provine's library have been contributed to the Cornell Library, possibly the best place for the spirit of an atheist and historian to live on.

Keeping Up with the Scientific Literature using Twitterbots: The FlyPapers Experiment

UPDATE (9 Nov 2014): For those interested in setting up their own scientific twitterbot, see Rob Lanfear’s excellent and easy-to-follow instructions here. Peter Carlton has also outlined another method for setting up a twitterbot here, as has Sho Iwamoto here.

UPDATE (20 Dec 2022): The @fly_papers twitterbot has been permanently suspended due to changes in Twitter's terms of service. An alternative @flypapers bot is now alive on Mastodon.

A year ago I created a simple "twitterbot" called FlyPapers to stay on top of the Drosophila literature, which tweets links to new abstracts in PubMed and preprints in arXiv from a dedicated Twitter account (@fly_papers). While most 'bots on Twitter post spam or creative nonsense, an increasing number of people are exploring the use of twitterbots for more productive academic purposes. For example, Rod Page set up the @evoldir twitterbot way back in 2009 as an alternative to receiving email posts to the Evoldir mailing list, and likewise Gordon McNickle developed the @EcoLog_L twitterbot for the Ecolog-L mailing list. Similar to FlyPapers, others have established twitterbots for domain-specific literature feeds, such as @BioPapers for Quantitative Biology preprints on arXiv, @EcoEvoJournals for publications in the areas of Ecology & Evolution and @PlantEcologyBot for papers on Plant Ecology. More recently, Alberto Acerbi developed the @CultEvoBot to post links to blogs and new articles on the topic of cultural evolution. (I recommend reading posts by Rod, Gordon and Alberto for further insight into how and why they established these twitterbots.) One year in, I thought I'd summarize my thoughts on the FlyPapers experiment and make good on a promise to describe my set-up in case others are interested.

First, a few words on my motivation for creating FlyPapers. I have been receiving a daily update of all papers in the area of Drosophila in one form or another for nearly 10 years. My philosophy is that it is relatively easy to keep up on a daily basis with what is being published, but it's virtually impossible to catch up when you let the river of information flow for too long. I first started receiving daily email updates from NCBI, which cluttered up my inbox and often got buried. Then I migrated to using RSS on Google Reader, which led to a similar problem of many unread posts accumulating that needed to be marked as "read". Ultimately, I realized what I want from a personalized publication feed — a flow of links to articles that can be quickly scanned and clicked, but which requires no other action and can be ignored when I'm busy — was better suited to a Twitter client than an RSS reader. Moreover, in the spirit of "maximizing the value of your keystrokes", it seemed that a feed that was useful for me might also be useful for others, and that Twitter was the natural medium for sharing this feed, since many scientists are already using Twitter to post links to papers. Thus FlyPapers was born.

Setting up FlyPapers was straightforward and required no specialist know-how. I first created a dedicated Twitter account with a "catchy" name. Next, I created an account with dlvr.it, which takes an RSS/Twitter/email feed as input and routes the output to the FlyPapers Twitter account. I then set up an RSS feed from NCBI based on a search for the term "Drosophila" and added it as a source to the dlvr.it route. Shortly thereafter, I added an RSS feed for preprints in arXiv using the search term "Drosophila" and added this to the same dlvr.it route. (Unfortunately, neither PeerJ Preprints nor bioRxiv currently have the ability to set up custom RSS feeds, and thus are not included in the FlyPapers stream.) NCBI and arXiv only push new articles once a day, and each article is posted automatically as a distinct tweet for ease of viewing, bookmarking and sharing. The only gotcha I experienced in setting up the system was making sure to set the "number of items displayed" high enough (=100) when creating the PubMed RSS feed. If the number of articles posted in one RSS update exceeds the limit you set when you create the PubMed RSS feed, PubMed will post a URL to a PubMed query for the entire set of papers as one RSS item, rather than post links to each individual paper. (For Gordon's take on how he set up his Twitterbots, see this thread.) [UPDATE 25/2/14: Rob Lanfear has posted detailed instructions for setting up a twitterbot using the strategy I describe above at https://github.com/roblanf/phypapers. See his comment below for more information.]
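
For readers who would rather see the moving parts than rely on a hosted service, here is a minimal sketch in Python of the feed-polling half of this workflow. It assumes the feedparser package; the PubMed RSS URL is a placeholder you would generate yourself from the RSS option on a PubMed search, while the arXiv URL uses arXiv's public Atom API. Posting the links (via dlvr.it, the Twitter API or a Mastodon client) is left to whichever service you prefer.

```python
# Minimal sketch of the feed-polling side of a literature twitterbot.
# Requires the feedparser package (pip install feedparser).
# The PubMed URL below is a placeholder: generate your own from the RSS option on a PubMed search.
import feedparser

FEEDS = {
    "pubmed": "https://pubmed.ncbi.nlm.nih.gov/rss/search/YOUR_FEED_ID/",  # placeholder feed ID
    "arxiv": "http://export.arxiv.org/api/query?search_query=all:drosophila&start=0&max_results=25",
}

def new_items(seen_links):
    """Yield (title, link) pairs for feed entries not seen in previous runs."""
    for url in FEEDS.values():
        for entry in feedparser.parse(url).entries:
            if entry.link not in seen_links:
                seen_links.add(entry.link)
                yield entry.title, entry.link

if __name__ == "__main__":
    seen = set()  # in practice, persist this between runs (e.g. in a small file or database)
    for title, link in new_items(seen):
        # Hand each new item to your posting mechanism of choice (dlvr.it route, Twitter/Mastodon API, ...)
        print(f"{title} {link}")
```

The same skeleton works for any domain-specific feed: swap the search terms in the two URLs and point the output at a different account.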

So, has the experiment worked? Personally, I am finding FlyPapers a much more convenient way to stay on top of the Drosophila literature than any previous method I have used. Apparently others are finding this feed useful as well.

https://twitter.com/StuartJFGrice/status/429362778642456576

One year in, FlyPapers now has 333 followers in 16 countries, which is a far bigger and wider following than I would have ever imagined. Some of the followers are researchers I am familiar with in the Drosophila world, but most are students or post-docs I don’t know, which suggests the feed is finding relevant target audiences via natural processes on Twitter. The account has now posted 3,877 tweets, or ~10-11 tweets per day on average, which gives a rough scale for the amount of research being published annually on Drosophila. Around 10% of tweeted papers are getting retweeted (n=386) or favorited (n=444) by at least one person, and the breadth of topics being favorited/retweeted spans virtually all of Drosophila biology. These facts suggest that developing a twitterbot for domain-specific literature can indeed attract substantial numbers of like-minded individuals, and that automatically tweeting links to articles enables a significant proportion of papers in a field to easily be seen, bookmarked and shared.

Overall, I'm very pleased with the way FlyPapers is developing. I had hoped that one of the outcomes of this experiment would be to help promote Drosophila research, and this appears to be working. I had not expected it would act as a general hub for attracting Drosophila researchers who are active on Twitter, which is a nice surprise. One issue I hadn't considered a year ago was the potential that 'bots like FlyPapers might have to "game" Altmetrics scores. Frankly, any metric that would be so easily gamed by a primitive bot like FlyPapers probably has no real intrinsic value. However, it is true that this bot does add +1 to the Twitter count for all Drosophila papers. My thoughts on this are that any attempt to correct for the potential influence of 'bots on Altmetrics scores should not unduly penalize the real human engagement bots can facilitate, so I'd say it is fair to -1 the original FlyPapers tweets in an Altmetrics calculation, but retain the retweets created by humans.

One final consequence of putting all new Drosophila literature onto Twitter that I would not have anticipated is that some tweets have been picked up by other social media outlets, including disease-advocacy accounts that quickly pushed basic research findings out to their target audience:

This final point suggests that there may be wider impacts from having more research articles automatically injected into the Twitter ecosystem. Maybe those pesky twitterbots aren’t always so bad after all.


Battling Administrivia Using an Intramural Question & Answer Forum

The life of a modern academic involves juggling many disparate tasks, and like a computer that has run out of physical memory, constantly swapping between tasks leads to inefficiency and poor performance in our jobs. Personally, the time fragmentation and friction induced by transitioning from task to task seems to be one of the main sources of stress in my work life. The main reason for this is that many daily tasks on my to-do list are essential but fiddly and time-consuming administrivia (placing orders, filling in forms, entering marks into a database) that prevent me from getting to the things that I enjoy about being an academic: doing research, interacting with students, reading papers, etc.

I would go so far as to say that the mismatch between the desires of most academics and the reality of their jobs is the main source of academic "burnout" and low morale in what otherwise should be an awesome profession. I would also venture that administrivia is one of the major sources of the long hours we endure, since after wading through the "chaff", we will (dammit!) put in the time on nights and weekends for the things we are most passionate about to sustain our souls. And based on the frequency of sentiments relating to this topic flowing through my Twitter feed, I'd say the negative impact of administrivia is a pervasive problem in modern academic life, not restricted to any one institute.

While it is tempting to propose ameliorating the administrivia problem by simply eliminating bureaucracy, the growth of the administrative sector in higher education makes this solution a virtual impossibility. I have ultimately become resigned to the fact that the fundamentally inefficient nature of university bureaucracy cannot be structurally reformed, and have begun to seek other solutions to make my work life better. In doing so, I believe I've hit on a solution to the administrivia problem that I hope might help others as well, one that I'm now convinced is simple and powerful enough to actually be effective.

Accepting that administrivia cannot be fully eliminated, I think the key to reducing its burden on time and morale is to realize that most routine tasks in University life are just protocols that require some amount of tacit knowledge about policies or procedures. Thus, all that is needed to reduce the negative impact of administrivia to its lowest possible level is a system that puts accurate and relevant protocols at one's fingertips, so that tasks can be completed as fast as possible. The problem is that such protocols either don't exist, don't exist in written form, or exist as scattered documents across various filesystems and offices that you have to expend substantial time finding. So how do we develop such protocols without generating more bureaucracy and exacerbating the problem we are attempting to solve?

My source of inspiration for ameliorating administrivia with minimal overhead comes from the positive experiences I have had using online Question and Answer (Q & A) forums based on the Stack Exchange model (principally the BioStars site for answering questions about bioinformatics). For those not familiar with such systems, the Q & A model popularized by the Stack Exchange platform (and its clones) allows questions to be asked and answers to be voted on, moderated, edited and commented on in a very intuitive and user-friendly manner. For some reason I am not able to fully explain, the engineering behind the Q & A model naturally facilitates both knowledge exchange and community building in a way that is on the whole extremely positive, and seems to prevent the worst aspects of human nature commonly found on older internet forums and commenting systems.

So here is my proposal for battling the impact of academic administrivia: implement an intramural, University-specific Q & A forum for academic and administrative staff to pose and answer each other's practical questions, converting tacit knowledge stored in people's heads, inboxes and intranets into a single knowledge-bank that can be efficiently used and re-used by others who have the same queries. The need for an "intramural" solution, and the reason this strategy cannot be applied globally as it has been for Linux administration, Poker or Biblical Hermeneutics, is that Universities (for better or worse) have their own local policies and procedures that can't easily be shared or benefit from general worldwide input.

We have been piloting the use of the Open Source Question Answer (OSQA) platform (a clone of Stack Exchange) among a subset of our faculty for about a year, with good uptake and virtually unanimous endorsement from everyone who has used it. We currently require a real-name policy for users, have limited the system to questions of procedure only, and have encouraged users to answer their own questions after solving burdensome tasks. To make things easy to administer technically, we are using an out-of-the-box virtual machine of OSQA provided by Bitnami. The anonymized screenshot below gives a flavor of the banal yet time-consuming queries that arise repeatedly in our institution and that such a system makes easier to deal with. I trust colleagues at other institutions will find similar tasks frustratingly familiar.

[Anonymized screenshot of a typical question and answer from our OSQA pilot site]

The main reason I am posting this idea now is that I am scheduled to give a demo and presentation to my Dean and management team this week to propose rolling this system out to a wider audience. In preparation for this pitch, I’ve been trying to assemble a list of pros and cons that I am sure is incomplete and would benefit from the input of other people familiar with how Universities and Q & A platforms work.

The pros of an intramural Q & A platform for battling administrivia I’ve come up with so far include:

  • Increasing efficiency, leading to higher productivity for both academic and administrative staff;
  • Reducing the sense of frustration about bureaucratic tasks, leading to higher morale;
  • Improving sense of empowerment and community among academic and administrative staff;
  • Providing better documentation of procedures and policies;
  • Serving as an “aide memoire”;
  • Aiding the success of junior academic staff;
  • Ameliorating the effects of administrative turnover;
  • Providing a platform for people who may not speak up in staff meetings to contribute;
  • Allowing "best practice" to emerge through crowd-sourcing;
  • Identifying common problems that should be prioritized for improvement;
  • Identifying like-minded problem solvers in a big institution;
  • Integrating easily around existing IT platforms;
  • Being deployable at any scale (lab group, department, faculty, school, etc.);
  • Allowing information to be accessible 24/7 when administrative offices are closed (H/T @jdenavascues).

I confess to struggling to find true cons, but these might include (rejoinders in parentheses):

  • Security risks (can be solved with proper administration and authentication)
  • Inappropriate content (real name policy should minimize, can be solved with proper moderation);
  • Answers might be “impolitic” (real name policy should minimize, can be solved with proper moderation; H/T @DrLabRatOry)
  • Time wasting (unlikely, since the whole point is to enhance productivity);
  • Lack of uptake (even if the 90-9-1 rule applies, it is an improvement on the status quo);
  • Perceived as a threat to administrative staff (far from it, this approach benefits administrative staff as much as academic staff);
  • Information could become stale (can be solved with proper moderation and periodic updating).

I'd be very interested to get feedback in the comments below from others on this general strategy (especially before Tues PM, 17 Sep 2013), on related efforts, or on other ways intramural Q & A platforms could be used in an academic setting beyond battling administrivia.


Twitter Tips for Scientific Journals

The growing influence of social media in the lives of Scientists has come to the forefront again recently with a couple of new papers that provide An Introduction to Social Media for Scientists and a more focussed discussion of The Role of Twitter in the Life Cycle of a Scientific Publication. Bringing these discussions into the traditional journal article format is important for spreading the word about social media in Science outside the echo chamber of social media itself. But perhaps more important, in my view, is that these motivating papers reflect a desire for Scientists to participate, and to urge others to participate, in shaping a new space for scientific exchange in the 21st century.

Just as Scientists themselves are adopting social media, many scientific journals/magazines are as well. However, most discussions about the role of social media in scientific exchange overlook the issue of how we Scientists believe traditional media outlets, like scientific journals, should engage in this new forum. For example, in the Darling et al. paper on The Role of Twitter in the Life Cycle of a Scientific Publication, little is said about the role of journal Twitter accounts in the life cycle of publications beyond noting:

…to encourage fruitful post-publication critique and interactions, scientific journals could appoint dedicated online tweet editors who can storify and post tweets related to their papers.

This oversight is particularly noteworthy for several reasons. First, it is a fact that many journals, and journal staff, play active roles in engaging with the scientific debate on social media and are not simply passive players in the new scientific landscape. Second, Scientists need to be aware that journals extensively monitor our discussions and activity on social media in ways that were not previously possible, and we need to consider how this affects the future of scientific publishing. Third, Scientists should see that social media represents an opportunity to establish new working relationships with journals that break down the old models that increasingly seem to harm both Science and Scientists.

In the same way that we Scientists are offering tips/advice to each other for how to participate in the new media, I think this conversation should also be extended to what we feel are best practices for journals to engage in the scientific process through social media. To kick this off, I'd like to list some do's and don'ts for how I think journals should handle their presence on Twitter, based on my experiences following, watching and interacting with journals on Twitter over the last couple of years.

  • Do engage with (and have a presence on) social media. Twitter is rapidly gaining uptake among scientists, and is the perfect forum to quickly transmit/receive information to/from your author pool/readership. In fact, I find it a little strange if a journal doesn't have a Twitter account these days.
  • Do establish a social media policy for your official Twitter account. Better yet, make it public, so Scientists know the scope of what we should expect from your account.
  • Don’t use information from Twitter to influence editorial or production processes, such as the acceptance/rejection of papers or choice of reviewers.  This should be an explicit part of your social media policy. Information on social media could be incorrect and by using unverified information from Twitter you could allow competitors/allies to block/promote each other’s work.
  • Don’t use a journal Twitter account as a table of contents for your journal. Email TOCs or RSS feeds exist for this purpose already.
  • Do tweet highlights from your journal or other journals. This is actually what I am looking for in a journal Twitter account, just as I am from the accounts of other Scientists.
  • Do use journal accounts to retweet unmodified comments from Scientists or other media outlets about papers in your journal. This is a good way for Scientists to find other researchers interested in a topic and know what is being said about work in your journal. But leave the original tweet intact, so we can trace it to the originator and so it doesn’t look like you have edited the sentiment to suit your interests.
  • Don't use a journal account to express personal opinions. I find it totally inappropriate that individuals at some journals hide behind the journal name and avatar to use journal Twitter accounts as a soapbox for their personal opinions. This is a really dangerous thing for a journal to do, since it reinforces stereotypes about the fickleness of editors who love to wield the power their journal provides them. It's also a bad idea because the opinions of one or a few people may unintentionally be taken to represent the journal or publisher as a whole.
  • Do encourage your staff to create personal accounts and be active on social media. Editors and other journal staff should be encouraged to express personal opinions about science, tweet their own highlights, etc. This is a great way for Scientists to get to know your staff (for better or worse) and build an opinion about who is handling our work at your journal. But it should go without saying that personal opinions should be made through personal accounts, so we can follow/unfollow these people like any other member of the community and so their opinions do not leverage the imprimatur of your journal.
  • Do use journal Twitter accounts to respond to feedback/complaints/queries. Directly replying to comments from the community on Twitter is a great way to build trust in your journal. If you can't or don't want to reply to a query in the open, just reply by asking the person to email your helpdesk. Either way shows good faith that you are listening to our concerns and want to engage. Ignoring comments from Scientists is bad PR and can allow issues to amplify beyond your control, with possible negative impacts on your journal's image in the long run.
  • Don’t use journal Twitter accounts to tweet from meetings. To me this is a form of expressing personal opinion that looks like you are endorsing certain Scientists/fields/meetings or, worse yet, that you are looking to solicit them to submit their work to your journal, which smacks of desperation and favoritism. Use personal accounts instead to tweet from meetings, since after all what is reported is a personal assessment.

These are just my first thoughts on this issue (anonymised to protect the guilty), which I hope will act as a springboard for others to comment below on how they think journals should manage their presence on Twitter for the benefit of the Scientific community.

A Case for Junior/Senior Partnership Grants

Much has been made in recent years over funding crises in the US and Europe, which are the inevitable result of the Great Recession superimposed on top of the end of exponential growth in Science. Governments hamstrung by austerity measures or lack of political will have been forced to abandon increases in scientific funding, even going so far as to freeze funds for awarded grants in Spain (see translation here). The consequences of this stagnant period of inputs to scientific progress will be felt for many years to come, materially in terms of basic and applied discoveries, but also socially in terms of the impacts on an entire generation of scientists who are just beginning their independent careers.

Why are early stage researchers hit hardest by stagnation or decreases in funding? Simply because access to funding is not a level playing field for all scientists, and is in fact highly dependent on career stage and experience. Increased competition for resources is therefore expected to hit younger scientists disproportionately hard relative to established researchers because of many factors, including:

  • less experience in the art of writing grants,
  • less experience in reviewing grants,
  • less experience serving on grant panels,
  • shorter scientific and management track record,
  • and a less highly developed social network.

The specific negative effect that a general increase in resource competition has on young researchers is (in my view) the best explanation for the extremely worrying downward trend in the proportion of young PIs receiving NIH grants, and the upward trend in the age at receipt of a first R01 in the USA, shown in the following diagrams from the NIH Rock Talk Blog:

Thankfully, this issue is being discussed seriously by NIH's Deputy Director for Extramural Research, Dr. Sally Rockey, as the publication of these data attests. [I would very much welcome it if other funding agencies published similar demographic breakdowns of their funding, to address whether this is a global effect.] However, not all see these trends as worrying; some interpret them on socially-neutral demographic grounds.

To help combat the inherent age-based inequities in access to research funding, funding agencies typically ring-fence funding for early-stage researchers under a "New Investigator" type umbrella. In fact, Sally Rockey provides a link to an impressive history of initiatives the NIH has undertaken to tackle the New Investigator issue. But what is striking to me is that despite putting a series of different New Investigator mechanisms in place, the negative impacts on early-stage researchers have only worsened over the last three decades. Thus New Investigator programmes are clearly not enough to redress this issue, and new solutions must be sought. Furthermore, ring-fencing funding for junior researchers necessarily creates an us-vs-them mentality, which can have counterproductive repercussions among different scientific cohorts. And while New Investigator programmes are widely supported in principle, trade-offs in resource allocation can lead to unstable changes in policy, as witnessed in the case of the now-defunct NERC New Investigator programme.

So, what of it? Is this post just another bemoaning the sorry state of affairs in funding for early-stage researchers? No, or at least, not only. Actually, my motivation is to constructively propose a relatively simple (naive?) mechanism to fund research projects that can address the inequities in funding across career stages, but which also has the additional benefit of engendering mentorship and transfer of skills across the generations: the Junior/Senior Partnership Grant. [As with all (good) ideas, such a model has been proposed before, by the Women's Cancer Network, but it does not appear to have been adopted by major federal funding agencies.]

The idea behind a Junior/Senior Partnership funding "scheme" is simple. Based on some criteria (years since PhD or first tenure-track position, number of successful PI awards, number of wrinkles, etc.), researchers would be classified as Junior or Senior. To be eligible for an award under such a programme, at least one Junior and one Senior PI would need to be co-applicants on the grant and have distinct contributions to the grant and project management. This simple mechanism would ensure that young PIs get a piece of the funding pie and allow them to establish a track record, just as New Investigator schemes do. But it would also obviate the need for reform to rely on Senior scientists altruistically stepping aside to make way for their Junior colleagues, as there would be positive (financial) incentives for them to lend a hand down the generations. And by reconfiguring resource allocation from "us-vs-them" to "we're-all-in-this-together," Junior/Senior Partnership Grants would further provide a natural mechanism for Senior PIs to transfer expertise in grant writing and project management to their Junior colleagues in a meaningful way, rather than in the lip-service manner that prevails in most institutions. Finally, and most importantly, the knowledge transfer through such a scheme would strengthen the future expertise base in Science, which all indicators suggest is currently at risk.


Accelerating Your Science with arXiv and Google Scholar

As part of my recent conversion to using arXiv, I've been struck by how posting preprints to arXiv synergizes incredibly well with Google Scholar. I've tried to make some of these points on Twitter and elsewhere, but I thought I'd summarize here what I see as a very powerful approach to accelerating Open Science using arXiv and several features of the Google Scholar toolkit. Part of the motivation for writing this post is that I've tried to make this same pitch to several of my colleagues, and was hoping to be able to point them to a coherent version of the argument, which might be of use for others as well.

A couple of preliminaries. First, the main point of this post is not about trying to convince people to post preprints to arXiv. The benefits of preprinting on arXiv are manifold (early availability of results, allowing others to build on your work sooner, prepublication feedback on your manuscript, feedback from many eyes not just 2-3 reviewers, availability of manuscript in open access format, mechanism to establish scientific priority, opportunity to publicize your work in blogs/twitter, increased duration for citations) and have been ably summarized elsewhere. This post is specifically about how one can get the most out of preprinting on arXiv by using Google Scholar tools.

Secondly, it is important to make sure people are aware of two relatively recent developments in the Google Scholar toolkit beyond the basic Google Scholar search functionality — namely, Google Scholar Citations and Google Scholar Updates. Google Scholar Citations allows users to build a personal profile of their publications, which draws in citation data from the Google Scholar database, allowing you to “check who is citing your publications, graph citations over time, and compute several citation metrics”, which also will “appear in Google Scholar results when people search for your name.” While Google Scholar Citations has been around for a little over a year now, I often find that many Scientists are either not aware that it exists, or have not activated their profile yet, even though it is scarily easy to set up. Another more recent feature available for those with active Google Scholar Citations profiles is called Google Scholar Updates, a tool that can analyze “your articles (as identified in your Scholar profile), scan the entire web looking for new articles relevant to your research, and then show you the most relevant articles when you visit Scholar”. As others have commented, Google Scholar Updates provides a big step forward in sifting through the scientific literature, since it provides a tailored set of articles delivered to your browser based on your previous publication record.

With these preliminaries in mind, what I want to discuss now is how Google Scholar plays so well with preprints on arXiv to accelerate science when done in the Open. By posting preprints to arXiv and activating your Google Scholar Citations profile, you immediately gain several advantages, including the following:

  1. arXiv preprints are rapidly indexed by Google Scholar (within 1-2 days in my experience) and thus can be discovered easily by others using a standard Google Scholar search.
  2. arXiv preprints are listed in your Google Scholar profile, so when people browse your profile for your most recent papers they will find arXiv preprints at the top of the list (e.g. see Graham Coop’s Google Scholar profile here).
  3. Citations to your arXiv preprints are automatically updated in your Google Scholar profile, allowing you to see who is citing your most recent work.
  4. References included in your arXiv preprints will be indexed by Google Scholar and linked to citations in other people’s Google Scholar profiles, allowing them to find your arXiv preprint via citations to their work.
  5. Inclusion of an arXiv preprint in your Google Scholar profile allows Google Scholar Updates to provide better recommendations for what you should read, which is particularly important when you are moving into a new area of research that you have not previously published on.
  6. [Update June 14, 2013] Once Google Scholar has indexed your preprint on arXiv it will automatically generate a set of Related Articles, which you can browse to identify previously published work related to your manuscript.  This is especially useful at the preprint stage, since you can incorporate related articles you may have missed before submission or during revision.

I have probably overlooked other possible benefits of the synergy between these two technologies, since they are only dawning on me as I become more familiar with these symbiotic scholarly tools myself. What's abundantly clear to me at this stage, though, is that embracing Open Science and using arXiv together with Google Scholar puts you at a fairly substantial competitive advantage in terms of your scholarship, in ways that are simply not possible using the classical approach to publishing in biology.

Why You Should Reject the “Rejection Improves Impact” Meme


Over the last two weeks, a meme has been making the rounds in the scientific twittersphere that goes something like "Rejection of a scientific manuscript improves its eventual impact". This idea is based on a recent analysis of patterns of manuscript submission reported in Science by Calcagno et al., which has been actively touted in the scientific press and seems to have touched a nerve with many scientists.

Nature News reported on this article on the first day of its publication (11 Oct 2012), with the statement that “papers published after having first been rejected elsewhere receive significantly more citations on average than ones accepted on first submission” (emphasis mine). The Scientist led its piece on the same day entitled “The Benefits of Rejection” with the claim that “Chances are, if a researcher resubmits her work to another journal, it will be cited more often”. Science Insider led the next day with the claim that “Rejection before publication is rare, and for those who are forced to revise and resubmit, the process will boost your citation record”. Influential science media figure Ed Yong tweeted “What doesn’t kill you makes you stronger – papers get more citations if they were initially rejected”. The message from the scientific media is clear: submitting your papers to selective journals and having them rejected is ultimately worth it, since you’ll get more citations when they are published somewhere lower down the scientific publishing food chain.

I will take on faith that the primary result of Calcagno et al. that underlies this meme is sound, since it has been vetted by the highest standard of editorial and peer review at Science magazine. However, I do note that it is not possible to independently verify this result, since the raw data for this analysis was not made available at the time of publication (contravening Science's "Making Data Maximally Available Policy") and has not been made available even after being queried. What I want to explore here is why this meme is being so uncritically propagated in the scientific press and twittersphere.

As succinctly noted by Joe Pickrell, anyone who takes even a cursory look at the basis for this claim would see that it is at best a weak effect*, and is clearly being overblown by the media and scientists alike.

https://twitter.com/joe_pickrell/status/256756126140477442

Taken at face value, the way I read this graph is that papers that are rejected and then published elsewhere have a median value of ~0.95 on the plotted scale, whereas papers that are accepted at the first journal they are submitted to have a median value of ~0.90. Although not explicitly stated in the figure legend or in the main text, I assume these results are on a natural log scale since, based on the font and layout, this plot was most likely made in R and the natural log is the default in R (also, the authors refer to the natural scale in a different figure earlier in the text). Thus, the median boost in citations per article that rejection may provide an author is on the order of ~0.1. Even if this result is on the log10 scale, the difference translates to a boost of less than one citation (see the quick calculation below). While statistically significant, this can hardly be described as a "significant increase" in citations. Still excited?
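
To make the arithmetic explicit, here is a quick sanity check of both interpretations of the scale (a rough sketch; the ~0.95 and ~0.90 medians are approximate values read off the figure, as noted above):

```python
import math

med_rejected, med_accepted = 0.95, 0.90  # approximate medians read off the figure, as discussed above

# If the plotted values are natural logs of citation counts:
diff_natural_log = math.exp(med_rejected) - math.exp(med_accepted)  # ~2.59 - 2.46 = ~0.13 citations

# If the plotted values are log10 of citation counts:
diff_log10 = 10 ** med_rejected - 10 ** med_accepted                # ~8.91 - 7.94 = ~0.97 citations

print(f"natural log scale: rejection 'boost' of ~{diff_natural_log:.2f} citations per article")
print(f"log10 scale:       rejection 'boost' of ~{diff_log10:.2f} citations per article")
```

Either way, the difference in median citation counts works out to well under one citation per article.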

More importantly, the analysis of the effects of rejection on citation is univariate and ignores most other possible confounding explanatory variables. It is easy to imagine a large number of other confounding effects that could lead to this weak difference (number of reviews obtained, choice of original and final journals, number of authors, rejection rate/citation differences among disciplines or subdisciplines, etc.). In fact, in panel B of the same Figure 4, the authors show a stronger effect of changing discipline on the number of citations of resubmitted manuscripts. Why a deeper multivariate analysis was not performed to back up the headline claim that "rejection improves impact" is hard to understand from a critical perspective. [UPDATE 26/10/2012: Bala Iyengar pointed out to me a page on the author's website that discusses the effects of controlling for year and publishing journal on the citation effect, which led me to re-read the paper and supplemental materials more closely and see that these two factors are in fact controlled for in the main analysis of the paper. No other possible confounding factors are controlled for, however.]

So what is going on here? Why did Science allow such a weak effect with a relatively superficial analysis to be published in one of the supposedly most selective journals? Why are major science media outlets pushing this incredibly small boost in citations that is (possibly) associated with rejection? Likewise, why are scientists so uncritically posting links to the Nature and Scientist news pieces and repeating the "Rejection Improves Impact" meme?

I believe the answer to the first two questions is clear: Nature and Science have a vested interest in making the case that it is in the best interest of scientists to submit their most important work to (their) highly selective journals and risk having it be rejected. This gives Nature and Science first crack at selecting the best science and serves to maintain their hegemony in the scientific publishing marketplace. If this interpretation is true, it is an incredibly self-serving stance for Nature and Science to take, and one that may backfire since, on the whole, scientists are not stupid people who blindly accept nonsense. More importantly though, using the pages of Science and Nature as a marketing campaign to convince scientists to submit their work to these journals risks their credibility as arbiters of "truth". If Science and Nature go so far as to publish and hype weak, self-serving scientometric effects to get us to submit our work there, what's to say they would not do the same for actual scientific results?

But why are scientists taking the bait on this one? This is more difficult to understand, but most likely has to do with the possibility that most people repeating this meme have not read the paper. Topsy records over 700 and 150 tweets linking to the Nature News and Scientist news pieces, respectively, but only ~10 posts linking to the original article in Science. Taken at face value, roughly 80-fold more scientists are reading the news about this article than are reading the article itself. To be fair, this is due in part to the fact that the article is not open access and is behind a paywall, whereas the news pieces are freely available**. But this is only the proximate cause. The ultimate cause is likely that many scientists are happy to receive (uncritically, it seems) any justification, however tenuous, for continuing to play the high-impact-factor journal sweepstakes. Now we have a scientifically valid reason to take the risk of being rejected by top-tier journals, even if it doesn't pay off. Right? Right?

The real shame in the “Rejection Improves Impact” spin is that an important take-home message of Calcagno et al. is that the vast majority of papers (>75%) are published in the first journal to which they are submitted.  As a scientific community we should continue to maintain and improve this trend, selecting the appropriate home for our work on initial submission. Justifying pipe-dreams that waste precious time based on self-serving spin that benefits the closed-access publishing industry should be firmly: Rejected.

Don’t worry, it’s probably in the best interest of Science and Nature that you believe this meme.

* To be fair, Science Insider does acknowledge that the effect is weak: “previously rejected papers had a slight bump in the number of times they were cited by other papers” (emphasis mine).

** Following a link available on the author’s website, you can access this article for free here.

References
Calcagno, V., Demoinet, E., Gollner, K., Guidi, L., Ruths, D., & de Mazancourt, C. (2012). Flows of Research Manuscripts Among Scientific Journals Reveal Hidden Submission Patterns. Science. DOI: 10.1126/science.1227833


The Logistics of Scientific Growth in the 21st Century


Over the last few months, I've noticed a growing number of reports about declining opportunities and increasing pressure for early stage academic researchers (Ph.D. students, post-docs and junior faculty). For example, the Washington Post published an article in early July about trends in the U.S. scientific job market entitled "U.S. pushes for more scientists, but the jobs aren't there." This article generated over 3,500 comments on the WaPo website alone and was highly discussed in the twittersphere. In mid July, Inside Higher Ed reported that an ongoing study revealed a recent, precipitous drop in the interest of STEM (Science/Technology/Engineering/Mathematics) Ph.D. students in pursuing an academic tenure-track career. These results confirmed those published in PLoS ONE in May, which showed that the interest of STEM students surveyed in 2010 in pursuing an academic career declined during the course of their Ph.D. studies:

Figure 1. Percent of STEM Ph.D. students judging a career to be "extremely attractive". Taken from Sauermann & Roach (2012).

Even for those lucky enough to get an academic appointment, the bad news seems to be that it is getting harder to establish a research program.  For example, the average age for a researcher to get their first NIH grant (a virtual requirement for tenure for many biologists in the US) is now 42 years old. National Public Radio quips “50 is the new 30, if you’re a promising scientist.”

I've found these reports very troubling since, after nearly fifteen years of slogging it out since my undergrad to achieve the UK equivalent of a "tenured" academic position, I am acutely aware of how hard the tenure track is for junior scientists at this stage in history. On a regular basis I see how the current system negatively affects the lives of talented students, post-docs and early-stage faculty. I have for some time wanted to write about my point of view on this issue, since I see these trends as indicators of bigger changes in the growth of science than individuals may be aware of. I've finally been inspired to do so by a recent piece by Euan Ritchie and Joern Fischer published in The Conversation entitled "Cracks in the ivory tower: is academia's culture sustainable?", which I think hits the nail on the head about the primary source of the current problems in academia: the deeply flawed philosophy that "more is always better".

My view is that the declining opportunities and increasing malaise among early-stage academics are a by-product of the fact that the era of exponential growth in academic research is over. That's nonsense, you say: the problems we are experiencing now are because of the current global economic downturn. What's happening now is a temporary blip; things will return to happier days when we get back to "normal" economic growth and governments increase investment in research. Nonsense, I say. This has nothing to do with the current economic climate and instead has more to do with long-term trends in the growth of scientific activity over the last three centuries.

My views are almost entirely derived from a book written by Derek de Solla Price entitled Little Science, Big Science. Price was a scientist-cum-historian who published this slim tome in 1963, based on a series of lectures at Brookhaven National Lab in 1962. It was a very influential book in the 1960s and 1970s, since it introduced citation analysis to a wide audience. Along with Eugene Garfield of ISI/Impact Factor fame (or infamy, depending on your point of view), Price is credited as being one of the founding fathers of Scientometrics. Sadly, this important book is now out of print, the Wikipedia page on it is a stub with no information, and Google Books has not scanned it into their electronic library, showing just how far the ideas in this book have fallen out of the current consciousness. I am not the first to lament that Price's writings have been ignored in recent years.

In a few short chapters, Price covers large-scale trends in the growth of science and the scientific literature from its origins in the 17th century, which I urge readers to explore for themselves. I will focus here on only one of his key points that relates to the matter at hand — the pinch we are currently feeling in science. Price shows that as scientific disciplines matured in the 20th century, they achieved a characteristic exponential growth rate, which appears linear on a logarithmic scale. This can be seen in terms of both the output of scientific papers (Figure 2) and the number of scientists themselves (Figure 3).

Figure 2. Taken from de Solla Price 1963.


Figure 3. Taken from de Solla Price 1963.

Price showed that there was a roughly constant doubling time for different forms of scientific output (number of journals, number of papers, number of scientists, etc.) of about 10-15 years. That is, the amount of scientific output at a given point in history is twice as large as it was 10-15 years before. This incessant growth is why we all feel like it is so hard to keep up on the literature (and incidentally why I believe that text mining is now an essential tool). And these observations led Price to make the famous claim that “Eighty to 90 per cent of all the scientists who have ever lived are alive now”.

Crucially, Price pointed out that the doubling time of the number of scientists is much shorter than the doubling time of the overall human population (~50 years). Thus, the proportion of scientists relative to the total human population has been increasing for decades, if not centuries. Price makes the startling but obvious consequence of this observation very clear: either everyone on earth will one day be a scientist, or the growth rate of science must decrease from its previous long-term trend. He then goes on to argue that the most likely outcome is the latter, and that scientific growth rates will change from exponential to logistic growth and reach saturation sometime within 100 years of the publication of his book in 1963 (Figure 4):

Figure 4. A model of logistic growth for Science (taken from de Solla Price 1963).
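
A rough calculation makes Price's argument concrete: with the doubling times quoted above (~15 years for the number of scientists, ~50 years for the total population), the scientists' share of the population grows roughly 25-fold per century, which plainly cannot continue indefinitely. A minimal sketch of this arithmetic in Python:

```python
# Rough illustration of Price's argument: exponential growth in the number of scientists
# (doubling time ~15 years) outpaces growth of the total population (doubling time ~50 years).
T_SCIENTISTS, T_POPULATION = 15.0, 50.0  # approximate doubling times, in years

def growth_factor(years, doubling_time):
    """Factor by which an exponentially growing quantity increases over `years`."""
    return 2 ** (years / doubling_time)

for years in (15, 50, 100):
    share_increase = growth_factor(years, T_SCIENTISTS) / growth_factor(years, T_POPULATION)
    print(f"after {years:3d} years, the scientists' share of the population grows ~{share_increase:.1f}-fold")

# After a century the share has grown ~25-fold, so exponential growth in science
# must eventually give way to something like logistic (saturating) growth.
```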

So maybe the bad news circulating in labs, coffee rooms and over the internet is not a short-term trend based on the current economic downturn, but instead the product of a long-term trend in the history of science? Perhaps the crunch that we are currently experiencing in academic research is the by-product of the fact that we are in Price's transition from exponential to logistic growth in science? If so, the pressures we are experiencing now may simply reflect the fact that the current rate of production of scientists is no longer matched to the long-term demand for scientists in society.

Whether or not this model of growth in science is true is clearly debatable (please do so below!). But if we are in the midst of making the transition from exponential to logistic growth in science, then there are a number of important implications that I feel scientists at all stages of their careers should be aware of:

1) For PhD students and post-docs: you have every right to feel that the opportunities in science may not be there for you as they were for your supervisors and professors. This message sucks, I know, but one important take-home point is that it may not have anything to do with your abilities; it may just have to do with when you came along in history. I am not saying that there will be no opportunities in the future, just fewer as a proportion of the total number of jobs in society relative to current levels. I'd argue that this is a cautiously optimistic view, since anticipating the long-term trends will help you develop more realistic and strategic approaches to making career choices.

2) For early-stage academics: your career trajectory is going to be more limited than you anticipated going into this gig. Sorry mate, but your lab is probably not going to be as big as you might think it should be, you will probably get fewer grants, and you will have more competition for resources than you witnessed in your PhD or post-doc supervisor's lab. Get used to it. If you think you have it hard, see point 1). You are lucky to have a job in science. Also bear in mind that the people judging your career progression may hold expectations that are no longer relevant, and as a result you may have more conflict with senior members of staff during the earlier phases of your career than you expect. Most importantly, if you find that this new reality is true for you, then do your best to adjust your expectations for PhD students and post-docs as well.

3) For established academics: you came up during the halcyon days of growth in science, so bear in mind that you had it easy relative to those trying to make it today. So when you set your expectations for your students or junior colleagues in terms of performance, recruitment or tenure, be sure to take on board that they have it much harder now than you did at the corresponding point in your career [see points 1) and 2)]. A corollary of this point is that anyone actually succeeding in science now and in the future is (on average) probably better trained and works harder than you did at the corresponding point in your career, so on the whole you are probably dealing with someone who is more qualified for their job than you would be. So don't judge your junior colleagues by out-of-date standards (that you might not be able to meet yourself in the current climate) or promote values from a bygone era of incessant growth. Instead, adjust your views of success for the 21st century and seek to promote a sustainable model of scientific career development that will fuel innovation for the next hundred years.

References

de Solla Price D (1963). Little Science, Big Science. New York: Columbia University Press.

Kealey T (2000). More is less: economists and governments lag decades behind Derek Price's thinking. Nature, 405(6784). PMID: 10830939

Sauermann H, & Roach M (2012). Science PhD career preferences: levels, changes, and advisor encouragement. PLoS ONE, 7(5). PMID: 22567149


Top N Reasons To Do A Ph.D. or Post-Doc in Bioinformatics/Computational Biology

For the last few years I’ve given a talk to incoming Ph.D. students in Molecular Biology on why they should consider doing Computational Biology research. I’m fairly passionate about making this pitch, since I strongly believe all 21st century Biologists should have a greater (or lesser) degree of computational training, and that the best time to gain that training is during a Ph.D. or a Post-Doc.

I’ve decided to post an expanded version of the reasons I give for why Biology trainees should gain computational skills in hopes of encouraging a wider audience to consider a research path in Computational Biology. For simplicity, I define the field of Computational Biology to include Bioinformatics as well, although there are important distinctions between these two disciplines. Also, I note that this list is geared towards convincing students with a background in Molecular Biology to consider moving into Computational Biology, but core aspects and variants of the arguments here should apply to people with backgrounds in other disciplines (e.g. Ecology, Neuroscience) as well. Here we go…

0. Computing is the key skill set for 21st century biology: As time progresses, Biology is becoming a more quantitative science. Over the last three centuries, biology has transformed from an observational science into an experimental science into a data science. As the low-hanging fruit gets picked, fundamental discoveries are getting harder to make using observation and experiment alone. In the future, new discoveries will require leveraging big datasets and using advanced analytical methods. Big data and complex models require computational skills. Full stop. There is no way to escape this reality.

But if you don't take my word for it, listen to Nobel-prize-winning pioneer of molecular biology Walter Gilbert, who made this same argument about the future of biology over 20 years ago:

To use this flood of [sequence] knowledge, which will pour across the computer networks of the world, biologists not only must become computer literate, but also change their approach to the problem of understanding life.

Or listen to Nobel-prize winning pioneer of molecular biology Sydney Brenner, who has been banging on about this issue for years:

I spent many hours persuading people that computing was not only going to be the essential tool for biological research but would also provide models for analyzing complexity…The development of sequencing techniques and their widespread application has generated enormous databases of information, and the need for computers is no longer questioned

1. Computational skills are highly transferable: Let’s face it, not everyone doing a Ph.D. or Post-Doc in Biology is going to go on to a career in academic research. The Washington Post recently reported that “only 14 percent of those with a Ph.D. in biology and the life sciences now land a coveted academic position within five years”. So if there is a high probability that your Ph.D. or Post-Doc training will need to be used outside of academic research, why not acquire the most broadly applicable skill set that you can? Experimental skills transfer only to laboratory jobs in the bioscience and medical job markets. Computational skills transfer across this sector, plus a much wider job market outside of the (bio)sciences. Increasing your computational chops won’t just give you a better chance of landing a job; it will have added benefits in your own life as well, since you will have a deeper appreciation for how computers work and greater mastery when you interact with them in your daily life.

2. Computing will help improve your core scientific skills: Biology is an inherently messy subject. While some Biologists are rigorously trained to cope with this messiness through good experimental design and statistical analysis (here’s looking at you, my Ecologist sisters and brothers), the sad truth is that many (most?) Biologists have bad habits when it comes to data collection and analysis. Computing forces you to confront and tame the very human tendency to do science in ad hoc ways, and therefore it naturally develops core scientific skills: logically planning experiments, collecting data consistently, developing reproducible methodology, and analysing your data with proper statistics. So even if you can’t be convinced to abandon the bench or field forever, computational training will develop scientific best practice that crosses over into and enhances your experimental skill set.
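To make the contrast with ad hoc, point-and-click analysis concrete, here is a minimal sketch in Python of what a scripted, reproducible analysis looks like. The input file and column names are hypothetical placeholders, so swap in your own; the point is the habit, not the specifics.

    # A minimal sketch of a reproducible analysis: the whole workflow lives in a
    # script, so anyone (including future you) can re-run it from the raw data.
    # 'expression.csv' and its column names are hypothetical placeholders.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("expression.csv")   # raw data stays untouched on disk

    control = df.loc[df["condition"] == "control", "expression"]
    treated = df.loc[df["condition"] == "treated", "expression"]

    # Welch's t-test (does not assume equal variances between groups)
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
    print(f"control n={len(control)}, treated n={len(treated)}, "
          f"t={t_stat:.3f}, p={p_value:.3g}")

Nothing fancy, but every step from raw file to p-value is written down and repeatable, which is exactly the discipline that carries back over to the bench.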

3. You should use your Ph.D./Post-Doc to develop new skills: Most Biologists come into their Ph.D. with some experimental training from high school and undergraduate studies. OK, so maybe this training isn’t cutting edge and you haven’t done advanced research to really hone your experimental skills, but nevertheless you do have some training under your belt. In contrast, the vast majority of Biology Ph.D. students have no training in scientific computing beyond using Excel or a GUI-based statistics package. So use your Ph.D. or Post-Doc time for what it should be: training in something new, not just further developing a skill set you already have.

My view is that the best time to train in Computational Biology is during a Ph.D., and the last chance to do this is likely to be as a Post-Doc. This is because during your Ph.D. you have time, secure funding and a departmental structure to protect you that you will never have again in your career. Gaining computational skills as a Post-Doc is also a great option, but shorter contracts, greater PI dependency, and higher expectations to publish mean that you typically don’t have as much time to re-train as you would during a Ph.D. Good luck finding the time to re-tool as a PI.

4. You will develop a more distinctive skill set in Biology: As noted above, the vast majority of Biologists have experimental training, but very few have advanced computational training. While this is (thankfully!) changing, you will still be at a competitive advantage for at least a decade in terms of getting results in post-genomic Biology if you can code. And because you will be able to get results that many others cannot, and because you will have skills that set you apart from the herd, you will be more competitive on the job market.

5. You will publish more papers: While it may not always feel like it, a Ph.D. or Post-Doc goes by quickly. Therefore, you don’t have a lot of time to waste on experiments that fail if you want to stay in the game. Don’t get me wrong, Computational Biology will provide you with more than your fair share of failed experiments, but crucially they will fail in hours or days instead of weeks or months, allowing you to move on to something that works more quickly. As a result, you are very likely to publish more papers per unit time in Computational Biology. Whether or not you believe the old chestnut that experimental papers are somehow “harder” and therefore worth more (I don’t), it is clear that publication remains the hard currency of science. Moreover, the adage that search committees “know how to count even if they can’t read” is as true as ever. More seriously, what employers and funding agencies want to see is junior researchers who have good ideas and can take them to completion. Publication is the proof that you can finish projects. Computational Biology will allow you to demonstrate that you are a finisher, and that you have what it takes to succeed in science, a little bit faster than the next person.

6. You will have more flexibility in your research: I would say one of the greatest things about being a Computational Biologist is that you are not as constrained in your research as you are in Experimental Biology. Sure, you can only work on projects that are amenable to computational analysis, but this scope is vast: from Computational Neuroscience to Theoretical Ecology and anything and everything in between. You can also move flexibly from topic to topic more easily than you can if your skill set is tied to specific experimental techniques. This flexibility in scope allows you to satisfy your intellectual curiosity or chase the latest trend as you wish. Most importantly for trainees, the flexibility (and low cost, see below) afforded by Computational Biology research allows you to make the case to your PI to develop your own research programme earlier in your career. This is crucial, since the more experience you have designing independent projects early in your career, the more likely you will be to succeed if/when you make it to the big time.

7. You will have more flexibility in working practices: ’Nuff said.

Seriously though, Computational Biology has many pluses when it comes to balancing work and life while still maintaining a high level of productivity. Unlike being chained to the bench, you can do Computational Biology from pretty much anywhere, and telecommuting/working from home are standard practices in the field. Over the longer term, this flexibility helps you accommodate career breaks, manage the tough times life will throw at you, and makes big life decisions like starting a family easier, since you can integrate coding and submitting jobs to the cluster into your life much better than racing back to the lab to flip stocks or harvest cells. Let me say it loud and clear right here: if you want to have a career in academic Biological research and also have a family, choosing to do a Ph.D. or Post-Doc in Computational Biology is more likely to get you to this goal than being stuck in the lab. This is not just true for women, as I and others can attest.

8. Computational research is cost-effective: With the wealth of data now publicly available, Computational Biology research is cheaper than most experimental work requiring a large consumables budget. This is important for a number of reasons. Primarily, work in Computational Biology is less dependent on grant funding, and therefore you don’t have to be a slave to trends or waste inordinate amounts of time chasing grants: you can actually just get on with the job of doing the science you want to do. This is especially important in tough economic times like the present. As mentioned above, the reduced cost of Computational Biology research also allows trainees to design their own research at an earlier career stage, since you will not be as reliant on a PI to authorize expenditure for your project. Cost-efficiency is also very important when you are starting your group, and for maintaining continuity of productivity when riding out troughs in funding or group size. Finally, the cost-efficiency of Computational Biology allows researchers in developing scientific economies to compete on an equal footing with researchers in rich countries. In my opinion, trainees from BRICS nations and other developing economies (sorry to use this somewhat judgemental term) should seriously consider Computational Biology as a way to get to the top of the class globally without being limited by the need for big budgets.
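As an illustration of how low the barrier to entry is, here is a minimal sketch that pulls a single GenBank record from NCBI using Biopython’s Entrez interface. The accession number and email address below are illustrative placeholders, not a recommendation of any particular dataset; the same few lines scale up to assembling entire analyses from public repositories.

    # A minimal sketch of working with free public data: fetch one GenBank
    # record from NCBI over the network using Biopython's Entrez utilities.
    # The accession and email address are illustrative placeholders.
    from Bio import Entrez, SeqIO

    Entrez.email = "you@example.org"   # NCBI asks you to identify yourself

    handle = Entrez.efetch(db="nucleotide", id="NM_001301717",
                           rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()

    print(record.id, len(record.seq), "bp:", record.description)

Total cost of the experiment: a laptop and an internet connection.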

9. A successful scientist ends up in an office: This is the kicker. If you succeed and land that “coveted” PI position, you will ultimately end up stuck in an office. True, some brave souls still find time to make it into the lab to do experiments, but they are a rare breed. The truth is that the native habitat of an academic researcher is sitting in their office in front of their computer. You can’t do a lick of wet-lab or field work from the office, but you can still do Computational Biology research from behind a desk! As noted by Webb Miller, one of the most highly cited bioinformaticians ever, continuing to do your own research is also one of the best ways to stay motivated about your work over the long haul of a career. Remember that the long-term goal is to be a “Principal Investigator”, not an “In Principle Investigator”, so if you’ve really wanted to do research since you were young, ask yourself: why train in skills you will ultimately never use for the majority of your career, while somebody else in your lab gets to have all the fun making the discoveries?

[10. You will understand why lists should start with the number zero.]
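(And for the uninitiated, the joke in two lines of Python, with a made-up list for illustration: sequence indices start at zero, so the first reason on this list really is reason number 0.)

    # In Python (and most programming languages) the first element has index 0.
    reasons = ["computing is the key skill set", "transferable skills", "core scientific skills"]
    print(reasons[0])   # prints 'computing is the key skill set'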

A major reason I have for posting this list is to start more discussion about the benefits of doing research in Computational Biology. I have deliberately made this a top N list (not a top 10 list) so that good ideas can be added to the above. I’ll update this post with good suggestions from the comments, and give full credit to the originator.