Incentivising open data & reproducible research through pre-publication private access to NGS data at EBI

Yesterday Ewan Birney posted a series of tweets expressing surprise that more people don’t take advantage of ENA’s programmatic access to submit and store next-generation sequencing (NGS) data at EBI, which I tried to respond to in broken Twitter English. This post attempts to clarify how I think ENA’s system could be improved in ways that would benefit both data archiving and reproducible research, and possibly increase uptake and sustainability of the service.

I’ve been a heavy consumer of NGS data from EBI for a couple of years, mainly thanks to their plain-vanilla fastq.gz downloads and clean REST interface for extracting NGS metadata. But I’ve only just recently gone through the process of submitting NGS data to ENA myself, first using their web portal and more recently taking advantage of REST-based programmatic access. Aside from the issue of how best to transfer many big files to EBI in an automatic way (which I’ve blogged about here), I’ve been quite impressed by how well-documented and efficient ENA’s NGS submission process is. For those who’ve had bad experiences submitting to SRA, I agree with Ewan that ENA provides a great service, and I’d suggest giving EBI a try.
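To give a flavour of the consumer side, here is a minimal sketch of pulling run-level metadata for a public study via ENA’s REST interface. The filereport endpoint and field names are the ones I use; check the ENA documentation for the authoritative list, and treat the example accession as a placeholder:

```python
# Minimal sketch: fetch run-level metadata for a public ENA study as
# tab-separated text and parse it into dicts. Endpoint and field names
# are as I currently use them; verify against the ENA docs.
import csv
import io
import urllib.request

ENA_FILEREPORT = (
    "http://www.ebi.ac.uk/ena/data/warehouse/filereport"
    "?accession={acc}&result=read_run"
    "&fields=run_accession,sample_accession,fastq_ftp"
)

def run_metadata(study_accession):
    """Return one dict per run for a public ENA study."""
    url = ENA_FILEREPORT.format(acc=study_accession)
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

# Placeholder accession; substitute a real study of interest.
# for run in run_metadata("ERP000001"):
#     print(run["run_accession"], run["fastq_ftp"])
```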

In brief, the current ENA submission process entails:

  1. transfer of the user’s NGS data to EBI’s “dropbox”, which is basically a private storage area on EBI’s servers that requires user/password authentication (done by user);
  2. creation and submission of metadata files with information about runs and samples (done by user; see the sketch after this list);
  3. validation of data/metadata and creation of accession numbers for the projects/experiments/samples/runs (done by EBI);
  4. conversion of the submitted NGS data to an EBI-formatted version, giving new IDs to each read and connecting appropriate metadata to each NGS data file (done by EBI);
  5. public release of the accession-number-based annotated data (done by EBI on the user’s release date or after publication).
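As a rough illustration of step 2, the metadata XML files can be POSTed to ENA’s drop-box programmatically. A minimal sketch in Python, where the drop-box URL, the multipart field names and the use of HTTP basic authentication are all assumptions based on my reading of the submission docs, so verify them against the current documentation before use:

```python
# Hedged sketch of programmatic metadata submission (step 2 above).
# The drop-box URL, field names (SUBMISSION, RUN) and use of HTTP basic
# auth are assumptions based on my reading of the ENA docs; check the
# current documentation before relying on any of them.
import requests  # pip install requests

SUBMIT_URL = "https://www.ebi.ac.uk/ena/submit/drop-box/submit/"

def submit_metadata(user, password, submission_xml_path, run_xml_path):
    """POST submission and run XML; returns ENA's receipt XML on success."""
    with open(submission_xml_path, "rb") as sub, open(run_xml_path, "rb") as run:
        resp = requests.post(
            SUBMIT_URL,
            files={"SUBMISSION": sub, "RUN": run},
            auth=(user, password),
        )
    resp.raise_for_status()
    return resp.text  # receipt XML containing the assigned accessions
```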

Where I see the biggest room for improvement is in the “hupped” phase, when data has been submitted but remains private. During this phase, I can store data at EBI privately for up to two years, and thus keep a remote back-up of my data for free, which is great, but only in its original submitted format. I can’t, however, access the exact version of my data that will ultimately become public, i.e. via the REST interface, using what will be the published accession numbers, on data with converted read IDs. For these reasons, I can’t write pipelines that use the exact data that will be referenced in a paper, and thus I cannot fully verify that the results I publish can be reproduced by someone else. Additionally, I can’t “proof” what my submission looks like, and thus have to wait until the submission is live to make any corrections if my data/metadata haven’t been converted as intended. As a workaround, I’ve been releasing data pre-publication, doing data checks and programming around the live data to ensure that my pipelines and results are reproducible. I suspect not all labs would be comfortable doing this, mainly for fear of getting scooped with their own data.

In experiencing ENA’s data submission system from the twin viewpoints of a data producer and consumer, I’ve had a few thoughts about how to improve the system that could also address the issue of wider community uptake. The first change I would suggest as a simple improvement to EBI’s current service would be to allow REST/browser access to a private, live version of formatted NGS data/metadata during the “hupped” phase, with simple HTTP-based password authentication (a hypothetical sketch follows the list below). This would allow users to submit and store their data privately, but also to have access to the “final” product prior to release. This small change could have many benefits, including:

  • incentivising submission of NGS data early in the life-cycle of a project rather than as an afterthought during publication,
  • reducing the risk of local data loss or failure to submit NGS data at the time of publication,
  • allowing distributed project partners to access big data files from a single, high-bandwidth, secure location,
  • allowing quality checks on final version of data/metadata prior to publication/data release, and
  • allowing analysis pipelines to use the final archived version of data/metadata, ensuring complete reproducibility and unified integration with other public datasets.
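To make the first suggestion concrete, here is what such private access might look like from the user’s side. This is purely hypothetical: the endpoint below does not exist, and the URL and authentication scheme are invented solely to illustrate the proposal:

```python
# Purely hypothetical sketch of the proposed service: the same filereport-
# style query as for public data, but against an invented private endpoint
# protected by the submitter's existing drop-box credentials.
import requests  # pip install requests

PRIVATE_FILEREPORT = (
    "https://www.ebi.ac.uk/ena/private/filereport"  # invented URL
    "?accession={acc}&result=read_run&fields=run_accession,fastq_ftp"
)

def private_run_metadata(study_accession, user, password):
    """Fetch to-be-published metadata for a hupped study (hypothetical)."""
    url = PRIVATE_FILEREPORT.format(acc=study_accession)
    resp = requests.get(url, auth=(user, password))  # simple HTTP auth
    resp.raise_for_status()
    return resp.text  # same tab-separated format as the public report
```

One could imagine the same scheme extended to the fastq.gz files themselves, so that entire analysis pipelines could run against the pre-release archive.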

A second change, which I suspect is more difficult to implement, would be to allow users to pay to store their data privately for longer than the fixed free period. I’d say two years is around the lower limit on the time from when data comes off a sequencer to a paper being published. Thus, I suspect there are many users who are reluctant to submit and store data at ENA prior to paper submission, since their data might be made public before they are ready to share. But if users could pay a modest monthly/quarterly fee to keep their data private past the free period up until publication, this might encourage them to deposit early and gain the benefits of storing/checking/using the live data, without fear that their data will be released earlier than they would like. This change could also create a new, low-risk funding stream for EBI, since they would only be charging for additional private-access time for data that is already on disk.

The extended pay-for-privacy model works well for both the user and the community, and could ultimately encourage more early open data release. Paying users would benefit from replicated, offsite storage in publication-ready formats without fear of getting scooped, which should appeal to the many users currently struggling with local NGS data storage. Reciprocally, the community benefits because contributors who pay for extended private storage end up supporting common infrastructure disproportionately more than those who release data publicly early. And since it becomes increasingly costly to keep your data private, there is ultimately an incentive to make your data public. This scheme would especially benefit preservation of the large amounts of usable data that currently go stale because of delays or failures to write up, and thus never get submitted to ENA. And of course, once published, private data would be made openly available immediately, all in a well-formatted and curated manner that the community can benefit from. What’s not to like?

Thoughts on if, or how, these half-baked ideas could be turned into reality are much appreciated in the comments below.

Simplifying Access to Paywalled Literature with Mobile Vouchers

Increasingly I read new scientific papers on a mobile device, often at home in the evening when I’m not on my university’s network. Most of the articles I read come from scientists on Twitter, Twitterbots or RSS feeds, which I try to read directly from my Twitter or RSS clients (Tweetbot and Feedly for iOS, respectively). Virtually every day, I hit paywalls trying to read non-open access papers from these sources, which aggravates me, wastes my time, and requires a variety of publisher- and journal-dependent workarounds to (legally) access the papers.

For publishers that expose an obvious “Institutional login” option, I will typically try to log in using the UK Federation Shibboleth authentication system, which uses my university credentials. But Tweetbot and Feedly don’t store my Shibboleth user/pass, so for each article I either have to enter my user/pass manually or open the page in Safari, where my Shibboleth user/pass are stored. This app switch breaks my flow and leads to tab proliferation, neither of which is optimal. Some journals that use an institutional login temporarily store my details for around a week so I don’t have to do this every time I read a paper, but I still find myself entering my details for the same journals over and over.

For journals that don’t have an institutional login option, or that hide this option from plain view, I tend to switch from Twitter/RSS to the Settings app on my iPad in order to log in to my university VPN. The VPN login on my iPad similarly does not store my password, requiring me to type in my university password over and over. This wouldn’t be such a big deal, but my university’s requirement of including one uppercase Egyptian hieroglyph and one lowercase Celtic rune makes entering my password with the iOS keyboard a hassle.

In going through this frustrating routine yet again today trying to access an article in Genetics, I stumbled on a nice feature that I hadn’t seen before called “Mobile Vouchers” that allows me to avoid this rigmarole in the future. As explained on the Genetics Mobile Voucher FAQ:

A voucher is a code that will tie your mobile device to your institution’s subscriptions. This voucher will grant you access to protected content while not on your institution’s network. Each mobile device must be vouched for individually and vouchers are only valid for the publisher for which it is issued.

Obtaining a voucher is super easy. If you are not on your university network, you first need to log in to your VPN. Once on your university network, just visit http://www.genetics.org/voucher/get, enter your name/email address and submit. This will issue a voucher that you can use immediately to authenticate your device (it will also email you this information). Voilà, no paywalls for Genetics on your iPad for the next six months or so. In addition to decreasing frustration and increasing flow for scientists, I can see this technology being really useful for PhD students, postdocs and visiting scientists who want to retain access to the literature for a few months after the end of their positions.

I was surprised I hadn’t seen this before, since it eliminates one of my chronic annoyances as a consumer of the digital scientific literature. Maybe others would disagree, but I would say that publishers haven’t done a very good job of advertising this very useful feature. Googling around, I didn’t find much on mobile vouchers other than a SlideShare presentation from HighWire Press from 2011, which suggests the technology has been around for some time.


I also couldn’t find much information on which journals offer this service, but a few Google searches led me to the following list of publishers/journals that offer mobile vouchers. It appears that most of these journals use HighWire Press to serve their content, and that vouchers can operate at the publisher (e.g. Oxford University Press) or journal (e.g. Genetics, PNAS) scale. The OUP voucher is particularly useful, since it covers Molecular Biology and Evolution and Bioinformatics, which (together with Genetics) are the journals where I hit paywalls most frequently. Since these vouchers do expire eventually, I thought it would be good to bookmark these links for future use and to highlight this very useful tech tip. Links to other publishers and any other information on mobile vouchers would be most welcome in the comments.

Oxford University Press
http://services.oxfordjournals.org/site/subscriptions/mobile-voucher-faq.xhtml

Royal Society
http://admincenter.royalsocietypublishing.org/cgi/voucher-use

Rockefeller Press
http://www.rupress.org/site/subscriptions/mobile-voucher-faq.xhtml

Lyell
http://www.lyellcollection.org/site/subscriptions/mobile-voucher-faq.xhtml

Sage
http://online.sagepub.com/site/subscriptions/mobile-voucher-faq.xhtml

BMJ
http://journals.bmj.com/site/subscriptions/mobile-voucher-faq.xhtml

AACR
http://www.aacrjournals.org/site/Access/mobile_vouchers.xhtml

Genetics
http://www.genetics.org/site/subscriptions/mobile-voucher-faq.xhtml

PNAS
http://www.pnas.org/site/subscriptions/mobile-voucher-faq.xhtml

JBC
http://www.jbc.org/site/subscriptions/mobile-voucher-faq.xhtml

Endocrine
http://www.eje-online.org/site/subscriptions/mobile-voucher-faq.xhtml

J. Neuroscience
http://www.jneurosci.org/site/subscriptions/mobile-voucher-faq.xhtml

GeoScienceWorld
http://www.geoscienceworld.org/site/subscriptions/mobile-voucher-faq.xhtml

Economic Geology
http://www.segweb.org/SEG/Publications/SEG/_Publications/Mobile_Vouchers.aspx

Launch of the PLOS Text Mining Collection

Just a quick post to announce that the PLOS Text Mining Collection is now live!

This PLOS Collection arose out of a twitter conversation with Theo Bloom last year, and has come together through the hard work of the authors of the papers in the Collection, the PLOS Collections team (in particular Sam Moore and Jennifer Horsely), and my co-organizers Larry Hunter and Andrey Rzhetsky. Many thanks to all for seeing this effort to completion.

Because of the large body of work in the area of text mining published in PLOS, we struggled with how best to present all these papers in the collection without diluting the experience for the reader. In the end, we decided to highlight only new work from the last two years and major reviews/tutorials at the time of launch. However, as this is a living collection, new articles will be included in the future, and the aim is to include previously published work as well. We hope to see many more papers in the area of text mining published in the PLOS family of journals in the future.

An overview of the PLOS Text Mining Collection is below (cross-posted at the PLOS EveryONE blog), and a commentary on the Collection is available at the official PLOS Blog, entitled “A mine of information – the PLOS Text Mining Collection”.

Background to the PLOS Text Mining Collection

Text Mining is an interdisciplinary field combining techniques from linguistics, computer science and statistics to build tools that can efficiently retrieve and extract information from digital text. Over the last few decades, there has been increasing interest in text mining research because of the potential commercial and academic benefits this technology might enable. However, as with the promises of many new technologies, the benefits of text mining are still not clear to most academic researchers.

This situation is now poised to change for several reasons. First, the rate of growth of the scientific literature has now outstripped the ability of individuals to keep pace with new publications, even in a restricted field of study. Second, text-mining tools have steadily increased in accuracy and sophistication to the point where they are now suitable for widespread application. Finally, the rapid increase in availability of digital text in an Open Access format now permits text-mining tools to be applied more freely than ever before.

To acknowledge these changes and the growing body of work in the area of text mining research, today PLOS launches the Text Mining Collection, a compendium of major reviews and recent highlights published in the PLOS family of journals on the topic of text mining. As one of the major publishers of the Open Access scientific literature, it is perhaps no coincidence that research in text mining is flourishing in PLOS journals. As noted above, the widespread application and societal benefits of text mining are most easily achieved under an Open Access model of publishing, where the barriers to obtaining published articles are minimized and the ability to remix and redistribute data extracted from text is explicitly permitted. Furthermore, PLOS is one of the few publishers actively promoting text mining research by providing an open Application Programming Interface (API) to mine their journal content.
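For the curious, here is a minimal sketch of what querying that API can look like. The endpoint and parameters follow the Solr-based search interface documented at api.plos.org as I understand it; an API key may be required, and the field names should be checked against the current documentation:

```python
# Minimal sketch: query the PLOS search API (Solr-based) for articles.
# Parameter and field names are my understanding of the documented
# interface; an API key may also be required for heavy use.
import json
import urllib.parse
import urllib.request

def plos_search(query, rows=10):
    """Return result documents for a Solr-style query against PLOS."""
    params = urllib.parse.urlencode({"q": query, "rows": rows, "wt": "json"})
    url = "http://api.plos.org/search?" + params
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["response"]["docs"]

# e.g. print titles of articles mentioning text mining:
# for doc in plos_search('"text mining"'):
#     print(doc.get("title_display"), doc.get("id"))
```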

Text Mining in PLOS

Since virtually the beginning of its history [1], PLOS has actively promoted the field of text mining by publishing reviews, opinions, tutorials and dozens of primary research articles in this area in PLOS Biology, PLOS Computational Biology and, increasingly, PLOS ONE. Because of the large number of text mining papers in PLOS journals, we are only able to highlight a subset of these works in the first instance of the PLOS Text Mining Collection. These include major reviews and tutorials published over the last decade [1][2][3][4][5][6], plus a selection of research papers from the last two years [7][8][9][10][11][12][13][14][15][16][17][18][19] and three new papers arising from the call for papers for this collection [20][21][22].
The research papers included in the collection at launch provide important overviews of the field and reflect many exciting contemporary areas of research in text mining, such as:

  • methods to extract textual information from figures [7];
  • methods to cluster [8] and navigate [15] the burgeoning biomedical literature;
  • integration of text-mining tools into bioinformatics workflow systems [9];
  • use of text-mined data in the construction of biological networks [10];
  • application of text-mining tools to non-traditional textual sources such as electronic patient records [11] and social media [12];
  • generating links between the biomedical literature and genomic databases [13];
  • application of text-mining approaches in new areas such as the Environmental Sciences [14] and Humanities [16][17];
  • named entity recognition [18];
  • assisting the development of ontologies [19];
  • extraction of biomolecular interactions and events [20][21]; and
  • assisting database curation [22].

Looking Forward

As this is a living collection, it is worth discussing two issues we hope to see addressed in articles that are added to the PLOS text mining collection in the future: scaling up and opening up. While application of text-mining tools to abstracts of all biomedical papers in the MEDLINE database is increasingly common, there have been remarkably few efforts that have applied text mining to the entirety of the full-text articles in a given domain, even in the biomedical sciences [4][23]. Therefore, we hope to see more text-mining applications scaled up to use the full text of all Open Access articles. Scaling up will maximize the utility of text-mining technologies and their uptake by end users, and will also demonstrate that demand for access to full-text articles exists in the text-mining and wider academic communities.

Likewise, we hope to see more text-mining software systems made freely or openly available in the future. As an example of the state of affairs in the field, only 25% of the research articles highlighted in the PLOS text mining collection at launch provide source code or executable software of any kind [13][16][19][21]. The lack of availability of software or source code accompanying published research articles is, of course, not unique to the field of text mining. It is a general problem limiting progress and reproducibility in many fields of science, which authors, reviewers and editors have a duty to address. Making release of open source software the rule, rather than the exception, should further catalyze advances in text mining, as it has in other fields of computational research that have made extremely rapid progress in the last decades (such as genome bioinformatics).

By opening up the code base in text mining research, and deploying text-mining tools at scale on the rapidly growing corpus of full-text Open Access articles, we are confident this powerful technology will make good on its promise to catalyze scholarly endeavors in the digital age.

References

1. Dickman S (2003) Tough mining: the challenges of searching the scientific literature. PLoS Biol 1: e48. doi:10.1371/journal.pbio.0000048.
2. Rebholz-Schuhmann D, Kirsch H, Couto F (2005) Facts from Text—Is Text Mining Ready to Deliver? PLoS Biol 3: e65. doi:10.1371/journal.pbio.0030065.
3. Cohen B, Hunter L (2008) Getting started in text mining. PLoS Comput Biol 4: e20. doi:10.1371/journal.pcbi.0040020.
4. Bourne PE, Fink JL, Gerstein M (2008) Open access: taking full advantage of the content. PLoS Comput Biol 4: e1000037. doi:10.1371/journal.pcbi.1000037.
5. Rzhetsky A, Seringhaus M, Gerstein M (2009) Getting Started in Text Mining: Part Two. PLoS Comput Biol 5: e1000411. doi:10.1371/journal.pcbi.1000411.
6. Rodriguez-Esteban R (2009) Biomedical Text Mining and Its Applications. PLoS Comput Biol 5: e1000597. doi:10.1371/journal.pcbi.1000597.
7. Kim D, Yu H (2011) Figure text extraction in biomedical literature. PLoS ONE 6: e15338. doi:10.1371/journal.pone.0015338.
8. Boyack K, Newman D, Duhon R, Klavans R, Patek M, et al. (2011) Clustering More than Two Million Biomedical Publications: Comparing the Accuracies of Nine Text-Based Similarity Approaches. PLoS ONE 6: e18029. doi:10.1371/journal.pone.0018029.
9. Kolluru B, Hawizy L, Murray-Rust P, Tsujii J, Ananiadou S (2011) Using workflows to explore and optimise named entity recognition for chemistry. PLoS ONE 6: e20181. doi:10.1371/journal.pone.0020181.
10. Hayasaka S, Hugenschmidt C, Laurienti P (2011) A network of genes, genetic disorders, and brain areas. PLoS ONE 6: e20907. doi:10.1371/journal.pone.0020907.
11. Roque F, Jensen P, Schmock H, Dalgaard M, Andreatta M, et al. (2011) Using electronic patient records to discover disease correlations and stratify patient cohorts. PLoS Comput Biol 7: e1002141. doi:10.1371/journal.pcbi.1002141.
12. Salathé M, Khandelwal S (2011) Assessing Vaccination Sentiments with Online Social Media: Implications for Infectious Disease Dynamics and Control. PLoS Comput Biol 7: e1002199. doi:10.1371/journal.pcbi.1002199.
13. Baran J, Gerner M, Haeussler M, Nenadic G, Bergman C (2011) pubmed2ensembl: a resource for mining the biological literature on genes. PLoS ONE 6: e24716. doi:10.1371/journal.pone.0024716.
14. Fisher R, Knowlton N, Brainard R, Caley J (2011) Differences among major taxa in the extent of ecological knowledge across four major ecosystems. PLoS ONE 6: e26556. doi:10.1371/journal.pone.0026556.
15. Hossain S, Gresock J, Edmonds Y, Helm R, Potts M, et al. (2012) Connecting the dots between PubMed abstracts. PLoS ONE 7: e29509. doi:10.1371/journal.pone.0029509.
16. Ebrahimpour M, Putniņš TJ, Berryman MJ, Allison A, Ng BW-H, et al. (2013) Automated authorship attribution using advanced signal classification techniques. PLoS ONE 8: e54998. doi:10.1371/journal.pone.0054998.
17. Acerbi A, Lampos V, Garnett P, Bentley RA (2013) The Expression of Emotions in 20th Century Books. PLoS ONE 8: e59030. doi:10.1371/journal.pone.0059030.
18. Groza T, Hunter J, Zankl A (2013) Mining Skeletal Phenotype Descriptions from Scientific Literature. PLoS ONE 8: e55656. doi:10.1371/journal.pone.0055656.
19. Seltmann KC, Pénzes Z, Yoder MJ, Bertone MA, Deans AR (2013) Utilizing Descriptive Statements from the Biodiversity Heritage Library to Expand the Hymenoptera Anatomy Ontology. PLoS ONE 8: e55674. doi:10.1371/journal.pone.0055674.
20. Van Landeghem S, Bjorne J, Wei C-H, Hakala K, Pyysal S, et al. (2013) Large-Scale Event Extraction from Literature with Multi-Level Gene Normalization. PLoS ONE 8: e55814. doi:10.1371/journal.pone.0055814.
21. Liu H, Hunter L, Keselj V, Verspoor K (2013) Approximate Subgraph Matching-based Literature Mining for Biomedical Events and Relations. PLoS ONE 8: e60954. doi:10.1371/journal.pone.0060954.
22. Davis A, Weigers T, Johnson R, Lay J, Lennon-Hopkins K, et al. (2013) Text mining effectively scores and ranks the literature for improving chemical-gene-disease curation at the Comparative Toxicogenomics Database. PLoS ONE 8: e58201. doi:10.1371/journal.pone.0058201.
23. Bergman CM (2012) Why Are There So Few Efforts to Text Mine the Open Access Subset of PubMed Central? https://caseybergman.wordpress.com/2012/03/02/why-are-there-so-few-efforts-to-text-mine-the-open-access-subset-of-pubmed-central/.

Accelerating Your Science with arXiv and Google Scholar

As part of my recent conversion to using arXiv, I’ve been struck by how posting preprints to arXiv synergizes incredibly well with Google Scholar. I’ve tried to make some of these points on Twitter and elsewhere, but I thought I’d try to summarize here what I see as a very powerful approach to accelerating Open Science using arXiv and several features of the Google Scholar toolkit. Part of the motivation for writing this post is that I’ve made this same pitch to several of my colleagues and was hoping to be able to point them to a coherent version of the argument, which might be of use to others as well.

A couple of preliminaries. First, the main point of this post is not to try to convince people to post preprints to arXiv. The benefits of preprinting on arXiv are manifold (early availability of results, allowing others to build on your work sooner, prepublication feedback on your manuscript, feedback from many eyes rather than just 2-3 reviewers, availability of the manuscript in an open access format, a mechanism to establish scientific priority, the opportunity to publicize your work on blogs/Twitter, and a longer period over which citations can accrue) and have been ably summarized elsewhere. This post is specifically about how one can get the most out of preprinting on arXiv by using Google Scholar tools.

Second, it is important to make sure people are aware of two relatively recent developments in the Google Scholar toolkit beyond the basic Google Scholar search functionality: namely, Google Scholar Citations and Google Scholar Updates. Google Scholar Citations allows users to build a personal profile of their publications, which draws in citation data from the Google Scholar database, allowing you to “check who is citing your publications, graph citations over time, and compute several citation metrics”, which also will “appear in Google Scholar results when people search for your name.” While Google Scholar Citations has been around for a little over a year now, I often find that many scientists either are not aware that it exists or have not activated their profile yet, even though it is scarily easy to set up. Another, more recent feature available to those with active Google Scholar Citations profiles is Google Scholar Updates, a tool that can analyze “your articles (as identified in your Scholar profile), scan the entire web looking for new articles relevant to your research, and then show you the most relevant articles when you visit Scholar”. As others have commented, Google Scholar Updates provides a big step forward in sifting through the scientific literature, since it delivers a tailored set of articles to your browser based on your previous publication record.

With these preliminaries in mind, what I want to discuss now is how Google Scholar plays so well with preprints on arXiv to accelerate science done in the open. By posting preprints to arXiv and activating your Google Scholar Citations profile, you immediately gain several advantages, including the following:

  1. arXiv preprints are rapidly indexed by Google Scholar (within 1-2 days in my experience) and thus can be discovered easily by others using a standard Google Scholar search.
  2. arXiv preprints are listed in your Google Scholar profile, so when people browse your profile for your most recent papers they will find arXiv preprints at the top of the list (e.g. see Graham Coop’s Google Scholar profile here).
  3. Citations to your arXiv preprints are automatically updated in your Google Scholar profile, allowing you to see who is citing your most recent work.
  4. References included in your arXiv preprints will be indexed by Google Scholar and linked to citations in other people’s Google Scholar profiles, allowing them to find your arXiv preprint via citations to their work.
  5. Inclusion of an arXiv preprint in your Google Scholar profile allows Google Scholar Updates to provide better recommendations for what you should read, which is particularly important when you are moving into a new area of research that you have not previously published on.
  6. [Update June 14, 2013] Once Google Scholar has indexed your preprint on arXiv it will automatically generate a set of Related Articles, which you can browse to identify previously published work related to your manuscript.  This is especially useful at the preprint stage, since you can incorporate related articles you may have missed before submission or during revision.

I have probably overlooked other benefits of the synergy between these two technologies, since they are only dawning on me as I become more familiar with these symbiotic scholarly tools myself. What’s abundantly clear to me at this stage, though, is that embracing Open Science and using arXiv together with Google Scholar puts you at a fairly substantial competitive advantage in terms of your scholarship, in ways that are simply not possible under the classical approach to publishing in biology.

Suggesting Reviewers in the Era of arXiv and Twitter

Along with many others in the evolutionary genetics community, I’ve recently converted to using arXiv as a preprint server for new papers from my lab. In so doing, I’ve confronted an unexpected ethical question concerning pre-printing and the use of social media, which I was hoping to generate some discussion about as this practice becomes more common in the scientific community. The question concerns the suggestion of reviewers for a journal submission of a paper that has previously been submitted to arXiv and then subsequently discussed on social media platforms like Twitter. Specifically put, the question is: is it ethical to suggest reviewers for a journal submission based on tweets about your arXiv preprint?

To see how this ethical issue arises, I’ll first describe my current workflow for submitting to arXiv and publicizing it on Twitter. Then, I’ll propose an alternative that might be considered to be “gaming” the system, and discuss precedents in the pre-social media world that might inform the resolution of this issue.

My current workflow for submission to arXiv and announcement on Twitter is as follows:

  1. submit manuscript to a journal with suggested reviewers based on personal judgement;
  2. deposit the same version of the manuscript that was submitted to journal in arXiv;
  3. wait until the arXiv submission is live and then tweet links to the arXiv preprint (a polling sketch follows this list).
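As promised above, here is a small sketch of how step 3 can be scripted against the public arXiv API. The convention of returning an “Error” entry for identifiers the API doesn’t know is my reading of its behaviour and worth verifying; the identifier below is a placeholder:

```python
# Small sketch of step 3: poll the public arXiv API until a submission is
# announced. The "Error"-entry convention for unknown IDs is my reading of
# the API's behaviour and worth double-checking.
import time
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def is_live(arxiv_id):
    """True once the arXiv API returns a real entry for this identifier."""
    url = "http://export.arxiv.org/api/query?id_list=" + arxiv_id
    with urllib.request.urlopen(url) as resp:
        feed = ET.parse(resp)
    entries = list(feed.iter(ATOM + "entry"))
    return bool(entries) and entries[0].findtext(ATOM + "title") != "Error"

# Placeholder identifier; arXiv announces in batches, so poll gently.
# while not is_live("1234.5678"):
#     time.sleep(1800)
```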

From doing this a few times (as well as benefiting from additional Twitter exposure via Haldane’s Sieve), I’ve realized that there can often be fairly substantive feedback about an arXiv submission on Twitter, in the form of who (re)tweets links to it and what people are saying about the manuscript. It doesn’t take much thought to realize that this information could potentially be used to influence a journal submission, in the form of which reviewers to suggest or oppose, using an alternative workflow:

  1. submit manuscript to arXiv;
  2. wait until arXiv submission is live and then tweet about it;
  3. monitor and assimilate feedback from Twitter;
  4. submit manuscript to journal with suggested and opposed reviewers based on Twitter activity.

This second workflow incidentally also arises under the first workflow if your initial journal submission is rejected, since there would naturally be a time lag in which it would be difficult to fully ignore activity on Twitter about an arXiv submission.

Now, I want to be clear that I haven’t used the second workflow and don’t intend to (yet), since I have not fully decided whether this is an ethical approach to suggesting reviewers. Nevertheless, I lean towards the view that it is no more or less ethical than the current mechanisms of selecting suggested reviewers based on: (1) perceived allies/rivals with relevant expertise or (2) informal feedback on the work in question presented at meetings.

In the former case of using who you perceive to be for or against your work, you are relying on personal experience and subjective opinions about researchers in your field, both good and bad, to inform your choice of suggested or opposed reviewers. This is in some sense qualitatively no different from using information on Twitter prior to journal submission, except that it is based on a closed network using past information, rather than an open network using information specific to the piece of work in question. The latter case of suggesting reviewers based on feedback from meeting presentations is perhaps more similar to the matter at hand, and I suspect would be considered by most scientists to be a perfectly valid mechanism to suggest or oppose reviewers for a journal submission.

Now, of course I recognize that suggested reviewers are just that, and editors can use or ignore these suggestions as they wish, so this issue may in fact be moot. However, based on my experience, suggested reviewers are indeed frequently used by editors (if not, why would they be there?). Thus resolving whether smoking out opinions on Twitter is considered “fair play” is probably something the scientific community should consider more thoroughly in the near future, and I’d be happy to hear what other folks think about this in the comments below.

Announcing the PLoS Text Mining Collection

Based on a spur-of-the-moment tweet earlier this year, and a positive follow-up from Theo Bloom, I’m very happy to announce that PLoS has now put the wheels in motion to develop a Collection of articles that highlight the importance of text mining research. The Call for Papers was announced today, and I’m very excited to see this effort highlight the synergy between Open Access, Altmetrics and Text Mining research. I’m particularly keen to see someone take the reins on writing a good description of the API for PLoS (and other publishers). And a good lesson to all: be careful what you tweet!

The Call for Papers below is cross-posted at the PLoS Blog.

Call for Papers: PLoS Text Mining Collection

The Public Library of Science (PLoS) seeks submissions in the broad field of text-mining research for a collection to be launched across all of its journals in 2013. All manuscripts submitted before October 30th, 2012 will be considered for the launch of the collection. Please read the following post for further information on how to submit your article.

The scientific literature is exponentially increasing in size, with thousands of new papers published every day. Few researchers are able to keep track of all new publications, even in their own field, reducing the quality of scholarship and leading to undesirable outcomes like redundant publication. While social media and expert recommendation systems provide partial solutions to the problem of keeping up with the literature, systematically identifying relevant articles and extracting key information from them can only come through automated text-mining technologies.

Research in text mining has made incredible advances over the last decade, driven through community challenges and increasingly sophisticated computational technologies. However, the promise of text mining to accelerate and enhance research largely has not yet been fulfilled, primarily since the vast majority of the published scientific literature is not published under an Open Access model. As Open Access publishing yields an ever-growing archive of unrestricted full-text articles, text mining will play an increasingly important role in drilling down to essential research and data in scientific literature in the 21st century scholarly landscape.

As part of its commitment to realizing the maximal utility of Open Access literature, PLoS is launching a collection of articles dedicated to highlighting the importance of research in the area of text mining. The launch of this Text Mining Collection complements related PLoS Collections on Open Access and Altmetrics (forthcoming), as well as the recent release of the PLoS Application Programming Interface, which provides an open API to PLoS journal content.

As part of this Text Mining Collection, we are making a call for high quality submissions that advance the field of text-mining research, including:

  • New methods for the retrieval or extraction of published scientific facts
  • Large-scale analysis of data extracted from the scientific literature
  • New interfaces for accessing the scientific literature
  • Semantic enrichment of scientific articles
  • Linking the literature to scientific databases
  • Application of text mining to database curation
  • Approaches for integrating text mining into workflows
  • Resources (ontologies, corpora) to improve text mining research

Please note that all manuscripts submitted before October 30th, 2012 will be considered for the launch of the collection (expected early 2013); submissions after this date will still be considered for the collection, but may not appear in the collection at launch.

Submission Guidelines
If you wish to submit your research to the PLoS Text Mining Collection, please consider the following when preparing your manuscript:

  • All articles must adhere to the submission guidelines of the PLoS journal to which you submit.
  • Standard PLoS policies and relevant publication fees apply to all submissions.
  • Submission to any PLoS journal as part of the Text Mining Collection does not guarantee publication.

When you are ready to submit your manuscript to the collection, please log in to the relevant PLoS manuscript submission system and mention the Collection’s name in your cover letter. This will ensure that the staff is aware of your submission to the Collection. The submission systems can be found on the individual journal websites.

Please contact Samuel Moore (smoore@plos.org) if you would like further information about how to submit your research to the PLoS Text Mining Collection.

Organizers
Casey Bergman (University of Manchester)
Lawrence Hunter (University of Colorado-Denver)
Andrey Rzhetsky (University of Chicago)


Did Finishing the Drosophila Genome Legitimize Open Access Publishing?

I’m currently reading Glyn Moody‘s (2003) “Digital Code of Life: How Bioinformatics is Revolutionizing Science, Medicine, and Business” and greatly enjoying the writing, as well as the whirlwind summary of the history of bioinformatics and the (Human) Genome Project(s). Most of what Moody says that I am familiar with is quite accurate, and his scholarship is thorough, so I find his telling of the story compelling. One claim in the book I find new and curious is in his discussion of the sequencing of the Drosophila melanogaster genome, more precisely the “finishing” of this genome, and its impact on the legitimacy of Open Access publishing.

The sequencing of D. melanogaster was done as a collaboration between the Berkeley Drosophila Genome Project and Celera, as a test case to prove that whole-genome shotgun sequencing could be applied to large animal genomes. I won’t go into the details here, but it is widely accepted that the Adams et al. (2000) and Myers et al. (2000) papers in Science demonstrated the feasibility of whole-genome shotgun sequencing, while it was a lesser-known paper by Celniker et al. (2002) in Genome Biology that reported the “finished” D. melanogaster genome and proved the accuracy of whole-genome shotgun assembly. No controversy here.

More debatable is what Moody goes on to write about the Celniker et al. (2002) paper:

This was an important paper, then, and one that had a significance that went beyond its undoubted scientific value. For it appeared neither in Science, as the previous Drosophila papers had done, nor in Nature, the obvious alternative. Instead, it was published in Genome Biology. This describes itself as “a journal, delivered over the web.” That is, the Web is the primary medium, with the printed version offering a kind of summary of the online content in a convenient portable form. The originality of Genome Biology does not end there: all of its main research articles are available free online.

A description then follows of the history and virtues of PubMed Central and the earliest Open Access biomedical publishers BioMed Central and PLoS. Moody (emphasis mine) then returns to the issue of:

…whether a journal operating on [Open Access] principles could attract top-ranked scientists. This question was answered definitively in the affirmative with the announcement and analysis of the finished Drosophila sequence in January 2003. This key opening paper’s list of authors included not only [Craig] Venter, [Gene] Myers, and [Mark] Adams, but equally stellar representatives of the academic world of Science, such as Gerald Rubin, the boss of the fruit fly genome project, and Richard Gibbs, head of sequencing at Baylor College. Alongside this paper there were no less than nine other weighty contributions, including one on Apollo, a new tool for viewing and editing sequence annotation. For its own Drosophila extravaganza of March 2000, Science had marshalled seven papers in total. Clearly, Genome Biology had arrived, and with it a new commercial publishing model based on the latest way of showing the data.

This passage resonated with me since I was working at the BDGP at the time this special issue on the finishing of the Drosophila genome in Genome Biology was published, and was personally introduced to Open Access publishing through this event.  I recall Rubin walking the hallways of building 64 on his periodic visits promoting this idea, motivating us all to work hard to get our papers together by the end of 2002 for this unique opportunity. I also remember lugging around stacks of the printed issue at the Fly meeting in Chicago in 2003, plying unsuspecting punters with a copy of a journal that most people had never heard of, and having some of my first conversations with people on Open Access as a consequence.

What Moody doesn’t capture in this telling is the fact that Rubin’s decision to publish in Genome Biology almost surely owes itself to the influence that Mike Eisen had on Rubin and others in the genomics community in Berkeley at the time. Eisen and Rubin had recently collaborated on a paper, Eisen had made inroads in Berkeley on the Open Access issue by actively recruiting signatories for the PLoS open letter the year before, and Eisen himself had published his first Open Access paper in Genome Biology in October 2002. So the idea of publishing in Open Access journals, and in Genome Biology in particular, was clearly in the air at the time, and it may not have been as bold a step for Rubin to take as Moody implies.

Nevertheless, it is a point that may have some truth, and it is interesting to consider whether the long-standing open data philosophy of the Drosophila genetics community that led to the Genome Biology special issue was a key turning point in the widespread success of Open Access publishing over the next decade. Surely the movement would have taken off anyway at some point. But in late 2002, when the BioMed Central journals were the only place to publish gold Open Access articles, few people had tested the waters since the launch of the BMC journals in 2000. While we cannot replay the tape, Moody’s claim is plausible in my view, and it is interesting to ask whether widespread buy-in to Open Access publishing in biology might have been delayed if Rubin had not insisted that the efforts of the Berkeley Drosophila Genome Project be published under an Open Access model.

UPDATE 25 March 2012

After tweeting this post, here is what Eisen and Moody have to say:

UPDATE 19 May 2012

It appears that the publication of another part of the Drosophila (meta)genome, its Wolbachia endosymbiont, played an important role in the conversion of Jonathan Eisen to supporting Open Access. Read more here.