Archive for the 'genomics' Category

Directed Genome Sequencing: the Key to Deciphering the Fabric of Life in 1993

Seeing the #AAASmtg hashtag flowing on my twitter stream over the last few days reminded me that my former post-doc advisor Sue Celniker must be enjoying her well-deserved election to the American Association for the Advancement of Science (AAAS). Sue has made a number of major contributions to Drosophila genomics, and I personally owe her for the chance to spend my journeyman years with her and so many other talented people in the Berkeley Drosophila Genome Project. I would even go so far as to say that it was Sue’s 1995 paper with Ed Lewis on the “Complete sequence of the bithorax complex of Drosophila” that first got me interested in “genomics.” I remember being completely in awe of the GenBank accession from this paper, which was over 300,000 bp long! Man, this had to be the future. (In fact, the accession number for the BX-C region, U31961, is etched in my brain like some telephone numbers from my childhood.) By the time I arrived at BDGP in 2001, the sequencing of the BX-C was already ancient history, as was the directed sequencing strategy used for this project. These rapid changes made the discovery of a set of discarded propaganda posters collecting dust in Reed George’s office, made at the time (circa 1993) to extol the virtues of “Directed Genome Sequencing” as the key to “Deciphering the Fabric of Life”, all the more poignant. I dug up a photo I took of one of these posters today to commemorate the recognition of this pioneering effort (below). Here’s to a bygone era, and hats off to pioneers like Sue who paved the road for the rest of us in (Drosophila) genomics!

[Image: “Directed Genome Sequencing” poster, circa 1993]

On The Neutral Sequence Fallacy


Beginning in the late 1960s, Motoo Kimura overturned over a century of “pan-selectionist” thinking in evolutionary biology by proposing what has come to be called The Neutral Theory of Molecular Evolution. The Neutral Theory in its basic form states that the dynamics of the majority of changes observed at the molecular level are governed by the force of Genetic Drift, rather than Darwinian (i.e. Positive) Natural Selection. As with all paradigm shifts in Science, there was much controversy over the Neutral Theory in its early years, but the Neutral Theory has nevertheless firmly established itself as the null hypothesis for studies of evolution at the molecular level since the mid-1980s.
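To make concrete what “governed by the force of Genetic Drift” means, here is a toy Wright-Fisher simulation in Python (my own illustrative sketch, not anything from Kimura): two selectively equivalent alleles whose frequencies change from generation to generation purely by random sampling.

    import random

    def wright_fisher(pop_size=1000, p0=0.5, generations=500):
        """Track the frequency of a selectively neutral allele under pure
        genetic drift: each generation is a binomial sample of 2N gene
        copies from the previous one, with no fitness difference between
        the alleles."""
        p, trajectory = p0, [p0]
        for _ in range(generations):
            p = sum(random.random() < p for _ in range(2 * pop_size)) / (2 * pop_size)
            trajectory.append(p)
        return trajectory

    # Run long enough and the allele drifts to fixation (p = 1) or loss
    # (p = 0) without any selective advantage whatsoever.
    print(wright_fisher(pop_size=50, generations=500)[-1])

Run with a small population, and the allele typically fixes or is lost within a few hundred generations, all without selection ever entering the model.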

Despite its widespread adoption, over the last ten years or so there has been a worrying increase in abuse of terminology concerning the Neutral Theory, which I will collectively term here the “Neutral Sequence Fallacy” (inspired by T. Ryan Gregory’s Platypus Fallacy). The Neutral Sequence Fallacy arises when the distinct concepts of functional constraint and selective neutrality are conflated, leading to the mistaken description of functionally unconstrained sequences as being “Neutral”. The Fallacy, in short, is to assign the term Neutral to a particular biomolecular sequence.

The Neutral Sequence Fallacy now routinely causes problems in the fields of evolutionary and genome biology, both by generating conceptual muddles and by shifting the goalposts needed to reject the null model of sequence evolution. I have intended to write about this problem for years in order to put a halt to this growing abuse of Neutral terminology, but never found the time. Unfortunately, this issue has reared its head more strongly in the last few days, with new forms of the Neutral Sequence Fallacy arising in the context of discussions about the ENCODE project, motivating a rough version of this critique to finally see the light of day. Here I will try to sketch out the origins of the Neutral Sequence Fallacy, both in its original pre-genomic form that was debunked by Kimura while he was alive, and in its modern post-genomic form that has proliferated unchecked since the early comparative genomic era.

The Neutral Sequence Fallacy draws on several misconceptions about the Neutral Theory, and begins with the abbreviation of the theory’s name from its full form (The Neutral Mutation – Random Drift Hypothesis) to its colloquial form (The Neutral Theory). This abbreviation de-emphasizes that the concept of selective neutrality applies to mutations (i.e. variants, alleles), not biomolecular sequences (i.e. regions of the genome, proteins). Simply put, only variants of a sequence can be neutral or non-neutral, not sequences themselves.

The key misconception that permits the Neutral Sequence Fallacy to flourish is the incorrect notion that if a sequence is neutrally evolving, it implies a lack of functional constraint operating on that sequence, and vice versa. Other ways to state this misconception are: “a sequence is Neutral if it is under no selective constraint” or conversely “selective constraint rejects Neutrality”. This misconception arose originally in the 1970s, shortly after the proposal of The Neutral Theory when many researchers were first coming to terms with what the theory meant. This misconception became prevalent enough that it was the first to be addressed head-on by Kimura (1983) nearly 30 years ago in section 3.6 of his book The Neutral Theory of Molecular Evolution entitled “On some misunderstandings and criticisms” (emphasis is mine):

Since a number of criticisms and comments have been made regarding my neutral theory, often based on misunderstandings, I would like to take this opportunity to discuss some of them. The neutral theory by no means claims that the genes involved are functionless as mistakenly suggested by Zuckerkandl (1978). They may or may not be, but what the neutral theory assumes is that the mutant forms of each gene participating in molecular evolution are selectively nearly equivalent, that is, they can do the job equally well in terms of survival and reproduction of the individual. (p. 50)

As pointed out by Kimura and Ohta (1977), functional constraints are consistent with neutral substitutions within a class of mutants. For example, if a group of amino acids are constrained to be hydrophilic, there can be random changes within the codons producing such amino acids…There is, of course, negative selection against hydrophobic mutants in this region, but, as mentioned before, negative selection does not contradict the neutral theory.  (p. 53)

It is understandable how this misconception arises, because in the limit of zero functional constraint (e.g. in a non-functional pseudogene), all alleles become effectively equivalent to one another and are therefore selectively neutral. However, this does not mean that an unconstrained sequence is Neutral (unless we redefine the meaning of Neutrality, see below), because a sequence itself cannot be Neutral, only variants of a sequence can be Neutral with respect to each other.

It is crucial in this context to understand that the Neutral Theory accommodates all levels of selective constraint, and sequences under selective constraint can evolve Neutrally (see the formal statement of this in Equation 5.1 of Kimura 1983). This point is often lost on many people. Until you get this, you don’t understand the Neutral Theory. A simple example shows how this is true. Consider a single codon in a protein-coding region that codes for an amino acid with degenerate (synonymous) codons. Deletion of the third codon position would create a frameshift, and thus a third-position “silent” site is indeed functional. However, alternative codons for this amino acid are functionally equivalent and evolve (close to) neutrally. The fact that these alternative alleles evolve neutrally has to do with their equivalence of function, not the degree of their functional constraint.
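To make this concrete: if memory serves, the relationship Equation 5.1 formalizes is that the substitution rate k = f0 × vT, where vT is the total mutation rate and f0 is the fraction of mutations that are selectively neutral. Constraint lowers f0, but the substitutions that do occur are still neutral. And here is the codon example as a toy Python sketch (my own illustration, using a deliberately tiny codon table), contrasting a synonymous third-position substitution with a deletion at the very same site:

    # Toy sketch: neutrality is a property of VARIANTS, not of sequences.
    # Deliberately partial codon table; Gly is 4-fold degenerate.
    CODON_TABLE = {
        "GGA": "Gly", "GGC": "Gly", "GGG": "Gly", "GGT": "Gly",
        "TGG": "Trp", "ATG": "Met",
    }

    def translate(seq):
        """Translate in frame, reporting '???' for codons outside the table."""
        return [CODON_TABLE.get(seq[i:i + 3], "???")
                for i in range(0, len(seq) - 2, 3)]

    wild_type = "GGATGGATG"                  # Gly-Trp-Met

    # Third-position substitution in the Gly codon: a functionally
    # equivalent, hence selectively neutral, variant of the wild type.
    assert translate("GGGTGGATG") == translate(wild_type)

    # Deleting that same "silent" third position frameshifts everything
    # downstream, so the site itself is under functional constraint.
    print(translate(wild_type))       # ['Gly', 'Trp', 'Met']
    print(translate("GGTGGATG"))      # ['Gly', 'Gly'] -- a different peptide

The site is constrained (deleting it is not tolerated), yet the alternative alleles at the site are neutral with respect to each other; this is exactly the distinction the Fallacy collapses.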

~~~~

To demonstrate the Neutral Sequence Fallacy, I’d like to point out a few clear examples of this misconception in action. The majority of transgressions in this area come from the genomics community, where people may not have been formally trained in evolution, but I am sad to say that an increasing number of evolutionary biologists are also falling victim to the Neutral Sequence Fallacy these days. My reckoning is that the Neutral Sequence Fallacy gained traction again in the post-genomic era around the time of the mouse genome paper by Waterston et al. (2002). In this widely-read paper, putatively unconstrained ancestral repeats were referred to (incorrectly) as “neutrally evolving DNA” and used to estimate the fraction of the human genome under selective constraint. This analysis culminated with the following question: “How can we cleanly separate neutral and selected sequences?”. Under the Neutral Theory, this question makes no sense. First, sequences cannot be neutral; and second, the framework used to detect functional constraint by comparative genomics assumes Neutral evolution at both classes of sites (unconstrained and constrained), i.e. most changes between species are driven by Genetic Drift, not Positive Selection. The proper formulation of this question should have been: “How can we cleanly separate unconstrained and constrained sequences?”.
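For intuition about how such constraint estimates work, here is a minimal sketch of the comparative logic (my own toy version, not the actual method of Waterston et al.): calibrate the substitution rate on putatively unconstrained ancestral repeats, then flag genomic windows showing a significant deficit of substitutions relative to that calibration. Note that the test assumes changes at both classes of sites are fixed mostly by drift; what differs between them is constraint, not neutrality.

    from scipy.stats import binom

    def constrained_windows(windows, unconstrained_rate, alpha=0.01):
        """Flag windows with significantly fewer substitutions than expected
        under the rate calibrated on putatively unconstrained ancestral
        repeats. `windows` is a list of
        (substitutions_observed, sites_aligned) tuples."""
        flagged = []
        for subs, sites in windows:
            # P(this few substitutions or fewer | unconstrained rate)
            if binom.cdf(subs, sites, unconstrained_rate) < alpha:
                flagged.append((subs, sites))
        return flagged

    # Toy usage: 2 substitutions at 100 sites stands out against an
    # ancestral-repeat rate of 0.15 substitutions per site; 14/100 does not.
    print(constrained_windows([(2, 100), (14, 100)], unconstrained_rate=0.15))

Rejecting this null for a window says something about constraint; it says nothing, by itself, about whether the variants segregating in that window are neutral.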

Here is another clear example of the Neutral Sequence Fallacy in action from Lunter et al. (2006):

Figure 5 from Lunter et al. (2006). Notice how in the top panel, regions of the genome are contrasted as being “Neutral” vs. “Functional”. Here the term “Neutral” is being used incorrectly to mean selectively unconstrained. The bottom panel shows how indels are suppressed in Functional regions leading to intergap segments.

Here are a couple more examples of the Neutral Sequence Fallacy in action, right in the titles of fairly high-profile comparative genomics papers:

Title from Elnitski et al. (2003). Notice that the functional class of “Regulatory DNA” is incorrectly contrasted as being the complement of nonfunctional “Neutral Sites”. In fact, both classes of sites are assumed to evolve neutrally in the authors’ model.

Title from Chin et al. (2005). As above, notice how the concept of “Functionally conserved” is incorrectly presented as the opposite of “Neutral sequence”, even though both classes of sites are assumed to evolve neutrally in the authors’ model.

I don’t mean to single these papers out; they just happen to be very clear examples of the Neutral Sequence Fallacy in action. In fact, the Lunter et al. (2006) paper is one of my all-time favorites, but it bugs the hell out of me when I have to unpick students’ misconceptions after they read it. Frustratingly, the list of papers repeating the Neutral Sequence Fallacy is long and growing. I have recently started to collect them in a citeulike library to provide examples to help students understand how to avoid this common mistake. (If anyone else would like to contribute to this effort, please let me know — there is much work to be done to reverse this trend.)

~~~~

So what’s the big deal here? Some would argue that these authors actually know what they are talking about, and just happen to be using the wrong terminology. I wish that this were the case, but very often it is not. In many papers that I read or review that perpetrate the Neutral Sequence Fallacy, I usually find further examples of seriously flawed evolutionary reasoning, suggesting that the authors do not have a deep understanding of the issues at hand. In fact, evidence of the Neutral Sequence Fallacy is usually a clear hallmark that the authors of a paper are practicing population genetics or molecular evolution without a license. This leads to the Neutral Sequence Fallacy of the 1st Kind: the authors do not understand the difference between the concepts of functional constraint and selective neutrality. The problems for the Neutral Theory caused by violations of the 1st Kind are deep and clear. Because the Neutral Theory is not fully understood, it is possible to construct a straw-man version of the null hypothesis of Neutrality that can easily be “rejected” simply by finding evidence of selective constraint. Furthermore, because selectively unconstrained sequences are asserted (incorrectly) to be “Neutral” without actually evaluating their mode of evolution, this conceptual error undermines the entire value of the Neutral Theory as a null hypothesis testing framework.

But some authors really do know the difference between these ideas, and just happen to be using the term “Neutral” as shorthand for “Unconstrained.” Increasingly, I see some of my respected peers making this mistake in print, card-carrying molecular evolutionists who do know their stuff. In these cases what is happening is a Neutral Sequence Fallacy of the 2nd Kind: understanding the difference between functional constraint and selective neutrality, but using lazy terminology that confuses these ideas in print. This is most often found in the context of studies on noncoding DNA where, in the absence of the genetic code to conveniently constrain terminology, people use terms like “neutral standard” or “neutral region” or “neutral sites” or “neutral proxy” in place of “putatively unconstrained”. While violations of the 2nd Kind can be overlooked and parsed correctly by experts in molecular evolution (I hope), this sloppy language causes substantial confusion about the Neutral Theory among students and non-evolutionary biologists who are new to the field, and leads to whole swathes of subsequent violations of the 1st Kind. Moreover, defining sequences as Neutral serves those with an Adaptationist agenda: if a control region is defined as being Neutral, all mutations that occur in that region must therefore be neutral as well, and any potential complications of the non-neutrality of mutations in one’s control region are conveniently swept under the carpet. Violations of the 2nd Kind are often quite insidious, since they are generally perpetrated by people with some authority in evolutionary biology, often unaware of their misuse of terminology, who will vigorously deny that they are using terms which perpetuate a classical misconception laid to rest by Kimura 30 years ago.

~~~~

Which brings us to the most recent incarnation of the Neutral Sequence Fallacy, in the context of the ENCODE project. In a companion post explaining the main findings of the ENCODE Project, Ewan Birney describes how the project reinforced recent findings that many highly reproducible biochemical events operate on the genome yet have no known function. In describing these events, Birney states:

I really hate the phrase “biological noise” in this context. I would argue that “biologically neutral” is the better term, expressing that there are totally reproducible, cell-type-specific biochemical events that natural selection does not care about. This is similar to the neutral theory of amino acid evolution, which suggests that most amino acid changes are not selected either for or against…Whichever term you use, we can agree that some of these events are “neutral” and are not relevant for evolution.

Under the standard view of the Neutral Theory, Birney misuses the term “Neutral” here to mean lack of functional constraint, repeating the classical form of the Neutral Sequence Fallacy. Because of this, I argue that Birney’s proposed terminology should be rejected, since it will perpetuate a classic misconception in Biology. Instead, I propose the term “biologically inert”.

But wait a minute, you say, this is actually a transgression of the 2nd Kind. Really what is going on here is a matter of semantics. Birney knows the difference between functional constraint and selective neutrality. He is just formalizing the creeping misuse of the term Neutral to mean “Nonfunctional” that has been happening over the last decade.  If so, then I argue he is proposing to assign to the term Neutral the primary misconception of the Neutral Theory previously debunked by Kimura. This is a very dangerous proposal, since it will lead to further confusion in genomics arising from the “overloading” of the term Neutral (Kimura’s meaning: selectively equivalent; Birney’s meaning: no functional constraint). This muddle will subsequently prevent most scientists from properly understanding the Neutral Theory, and lead to many further examples of the Neutral Sequence Fallacy of both Kinds.

In my view, semantic switches like this are dangerous in Science, since they massively hinder communication and, therefore, progress. Semantic switches also lead to a distortion of understanding about key concepts in science. A famous case in point is Watson’s semantic switch of Crick’s term “Central Dogma”, which corrupted Crick’s beautifully crafted original concept into the watered-down textbook misinterpretation that is most often repeated: “DNA makes RNA makes protein” (see Larry Moran’s blog for more on this). Some may say this is the great thing about language, that the same word can mean different things to different people. This view is best characterized in the immortal words of Humpty-Dumpty in Lewis Carroll’s Through the Looking Glass:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean -- neither more nor less.”

Others, including myself, disagree and prefer to have fixed definitions for scientific terms.

In a second recent case of the Neutral Sequence Fallacy creeping into discussions in the context of ENCODE, Michael Eisen proposes that we develop “A neutral theory of molecular function” to interpret the meaning of these reproducible biochemical events that have no known function. Inspired by the new null hypothesis that the Neutral Theory ushered into evolutionary biology, Eisen calls for a new “neutral null hypothesis” that requires molecular functions to be proven, not assumed. I laud any attempt to promote the use of null models for hypothesis testing in molecular biology, and whole-heartedly agree with Eisen’s main message about the need for a null model of molecular function.

But I disagree with Eisen’s proposal for a “neutral null hypothesis”, which, from my reading of his piece, directly couples the null hypothesis for function with the null hypothesis for sequence evolution. By synonymizing the H0 of the functional model with the H0 of the evolutionary model, regions of the genome that fail to reject the null functional model (i.e. have no functional constraint) will be conflated with “being Neutral” (incorrect) or evolving neutrally (potentially correct), whereas regions that reject the null functional model will immediately be considered as evolving non-neutrally (which may not always be the case, since functional regions can evolve neutrally). While I assume this is not what Eisen intends, this is almost inevitably the outcome of suggesting a “neutral null hypothesis” in the context of biomolecular sequences. A “neutral null hypothesis for molecular function” makes it all too easy to merge the concepts of functional constraint and selective neutrality, which will inevitably lead many to the Neutral Sequence Fallacy. As Kimura did, Eisen should formally decouple the concept of functional constraint on a sequence from the mode of evolution by which that sequence evolves. Eisen should instead be promoting a “null model of molecular function” that cleanly separates the concepts of function and evolution (an example of such a null model is embodied in Sean Eddy’s Random Genome Project). If not, I fear this conflation of concepts, like Birney’s semantic switch, will lead to more examples of the Neutral Sequence Fallacy of both Kinds.

~~~~

The Neutral Sequence Fallacy shares many sociological similarities with the chronic misuse of, and misconceptions about, the concept of Homology. As discussed by Marabotti and Facchiano in their article “When it comes to homology, bad habits die hard“, there was a peak of misuse of the term Homology in the mid-1980s, which led to a backlash of publications demanding more rigorous use of the term. Despite this backlash and the best efforts of many scientists to stem the tide of misuse of Homology, ~43% of abstracts surveyed in 2007 used Homology incorrectly, down from 51% in 1986 before the assault on its misuse began. As anyone teaching the concept knows, unpicking misconceptions about Homology vs. Similarity is crucial for getting students to understand evolutionary theory. I argue that the same is true for the distinction between Functional Constraint and Selective Neutrality. When it comes to Functional Constraints on biomolecular sequences, our choice of terminology should be anything but Neutral.

References:

Chin CS, Chuang JH, & Li H (2005). Genome-wide regulatory complexity in yeast promoters: separation of functionally conserved and neutral sequence. Genome research, 15 (2), 205-13 PMID: 15653830

Elnitski L, Hardison RC, Li J, Yang S, Kolbe D, Eswara P, O’Connor MJ, Schwartz S, Miller W, & Chiaromonte F (2003). Distinguishing regulatory DNA from neutral sites. Genome research, 13 (1), 64-72 PMID: 12529307

Lunter G, Ponting CP, & Hein J (2006). Genome-wide identification of human functional DNA using a neutral indel model. PLoS computational biology, 2 (1) PMID: 16410828

Marabotti A, & Facchiano A (2009). When it comes to homology, bad habits die hard. Trends in biochemical sciences, 34 (3), 98-9 PMID: 19181528

Waterston RH, Lindblad-Toh K, Birney E, Rogers J, Abril JF, et al. (2002). Initial sequencing and comparative analysis of the mouse genome. Nature, 420 (6915), 520-62 PMID: 12466850

Credits:

Thanks to Chip Aquadro for originally pointing out to me that I had perpetrated the Neutral Sequence Fallacy (of the 1st Kind!) during a journal club as an undergraduate in his lab. I can distinctly recall the hot embarrassment of the moment while being schooled in this important issue by a master. Thanks also to Alan Moses, who was the first of many people I converted to the light on this issue, and who has encouraged me since to write this up for a wider audience. Thanks also to Douda Bensasson for putting up with me ranting about this issue for years, and for helpful comments on this post.


The Cost to Science of the ENCODE Publication Embargo

The big buzz in the genomics twittersphere today is the release of over 30 publications on the human ENCODE project. This is a heroic achievement, both in terms of science and publishing, with many groundbreaking discoveries in biology and pioneering developments in publishing to be found in this set of papers. It is a triumph that all of these papers are freely available to read, and much is being said elsewhere in the blogosphere about the virtues of this project and the lessons learned from the publication of these data. I’d like to pick up here on an important point made by Daniel MacArthur in his post about the delays in the publication of these landmark papers, which have arisen from the common practice of embargoing papers in genomics. To be clear, I am not talking about embargoing the use of data (which is also problematic), but about embargoing the release of manuscripts that have been accepted for publication after peer review.

MacArthur writes:

Many of us in the genomics community were aware of the progress the [ENCODE] project had been making via conference presentations and hallway conversations with participants. However, many other researchers who might have benefited from early access to the ENCODE data simply weren’t aware of its existence until today’s dramatic announcement – and as a result, these people are 6-12 months behind in their analyses.

It is important to emphasize that these publication delays are by design, and are driven primarily by the journals that set the publication schedules for major genomics papers. I saw first-hand how Nature sets the agenda for major genomics papers and their associated companion papers as part of the Drosophila 12 Genomes Project. This insider’s view left a distinctly bad taste in my mouth about how much control a single journal has over some of the most important community resource papers that are published in Biology.  To give more people insight into this process, I am posting the agenda set by Nature for publication (in reverse chronological order) of the main Drosophila 12 Genomes paper, which went something like this:

7 Nov 2007: papers are published, embargo lifted on main/companion papers
28 Sept 2007: papers must be in production
21 Sept 2007: revised versions of papers received
17 Aug 2007: reviews are returned to authors
27 Jul 2007: papers are submitted

Not only was acceptance of the manuscript essentially assumed by the Nature editorial staff, but the entire timeline was spelled out in advance, with an embargo built into the process from the outset. Seeing this process unfold first-hand was shocking to me, and has made me very skeptical of the power that the major journals have to dictate terms about how we, and other journals, publish our work.

Personally, I cannot see how this embargo system serves anyone in science other than the major journals. There is no valid scientific reason that major genome papers and their companions cannot be made available as online accepted preprints, as is now standard practice in the publishing industry. As scientists, we have a duty to ensure that the science we produce is released to the general public and the community of scientists as rapidly and openly as possible. We do not have a duty to serve the agenda of a journal to increase its cachet or revenue stream. I accept that we need to tolerate delays due to quality control via the peer review and publication process. But the delays due to the normal peer review process are bad enough, as ably discussed recently by Leslie Vosshall. Why on earth would we accept that journals build further unnecessary delays into the publication process?

This of course leads to the pertinent question: how harmful is this system of embargoes? Well, we can put an upper estimate on* this pretty easily from the submission/acceptance dates of the main and companion ENCODE papers (see the table below, and the short script after it). In general, most ENCODE papers were embargoed for a minimum of 2 months, but some were embargoed for nearly 7 months. Ignoring (unfairly) the direct impact that these delays may have had on the careers of the PhD students and post-docs involved, something on the order of 112 months of access to these important papers has been lost to all scientists by this single embargo. Put another way, up to* 10 years of access time to these papers has been collectively lost to science because of the ENCODE embargo. To the extent that these papers are crucial for understanding the human genome, and the consequences this knowledge has for human health, this lost decade is clearly unacceptable. Let us hope that the ENCODE project puts an end to the era of journal-mandated embargoes in genomics.

DOI Date received Date accepted Date published Months in review Months in embargo
nature11247 24-Nov-11 29-May-12 05-Sep-12 6.0 3.2
nature11233 10-Dec-11 15-May-12 05-Sep-12 5.1 3.6
nature11232 15-Dec-11 15-May-12 05-Sep-12 4.9 3.6
nature11212 11-Dec-11 10-May-12 05-Sep-12 4.9 3.8
nature11245 09-Dec-11 22-May-12 05-Sep-12 5.3 3.4
nature11279 09-Dec-11 01-Jun-12 05-Sep-12 5.6 3.1
gr.134445.111 06-Nov-11 07-Feb-12 05-Sep-12 3.0 6.8
gr.134957.111 16-Nov-11 01-May-12 05-Sep-12 5.4 4.1
gr.133553.111 17-Oct-11 05-Jun-12 05-Sep-12 7.5 3.0
gr.134767.111 11-Nov-11 03-May-12 05-Sep-12 5.6 4.0
gr.136838.111 21-Dec-11 30-Apr-12 05-Sep-12 4.2 4.1
gr.127761.111 16-Jun-11 27-Mar-12 05-Sep-12 9.2 5.2
gr.136101.111 09-Dec-11 30-Apr-12 05-Sep-12 4.6 4.1
gr.134890.111 23-Nov-11 10-May-12 05-Sep-12 5.5 3.8
gr.134478.111 07-Nov-11 01-May-12 05-Sep-12 5.7 4.1
gr.135129.111 21-Nov-11 08-Jun-12 05-Sep-12 6.5 2.9
gr.127712.111 15-Jun-11 27-Mar-12 05-Sep-12 9.2 5.2
gr.136366.111 13-Dec-11 04-May-12 05-Sep-12 4.6 4.0
gr.136127.111 16-Dec-11 24-May-12 05-Sep-12 5.2 3.4
gr.135350.111 25-Nov-11 22-May-12 05-Sep-12 5.8 3.4
gr.132159.111 17-Sep-11 07-Mar-12 05-Sep-12 5.5 5.9
gr.137323.112 05-Jan-12 02-May-12 05-Sep-12 3.8 4.1
gr.139105.112 25-Mar-12 07-Jun-12 05-Sep-12 2.4 2.9
gr.136184.111 10-Dec-11 10-May-12 05-Sep-12 4.9 3.8
gb-2012-13-9-r48 21-Dec-11 08-Jun-12 05-Sep-12 5.5 2.9
gb-2012-13-9-r49 28-Mar-12 08-Jun-12 05-Sep-12 2.3 2.9
gb-2012-13-9-r50 04-Dec-11 18-Jun-12 05-Sep-12 6.4 2.5
gb-2012-13-9-r51 23-Mar-12 25-Jun-12 05-Sep-12 3.0 2.3
gb-2012-13-9-r52 09-Mar-12 25-May-12 05-Sep-12 2.5 3.3
gb-2012-13-9-r53 29-Mar-12 19-Jun-12 05-Sep-12 2.6 2.5
Min 2.3 2.3
Max 9.2 6.8
Avg 5.1 3.7
Sum 152.7 112.1
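As far as I can tell, the “months” columns in this table are simply day counts divided by 31. Here is a minimal Python sketch (my own reconstruction, using the first row of the table, nature11247, as a worked example) that reproduces the rounded values for the rows I spot-checked:

    from datetime import date

    def months_between(d1, d2, days_per_month=31):
        """Day count between two dates expressed in 'months'; dividing by 31
        appears to reproduce the rounding used in the table above."""
        return (d2 - d1).days / days_per_month

    # First row of the table (nature11247):
    received  = date(2011, 11, 24)
    accepted  = date(2012, 5, 29)
    published = date(2012, 9, 5)

    print(f"Months in review:  {months_between(received, accepted):.1f}")   # 6.0
    print(f"Months in embargo: {months_between(accepted, published):.1f}")  # 3.2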

Footnote:

* Based on a conversation on Twitter with Chris Cole, I’ve revised this estimate to reflect the upper bound, rather than a point estimate, of the time lost to science.

Top N Reasons To Do A Ph.D. or Post-Doc in Bioinformatics/Computational Biology

For the last few years I’ve given a talk to incoming Ph.D. students in Molecular Biology on why they should consider doing Computational Biology research. I’m fairly passionate about making this pitch, since I strongly believe all 21st century Biologists should have a greater (or lesser) degree of computational training, and that the best time to gain that training is during a Ph.D. or a Post-Doc.

I’ve decided to post an expanded version of the reasons I give for why Biology trainees should gain computational skills in hopes of encouraging a wider audience to consider a research path in Computational Biology. For simplicity, I define the field of Computational Biology to include Bioinformatics as well, although there are important distinctions between these two disciplines. Also, I note that this list is geared towards convincing students with a background in Molecular Biology to consider moving into Computational Biology, but core aspects and variants of the arguments here should apply to people with backgrounds in other disciplines (e.g. Ecology, Neuroscience) as well. Here we go…

0. Computing is the key skill set for 21st century biology: As time progresses, Biology is becoming a more quantitative science. Over the last three centuries, biology has transformed from an observational science into an experimental science into a data science. As the low-hanging fruit gets picked, fundamental discoveries are getting harder to make using observation and experiment alone. In the future, new discoveries will require leveraging big datasets and using advanced analytical methods. Big data and complex models require computational skills. Full stop. There is no way to escape this reality.

But if you don’t take my word for it, listen to Nobel-prize-winning pioneer of molecular biology Walter Gilbert, who made this same argument about the future of biology over 20 years ago:

To use this flood of [sequence] knowledge, which will pour across the computer networks of the world, biologists not only must become computer literate, but also change their approach to the problem of understanding life.

Or listen to Nobel-prize-winning pioneer of molecular biology Sydney Brenner, who has been banging on about this issue for years:

I spent many hours persuading people that computing was not only going to be the essential tool for biological research but would also provide models for analyzing complexity…The development of sequencing techniques and their widespread application has generated enormous databases of information, and the need for computers is no longer questioned

1. Computational skills are highly transferable: Let’s face it, not everyone doing a Ph.D. or Post-Doc in Biology is going to go on to a career in academic research. The Washington Post recently reported that “only 14 percent of those with a Ph.D. in biology and the life sciences now land a coveted academic position within five years“. So if there is a high probability that your Ph.D. or Post-Doc training will need to be used outside of academic research, why not acquire the most broadly applicable skill set that you can? Experimental skills only transfer to laboratory jobs in the bioscience or medical job market. Computational skills transfer across this sector, plus a much wider market outside of the (bio)sciences. Increasing your computational chops won’t just give you a better chance at landing a job. It will have added benefits in your own life as well, since you will have a deeper appreciation for how computers work and more mastery when you interact with computers in your daily life.

2. Computing will help improve your core scientific skills: Biology is inherently a messy subject. While some Biologists are rigorously trained in how to cope with this messiness through good experimental design and statistical analysis (here’s looking at you my Ecologist sisters and brothers), the sad truth is that many (most?) Biologists have bad habits when it comes to data collection and analysis.  Computing forces you to confront and tame the very human tendency to do science in ad hoc ways and therefore it naturally develops core scientific skills such as: logically planning experiments, collecting data consistently, developing reproducible methodology, and analysing your data with proper statistical methodology. So even if you can’t be convinced to abandon the bench or field forever, computational training will develop scientific best-practice that crosses-over and enhances your experimental skills set.

3. You should use your Ph.D./Post-Doc to develop new skills: Most Biologists come into their Ph.D. with some experimental training from high school and undergraduate studies. OK, so maybe this training isn’t cutting edge and you haven’t done advanced research to really hone your experimental skills, but nevertheless you do have some amount of training under your belt. In contrast, the vast majority of Biology Ph.D. students have no training in scientific computing skills beyond using Excel or a GUI-based statistics package. So use your Ph.D. or Post-Doc time for what it should be — training in something new, not just further developing a skill set that you already have.

My view is that the best time to train in Computational Biology is during a Ph.D., and the last chance to do this is likely to be as a Post-Doc. This is because during your Ph.D. you have time, secure funding and a departmental structure to protect you that you will never have again in your career. Gaining computational skills as a Post-Doc is also a great option, but shorter contracts, greater PI dependency, and higher expectations to publish mean that you typically don’t have as much time to re-train as you would during a Ph.D. Good luck finding the time to re-tool as a PI.

4. You will develop a more unique skill set in Biology: As noted above, the vast majority of Biologists have experimental training, but very few have advanced Computational training. While this is (thankfully!) changing, you will still be at a competitive advantage for at least a decade or more in terms of getting results in post-genomic Biology if you can code. And because you will be able to get results that many others cannot, plus the fact that you will have skills that set you apart from the herd, you will be more competitive on the job market. Straight up.

5. You will publish more papers: While it may not always feel like it, a Ph.D. or Post-Doc goes by quickly. Therefore, you don’t have a lot of time to waste on experiments that fail, if you want to stay in the game. Don’t get me wrong, Computational Biology will provide you more than your fair share of failed experiments, but crucially they will fail in hours/days instead of weeks/months, and therefore allow you to move on to something that works more quickly. As a result, you are very likely to publish more papers per unit time in Computational Biology. Whether or not you believe the old chestnut that experimental papers are somehow “harder” and therefore have more worth (I don’t), it is clear that publication remains the hard currency of science. Moreover, the adage that search committees “know how to count even if they can’t read” is still as true as ever. More seriously, what employers and funding agencies want to see is junior researchers who have good ideas and can take them to completion. Publication is the proof that you can finish projects. Computational Biology will allow you to demonstrate that you are a finisher, and that you have what it takes to succeed in science, a little bit faster than the next guy or gal.

6. You will have more flexibility in your research: I would say one of the greatest things about being a Computational Biologist is that you are not as constrained in your research as you are when you do Experimental Biology. Sure, you can only work on projects that are amenable to computational analysis, but this scope is vast — from Computational Neuroscience to Theoretical Ecology and anything and everything in between. You can also move flexibly from topic to topic more easily than you can if your skill set is linked to specific experimental techniques. This flexibility in scope allows you to satisfy your intellectual curiosity or chase the latest trend as you wish. Most importantly for trainees, the flexibility (and low cost, see below) afforded by Computational Biology research allows you to make the case to your PI to develop your own research programme earlier in your career. This is crucial, since the more experience you have designing independent projects early in your career, the more likely you will be to succeed if/when you make it to the big time.

7. You will have more flexibility in working practices: ‘Nuff said.

Seriously though, Computational Biology has many pluses when it comes to balancing work and life while still maintaining a high level of productivity. Unlike being chained to the bench, you can do Computational Biology from pretty much anywhere, and telecommuting/working from home are standard practices in Computational Biology. Over the longer term, this flexibility in work practice helps you to accommodate career breaks, manage the tough times life will throw at you, and make big life decisions like starting a family easier, since you can integrate coding and submitting jobs to the cluster into your life much better than you can integrate racing back to the lab to flip stocks or harvest cells. Let me say it loud and clear right here: if you want to have a career in academic Biological research and also have a family, choosing to do a Ph.D. or Post-Doc in Computational Biology will be more likely to get you to this goal than if you are stuck in the lab. This is not just true for women, as I and others can attest.

8. Computational research is cost-effective: With the wealth of data now publicly available, Computational Biology research is cheaper than most experimental work, which requires a large consumables budget. This is important for a number of reasons. Primarily, work in Computational Biology is less dependent on grant funding, and therefore you don’t have to be a slave to trends or waste inordinate time chasing grant funding — you can actually just get on with the job of doing the science you want to do. This is especially important in tough economic times like the present moment. As mentioned above, the reduced cost of Computational Biology research also allows trainees to design their own research at an earlier career stage, since you will not be as reliant on a PI to authorize expenditure for your project. Cost-efficiency is also very important when you are starting your group and for maintaining continuity of productivity when riding out troughs in funding or group size. Finally, the cost-efficiency of Computational Biology allows researchers in developing scientific economies to be on a par with researchers in rich countries. In my opinion, trainees from BRICS nations and other developing economies (sorry to use this somewhat judgemental term) should really consider choosing Computational Biology as a way to get to the top of the class globally without being limited by the need for big budgets.

9. A successful scientist ends up in an office: This is the kicker. If you succeed and get that “coveted” PI position, you will ultimately end up stuck in an office. True, some brave souls still find time to make it into the lab to do experiments, but they are a rare breed. The truth is that the native habitat of an academic researcher is sitting in their office in front of their computer. You can’t do a lick of wet lab or field work from the office, but you can still do Computational Biology research from behind a desk! As noted by Webb Miller, one of the most highly-cited bioinformaticians ever, continuing to do your own research is also one of the best ways to stay motivated about your work over the long haul of a career. Remember that the long-term goal is to be a “Principal Investigator”, not an “In Principle Investigator,” so if you’ve really wanted to do research since you were young, then ask yourself: why train in skills you will never ultimately use for the majority of your career, while somebody else in your lab gets to have fun making all the discoveries?

[10. You will understand why lists should start with the number zero.]
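(For the uninitiated, a two-line Python demonstration of the joke; like most programming languages, Python indexes sequences from zero:)

    reasons = ["Computing is the key skill set for 21st century biology",
               "Computational skills are highly transferable"]
    print(reasons[0])   # the "first" reason lives at index zero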

A major reason I have for posting this list is to start more discussion about the benefits of doing research in Computational Biology. I have deliberately made this a top N (not a top 10) list so that good ideas can be added to the above. I’ll update this post with good suggestions from the comments, and give full credit to the originator.

Will the Democratization of Sequencing Undermine Openness in Genomics?

It is no secret, nor is it an accident, that the success of genome biology over the last two decades owes itself in large part to the Open Science ideals and practices that underpinned the Human Genome Project. From the development of the Bermuda principles in 1996 to the Ft. Lauderdale agreement in 2003, leaders in the genomics community fought for rapid, pre-publication data release policies that have (for the most part) protected the interests of genome sequencing centers and the research community alike.

As a consequence, progress in genomic data acquisition and analysis has been incredibly fast, leading to major basic and medical breakthroughs, thousands of publications, and ultimately to new technologies that now permit extremely high-throughput DNA sequencing. These technologies give individual groups sequencing capabilities that were previously only achievable by large sequencing centers. This development makes it timely to ask: how do the data release policies for primary genome sequences apply in the era of next-generation sequencing (NGS)?

My reading of the history of genome sequence release policies condenses the key issues as follows:

  • The Bermuda Principles say that assemblies of primary genomic sequences of human and other organisms should be made available within 24 hours of their production
  • The Ft. Lauderdale Agreement says that whole genome shotgun reads should be deposited in public repositories within one week of generation. (This agreement was also encouraged to be applied to other types of data from “community resource projects” – defined as research project specifically devised and implemented to create a set of data, reagents or other material whose primary utility will be as a resource for the broad scientific community.)

Thus, the agreed standard in the genomics field is that raw sequence data from the primary genomic sequence of organisms should be made available within a week of generation. In my view this also applies to so-called “resequencing” efforts (like the 1000 Genomes Project), since genomic data from a new strain or individual is actually a new primary genome sequence.

The key question concerning genomic data release policies in the NGS era, then, is: do these data release policies apply only to sequencing centers, or to any group producing primary genomic data? Now that you are a sequencing center, are you also bound by the obligations that sequencing centers have followed for a decade or more? This is an important issue to discuss for its own sake in order to promote Open Science, but also for the conundrums it throws up about data release policies in genomics. For example, if individual groups sequencing genomes are not bound by the same data release policies as sequencing centers, then a group at e.g. Sanger or Baylor working on a genome is actually put at a competitive disadvantage in the NGS era, because they would be forced to release their data.

I argue that if the wider research community does not abide by the current practices of early data release in genomics, the democratization of sequencing will lead to the slow death of openness in genomics. We could very well see a regression to the mean behavior of data hoarding (I sometimes call this “data mine, mine, mining”) that is sadly characteristic of most of the biological sciences. In turn this could decelerate progress in genomics, leading to a backlog of terabytes of un(der)analyzed data rotting on disks around the world. Are you prepared to stand by, do nothing and bear witness to this bleak future? ; )

While many individual groups collecting primary genomic sequence data may hesitate to embrace the idea of pre-publication data release, it should be noted that there is also a standard procedure in place for protecting the data producer’s interest in having the first chance to publish (or co-publish) large-scale analyses of the data, while permitting the wider research community early access. The Ft. Lauderdale agreement recognized that:

…very early data release model could potentially jeopardize the standard scientific practice that the investigators who generate primary data should have both the right and responsibility to publish the work in a peer-reviewed journal. Therefore, NHGRI agreed to the inclusion of a statement on the sequence trace data permitting the scientific community to use these unpublished data for all purposes, with the sole exception of publication of the results of a complete genome sequence assembly or other large-scale analyses in advance of the sequence producer’s initial publication.

This type of data-producer protection proviso has been taken up by some community-led efforts to release large amounts of primary sequence data prior to publication, as laudably done by the Drosophila Population Genomics Project (Thanks Chuck!).

While the Ft. Lauderdale agreement in principle tries to balance the interests of the data producers and consumers, it is not without failings. As Mike Eisen points out on his blog:

In practice [the Ft. Lauderdale proviso] has also given data producers the power to create enormous consortia to analyze data they produce, effectively giving them disproportionate credit for the work of large communities. It’s a horrible policy that has significantly squelched the development of a robust genome analysis community that is independent of the big sequencing centers.

Eisen rejects the Ft. Lauderdale agreement in favor of a new policy he entitles The Batavia Open Genomic Data License. The Batavia License does not require an embargo period or the need to inform data producers of how they intend to use the data, as is expected under the Ft. Lauderdale agreement, but it requires that groups using the data publish in an open access journal. Therefore the Batavia License is not truly open either, and I fear that it imposes unnecessary restrictions that will prevent its widespread uptake. The only truly Open Science policy for data release is a Creative Commons (CC-BY or CC-Zero) style license that has no restrictions other than attribution, a precedent that was established last year for the E. coli TY-2482 genome sequence (BGI you rock!).

A CC-style license will likely be too liberal for most labs generating their own data, and thus I argue we may be better off pushing for individual groups to adopt a Ft. Lauderdale-style agreement, encouraging the (admittedly less than optimal) status quo to be taken up by the wider community. Another option is for researchers to release their data early via “data publications”, such as those being developed by journals like GigaScience and F1000 Reports.

Whatever the mechanism, I join with Eisen in calling for wider participation by the research community in releasing their primary genomic sequence data. Indeed, it would be a truly sad twist of fate if the wider research community did not follow, in the post-NGS era, the genomic data release policies that were put in place in the pre-NGS era to protect their interests. I for one will do my best in the coming years to reciprocate the generosity that has made the Drosophila genomics community so great (in the long tradition of openness dating back to the Morgan school), by releasing any primary sequence data produced by my lab prior to publication. Watch this space.

Did Finishing the Drosophila Genome Legitimize Open Access Publishing?

I’m currently reading Glyn Moody‘s (2003) “Digital Code of Life: How Bioinformatics is Revolutionizing Science, Medicine, and Business” and greatly enjoying the writing, as well as the whirlwind summary of the history of Bioinformatics and the (Human) Genome Project(s). Most of what Moody says that I am familiar with is quite accurate, and his scholarship is thorough, so I find his telling of the story compelling. One claim I find new and curious in this book is in his discussion of the sequencing of the Drosophila melanogaster genome, more precisely the “finishing” of this genome, and its impact on the legitimacy of Open Access publishing.

The sequencing of D. melanogaster was done as a collaboration between the Berkeley Drosophila Genome Project and Celera, as a test case to prove that whole-genome shotgun sequencing could be applied to large animal genomes. I won’t go into the details here, but it is widely accepted that the Adams et al. (2000) and Myers et al. (2000) papers in Science demonstrated the feasibility of whole-genome shotgun sequencing, while it was a lesser-known paper by Celniker et al. (2002) in Genome Biology, reporting the “finished” D. melanogaster genome, that proved the accuracy of whole-genome shotgun assembly. No controversy here.

More debatable is what Moody goes on to write about the Celniker et al. (2002) paper:

This was an important paper, then, and one that had a significance that went beyond its undoubted scientific value. For it appeared neither in Science, as the previous Drosophila papers had done, nor in Nature, the obvious alternative. Instead, it was published in Genome Biology. This describes itself as “a journal, delivered over the web.” That is, the Web is the primary medium, with the printed version offering a kind of summary of the online content in a convenient portable form. The originality of Genome Biology does not end there: all of its main research articles are available free online.

A description then follows of the history and virtues of PubMed Central and the earliest Open Access biomedical publishers BioMed Central and PLoS. Moody (emphasis mine) then returns to the issue of:

…whether a journal operating on [Open Access] principles could attract top-ranked scientists. This question was answered definitively in the affirmative with the announcement and analysis of the finished Drosophila sequence in January 2003. This key opening paper’s list of authors included not only [Craig] Venter, [Gene] Myers, and [Mark] Adams, but equally stellar representatives of the academic world of Science, such as Gerald Rubin, the boss of the fruit fly genome project, and Richard Gibbs, head of sequencing at Baylor College. Alongside this paper there were no less than nine other weighty contributions, including one on Apollo, a new tool for viewing and editing sequence annotation. For its own Drosophila extravaganza of March 2000, Science had marshalled seven papers in total. Clearly, Genome Biology had arrived, and with it a new commercial publishing model based on the latest way of showing the data.

This passage resonated with me since I was working at the BDGP at the time this special issue on the finishing of the Drosophila genome in Genome Biology was published, and was personally introduced to Open Access publishing through this event.  I recall Rubin walking the hallways of building 64 on his periodic visits promoting this idea, motivating us all to work hard to get our papers together by the end of 2002 for this unique opportunity. I also remember lugging around stacks of the printed issue at the Fly meeting in Chicago in 2003, plying unsuspecting punters with a copy of a journal that most people had never heard of, and having some of my first conversations with people on Open Access as a consequence.

What Moody doesn’t capture in this telling is the fact that Rubin’s decision to publish in Genome Biology almost surely owes itself to the influence that Mike Eisen had on Rubin and others in the genomics community in Berkeley at the time. Eisen and Rubin had recently collaborated on a paper, Eisen had made inroads in Berkeley on the Open Access issue by actively recruiting signatories for the PLoS open letter the year before, and Eisen himself published his first Open Access paper in Oct 2002 in Genome Biology. So clearly the idea of publishing in Open Access journals, and in particular in Genome Biology, was in the air at the time, and it may not have been as bold a step for Rubin to take as Moody implies.

Nevertheless, it is a point that may have some truth, and I think it is interesting to consider whether the long-standing open data philosophy of the Drosophila genetics community that led to the Genome Biology special issue was a key turning point in the widespread success of Open Access publishing over the next decade. Surely the movement would have taken off anyway at some point. But in late 2002, when the BioMed Central journals were the only place to publish gold Open Access articles, few people had tested the waters since the launch of the BMC journals in 2000. While we cannot replay the tape, Moody’s claim is plausible in my view, and it is interesting to ask whether widespread buy-in to Open Access publishing in biology might have been delayed had Rubin not insisted that the efforts of the Berkeley Drosophila Genome Project be published under an Open Access model.

UPDATE 25 March 2012

After tweeting this post, here is what Eisen and Moody have to say: [embedded Twitter replies]

UPDATE 19 May 2012

It appears that the publication of another part of the Drosophila (meta)genome, its Wolbachia endosymbiont, played an important role in the conversion of Jonathan Eisen to supporting Open Access. Read more here.

