A Case for Junior/Senior Partnership Grants

Much has been made in recent years of the funding crises in the US and Europe, the inevitable result of the Great Recession superimposed on the end of exponential growth in Science. Governments hamstrung by austerity measures or a lack of political will have been forced to abandon increases in scientific funding, going so far as to freeze funds for already-awarded grants in Spain (see translation here). The consequences of this stagnation in the inputs to scientific progress will be felt for many years to come, materially in terms of basic and applied discoveries, but also socially in terms of the impacts on an entire generation of scientists who are just beginning their independent careers.

Why are early-stage researchers hit hardest by stagnation or decreases in funding? Simply because access to funding is not a level playing field for all scientists; it is in fact highly dependent on career stage and experience. Increased competition for resources can therefore be expected to hit younger scientists disproportionately hard relative to established researchers, for many reasons, including:

  • less experience in the art of writing grants,
  • less experience in reviewing grants,
  • less experience serving on grant panels,
  • shorter scientific and management track record,
  • and a less highly developed social network.

This specific negative effect of a general increase in resource competition on young researchers is (in my view) the best explanation for the extremely worrying downward trend in the proportion of young PIs receiving NIH grants, and the upward trend in the age at receipt of a first R01 in the USA, shown in diagrams on the NIH Rock Talk Blog.

Thankfully, this issue is being discussed seriously by NIH’s Deputy Director for Extramural Research, Dr. Sally Rockey, as the publication of these data attests. [I would very much welcome other funding agencies publishing similar demographic breakdowns of their funding, to establish whether this is a global effect.] However, not everyone sees these trends as worrying; some interpret them on socially neutral demographic grounds.

To help combat these inherent age-based inequities in access to research funding, funding agencies typically ring-fence funding for early-stage researchers under a “New Investigator” type umbrella. In fact, Sally Rockey provides a link to an impressive history of initiatives the NIH has undertaken to tackle the New Investigator issue. But what is striking to me is that, despite a series of different New Investigator mechanisms being put in place, the negative impacts on early-stage researchers have only worsened over the last three decades. New Investigator programmes are clearly not enough to redress this issue, and new solutions must be sought. Furthermore, ring-fencing funding for junior researchers necessarily creates an us-vs-them mentality, which can have counterproductive repercussions among different scientific cohorts. And while New Investigator programmes are widely supported in principle, trade-offs in resource allocation can lead to unstable changes in policy, as witnessed in the case of the now-defunct NERC New Investigator programme.

So, what of it? Is this post just another bemoaning the sorry state of affairs in funding for early-stage researchers? No, or at least, not only. My motivation is to constructively propose a relatively simple (naive?) mechanism for funding research projects that addresses the inequities in funding across career stages, and that has the additional benefit of engendering mentorship and the transfer of skills across generations: the Junior/Senior Partnership Grant. [As with all (good) ideas, such a model has been proposed before, by the Women’s Cancer Network, but it does not appear to have been adopted by major federal funding agencies.]

The idea behind a Junior/Senior Partnership funding “scheme” is simple. Based on some criteria (years since PhD or first tenure-track position, number of successful PI awards, number of wrinkles, etc.), researchers would be classified as Junior or Senior. To be eligible for an award under such a programme, at least one Junior and one Senior PI would need to be co-applicants on the grant, with distinct contributions to the proposal and to project management. This simple mechanism would ensure that young PIs get a piece of the funding pie and can establish a track record, just as New Investigator schemes do. But it would also obviate the need for reform to rely on Senior scientists altruistically stepping aside to make way for their Junior colleagues, since there would be positive (financial) incentives for them to lend a hand down the generations. And by reconfiguring resource allocation from “us-vs-them” to “we’re-all-in-this-together,” Junior/Senior Partnership Grants would provide a natural mechanism for Senior PIs to transfer expertise in grant writing and project management to their Junior colleagues in a meaningful way, rather than in the lip-service manner normally paid in most institutions. Finally, and most importantly, the knowledge transfer through such a scheme would strengthen the future expertise base in Science, which all indicators suggest is currently at risk.
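
To make the mechanism concrete, here is a minimal sketch of the one eligibility rule such a scheme would layer on top of normal peer review. The classification criteria and cutoffs below are placeholders of my own invention, not a proposal for the values an agency should actually use:

```python
# A minimal sketch of Junior/Senior Partnership eligibility. The criteria and
# cutoffs here are illustrative assumptions only; an agency would set its own.
from dataclasses import dataclass

JUNIOR_YEARS_SINCE_PHD = 7  # assumed cutoff for "Junior" status

@dataclass
class PI:
    name: str
    years_since_phd: int
    prior_pi_awards: int

def is_junior(pi: PI) -> bool:
    """Classify a PI as Junior under the assumed criteria."""
    return (pi.years_since_phd <= JUNIOR_YEARS_SINCE_PHD
            and pi.prior_pi_awards == 0)

def eligible_partnership(applicants: list[PI]) -> bool:
    """Eligible only if the team mixes at least one Junior and one Senior PI."""
    n_junior = sum(is_junior(pi) for pi in applicants)
    return 0 < n_junior < len(applicants)

# Example: a two-PI application with one Junior and one Senior co-applicant.
team = [PI("new PI", 3, 0), PI("established PI", 22, 9)]
print(eligible_partnership(team))  # True
```

Everything else (scientific review, budgets, reporting) would proceed exactly as for any other grant; the scheme adds only this one constraint on team composition.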

Goodbye F1000, Hello Faculty of a Million

[Image: Dr. Seuss’ The Sneetches]

In the children’s story The Sneetches, Dr. Seuss presents a world where certain members of society are marked by an arbitrary badge of distinction, and a canny opportunist uses this false basis of prestige for his financial gain*. What does this morality tale have to do with the scientific article recommendation service Faculty of 1000? Read on…

Currently ~3000 papers are published each day in the biosciences**. Navigating this sea of information to find articles relevant to your work is no small matter. Researchers can sink or swim with the aid of (i) machine-based technologies such as search or text-mining tools, or (ii) human-based technologies like blogs or social networking services that highlight relevant work through expert recommendation.

One of the first expert recommendation services was Faculty of 1000, launched in 2002 with the aim of “identifying and evaluating the most significant articles from biomedical research publications” through a peer-nominated “Faculty” of experts in various subject domains. Since the launch of F1000, several other mechanisms for expert literature recommendation have come to the foreground, including academic social bookmarking tools like citeulike and Mendeley, the rise of Research Blogging, and new F1000-like services such as annotatr, The Third Reviewer, PaperCritic and TiNYARM.

Shortly after I started my group at the University of Manchester in 2005, I was invited to join the F1000 Faculty, which I gratefully accepted. At the time, I felt it was a mark of distinction to be invited into this select club, and that it would be a good platform to voice my opinions on work I thought was notable. I was under no illusion that my induction was based only on merit, since the invitation came from my former post-doc mentor, Michael Ashburner. I overlooked this issue at the time: when you are invited to join the “in-club” as a junior faculty member, it is very tempting to think things like this will play a positive role in your career progression. [Whether being in F1000 has helped my career I can’t say, but it certainly can’t have hurt, and I (sheepishly) admit to using it on grant and promotion applications in the past.]

Since then, I’ve tried to contribute to F1000 when I can [PAYWALL], but as it is not a core part of my job, I’ve only contributed ~15 reviews in 5 years. My philosophy has been to contribute reviews only on articles I think are of particular note and might otherwise be missed, not to review major papers in Nature/Science that everyone is already aware of. As time has progressed and it has become harder to commit time to non-essential tasks, I’ve contributed less and less, and the F1000 staff have pestered me frequently with reminders and phone calls to submit reviews. At times the pestering has been so severe that I have considered resigning just to get them off my back. And I’ve noticed that some colleagues I have a lot of respect for have also resigned from F1000, which made me wonder if they were likewise fed up with F1000’s nagging.

This summer, while reading a post on Jonathan Eisen’s Tree of Life blog, I came across a parenthetical remark he made about quitting F1000, which made me more aware of why their nagging was really getting to me:

I even posted a “dissent” regarding one of [Paul Hebert’s] earlier papers on Faculty of 1000 (which I used to contribute to before they become non open access).

This comment made me realize that the F1000 recommendation service is just another closed-access venture for publishers to make money off a product generated for free by the goodwill and labor of academics. As with closed-access journals, my University pays twice to get F1000 content: once for my labor and once for the subscription to the service. But unlike a normal closed-access journal, in the case of F1000 there is not even a primary scientific publication to justify the arrangement. So by contributing to F1000, I essentially take time away from my core research and teaching activities to allow a company to commercialize my IP and pay someone to nag me! What’s even stranger is that there is no rational open-access equivalent of literature review services like F1000. By analogy with OA publishing of the primary literature, for “secondary” services I would pay a company to post one of my reviews on someone else’s article. (Does Research Blogging for free sound like a better option to anyone?)

Thus I’ve come to realize that it is unjustified to contribute secondary commentary to F1000 on Open Access grounds, in the same way that it is unjustified to submit primary papers to closed-access journals. If I really support Open Access publishing, then to contribute to F1000 I must either be a hypocrite or make an artificial distinction between the primary and secondary literature. But this gets to the crux of the matter: to the extent that recommendation services like F1000 are crucial for researchers to make sense of the onslaught of published data, surely these critical reviews should be Open for all, just as the primary literature should be. On the other hand, if such services are not crucial, why am I giving away my IP for free to a company to capitalize on?

Well, this question has been on my mind for a while, and I have looked into whether there is evidence that F1000 evaluations have real scientific worth in terms of highlighting good publications, which might provide a reason to keep contributing to the system. On this point the evidence is scant and mixed. An analysis by the Wellcome Trust finds a very weak correlation between F1000 evaluations and the evaluations of an internal panel of experts (driven almost entirely by a few clearly outstanding papers), with the majority of highly cited papers being missed by F1000 reviewers. An analysis by the MRC shows a ~2-fold increase in the median number of citations (from 2 to 4) for F1000-reviewed articles relative to other MRC-funded research. Likewise, an analysis of the Ecology literature shows similar trends, with marginally higher citation rates for F1000-reviewed work, but with many high-impact papers being missed. [Added 28 April 2012: Moreover, a multifactorial analysis by Priem et al. of a range of altmetric measures of impact for 24,331 PLoS articles clearly shows that the “F1000 indicator did not have shared variability with any of the derived factors” and that “Mendeley bookmark counts correlate more closely to Web of Science citations counts than expert ratings of F1000”.] Therefore the available evidence indicates that F1000 reviews do not capture the majority of good work being published, and that the work that is reviewed is only of marginally higher importance (in terms of citation) than unreviewed work.
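
To see why these two findings are not in tension, here is a toy calculation with entirely made-up citation counts (not the MRC or Wellcome data), showing that a ~2-fold difference in medians can coexist with reviewers missing most of the highly cited work:

```python
# Toy illustration only: hypothetical citation counts, loosely echoing the
# MRC medians (4 for F1000-reviewed vs 2 for unreviewed). Not real data.
import random
from statistics import median

random.seed(0)
reviewed = [random.randint(0, 8) for _ in range(100)]    # F1000-reviewed papers
unreviewed = [random.randint(0, 4) for _ in range(900)]  # everything else

med_rev = median(reviewed)
print("median citations, reviewed:  ", med_rev)             # ~4 in this toy sample
print("median citations, unreviewed:", median(unreviewed))  # ~2 in this toy sample

# Because the unreviewed pool is so much larger, plenty of well-cited papers
# sit outside the reviewed set even though the reviewed median is higher.
missed = sum(1 for c in unreviewed if c >= med_rev)
print("unreviewed papers at/above the reviewed median:", missed)
```

In other words, a higher median for reviewed papers says little about coverage, which is exactly the pattern the Wellcome and Ecology analyses report.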

So if (i) it goes against my OA principles, (ii) there is no evidence (on average) that my opinion matters quantitatively much more than anyone else’s, and (iii) there are equivalent open-access systems to use, why should I continue contributing to F1000? The only answer I can come up with is that by being an F1000 reviewer, I gain a certain prestige for being in the “in club,” as well as some prestige-by-association for aligning myself with publications or scientists I perceive to be important. When stripped down like this, being a member of F1000 seems pretty close to being a Sneetch with a star, and the F1000 business model seems not too different from that used by Sylvester McMonkey McBean. Realizing this has made me feel more than a bit ashamed for letting the allure of the old-boys club and my scientific ego trick me into something I cannot rationally justify.

So, needless to say, I have recently decided to resign from F1000. I will instead continue to contribute my tagged articles to citeulike (as I have for several years), contribute more substantial reviews to this blog via the Research Blogging portal, and push the use of other Open literature recommendation systems like PaperCritic, which has recently made its user-supplied content available under a Creative Commons license. (Thanks for listening, PaperCritic!)

By supporting these Open services rather than the closed F1000 system (and perhaps convincing others to do the same) I feel more at home among the ranks of the true crowd-sourced “Faculty of 1,000,000” that we need to help filter the onslaught of publications. And just as Sylvester McMonkey McBean’s Star-On machine provided a disruptive technology for overturning perceptions of prestige by giving everyone a star in The Sneetches, I’m hopeful that these open-access web 2.0 systems will also do some good towards democratizing personal recommendation of the scientific literature.

* Note: This post should in no way be taken as an ad hominem attack on F1000 or its founder Vitek Tracz, whom I respect very much as a pioneer of Open Access biomedical publishing.

** This number is an estimate based on the real figure of ~2.5K papers/day deposited in MEDLINE, extrapolated to account for the large number of non-biomedical journals that are not indexed by MEDLINE. If anyone has better data on this, please comment below.
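
For anyone who wants to check the MEDLINE figure themselves, here is a minimal sketch using NCBI’s E-utilities esearch endpoint to count records published in a given year; the year chosen is an arbitrary assumption:

```python
# A sketch of estimating papers/day in MEDLINE via NCBI E-utilities.
# The example year is an arbitrary assumption; adjust as needed.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a search term."""
    params = urllib.parse.urlencode({"db": "pubmed", "term": term,
                                     "rettype": "count"})
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        return int(ET.parse(resp).getroot().findtext("Count"))

year = 2011  # assumed example year
total = pubmed_count(f'"{year}/01/01"[PDAT] : "{year}/12/31"[PDAT]')
print(f"{total} PubMed records in {year}, ~{total / 365:.0f} per day")
```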