Being funded through the National Institutes of Health (NIH) is much like applying to a “stretch” college: 75% of applicants fail to make the initial cut, and many, to extend the analogy, land on a wait-list from which they are never called. Half of all NIH funding goes to 19% of funded investigators, representing 2% of the funded institutions. Is this a “legacy” problem, or are these recipients truly “the best”? Inequity in the distribution of government funding is a fact of political life. It can result from the specific interests or concerns of the powerful or the vocal, or it can be implicit, captured by the phrase “the rich get richer.” NIH funding of science is not exempt and has been shown to reflect bias by gender, age, and even race. A recent paper looked at funding and publication, a measure of productivity.
Funding
The researcher (and how often do we see a single author?) looked at NIH funding of 15 institutions over a 10-year period. [1] Funding ranged from $440 to $3 million and reflected the top 25% of institutional funding. The institutions were also categorized by their “prestige.” [2] Of the 137,000 applications considered in that period, those from prestigious institutions, whom I will call legacy applicants, were funded roughly a third of the time, 1.7 times more often than applicants from less prestigious institutions. Legacy applicants received 2.4 times as much funding ($3.5 vs. $1.4 million) per investigator. The researcher concluded that “differences in likelihood of funding and award size are proximate causes of the heavily skewed distribution of funding among institutions.”
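The headline disparities are simple ratios, and a quick back-of-the-envelope check reproduces them from the rounded figures quoted above (the rounded awards give about 2.5x rather than the reported 2.4x, presumably because the paper used unrounded amounts; the “roughly a third” success rate is likewise my reading of the text):

```python
# Back-of-the-envelope check of the disparities quoted above.
# Award figures are the rounded per-investigator amounts from the text, in $M.
legacy_award = 3.5   # legacy institutions, per investigator
other_award = 1.4    # less prestigious institutions, per investigator

funding_ratio = legacy_award / other_award
print(f"Funding per investigator: {funding_ratio:.1f}x")

# Success rates: legacy applicants were funded about a third of the time and
# 1.7x as often as others, implying roughly a 20% success rate for the rest.
legacy_success = 1 / 3
other_success = legacy_success / 1.7
print(f"Implied success rate at less prestigious institutions: {other_success:.0%}")
```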
Perhaps these investigators are just better.
Productivity
While “better” is a relative term, the researcher used the number of publications and their impact as a proxy measure. [3] The 41,021 research projects funded by the NIH during that period resulted in 95,000 publications. Legacy institutions published fewer papers per dollar, 5.3 vs. 8.7 papers per million dollars of funding, than the less prestigious schools. The relative citation ratio (RCR) is a calculation the NIH uses to gauge a paper’s impact. It looks at how often the paper is cited by others, excluding non-research articles like reviews or editorials, and is a recognized measure of impact on a scientific field. With one exception, the less prestigious institutions outperformed the legacy institutions on this measure, demonstrating 35% greater impact. So by these measures, whether publications or their influence, the legacy institutions were good, just not as good as their funding would suggest.
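The productivity gap is easy to make concrete from the per-dollar publication rates above (a sketch using only the two figures quoted; the 35% RCR advantage requires citation data the text does not provide, so it is not recomputed here):

```python
# Papers produced per $1M of NIH funding, from the figures quoted above.
legacy_rate = 5.3   # legacy institutions
other_rate = 8.7    # less prestigious institutions

advantage = other_rate / legacy_rate
print(f"Less prestigious institutions: {advantage:.2f}x more papers per dollar")

# Equivalently, each legacy paper costs more to produce:
print(f"Cost per paper: ${1 / legacy_rate:.3f}M (legacy) vs ${1 / other_rate:.3f}M (other)")
```

By these numbers, the less prestigious schools produce roughly 64% more papers for the same dollar.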
On what seems to me, a taxpayer rather than an applicant, a reasonable analysis of NIH funding, we see a highly competitive, highly skewed system in which relatively small differences in review choices produce much greater disparities in funding; and that funding yields less “productivity,” less bang for the buck. The researcher is quick to point out that this is not an issue of overt bias, and I agree. But there is always an element of power in relationships, in this instance expressed through social prestige. It has been described as the Matthew effect (after the gospel): in essence, the rich get richer. And there is evidence beyond this paper that this unconscious preference is at work in science as a social behavior. The acceptance rate of publications changes when reviewers do not know the authors or institutions of the papers being considered; blinding them results in more papers from the “less prestigious” being accepted for publication.

The researcher goes on to suggest that another issue may also be at play, one taken from economics: marginal return. Investigators must administer and complete these grants and have only a finite capacity to do so, irrespective of the amount of funding. Continuing to give the rich more exceeds their capacity to produce; more money does not always translate into more production, whether in the factory or in science.
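The marginal-return argument can be sketched with a toy saturating production curve. The functional form and the capacity value below are my assumptions for illustration only, not anything from the paper:

```python
import math

def papers(funding_musd, capacity=5.0):
    """Hypothetical diminishing-returns curve: output rises with funding
    but saturates at a lab's finite capacity (both values illustrative)."""
    return capacity * (1 - math.exp(-funding_musd / capacity))

# Each added million buys fewer papers than the last.
for f in (1.0, 3.5, 7.0):
    print(f"${f}M -> {papers(f):.2f} papers, {papers(f) / f:.2f} papers/$M")
```

Under any such concave curve, papers-per-dollar falls as awards grow, which is exactly the “less bang for the buck” pattern in the data above.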
Perhaps it is time for the NIH to consider alternate means of funding. Let me choose an analogy closer to home, at least for me: maybe we should stop the fee-for-service funding of science and begin to emphasize quality over volume, as we are attempting to do in healthcare.
[1] Harvard, Stanford, Johns Hopkins, UCSF, U of Pennsylvania, Purdue, U of Nebraska Medical Center, U of Oklahoma Health Sciences, West Virginia University, University of South Dakota, Eastern Virginia Medical School, SUNY-Buffalo, University of Mississippi Medical Center, University of North Dakota and Louisiana State University Health Center, Shreveport
[2] As determined by US News and World Report (just as they classify “best” colleges). The top five were Harvard, Stanford, Johns Hopkins, UCSF, and U of Pennsylvania.
[3] One can argue with the choice of proxy, but it is a standard measure in academic science, used not only in funding but also in promotion.
Source: High Cost of Bias: Diminishing Marginal Returns on NIH Grant Funding to Institutions. bioRxiv. DOI: 10.1101/367847. Note that this is not a peer-reviewed publication.