My fellow Oikos editor and blogger Chris Lortie has a strong interest in scientific publication practices (see, e.g., here). His latest effort, now in press at Oikos, examines patterns of funding and impact among the ecological 1%: the most-cited 1% of ecologists over the last 20 years (Lortie et al. in press). Chris is too busy right now to blog it himself, so I’m going to do it, because I think it’s a must-read.
In a previous paper (UPDATE: link fixed), Chris and his colleagues reported detailed survey data characterizing 147 of the most-cited ecologists and environmental scientists. Not surprisingly, they are overwhelmingly male, middle-aged, employed in North America and Western Europe, have large, well-funded labs, publish frequently, and have high per-paper citation rates.* But there’s a surprising amount of variation (multiple orders of magnitude) in lab size and funding level within this elite group, a feature of the data that Lortie et al. take advantage of to ask how citation rates vary with funding level within the elite.
In their previous work, Chris and his colleagues have shown that, for non-elite Canadian ecologists, more funding is associated with more “publication impact efficiency” (PIE): more highly-funded researchers also have more citations per dollar of funding. But Lortie et al. find that the same is not true of the elite. Using a slightly different measure of PIE (citations per publication per dollar), Lortie et al. find no relationship (not even a hint!) between PIE and funding within the ecological elite. Again, that’s despite multiple orders of magnitude of variation in funding level within the elite. Combined with their previous results, this indicates diminishing returns to really high funding levels (above several hundred thousand dollars, roughly). The implication is that funding agencies looking to maximize the “bang for their buck” arguably should reallocate funding away from really elite researchers and towards non-elite researchers. This wouldn’t reduce the PIE of the former group, but would increase the PIE of the latter group. Actually, Lortie et al. are careful to suggest increased funding for non-elite ecologists, rather than a reallocation of existing funding. But opportunity costs are ever-present so long as total funding is finite, and so I don’t think it’s possible to avoid the implication that these data suggest reallocation of funding away from the “elite of the elite”.
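To make the two efficiency measures concrete, here is a minimal sketch with entirely made-up numbers (the paper’s exact operationalization may differ in detail; the functions and figures below are mine, not Lortie et al.’s):

```python
# Hypothetical illustration of "publication impact efficiency" (PIE).
# Two versions appear in this line of work:
#   (1) citations per dollar of funding
#   (2) citations per publication per dollar of funding
# All numbers are invented for illustration only.

def pie_per_dollar(citations, funding):
    """PIE as citations per dollar of funding."""
    return citations / funding

def pie_per_pub_per_dollar(citations, publications, funding):
    """PIE as citations per publication per dollar of funding."""
    return citations / publications / funding

# A hypothetical well-funded elite lab:
citations, publications, funding = 5000, 100, 500_000.0

print(pie_per_dollar(citations, funding))                        # ~0.01 citations/$
print(pie_per_pub_per_dollar(citations, publications, funding))  # ~0.0001 citations/pub/$
```

Note that the two measures can rank researchers differently: a lab that publishes many lightly-cited papers can score well on the first measure and poorly on the second, which is one reason comparing elite and non-elite results computed with different measures is tricky.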
This is really thought-provoking stuff, and I hope that the folks who run our funding agencies take note.** One thing I’d like to see is for funding agencies to use this kind of information to give guidance to grant referees on how to evaluate applicants’ track records. For NSERC Discovery Grants, the “excellence of the researcher” (basically, the reviewers’ evaluation of your track record of publications and other outputs over the previous 6 years) is 1/3 of the grant score. To my knowledge, NSERC currently offers no guidance as to whether “excellence” should be scaled relative to the applicant’s funding level (which the applicant is obliged to report). These data suggest that it should be, and further that the appropriate scaling is nonlinear. Against that idea, one could note that increased funding gives researchers various advantages that let them increase their per-publication impact as well as their publication rate. So ultimately it’s impossible for reviewers (or anyone) to tease apart how much of an applicant’s track record is due simply to their funding level (the idea being that anybody with lots of funding will have a good track record), and how much is due to their “intrinsic” excellence.
I am curious to see results for elite and non-elite researchers using exactly the same measure of PIE. Perhaps Chris can provide these numbers in the comments.
All of the usual caveats about citations as a measure of “impact” apply, obviously, and Lortie et al. recognize those caveats. But the conclusions here are, I think, robust to those caveats. Basically, there’s an upper limit to the mean quality of papers that any lab is capable of producing–and it turns out that even the most brilliant ecologist’s lab hits that limit at a funding level of several hundred thousand dollars or so.
One other caveat is that these are observational, comparative data. They aren’t necessarily a reliable guide to the effects of an “experimental manipulation” such as a reallocation of funding away from the elite. But they’re the only guide we have. That said, I’d also be interested in analyses tracking changes over time in funding level, PIE, and other relevant variables for individual researchers. Would they lead to the same conclusions?
In passing, one minor quibble I have is that Lortie et al. describe elite researchers as especially “collaborative”. But by this, they seem to mean simply that elite researchers have larger labs on average than non-elite researchers, not that they have more extensive collaborations with colleagues outside their labs, which seems like a rather unconventional definition of “collaboration”.
There are analogies here to debates in economics over income and wealth inequality and their consequences, which are too obvious for me to ignore–but too incendiary for me to comment on!
This should be an interesting comment thread…
*Click through and read the whole thing for data on other characteristics of elite researchers–such as how hard they work and how much alcohol they drink!
**At least some of them are thinking about this stuff. I can’t find the citation, but a little while back NIH did analyses along these lines for their researchers, and discovered the same pattern of diminishing returns for really well-funded labs. The threshold funding level beyond which there was no further increase in efficiency was higher than in ecology, though, as you’d expect given the higher cost of much biomedical research.