My colleagues and I worked on the top 1% of ecologists project for a long time. There was significant discussion of both the interpretation itself and the implications. We also discussed conducting a social survey of NSERC Discovery Grant holders similar to the one we conducted for the elite (published in Scientometrics). The editorial board of Oikos (particularly Dustin Marshall) facilitated a clearer and more direct interpretation, as I got tangled up in the implications and caveats. I just wanted to provide a few additional nuances here in case you might be interested.
I must admit that my perception of the implications has also changed recently because I just received the news on my own NSERC Discovery Grant. I was funded at what I understand to be the lowest tier ($21,000) and for only one year (instead of five). My former grant was $18,500 per year, and economy was a critical consideration for every project or idea a student and I generated. I wonder about the merit of underfunding ecologists and about the importance of minimum, realistic thresholds (i.e., how much can one generate with $18,000–20,000 per year, and how competitive are you in the next round against others who entered the system at a higher level to begin with?). Consequently, I imagine a few intriguing questions with respect to this project and ecology in general (and my own ability to fund and conduct research).
1. Is there a way to mix these two data streams to examine whether there are thresholds or larger relationships?
2. Is there a point between the two sets of individuals (NSERC vs most highly cited) that is meaningful either as a minimum or mean value?
3. Is research best advanced by incremental increases in funding or by more ‘jackpot-driven’ funding (the model NSF uses and that NSERC now appears to emulate)?
1. OK, the first question is a snap – although it is questionable to combine the two different datasets. As discussed below in the comments thread associated with the post by Jeremy, there are numerous different attributes associated with the researchers, including multiple versus single grant holders, that I cannot decouple. The time frame and allocation of funding are also different (i.e., in Canada NSERC uses a 5-year cycle if you are lucky, whereas for the most highly cited, the values reported were for an ‘average’ and, I hope, representative year in their current career stage). Ideally, I would love to see an experiment on this, or at least balanced contrasts by sub-discipline, such as between plant ecologists at similar career stages. Nonetheless, with some painful conversions, I can match up the two datasets (good idea Jeremy). This is purely an exercise to see if we can increase the scope of inference and possibly provide a roadmap for more comprehensive analyses by granting agencies.
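The conversion itself is mechanical. Here is a minimal sketch of the kind of harmonization involved — the five-year cycle comes from the text above, but the CAD-to-USD exchange rate is an illustrative assumption, and the example grant total is hypothetical:

```python
import math

def annual_usd_log(grant_total_cad, years=5, cad_to_usd=0.98):
    """Convert a multi-year NSERC grant (CAD) to a mean annual USD
    value on a log10 scale, to match the per-year reporting used for
    the highly cited group. The exchange rate is an assumption."""
    annual_usd = grant_total_cad / years * cad_to_usd
    return math.log10(annual_usd)

# Hypothetical example: a 5-year grant at the 2012 E&E mean of
# $27,167 CAD per year.
log_funding = annual_usd_log(5 * 27167)
print(log_funding)
```

A similar per-researcher conversion of citation totals to citations per publication would put both groups on common axes.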
Red is Canadian NSERC Discovery Grant holders (converted to citations per publication with a mean annual funding value to match the reporting for the most highly cited), whilst green is the most highly cited ecologists identified by ISI. The insights/interpretations are that (i) there is limited overlap, (ii) some of the most highly cited but ‘less’ funded individuals approach the upper funding levels of Canadians, and (iii) the two lines intersect at 5.9 on the log10 scale, which back-transforms to $794,328 USD. The implication of the latter point is that NSERC would have to extend mean annual funding to ecologists substantially for this group of scientists to join the citation elite – if recognition or discovery functioned linearly. This value might be a bit too much to hope for in Canada given that the mean funding for 2012 in ecology and evolution was $27,167 (full stats here).
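For transparency, the back-transformation above is just exponentiation of the log10 value (the 5.9 intersection itself comes from the fit described in the text):

```python
# Back-transform the log10 intersection of the two fitted lines.
intersection_log10 = 5.9
threshold_usd = 10 ** intersection_log10
print(round(threshold_usd))  # → 794328, i.e. ~$794,328 USD
```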
2. Let’s examine this another way. What if I simply combined both sets and ignored the fact that they come from very different groups? Similar to a funnel plot or a trim-and-fill analysis used in meta-analysis to identify publication bias, do the raw (but matched) values of the NSERC holders ‘fill in’ the missing range of values? As you might expect, they do not: the distribution is still kurtotic and significantly deviates from normal. The line of best fit between funding and citations also inflected at approximately 5.9, but the grand mean and median of the distribution fell to 4.7 and 4.6, respectively, with the first quartile at 4.9. Now, these are numbers that a Canadian system, or better yet any system, interested in funding ecologists at a reasonable level could consider. Back-transforming the median value, we are looking at a minimum threshold of $39,810 USD. This seems viable. Detailed budget work both before and after my NSERC Discovery Grant results leads me to a conservative number for field ecology research with travel for a small lab of $36,000 CDN per year. I may not be able to pursue tangents, which is unfortunate because I suspect discovery is accelerated by these serendipitous moments, but I can mentor students and achieve critical mass within the lab in terms of collaboration and field assistance. Anything less than this is really difficult unless students are independently and fully funded, but most schools require modest top-ups. In summary, I propose that this is a reasonable current minimum for ecological research.
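The distributional claim can be sketched with synthetic data — the funding values below are random stand-ins, not the real dataset; only the back-transform of the reported median (4.6) uses a number from the text:

```python
import random
import statistics

random.seed(1)
# Synthetic stand-in for the combined log10 funding values: a
# right-skewed, heavy-tailed sample (NOT the real data).
combined_log10 = [4.4 + random.lognormvariate(0, 0.5) / 3 for _ in range(500)]

def excess_kurtosis(xs):
    """Sample excess kurtosis; values > 0 indicate a heavier-tailed
    distribution than the normal ('kurtotic' in the text above)."""
    m = statistics.mean(xs)
    v = statistics.pvariance(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * v ** 2) - 3

print(excess_kurtosis(combined_log10))  # positive for this skewed sample

# The reported median of 4.6 back-transforms (10**x) to the proposed
# minimum threshold:
print(round(10 ** 4.6))  # → 39811, i.e. ~$39,810 USD
```

A formal normality test (e.g. D’Agostino–Pearson) on the real values would be the next step, but the kurtosis statistic alone captures the heavy-tailed shape described above.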
3. Gradual increase versus jackpot is intriguing as it relates to discovery. All of the above assumes that increases, sometimes even nominal ones, can make a difference. Realistically, however, the difference between $18,500 and $21,000 per year is not another graduate student. I am grateful for the increase (but wish it was for 5 years), but I still cannot do the research that I proposed. I envisage several alternatives: increase the minimum, award jackpots here and there, or provide a mechanism for applicants to honestly identify funding levels associated with their specific research. To the best of my knowledge, NSERC ranks applications and each tier is associated with a funding level. What, then, was the purpose of the budget that I carefully prepared? I assume it was to demonstrate that if I really got $52,000 I could effectively allocate the funds to do that research. However, it is not impossible to imagine that budget padding occurs, and it is not even an unreasonable bet-hedging strategy when soliciting funds, since ecology is done in natural environments where accidents happen. The combination of these two practices, tiered funding and applicants’ inability to communicate thresholds, leads to arbitrary and low awards. An obvious solution would be more transparent evaluation with respect to the budget and threshold reporting. My understanding of NSF grants is that the system is more jackpot-based, with much lower funding success rates but larger grants. This could accelerate discovery during the specific interval when one is funded, but between grants one likely spends large amounts of time writing more grants, with limited capacity for scientific discovery. This leads me back to the second question. A hybrid of the two would be a more even distribution of funds to more individuals at higher levels, i.e., > $0 and also > $27,167.
Nonetheless, when an individual hits on an amazing and important idea, the capacity to direct larger sums of funds could be critical. The hybrid should also sit between these two systems – not just in values but in funding model – with documented, limited-discovery research conducted and funded at modest levels and larger discovery endeavors funded more ambitiously. I know that there are discovery accelerator supplements in place, and other alternatives as well, but we should urge agencies to do more for ecology, which is often at the lower end of the priority list. I suspect that if we polled most ecologists with the following question, the answer would be yes: if you were awarded a relatively large grant, even once, could it set your research program on a totally different trajectory? I imagine I could, but whether I would actualize that dream is another matter. Perhaps NSF grant holders pre- and post-large awards could be tested – provided they had some funding after the 3-year cycle.
I propose that funding discovery is similar to the scientific process of inquiry: it is advanced through multiple channels. We need publications that document and describe patterns, propose ideas, explore ideas empirically, and hit home runs with rigorous experimentation. However, not every contribution needs to be hit out of the park; yet by providing most ecologists with so few dollars, agencies are not even letting them get up to bat. Ideas are sometimes cheap; testing them is hard. I enjoy ecology and see the best in our discipline. I firmly hold the conviction that both basic and applied ecology are useful in effectively managing our little planet. With inadequate funding levels and limited alternative models to conduct research, we end up with ideas only for most of the team… discussing them in the dugout.