Too much h-index around? Citation counts, the h-index and journal impact factors are easy statistics to use when evaluating applications for academic jobs and funding. Easy, yes. But appropriate? Not really. One of our editors, Stefano Allesina (University of Chicago), has, together with two colleagues, suggested an alternative metric for evaluations: the future h-index, a prediction based on scientific activity, the diversity of journals in which papers are published, networks and so on. Their method was recently published in Nature.
Stefano, is this the future for academic evaluation committees?
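To give a concrete sense of the kind of model involved, here is a minimal sketch of a regression-based future h-index predictor. Everything in it is an illustrative assumption: the feature set (current h-index, square root of the article count, career length, number of distinct journals) echoes the kinds of signals mentioned above, but the data and fitted coefficients are invented and are not the published model.

```python
import numpy as np

# Illustrative features per researcher (assumed, not the published dataset):
# current h-index, sqrt(number of articles), years since first publication,
# number of distinct journals published in.
X = np.array([
    [12, np.sqrt(40),  8,  9],
    [ 5, np.sqrt(15),  4,  5],
    [20, np.sqrt(90), 12, 14],
    [ 8, np.sqrt(25),  6,  7],
    [15, np.sqrt(60), 10, 11],
])
# h-index observed five years later (also invented for illustration).
y = np.array([22, 11, 33, 16, 27])

# Fit a linear model: h_future ~ intercept + w . features
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the future h-index of a new researcher from the same features.
new = np.array([1, 10, np.sqrt(30), 5, 8])
print("predicted future h-index:", new @ coef)
```

In the published work the model is fitted on a large cohort and validated out of sample; the point of this sketch is only the shape of the computation, not its coefficients.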

What response has your method received?
-There is definitely some interest, as this work adds to the heated debate on how to measure productivity in academia. I see the contribution as a way to shift the debate from past accomplishments toward future achievements. If anything, concentrating on the past tends to promote very conservative science, while what we need is innovation.
Isn’t this just what evaluators already consider, but in a non-statistical way?
-Definitely. In a way, you can read the results as saying that “science works”: what we tell our students to focus on, good publications, is really what matters. However, I think the emphasis on the diversity of audiences is something that committees and funding agencies do not normally fully consider.
Can the method be used to calculate a future impact factor for journals as well?
-We found the method to be quite context-dependent. It works well for neuroscientists, because the model was fitted on neuroscientists, but the predictive power diminishes when we attempt out-of-sample predictions in other fields. That said, I think the technique could be adapted and extended to journals.