Too much h-index around? Number of citations, h-index and journal impact factors are commonly used statistics when evaluating applications for academic jobs and funding. Easy, yes. But appropriate? Not really. One of our editors, Stefano Allesina (University of Chicago), together with two colleagues, has suggested an alternative metric to use in evaluations: the future h-index, based on scientific activity, the diversity of journals where papers are published, collaboration networks, and so on. Their method was recently published in Nature.
Stefano, is this the future for academic evaluation committees?
-When I am sitting on hiring committees, I often think: "is it even possible to determine which candidates are going to be good scientists by just looking at their CVs?" To answer the question, we analyzed the careers of hundreds of neuroscientists. It turns out that yes, you can pretty much forecast their future impact (i.e., predict their future h-index) using exclusively information that is contained in their CVs. What I think is good news is that the variables with the strongest explanatory power are basically those you would expect: current h-index, number of publications, number of publications in top journals. However, we find that the diversity of journals (and hence the size of the "audience") is also very important. Thus, I think hiring committees should also evaluate potential candidates on their ability to reach scientists outside their disciplines and main field of interest.
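To make the idea concrete, here is a minimal sketch of this kind of forecast: a linear regression predicting a scientist's h-index several years ahead from the CV variables mentioned above. The data, coefficients, and feature set are entirely synthetic and illustrative; this is not the published model, just the general technique it uses.

```python
import numpy as np

# Synthetic "CV" data for 200 hypothetical scientists.
rng = np.random.default_rng(0)
n = 200
current_h = rng.integers(1, 30, n).astype(float)       # current h-index
n_papers = current_h * 3 + rng.integers(0, 20, n)      # total publications
top_papers = rng.integers(0, 10, n).astype(float)      # papers in top journals
n_journals = rng.integers(1, 25, n).astype(float)      # distinct journals (audience breadth)

# Made-up ground truth: future impact driven by current impact,
# productivity, venue quality, and audience diversity, plus noise.
future_h = (1.1 * current_h + 0.05 * n_papers + 0.3 * top_papers
            + 0.2 * n_journals + rng.normal(0, 1.5, n))

# Ordinary least-squares fit (intercept plus four CV features).
X = np.column_stack([np.ones(n), current_h, n_papers, top_papers, n_journals])
coef, *_ = np.linalg.lstsq(X, future_h, rcond=None)

def predict_future_h(h, papers, top, journals):
    """Forecast a future h-index from CV features using the fitted weights."""
    return float(coef @ np.array([1.0, h, papers, top, journals]))

print(predict_future_h(10, 40, 3, 8))
```

The point of the sketch is simply that once the relevant CV variables are quantified, forecasting future impact becomes an ordinary regression problem, and the fitted weights show which variables carry predictive power.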
What response has your method received?
-There is definitely some interest, as this work adds to the heated debate on how to measure productivity in academia. However, I see the contribution as a way to shift the debate from past accomplishments toward future achievements. If anything, concentrating on the past tends to promote very conservative science, while what we need is innovation.
Isn’t this just what evaluators already consider, but in a non-statistical way?
-Definitely. In a way, you can read the results by saying that “science works”: what we’re telling our students to focus on — good publications — is really what matters. However, I think the emphasis on the diversity of audiences is something that is not normally fully considered by committees and funding agencies.
Can the method be used to calculate future impact factors for journals as well?
-We found the method to be quite context-dependent. It works well for neuroscientists when it is trained on neuroscientists, but the predictive power diminishes when we try out-of-fit predictions in other fields. That said, I think it could be possible to adapt the technique and extend it to journals.