Lack of accurate metrics

"Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough." – Peter Higgs

The main problem of academic science can be summed up in one question: "How does one judge the quality of a scientific researcher?" Put yourself in the shoes of any individual or organization that has prioritized investing in scientific research: you have a $1m grant earmarked for quantum gravity research, and two different physicists have applied for the funding. How would you go about choosing which lab will produce the most scientific bang for your buck? Funders have traditionally relied on bibliometrics to guide their grant-making decisions. To determine whether a project is likely to produce a safe return on investment, they judge the productivity of the applying researcher and the impact of their previous publications.

  • Productivity: A quantitative measure of a researcher's output, counted as the number of publications they have produced

  • Impact: A qualitative measure of a publication’s importance, proxied by the number of citations it generates. The most commonly used impact metric is the “impact factor” of the academic journal in which a paper is published. A journal’s impact factor is the average number of citations received in a given year by the articles it published in the previous two years (a worked example follows this list).
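For concreteness, here is a minimal sketch of that two-year impact factor calculation. The function name and the figures are hypothetical, not drawn from any real journal:

```python
# A minimal sketch of the two-year impact factor calculation, using
# made-up numbers for an imaginary journal. The formula divides the
# citations received this year by articles published in the previous
# two years by the number of citable articles published in those years.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Average citations per article published in the previous two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical example: in 2023, a journal's 2021-2022 articles were cited
# 1,200 times, and it published 400 citable articles across 2021-2022.
print(impact_factor(1200, 400))  # 3.0 -> the journal's 2023 impact factor
```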

It is well established that productivity and impact are not effective tools for objectively determining the best recipient of grant funding. A 2017 study found no agreement between National Institutes of Health reviewers in their qualitative or quantitative evaluations of grant applications.

“It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant.” - Pier et al., 2017
