ResearchHub Docs

Lack of accurate metrics

"Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough." – Peter Higgs


The central problem of academic science can be summed up in one question: "How does one judge the quality of a scientific researcher?" Put yourself in the shoes of any individual or organization investing in scientific research: you have a $1M grant earmarked for quantum gravity research, and two different physicists have applied for the funding. How would you choose which lab will produce the most scientific bang for your buck? Funders have traditionally relied on bibliometrics to guide their grant-making decisions. To judge whether a project is likely to produce a safe return on investment, they assess the productivity of the applying researcher and the impact of their previous publications.

  • Productivity: A quantitative metric based on the number of publications a researcher has produced

  • Impact: A measure of a publication’s importance, based on the number of citations it generates. The most commonly used impact metric is the “impact factor” of the academic journal in which a paper is published. A journal’s impact factor is the average number of citations received in a given year by the articles the journal published in the previous two years.
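
The impact-factor definition above reduces to a simple ratio. The sketch below illustrates the calculation with hypothetical numbers (the journal name and counts are invented for this example, not real data):

```python
# Illustrative impact-factor calculation for a hypothetical journal.
# 2023 impact factor = (citations in 2023 to items published in 2021-2022)
#                      / (citable items published in 2021-2022)

citations_2023_to_2021_2022 = 1200  # hypothetical: citations received in 2023
citable_items_2021_2022 = 400       # hypothetical: articles published 2021-2022

impact_factor_2023 = citations_2023_to_2021_2022 / citable_items_2021_2022
print(impact_factor_2023)  # → 3.0
```

Note that the metric attaches to the journal as a whole, not to any individual paper: a rarely cited article in a high-impact-factor journal inherits the journal's score.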

It is well established that productivity and impact metrics are not effective tools for objectively determining the best recipient of grant funding. A 2017 study found no agreement among National Institutes of Health reviewers in their qualitative or quantitative evaluations of grant applications.

“It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant.” – Pier et al., 2017