Have you ever submitted a grant, only to have it rejected? You respond to the reviewers’ comments, addressing weaknesses and tweaking the protocol to honor their suggestions.
Then, when you resubmit, the proposal is rejected again. This new group of reviewers suggests changes to the protocol. And guess what? Their suggestions sound a lot like the original ideas you removed to satisfy the last group of reviewers.
Are you the butt of some cruel academic joke, or is the grant funding process really this subjective and unpredictable?
Grants are serious business to an academic lab. The NIH alone awards over $30 billion each year to support 300,000 researchers at 2,500 universities and institutions.
Applying for one of these grants requires the investigator to prepare a lengthy proposal, detailing the work they intend to do, preliminary results, and how they will address challenges. As anyone who has applied for a grant knows, it’s a grueling process.
After submission, the grant is read and scored by 2–5 reviewers, and grants receiving a sufficiently high score are sent to "study section," where they're reviewed and ranked by a larger peer group of scientists.
Review boards at the NIH then take these rankings and award money to the top-scoring proposals. The others are returned with comments on how to improve the research program, and those labs can then re-submit a proposal in the next round of funding.
All of this talk of scores and rankings might lead you to believe that the grant review process is objective, consistent, and repeatable. But notice that behind the numerical values is a group of humans applying their individual judgement to assign a score.
A recent paper in PNAS by Pier et al. at the University of Wisconsin–Madison questions the fundamental validity of the grant peer review process.
The study’s authors wanted to know: if you give a group of reviewers the same high-quality grant proposals, will they score them consistently? In other words, do reviewers agree about what makes a quality research proposal?
The paper’s title, “Low agreement among reviewers evaluating the same NIH grant applications,” makes it clear that reviewers absolutely did NOT agree.
In fact, the authors conclude:
It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant.
This week on the show, we dive deep into the grant review process and explain the study showing the wide variation in scores given to the same grants. We also suggest some changes that might make the process fairer, even if human bias and judgement cannot be removed from the equation.
The science making news this week brings you three papers asking whether doctors themselves increase patient mortality rates! We describe two studies on the “July Phenomenon” – the period in July when new medical residents start their hospital rotations. Do these new MDs actually kill more patients while they learn the ropes?
We also mention a recent paper indicating that patients admitted to the hospital for heart attacks survive longer if their doctor is away at a conference!
The old phrase “An apple a day keeps the doctor away” takes on new meaning. Maybe the nutrition in the apple is secondary – the primary health benefit is in keeping the doctor away!
And to celebrate St Patrick’s Day, we down some Murphy’s Imported Stout “Draught Style” from Cork, Ireland. It’s the one occasion where discovering “a floater in my beer” is a good thing.