(more) NIH instructions on writing grant reviews

Nov 02 2016

From a previous post: I am ad-hoc'ing on a study section, again. I have received more instructions on writing reviews, some of them new to me. I repeat why I'm including this here:

One of the best lessons ever: the more you understand about how NIH works, the more likely you are to get funded.

I want to specifically address a problem (raised in questions from the Twittersphere): why did I get a "29" if there are no weaknesses mentioned in the review?

Compared to 20 years ago, when I started reviewing, NIH (CSR) sends a tremendous amount of pre-review information and instructions. What follows is from the (longer) cover email to reviewers titled "General guidance for all sections of the critique". Let's take a look at what NIH tells its reviewers about scoring (earlier thoughts here).

  • Scores of 1-3 should be supported by clearly articulated strengths.
  • Scores of 4-6 may have a balance of strengths and weaknesses.
  • Scores of 7-9 should be supported by clearly articulated weaknesses (or lack of strengths).

We were told, over and over, to start with a 5 and move up or down from there. We were told, over and over, to use the whole range. More detail, emphasis mine:

  • Scores of 1-3, e.g.: applications are addressing a problem of high importance/interest in the field. May have some or no weaknesses.
  • Scores of 4-6, e.g.: applications may be addressing a problem of high importance in the field, but weaknesses in the criteria bring down the overall impact to medium. OR: applications may be addressing a problem of moderate importance in the field, with some or no weaknesses.
  • Scores of 7-9, e.g.: applications may be addressing a problem of moderate/high importance in the field, but weaknesses in the criteria bring down the overall impact to low. OR: applications may be addressing a problem of low or no importance in the field, with some or no weaknesses.

Understand that a worse score does not only reflect weaknesses in the proposal. It can reflect a lack of "importance". There may be nothing wrong with the proposal; its importance just doesn't rise to the level that earns a score in the 1-3 range. Importance (significance and innovation) is critical. From an old post:

impact = function of {importance (significance, innovation), feasibility (approach, investigator, environment)}

Impact has two parts: importance and feasibility. These are not my thoughts. They are taken from various NIH web pages (of which there are many, many, many) on writing proposals.
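
A toy sketch of that function, to make the point concrete. This is entirely my own illustration (made-up formula, made-up numbers), not anything NIH publishes or uses. All it encodes is the argument above: on the 1 (best) to 9 (worst) scale, importance caps how good the overall impact can get, so a proposal with no weaknesses at all can still sit outside the 1-3 band.

```python
# Toy illustration only -- NOT how NIH actually computes anything.
# It just encodes the idea that overall impact depends on BOTH
# importance (significance, innovation) and feasibility
# (approach, investigator, environment), on the 1 (best) to 9 (worst) scale.

def toy_impact(importance, feasibility):
    """Both inputs run 1 (exceptional) to 9 (poor); returns a toy overall impact."""
    # Importance sets the ceiling: middling importance keeps the score
    # out of the 1-3 band no matter how flawless the approach is.
    blended = round((importance + feasibility) / 2)
    score = max(importance, blended)
    return min(max(score, 1), 9)

# Flawless feasibility (1) but only moderate importance (5):
print(toy_impact(importance=5, feasibility=1))   # 5 -- "no weaknesses", still mid-range

# High importance (2) with some fixable weaknesses (feasibility 4):
print(toy_impact(importance=2, feasibility=4))   # 3 -- still in the 1-3 band
```

The particular formula is irrelevant; the design point is the max() on importance, which is why a weakness-free but only moderately important proposal can come back with a "29".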

So, how do you get to importance, significance, and innovation? That is, if there are no weaknesses, what can one do to ensure that reviewers think the work is important? Firstly, figure out what the target IC wants to fund. But the rule before even that is to figure out which IC is the target for the proposal in the first place. Here is one thing and another I wrote a while back on ICs and finding one. Secondly, do your NIH RePORTER homework: figure out what is currently being funded. This is often a problem for snowflakes. Or even for people who are not snowflakes, but who live in an echo-chamber department where What We Do is The MOST Important Thing in the World. Take a good hard look at why your problem is an important problem. Try explaining it to your mother or your nutsy Uncle Fester. If they can't get it, likely the study section (SS) won't either.
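
If you want to do the RePORTER homework programmatically rather than by clicking around, here is a minimal sketch. It assumes the NIH RePORTER API v2 search endpoint (https://api.reporter.nih.gov/v2/projects/search); the IC abbreviation and keywords are hypothetical placeholders, and the payload/response field names are best guesses to check against the RePORTER API documentation before relying on any of this.

```python
# Minimal sketch: skim what a target IC is currently funding on a topic.
# Assumptions to verify against the RePORTER API docs: the v2 endpoint URL and
# the field names ("criteria", "agencies", "advanced_text_search", "project_title").
import requests

URL = "https://api.reporter.nih.gov/v2/projects/search"

payload = {
    "criteria": {
        "agencies": ["NIGMS"],                  # hypothetical target IC
        "advanced_text_search": {
            "operator": "and",
            "search_field": "projecttitle,abstracttext",
            "search_text": "circadian rhythm",  # hypothetical topic
        },
    },
    "limit": 25,
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
for project in resp.json().get("results", []):
    print(project.get("project_title"))
```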


4 responses so far

  • qaz says:

    Oftentimes, if a study section (or high-impact journal) review does not think your proposal or discovery is important, then part of the problem may well be that you have not explained it well enough. I have often found that when my view of the proposal (super important) or discovery (super impactful) doesn't translate to the reviews, there is often a logical step that I either didn't notice or assumed was obvious that wasn't clear in the proposal or paper. I would say to remember that these are proposals for how to spend money from a finite (and thus limited) budget. Thus not making clear that your idea is the important one *is* a weakness.

    In my experience, it isn't usually about whether the proposal matches the goals of the target IC (that's usually easy to identify, and ICs are often much broader than what is officially identified). Instead, I have found that it is usually about a failure to make clear the logic behind the impact. (Not the logic of the experiment, but the logic of *why* the experiment is so important.)

    • potnia theron says:

      This is exactly correct.
      But lots of folks get reviews that do not identify this as a weakness, and don't understand why they got the score that they did.

  • xykademiqz says:

    Not the logic of the experiment, but the logic of *why* the experiment is so important.

    Bingo! This usually comes up if someone slightly outside your field reads the proposal. You think the logic connecting the big question, the state of the art, and the open problems to where your approach comes in is clear/straightforward/unbroken, but that is often not the case.

    • potnia theron says:

      Good, good point.
      This is often what is meant when people say the proposal is "tight" vs. "sloppy". Tight = logic flowing cleanly from one point to the next, from the big meaning to the specific experiments.
