Grant Reviewing: Instructions on "Overall Impact"

Sep 28 2017

These posts are prompted by the instructions I'm getting as a member of study section. Firstly, I think the advice NIH is giving to reviewers is good. I don't agree with all of it. You may not agree with all of it. But having a set of instructions, a set of guidelines, makes the review process more equitable and more objective. I think the goal of SS is to be as transparent and as even-handed as possible. My memory of the variation that existed when I sat on study section in the 90s is that objectivity was possible, but there was a lot of rewarding the bigdogs and hyper-criticism of the young 'uns. So I perceive the adjusting and calibrating that is happening now, whilst I am reviewing, to be a good thing.

Secondly, here is more of the advice NIH now gives to reviewers. Their wording is generally available online. My comments are, shall we say, not.

So what is in a review? There are six sections that contribute to the score (and others, such as the evaluation of human subjects, that do not). The first, at the top of the document, is Overall Impact, which is the summary overall assessment. I write this last, after I've done the sub-sections. It is the basis of the score that a reviewer gives. The other five are the five criteria (in order on the review page): Significance, Investigator(s), Innovation, Approach, Environment. The instructions for those criteria will appear in subsequent posts.

Today, though, I want to talk about Overall Impact. The words the reviewer writes go into the overall assessment paragraph at the beginning of the review you receive. You will also get the specific comments on the five criteria, as well as scores. One point, though: the other members of the study section (those not assigned to your proposal) may not see those bits, if they choose not to. But many will read the "overall impact" part of the review.

Here are the instructions about overall impact:


Overall Impact: What is the likelihood of the research to exert a sustained, powerful influence on the research field?

Write a paragraph supporting the overall impact score that should contain the following:

  • Introduce the general objective of the project in one sentence to orient reader.
  • State the level of impact the application is likely to have and why (what is the major contribution/advance to be gained?).
  • Identify what the major score-driving factors were for you.
  • Explain how you balanced/combined/weighted the various criteria in the overall impact score.

This may be the MOST important part of your review. It comes first but is based upon all the individual pieces in your completed critique template.

Here is what NIH says is NOT a good review and their reasons why:

The proposal is overly ambitious. There are design flaws. Significance is questionable. The PI's productivity is low.

NIH Comment on this review: Lacks detail. Hard to interpret.

More examples of not-good reviews:

(1) In Aim 1, the PI plans to generate XX reagents and test them in the YY system. In Aim 2, XX will be used to explore the ZZ pathway. Then Aim 3 will examine XX as potential treatments for ABC disease. If successful, this research could significantly impact the field.

(2) Only moderate enthusiasm was generated for this application. Strengths noted were the PI and team, excellent environment, state-of-the-art methodologies, and potential importance of the work to understanding XX. Weaknesses were the over ambitious nature, lack of experimental details, some confusing preliminary data, and concern about the choice of YY to be used. Altogether, this project will have a moderate impact on the field.

NIH Comment on this review:

(1) Just a rehash of the aims. No evaluation of the impact and what the score-driving issues were.

(2) Just a listing of strengths and weaknesses without context. Only the major score-driving concerns should be listed in the Overall Impact, along with the reasons why they are major and how they drove the final score.

Some of these are "stock NIH critiques". What is important is that NIH recognizes that stock critiques are useless and is trying to push reviewers to do what is right and helpful. For you, the writer, this means that anything you read as a critique in the Overall Impact is something you need to consider and work on. One of the very hardest things to do is figure out how to address concerns in the review when you resubmit. Sometimes it's just a matter of writing: clarifying what you meant that they didn't get. Sometimes, however, you need to do something more substantive (and not just more preliminary data): rethink how you propose to test your hypotheses. This section, done properly, can guide you.

Here is what NIH considers an effective review:

This proposal addresses a very significant issue in the field of XX and overall impact is high because the research is likely to provide the link between two seemingly contradictory outcomes that have stymied recent advancements in this area. The project is not technically innovative, but this is not considered a weakness because the focus on XX is important and the methods are appropriate and rigorous. The approach has some very strong aspects such as X and Y. Most of the weaknesses were minor. However, one weakness created some concern. The weakness was XX. The problem with this is that they make an assumption about ZZ that does not seem to be supported by adequate data. The investigator is well-trained in X, Y, and Z and the collaboration with Drs. A and B, who will bring strengths of C and D, increases the likelihood of a successful outcome. In conclusion, despite the weakness in the approach, the potential overall impact of this project remains high because it will advance understanding of the mechanisms underlying the relationship between XX and YY and test new methods that will be useful in both basic and clinical research areas.

Why?

  • Uses clear and specific language to explain points. Highlights only the main score-drivers; any minor points are left in the criterion sections.
  • Indicates the importance of strengths and the seriousness of weaknesses when appropriate.
  • Explains how the strengths and weaknesses were balanced to arrive at the final score.

When I get pink sheets back for a not-close-to-fundable proposal, it hurts. I can't always read them immediately. It helps that the scores come well before the reviews. But a good review is valuable. That's why I am giving you all this. It may not apply directly to your writing of a proposal, although it does help clarify what the reviewers are looking for. This will be especially clear in the next few posts, as I dissect the instructions for the five specific criteria. Understanding what the reviewers are looking for is something you can keep in your head as you write. You can reread your proposal and evaluate what a reviewer might say. And when you get those ugly and hurtful pink sheets back, you can try to interpret what is being said to you.


  • Deeemm says:

    What are “pink sheets”, grammma?

    • potnia theron says:

      oye... these young kids know *NOTHING*.

      Pink sheets are NIH Study Section reviews. In the Olden Days, they were actually Xeroxed or printed onto pink paper. You had to wait for them to come in the mail. The snail mail, as there was no alternative at the time. Also, scores came in the mail, after they were compiled by hand. The wait could be excruciating. Also anyone could smoke anywhere in any restaurant. Some things are better now.

  • drugmonkey says:

    “Xeroxed”? Confused-white-guy-blinking.gif

  • Microscientist says:

    I would like to hear from folks who write grants with more regularity than I do. Have you ever gotten a critique back anywhere close to the example of an effective review? The line about the lack of innovation being OK because old-school methods work is one I would particularly love to see someone actually use/admit to.
    All the critiques I have gotten back have matched bad example #2 much more closely, especially the discussions of "enthusiasm" and the lack of concreteness in terms of what the big issues are.

    • A Salty Scientist says:

      When I read bad example #2, it seemed like a *normal* NIH review. I have received some NIH reviews that were more like the good example, but stock critiques have outnumbered detailed critiques. (Note: my experience with NSF has been the opposite, with less stock criticism. Not exactly sure why, but my guess is that it's because reviewers do not numerically rank).

      • David says:

        I hate to defend stock criticism, but if I know a proposal has no chance of getting funding, I'm not spending the time to provide detailed feedback. I'm just putting in enough criticism to justify a "do not fund" recommendation. Not a great attitude for teaching, I know.

        Granted, I don't review for either NIH or NSF and I am generally reviewing a handful of grants knowing that we can only fund one.

        • potnia theron says:

          This is a hard one. Reasons to do a better job, and not use stock criticisms:

          1) community/morals/ethics --> when you sign on to review you sign on to do a certain job, and to do it to a certain standard.

          2) It might be you at the receiving end, and do unto others, etc.

          3) The proposal is quite likely to come back. You lessen your ultimate load by getting to a better proposal sooner by suggesting things that might actually help.

          • drugmonkey says:

            You are incorrect with this tone that StockCritique is somehow immoral or unethical. The job is NOT to help the applicant or to fix the proposal.

            StockCritiques arose as a reflection of the community standard about certain repeat issues that affect scoring. It is shorthand.

            You may not agree with one or more of them being relevant. If so, your dispute is with the community wisdom. Not the way it is communicated.

            You may agree with the concept but not agree it applies to this particular grant (usually our own, eh?). Again, dispute is not with the communicative value of the shorthand terminology.

          • potnia theron says:

            I stand (er, sit) corrected. Right. The role of SS and reviews is to advise the NIH on what is good science so they can make funding decisions.

            You are correct about where my irritation is.

            But! The instructions I am getting now, for this study section (my first time as a member in 10 years), are very different from the past. It seems NIH has decided that the StockCritiques are not what they, the NIH and the CSR, want in the reviews.

          • drugmonkey says:

            The NIH circles round and round. The new scoring was put in place with the explicit understanding that ties would be produced. Necessarily this put more burden on Program to sort priorities amongst essentially equivalently meritorious apps (on peer-review grounds). Bullet point comments were a related move. Now they find they don’t like this and want to return to narrative excess to “explain” the scores. Smells like a return to the days when Program looked to meaningless quantitative differences in scores to avoid making informed decisions.

          • potnia theron says:

            One person's informed decision is another's idiotic political move (depending, of course, on whether you get funded or not).
            But I do agree that differences of a percent, or tenths of a percent, are meaningless, and yes, we are back to whether you are reviewed before or after lunch.

            From both receiving and giving scores, I find the bullet points and topic headings useful and efficient relative to rambling paragraphs of ad hominem attacks.

    • Luminiferous aether says:

      Yes, I have, for all my proposals so far (all went to the same SS; I am a new PI so have submitted only a handful of proposals). In fact, they have been so clear that I really didn't have to struggle to understand what they loved and what they hated. I guess my SS and SRO are doing their best to adhere to NIH's directives.

    • potnia theron says:

      I've had good and bad over the years. Sometimes they are frustrating because the reviewer just DIDN'T GET IT. But, as the wisdom goes, if they don't get it, it's because you didn't explain it.

      I think NIH is really trying to get away from stock criticisms, and make the reviews more helpful. Certainly the energy my SRO is investing suggests that.

  • girlparts says:

    I wish NIH would provide preliminary impact scores for grants that are not discussed. You really can't predict the impact score from the criterion scores, and it would be nice to know whether you're at 3s and 4s or at 5s and 6s when deciding whether to resubmit. I've recently gotten better criterion scores on undiscussed grants than on discussed ones. I realize the latter probably dropped during discussion, but it is awfully difficult to interpret....


    • potnia theron says:

      On one hand, it might be nice. On the other, it doesn't really matter. If funding levels are at 7-12%, then for something that is >50%, does it matter if it's 60% or 80%? (Note: if you are at 3s and 4s, you will be discussed.) In fact, it's not clear what the final impact scores *would* be, since discussion changes them greatly. As a rule of thumb, you can use a weighting of approach (mostly) and significance/innovation (less). But again, the scores are often variable, and they wouldn't say much about how the proposal would fare in discussion.

      As for the changes you've experienced: yes, discussion does that. Also, not all criteria weigh equally (which is something NIH encourages).

      The important thing is to look at the reviews, and dissect them. More on that coming.

  • Grumpy says:

    Actually I agree with girlparts about the overall impact scores for not discussed proposals.

    Example: on a recent panel there was a proposal that got a 2, 3, and 7. It wasn't discussed. You can always pull a "not discussed" grant out of the pile, but in practice this hasn't happened in any panel I've sat on.

    So the PI who gets their reviews back may not quite realize that their grant was (IMO) pretty solid and just got an unlucky draw of reviewer. They can try to parse the wording but that is not as reliable without the context of the overall impact scores.

    Anyone know why NIH does not give out this info?

    • drugmonkey says:

      I have never seen a case where 2, 3, 7 wasn't mostly deducible from the criterion scores. Particularly when combined with the comments.

      I'm having trouble imagining what you think is going to happen with a more "reliable" indicator of the overall impact scores. It doesn't tell you much about what your strategy should be. "pretty solid, just got an unlucky draw of reviewer" applies to 2, 6, 7 as much as it does 2, 3, 7. The nature of the criticisms and your ability to alter the revision accordingly is much more important than the "reliability" with which the summary statement communicates a quantitative score to you.

    • potnia theron says:

      1. I have pulled out of the pile, and seen it happen about 1-2/study section.
      2. I think they don't because the final impact changes so much with discussion. Or *can* change (and I've seen it go all three ways: up, down, and staying the same).
      3. If you can't tell it is solid from comments, get a senior colleague to help. I do not think those scores tell as much as the comments.

  • Grumpy says:

    Well, like girlparts, I've had funded proposals and not discussed proposals with roughly similar summary statements (admittedly the funded proposal was a lucky pick-up with score in low 30s).

    I guess I don't have enough experience yet to receive a not discussed summary statement and stick to my guns and resubmit with minor revision. Perhaps at some point you develop enough confidence in your ideas that you know the not discussed verdict was an unlucky fluke?

    So far the two times this happened to me I got sick to my stomach and switched to focusing on other ideas in the next submission round. But I'm open to revising strategy if that's suboptimal...

    • potnia theron says:

      yes. and yes.

      When I've seen 2, 3, 7, the 7 is frequently a near-fatal flaw (in logic, in method) that 2 & 3 missed. So rather than a solid proposal plus one grumpy old whitebeard/bluehair, sometimes it's a matter of 2 & 3 missing something that 7 caught. 7 ought to have written a review that makes that clear.

  • girlparts says:

    My situation was either a 2,3,6 (most likely), a 3,4,6, or possibly a 2,2,6. It's an interdisciplinary grant, and the negative reviewer just didn't buy the premise, which is new thinking in that field, and which is what the other two reviewers liked. That reviewer clearly had a valid point in terms of how well the mechanics of the premise were explained; it's conventional wisdom in my field, but not in theirs. What I was hoping to learn was: if I address that concern, did the folks who already bought the premise think it was good enough to be worth resubmitting as a new grant?

    • potnia theron says:

      You seem to have a relatively good grasp of what you need to improve this proposal, based on your comments about premise (and other things you've said). Why does having a number make any difference at all? Your question (about premise) is not going to be answered by the specific numbers, but by reading the comments. Study sections in general, and reviewers in particular, vary tremendously in the actual values they give. Doing revisions by the score is not going to help you get funded, in my experience. Understanding what is wrong (which was in the comments you did receive), and fixing that, is the way to go.

  • DrugMonkey says:

    How is a more precise indicator of preliminary scores supposed to help? Whether there were 2 good, 1 bad or 1 good, 2 bad didn’t matter. Either the “bad” talked the others down or the “good” failed to talk the others up. If you do something different based on one or the other scenario being true YOU ARE FUCKING UP!
