Professionalism and Integrity on Study Section

Feb 05 2018

There was some discussion on the tweets lately about What Happens At Study Section. Given that NIH triages 50% of the proposals (that is, does not discuss them), making that cut is important. Part of the discussion concerned whether one of three reviewers could "sink" a proposal.


A procedural note (which was cleared up on the tweets, but I can't find it and it's worth repeating here): if a proposal is triaged, there is no "voting outside the range". Voting outside the range is something that happens after a discussion and after final scores are given by the reviewers. If a proposal is triaged, no one votes and there is no range to be outside of.

In the study sections on which I have sat, since triage became a thing, there is a list of triaged proposals that is circulated prior to the meeting. ANYONE, not just the assigned reviewers of a particular proposal, can call for review of a proposal (before the meeting) and move it to the discuss group. Moreover, there is another chance to do this during the study section meeting. I've seen proposals moved from triage to discussion at nearly every study section, sometimes at the request of non-reviewers. If someone feels strongly about a proposal, they can force a discussion.

Discussions tend to be complex things. If someone feels very strongly about a proposal, they can try to drive the discussion. This tends not to happen. I have seldom seen anyone be irrationally negative about a proposal. And almost always, anyone who says something negative tries to balance it with what they do perceive as positive about the proposal. I have more often seen strongly positive reviews.

"Voting out of range", for those who don't know, happens at the end of the discussion. The three (or sometimes four) reviewers each give a final score, and indicate how they have changed from their initial score, based on the discussion. Then the chair asks if anyone is voting out of the range of the reviewers. This is for the non-reviewers, as the reviewers set the range.   It happens. As DM indicated, it's often more than one person, and more often towards a worse score. My sense is that it frequently occurs when non-reviewers think that a problem raised in review is more serious than the reviewer thinks it is. They often explicitly indicate this verbally (which is how I have come to think this).

In general, IME, preliminary scores tend to have a wider range than the final scores given before voting. Most reviewers are not only listening, but actively engaged in discussing the proposals. Although I have heard the words "I am excited by this proposal" and "I am disappointed in this proposal", by and large, reviewers are not irrational or overly emotional about proposals. They tend to base their reviews on points of substance, and to follow the NIH guidelines for review.

If you haven't seen the (very extensive) guidelines for reviewers, they are well worth looking at before you submit. The guidelines are a real rabbit hole, but one worth pursuing. For example, the guidelines for an R01 include:

How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?

And I have read proposals that include the text:

Successful completion of these aims will change the treatments available for dysfunctional bunny hopping.

There is absolutely nothing wrong, and quite a lot right, with telling the reviewers what they are looking for in your proposal.

It is easy and often emotionally satisfying to be angry at Study Section and especially, IME, Reviewer #2. They Don't Get It. They are prejudiced against bunny hopping studies. There was one Reviewer who sank my study.  These things are not impossible. They are just not likely.


3 responses so far

  • Ola says:

    One variable I've seen across different study sections that might impact the lone-gunman hypothesis is the size of the panel itself. The smallest I've seen for a regular sitting study section (not a special emphasis panel or other ad-hoc thing) was 13 people, but I've also seen them go as high as 45 people in the room. Obviously the actions of a single voting member are less impactful the larger a panel becomes, so we should all lobby for bigger study sections.

    The other thing that came up on Twitter was general criticism of assigned reviewers voting up or down "just to provide a range". Personally I don't mind this. It's not good to listen to reviewers harp on for 25 minutes and then have all three of them score the proposal a 2. If you want to vote outside you have to raise your hand, and - NewsFlash - this may be something junior/female/minority reviewers are uncomfortable with. So, thanks for providing a range, reviewer 3!

  • DrugMonkey says:

    I agree with Ola that one of the biggest limitations to a very strong and consistent system is that it depends on people speaking up and defending and/or advocating for their position.

    This is much less of a problem wrt the meat of the scientific discussion, IME. It is a bigger issue when cultural expectations/norms and meta-review issues are on the table. I have no clear solutions other than increasing diversity* (particularly career stage diversity) on panels. The more ppl feel they are there by right, instead of on sufferance, the more likely they are to speak up.

    *I think I recall that the long-running CSR gender split is about 33% women. I was on a panel with roughly this split recently and I have to say it looks really underrepresentative to my eye. Really wish SROs would shoot harder for 50%+.

  • qaz says:

    In every case where I've seen someone be irrationally negative about a proposal, the other two primary reviewers have spoken up. Sometimes that is in the discussion and sometimes it is in recovering a grant from triage. Interestingly, I think it is harder to counter irrational exuberance about a proposal than irrational negativity.

    I always come back to my personal experience here, which was that I was convinced study section was unfair when they judged my grants, until I got onto a study section, at which point I had to re-assess my entire perspective on the reviews I was getting back.
