This does not reach the level of hitting refresh to see your score, but it is in the same phylum. When you submit scores for NIH reviews, you post them on the Commons website. There is a time for submitting and then comes a time for reading (the reviews). And in this time for reading one finds out how close all the scores are for each proposal.
When I was a sprout, and there was no intertubz, one had to wait for the "reading of the scores" in study section, where grants were reviewed in alphabetical order. That carried an unintentional bias against last names starting with X or Y or Z, or even W or T. Only then did one discover whether the reviewers had reached consensus (which at the time was something greatly desired by NIH, now not so much).
One of the best things about being older is that I now have more confidence in my reviews. But as I told my new postdoc, yes, even at my age I still have some imposter syndrome. And one of the worst exposés of one's IS is when you feel that you've done the review wrong wrong wrong relative to the Big Dogs on study section. Being too low (good score) means missing some critical flaw that perhaps one was just not smart enough to see. Being too high (bad score) means having no appreciation for what is important in the field (or in the olden dayes, for why this Other Big Dog should be funded despite writing a dreadful proposal).
So, indeed, this morning, which opened the read phase for the study section that meets next week, I did open Commons first thing and read the other reviews next thing. I will admit to some small relief that there is only one proposal with wildly disparate scores, and most are at least in the same family if not genus.
Of course consensus may reflect bias all around.
Update: lively discussion at the tweets on this.