In PeerJ there is an article titled "Prediction of junior faculty success in biomedical research: comparison of metrics and effects of mentoring programs," by Christopher S. von Bartheld, Ramona Houmanfar, and Amber Candido. It is very interesting. Let's start with the abstract:
Measuring and predicting the success of junior faculty is of considerable interest to faculty, academic institutions, funding agencies and faculty development and mentoring programs.
No shit, Sherlock. It's also of interest to the junior faculty.
Various metrics have been proposed to evaluate and predict research success and impact, such as the h-index, and modifications of this index, but they have not been evaluated and validated side-by-side in a rigorous empirical study.
This is true. And we are scientists. So we like to do rigorous empirical studies.
Our study provides a retrospective analysis of how well bibliographic metrics and formulas (numbers of total, first- and co-authored papers in the PubMed database, numbers of papers in high-impact journals) would have predicted the success of biomedical investigators (n = 40) affiliated with the University of Nevada, Reno, prior to, and after completion of significant mentoring and research support (through funded Centers of Biomedical Research Excellence, COBREs), or lack thereof (unfunded COBREs), in 2000–2014.
This is what they used as predictors: h-indices, publication counts, etc. What was their outcome variable? It took significant reading to figure this out. Their definition of success was not tied to the COBRE funding (as far as I can tell), but it's not as straightforward as I might like.
A successful faculty was defined as having external (not only COBRE-funding) of any amount and duration (in all of our cases at least two years of funding), and in addition publishing on average at least one last-author (=senior author) paper in PubMed per year during or upon graduation from the COBRE (or a comparable time frame when the COBRE was not funded). [my bold]
I think this means that COBRE funding did not count towards success. But what's significant is that success was not tenure/no tenure but other publishing and funding measures. There are some more details about the relationship between funding and success, including:
Simply staying on COBRE funds for extended periods of time without other external grant support was not considered an independent, externally funded and successful investigator in biomedical research.
(I will forward the part about independent to my friend on the tenure committee).
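Their success criterion boils down to a simple predicate. Here is a minimal sketch of my reading of their definition; the field names and the function itself are mine, not the authors':

```python
def is_successful(external_funding_years: float,
                  last_author_papers: int,
                  years_observed: float) -> bool:
    """Sketch of the paper's success definition (my paraphrase):
    external, non-COBRE funding (at least two years in all their cases)
    plus, on average, >= 1 last-author PubMed paper per year."""
    externally_funded = external_funding_years >= 2
    # >= 1 last-author paper per year, on average, over the window observed
    productive = years_observed > 0 and last_author_papers >= years_observed
    return externally_funded and productive
```

So an investigator with three years of external funding and five last-author papers over five years would count as successful, while one who only ever held COBRE funds would not, no matter how long they stayed on them.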
So let's get to the good stuff. What DID predict success? First, the stuff that did not predict success. Two interesting findings:
The h-index and similar indices had little prognostic value.
Publishing as mid- or even first author in only one high-impact journal was poorly correlated with future success.
What did matter?
Remarkably, junior investigators with >6 first-author papers within 10 years were significantly (p < 0.0001) more likely (93%) to succeed than those with ≤6 first-author papers (4%), regardless of the journal’s impact factor.
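Their winning predictor reduces to a single threshold. A minimal sketch (the rule and the quoted rates are from the paper; the function name and encoding are my own):

```python
def predict_success(first_author_papers_10yr: int) -> bool:
    """Predict success iff the investigator has more than 6 first-author
    papers within 10 years, regardless of journal impact factor.
    In the paper's sample: 93% of those above the threshold succeeded,
    vs. 4% of those at or below it (p < 0.0001)."""
    return first_author_papers_10yr > 6
```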
Publishing some begets publishing more. But what is really critical (to tenure committees everywhere) is that the damn IF did not matter. They also found that the COBRE program made a difference, whether it was through money to support work, or the mentoring/group activities involved in those grants.
The benefit of COBRE-support increased the success rate of junior faculty approximately 3-fold, from 15% to 47%.
Or perhaps the selection criteria for COBREs pick out people who will be successful anyway, and the program is just a marker, not providing any additional benefit. They claim that there is no selection bias because their control group had been proposed as "project leaders" in COBREs that were not funded, while the "mentored group" were those who proposed to be a project leader and became one. It is not clear that this is truly a selection-bias-free control, as we don't know why some COBREs were funded and others were not. But there is a small bit of information tucked away that bothered me, which does suggest some potential bias issues in the data:
The gender of junior faculty was 50% female vs. 50% male for the control, and 25% female vs. 75% male for the mentored group.
I do believe that these authors really did try to control for as much as they could. Take the results with a bit of salt, but they are worth thinking about.
The authors believe that these results support the utility of mentoring programs like COBRE. I don't have a problem with that, even if there is selection bias. Anything we can do to help junior faculty is good. They have some data on success rates for males vs. females (not parity), which might be related to the problems in the selection process for COBREs. Also interesting is the comparison of native English speakers vs. 2nd-language speakers:
Faculty with English as 2nd language had more success (8/18 = 44.4%) than native English speakers (6/22 = 27.3%).
In the end, their main conclusion is quite simple:
We show that a relatively simple metric—the number of 1st-author publications—far outperforms other metrics such as the h-index, journal impact factors, and citation rates in predicting research success of junior faculty.
With the usual caveats:
However, proxies alone are insufficient in evaluating or predicting faculty success, and further work is needed to determine which aspects of the COBRE and other faculty development programs contribute to success. Nevertheless, our study can now be replicated and validated at other biomedical institutions to predict the most suitable targets for faculty development and to evaluate and improve various types of mentoring programs.
Which means, we (senior people, mentors, helpers, people who want to see others succeed) need to do what we can to help junior people publish. This must include making sure that when judging them, be it in study section or on tenure committees, we stop emphasizing and valuing Glamour Pubs, and talk about getting the damn stuff out.
Full Citation: von Bartheld CS, Houmanfar R, Candido A. (2015) Prediction of junior faculty success in biomedical research: comparison of metrics and effects of mentoring programs. PeerJ 3:e1262 https://doi.org/10.7717/peerj.1262