Predicting Junior Faculty Success

Sep 15 2017

In PeerJ there is an article titled "Prediction of junior faculty success in biomedical research: comparison of metrics and effects of mentoring programs," by Christopher S. von Bartheld, Ramona Houmanfar, and Amber Candido. It is very interesting. Let's start with the abstract:

Measuring and predicting the success of junior faculty is of considerable interest to faculty, academic institutions, funding agencies and faculty development and mentoring programs.

No shit Sherlock. It's also of interest to the junior faculty.

Various metrics have been proposed to evaluate and predict research success and impact, such as the h-index, and modifications of this index, but they have not been evaluated and validated side-by-side in a rigorous empirical study.

This is true. And we are scientists. So we like to do rigorous empirical studies.

Our study provides a retrospective analysis of how well bibliographic metrics and formulas (numbers of total, first- and co-authored papers in the PubMed database, numbers of papers in high-impact journals) would have predicted the success of biomedical investigators (n = 40) affiliated with the University of Nevada, Reno, prior to, and after completion of significant mentoring and research support (through funded Centers of Biomedical Research Excellence, COBREs), or lack thereof (unfunded COBREs), in 2000–2014.

This is what they used as predictors: h-indices, publication counts, etc. What was their outcome variable? It took significant reading to figure this out. Their definition of success was not tied to the COBRE funding (as far as I can tell), but it's not as straightforward as I might like.

A successful faculty was defined as having external (not only COBRE-funding) of any amount and duration (in all of our cases at least two years of funding), and in addition publishing on average at least one last-author (=senior author) paper in PubMed per year during or upon graduation from the COBRE (or a comparable time frame when the COBRE was not funded). [my bold]

I think this means that COBRE funding did not count towards success. But what's significant is that success was not tenure/no tenure but other publishing and funding measures. There are some more details about the relationship between funding and success, including:

Simply staying on COBRE funds for extended periods of time without other external grant support was not considered an independent, externally funded and successful investigator in biomedical research.

(I will forward the part about independent to my friend on the tenure committee).

So let's get to the good stuff. What DID predict success? First the stuff that did not predict success. Two interesting findings:

The h-index and similar indices had little prognostic value.

Publishing as mid- or even first author in only one high-impact journal was poorly correlated with future success.

What did matter?

Remarkably, junior investigators with >6 first-author papers within 10 years were significantly (p < 0.0001) more likely (93%) to succeed than those with ≤6 first-author papers (4%), regardless of the journal’s impact factor.

Publishing some begets publishing more. But what is really critical (to tenure committees everywhere) is that the damn IF did not matter. They also found that the COBRE program made a difference, whether it was through money to support work, or the mentoring/group activities involved in those grants.

The benefit of COBRE-support increased the success rate of junior faculty approximately 3-fold, from 15% to 47%.

Or is it that the selection criteria for COBRE pick out people who would be successful anyway, so that the program is just a marker, not providing any additional benefit? They claim that there is no selection bias because their control group was made up of people proposed as "project leaders" on COBREs that were not funded. The "mentored group" were those who proposed to be a project leader and became one. It is not clear that this is truly a bias-free control, as we don't know why some COBREs were funded and others were not. But there is a small bit of information tucked away that bothered me, which does suggest some potential bias issues in the data:

The gender of junior faculty was 50% female vs. 50% male for the control, and 25% female vs. 75% male for the mentored group.

I do believe that these authors really did try to control for as much as they could. Take the results with a grain of salt, but they are worth thinking about.

The authors believe that these results support the utility of mentoring programs like COBRE. I don't have a problem with that, even if there is selection bias. Anything we can do to help jr faculty is good. They have some data on success rates for males vs. females (not parity), which might be related to the problems in the selection process for COBREs. Also interesting is the comparison of native English speakers vs. 2nd-language speakers:

 Faculty with English as 2nd language had more success (8/18 = 44.4%) than native English speakers (6/22 = 27.3%).
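The quoted numbers are easy to sanity-check. A minimal sketch that just re-derives the percentages from the counts given in the post (none of this touches the paper's raw data; it is arithmetic on the figures quoted above):

```python
# Re-derive the success-rate figures quoted in the post.
# Counts (8/18 ESL, 6/22 native) and percentages (15% -> 47%)
# are taken directly from the text above.

def pct(successes: int, total: int) -> float:
    """Success rate as a percentage."""
    return 100 * successes / total

# COBRE support: success rate rose "from 15% to 47%"
fold_change = 47 / 15
print(f"COBRE fold change: {fold_change:.1f}x")  # ~3.1, i.e. "approximately 3-fold"

# English-as-2nd-language vs. native English speakers
esl = pct(8, 18)
native = pct(6, 22)
print(f"ESL: {esl:.1f}%, native: {native:.1f}%")  # 44.4% vs. 27.3%
```

So the "approximately 3-fold" claim checks out (47/15 ≈ 3.1), and the subgroup percentages match the stated counts.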

In the end, their main conclusion is quite simple:

We show that a relatively simple metric—the number of 1st-author publications—far outperforms other metrics such as the h-index, journal impact factors, and citation rates in predicting research success of junior faculty.

With the usual caveats:

However, proxies alone are insufficient in evaluating or predicting faculty success, and further work is needed to determine which aspects of the COBRE and other faculty development programs contribute to success. Nevertheless, our study can now be replicated and validated at other biomedical institutions to predict the most suitable targets for faculty development and to evaluate and improve various types of mentoring programs.

Which means, we (senior people, mentors, helpers, people who want to see others succeed) need to do what we can to help junior people publish. This must include making sure that when we judge them, be it in study section or on tenure committees, we stop emphasizing and valuing Glamour Pubs and start talking about getting the damn stuff out.

Full Citation: von Bartheld CS, Houmanfar R, Candido A. (2015) Prediction of junior faculty success in biomedical research: comparison of metrics and effects of mentoring programs. PeerJ 3:e1262

9 responses so far

  • Rheophile says:

    Since their success metric is "publishing 1 paper/year and funding," I'd be very curious to see the splits for people successful at 1 paper/year and those successful at funding. (And, of course, tenure.)

    I know many people who aim for more "complete" stories, higher-impact journals, etc. As a result, each paper has larger time commitment, plus there's higher variance, and so they don't publish as many papers (likely to fail the 1 paper/year as a new faculty). But I suspect they do OK in the grant process and at tenure time.

I'd be a little worried that their criteria essentially select people like me who prefer publishing smaller work regularly - and that this is also half of their success criteria.

  • chall says:

I have to say I'm a little surprised about the first-author idea for junior faculty. I also thought the important part was having their own last authorships as junior faculty, to show that they are independent and have their own lab. Guess I was mistaken.

    • potnia theron says:

      I think the variation due to other factors, such as sub-field even within Medical Schools, let alone biology or other STEM, or teaching expectations, or even just academic culture, will make a difference. You may not be mistaken.

    • Marcus Webster says:

You might be surprised to learn that in many fields (physiology, ecology, systematics) the first author is usually the lead, senior researcher and the one most important to the work. The last author is (depending on all sorts of things) sometimes the least contributing.

      • chall says:

        I know that that's the case in several fields, but happy to be reminded again. I admit that I wrote my response based on the "biomedical" idea in the question where the last author is the PI with the lab.

  • AcademicLurker says:

    I too was a bit confused by the first author criterion. Are they counting first author papers during the pre-faculty appointment period?

    I also wonder why they didn't use tenure as their metric for success, since, as far as most people are concerned, that's the measure that counts.

    • A Salty Scientist says:

      It looks like it is the number of 1st author papers in the 10 years preceding the COBRE start date. So >6 graduate/postdoc 1st-author papers. They note that in one case the PI had already *transitioned* to last authorships, so they counted those for that individual.

      I think the confusion is predicting success prior to vs. after the COBRE start date. Without any data at hand, my hypothesis is that both the number of last-author pubs AND their perceived quality are predictors for continued funding as a post-tenured prof.

    • potnia theron says:

      they were looking at papers post-program, not even including during faculty appt. I believe this was because they wanted to see if their program made a difference to success.

      As for not looking at tenure, I can only guess. One, some clinical depts. do not do tenure. Some do not do tenure till Full Prof. But, they are counting pubs & funding as success, which for most of us is what counts towards tenure.

  • Almost Tenured says:

    My anecdotal experience with the cohort of PIs who started around the same time as me would support the conclusions of this paper. The newbies who come in with a single splashy Science paper just aren't satisfied with anything less than that same level of success, and for many of them it doesn't happen, and they end up publishing nothing. The newbies who come in with a pile of JBC* papers continue to make their pile bigger and do just fine when it comes to grants and tenure.

    *just an example of a well-respected medium IF journal
