The peer review process for biomedical research project grants from the National Health and Medical Research Council (NHMRC) in Australia

In Australia, the National Health and Medical Research Council (NHMRC) is the biomedical research agency, equivalent to the NIH in the US, the ANR in France, or the CIHR in Canada, that distributes funds to researchers. The allocation of funds is based on a competitive process. There are many schemes available, from fellowships to program and development grants. The main scheme in the Australian biomedical landscape is the NHMRC project grant, equivalent to an NIH R01. Over AUD $400 million a year, half of the NHMRC budget, is distributed to Australian researchers through this scheme.

The evaluation of the quality of the proposals requires a 'fair, transparent and rigorous peer-review process'. I have been participating in this process for quite some time. Here is a short summary of the peer-review process; more details are available from the NHMRC. If you already know the process, feel free to skip the next paragraph.

The NHMRC project grant peer review process:

Once a proposal is submitted to the NHMRC, it is checked for compliance and distributed to a 'super panel' (neurosciences, biochemistry, ...). The keywords and the proposal summary provided by the applicants are absolutely essential for assigning a proposal to the right specialised panel. Panel members declare their conflicts of interest before evaluating the proposals. Thirty-seven Grant Review Panels, or GRPs (equivalent to NIH study sections), evaluate the scientific quality of each proposal. GRP members also declare their expertise to review a proposal. The proposals then go to the Assigners Academy, which finds external reviewers to evaluate them. Each proposal is reviewed by two panel members (the first and second spokespersons) and two external reviewers. The proposal is then scored and the reviews are provided to the applicants. Importantly, only panel members score the grant. The applicants reply to the reviewers and the proposal is scored again by the first and second spokespersons. The panel members then decide whether the proposal will be discussed at the panel meeting.

Grant proposal scoring:

There are three descriptors used to score a proposal; details are provided in the NHMRC category descriptors. The most important one is scientific quality (50% of the total score). It describes the scientific quality of the project: is it well defined, flawless, and feasible? The second descriptor is significance and innovation (25%), which assesses the innovative aspects and the significance of the project. The third descriptor is the track record of the applicants (25%), which evaluates the applicants' publications, funding record, and expertise to successfully complete the project.

I have been participating in this peer review process as an external reviewer and as a GRP panel member for quite some time. Here are some personal opinions on this peer review process. These views do not represent the NHMRC, my university, or my colleagues, and this is not a scientific study. Most of the aspects of the peer review process I will point out are certainly known to many who play this game.

Who is applying for project grants?

I've looked at some data I have access to but cannot share. In the small proportion of submitted grants I have seen (~200-300 out of 5,000), only a small fraction of applicants applying as CIA (chief investigator A) were female (~26%). Interestingly, as many junior investigators at academic Level C (41%) as senior investigators at academic Level E, i.e. professors (40%), are applying for project grants. Given the low success rate at Level C (see my previous post), many junior investigators won't get funded. The important question is why female and junior investigators are less likely to get funded. I will try to give my view on this.

The scientific quality descriptor:

Preliminary data are crucial to getting a high score for scientific quality: they give reviewers confidence in the feasibility of the proposal. Some applicants provide tonnes of preliminary data, easily spanning three to four or more years of work. Unlike junior investigators, large labs have the ability to produce massive amounts of preliminary data generated by two or three postdocs, and so outcompete junior investigators in producing sufficient evidence to make the proposal look highly feasible. I have also noticed that senior investigators play it very safe and write flawless proposals with every single aim and sub-aim backed by preliminary data. However, these proposals are most of the time not very exciting, even boring, as they are a continuation of previous work. Junior investigators are less experienced in grant writing and do not have the ability to produce large amounts of preliminary data. They also take riskier approaches. It is therefore easy for a reviewer to point out all the flaws and lower the scientific quality score of the proposal.

Significance and Innovation:

From all the grants I have read and reviewed, it is clear to me that junior investigators' proposals are more likely to be innovative than those of more experienced investigators. However, these innovative grants are also more likely to be flawed, as many aspects of such studies are unknown or unpredictable. Sadly, this gets severely punished by reviewers, as the feasibility of a project is rewarded more than its innovation when deciding what to fund.

Track record:

The track record of the team counts for 25% of the total score, and it clearly favors senior investigators, as they are more likely to publish in glamour journals, to simply publish more papers, or to attract more funding and awards. This is less true, or not true at all, for junior investigators, and they rarely have the ability to include 'big names' in their team. In short, although the scoring is meant to be relative to opportunity, junior investigators are disadvantaged. Female scientists are seriously disadvantaged by career interruptions. Career interruptions are now given more consideration, but one extra year of publications or so is clearly insufficient to compete. To obtain a track record score equivalent to senior investigators', junior investigators must include mentors and 'big names' in their proposals, but this is very challenging as they are competing with those same 'big names'. It turns out that investigators with outstanding track records can write boring proposals, still make it to the full panel review, and have reasonable chances of getting funded. The reason is that they score high on track record anyway, and while they score low on innovation, they still score high on scientific quality, as the proposal is flawless.

How to change the descriptors scoring?

Clearly, these descriptors favor feasibility over innovation. There is little reward for innovative biomedical science in Australia. To reverse this trend, innovative projects must be rewarded and risks must be taken. I would suggest that 40%, instead of 50%, of the total score should go to scientific quality. To better reward innovative projects, 40% instead of 25% should be scored on the significance and innovation of the project. The remaining 20% should score the track record, to ensure the right expertise is available to successfully complete the project. Importantly, career interruptions for female academics should be better taken into account. Another important change would be to cap the number of NHMRC project grants at four per grant holder, and to not allow program grant holders to apply for project grants. I would also propose reducing the length of a proposal to six pages, focusing only on the research plan and the main idea; this would leave little room for large amounts of preliminary data. Together, these changes would reshape the funding landscape and share funding more fairly with junior investigators and female academics. They would also reduce the number of ghostwritten grants, which are endemic in the Australian biomedical funding system.
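To make the proposed weighting change concrete, here is a minimal sketch of how the two weighting schemes play out. The weights (50/25/25 vs 40/40/20) come from the text above; the descriptor scores and the 1-7 scoring scale are hypothetical values I have chosen purely for illustration.

```python
# Weights from the text; everything else is an illustrative assumption.
CURRENT_WEIGHTS = {"scientific_quality": 0.50,
                   "significance_innovation": 0.25,
                   "track_record": 0.25}
PROPOSED_WEIGHTS = {"scientific_quality": 0.40,
                    "significance_innovation": 0.40,
                    "track_record": 0.20}

def weighted_score(scores, weights):
    """Combine per-descriptor scores into a single weighted total."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[d] * w for d, w in weights.items())

# Hypothetical proposals, scored on an assumed 1-7 scale:
# a safe, flawless, but dull proposal from an established lab...
safe = {"scientific_quality": 6.5, "significance_innovation": 4.0,
        "track_record": 6.5}
# ...and a riskier, more innovative proposal from a junior investigator.
risky = {"scientific_quality": 5.0, "significance_innovation": 6.5,
         "track_record": 4.5}

for name, s in (("safe", safe), ("risky", risky)):
    print(f"{name}: current={weighted_score(s, CURRENT_WEIGHTS):.3f}, "
          f"proposed={weighted_score(s, PROPOSED_WEIGHTS):.3f}")
```

With these made-up inputs, the safe proposal wins comfortably under the current weights (5.875 vs 5.25), whereas under the proposed weights the two tie at 5.5: shifting weight from scientific quality to significance and innovation narrows the gap that feasibility alone creates.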

In conclusion:

The funding rate from the NHMRC has been steadily declining over the years, and it is challenging for everyone to get funded. Sadly, the way the peer review process for project grants is shaped clearly favors a conservative, risk-averse approach and does not allow many highly innovative projects to get funded, as the scoring descriptors reward feasibility over innovation. In short, the peer-review process is biased towards established investigators. Worryingly, many junior investigators and female academics won't get any funding and will leave academia. As recently highlighted in an eLife paper, we are losing a generation of scientists, and we must avoid this.


This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.