Professional Society Consultation Meeting, July 30, 2007

Meeting Summary

Meeting Context
Dr. Elias Zerhouni
Director, National Institutes of Health

Peer review has been the fundamental system for science funding for more than 60 years. Multiple efforts have been made to transform the system. This year, an NIH priority is to build on past efforts, putting the issue of peer review front and center. NIH leadership has decided to enlarge the conversation to determine how peer review can function in today’s context, where, for example:

  • The scope of science presented in an application is broad and complex, often necessitating ad hoc reviewers to supply needed expertise.
  • The large number of grant mechanisms perhaps encourages the high number of applications.
  • Scientists need two grants to survive, whereas 30 years ago, one was sufficient.

Peer review cannot be high quality without the best reviewers and a transparent process. Peer review cannot be outstanding if excellent science does not get funded. Further, the peer review process should not impose enormous burdens on scientists.

Our challenges include the following:

  • Scientists’ number-one complaint about the NIH peer review process is the number of applications.
  • The success rate for A0’s (first applications) dropped from 18%-19% in 2003 to 7% in 2006. At the same time, total funding was higher. This reflects an asymmetric decisionmaking process.
  • NIH applications are the longest in the world.

We want to be as bold, comprehensive, and transparent as possible as we reevaluate peer review as a component of a larger system.

Several key people have made significant contributions to this effort: Keith Yamamoto, who has given us the energy to address this over the past 2 years; Lawrence Tabak, who has encouraged us to think outside the box; Norka Ruiz Bravo; and Toni Scarpa.

Review of Ongoing Activities
Dr. Lawrence Tabak
Director, National Institute of Dental and Craniofacial Research; NIH Co-Chair of the Working Group of the Advisory Committee to the NIH Director (ACD) on NIH Peer Review

NIH is conducting a self-study, in partnership with the scientific community, to strengthen peer review in these changing times. Details of these efforts are at

NIH recognizes we must:

  • Continue to adapt to rapidly changing fields of science and ever-growing public health challenges.
  • Work to ensure that the processes used to support science are as efficient and effective as possible for applicants and reviewers alike.
  • Continue to draw the most talented reviewers.

We are seeking input from the scientific community (investigators, scientific societies, grantee institutions, and voluntary health organizations) and our own staff. We are now in the diagnostic stage of our study. The first step was issuing an RFI and creating an interactive web site for soliciting opinions. The RFI has been extended to September 7 because of the robust response received thus far. We encourage attendees of this meeting to reach out to your constituent groups and solicit their feedback.

Other initiatives include:

  • Conducting Deans’ teleconferences July 31 and August 6.
  • Holding regional town meetings, of which this gathering is the first. Other meetings will be scheduled in Chicago, New York, San Francisco, and Washington, DC.
  • Selecting a series of science liaisons to enhance outreach to stakeholders.
  • Holding consultative meetings within NIH.

All of this information will be synthesized by the end of the calendar year. The working group and NIH leadership will then determine next steps, including pilot projects. An implementation plan will guide the development and execution of these pilots. The most successful ones will be expanded, and a new NIH peer review policy will be developed.

Goals for the Meeting
Dr. Keith Yamamoto
Executive Vice Dean, School of Medicine, UCSF; Professor, Cellular/Molecular Pharmacology and Biochemistry/Biophysics, UCSF; Co-Chair of the Working Group of the Advisory Committee to the NIH Director (ACD) on NIH Peer Review

Dr. Zerhouni’s goal for strengthening the peer review process is for NIH to fund the best science by the best scientists, with the least amount of burden. He wants this working group to be bold in its thinking and not be constrained by past or current practice or by perceptions of what can or cannot be done.

With peers reviewing research, there is always the danger of reviewer self-interest. The people making the judgments draw from the same resource pool as those being evaluated. A second conflict is reviewer conservatism. Even if we recruit the best scientists, those are the very people who have created the prevailing paradigms of the day in any given area and who, by nature, are the people who will most fiercely defend those paradigms.

Our challenge – given that peer review is the best way of doing things, and given the intrinsic conflicts – is how can we manage the process to maintain the best of its excellent characteristics and avoid the problems?

The culture of peer review panels has changed dramatically:

  • The number of reviewers has increased from 1,800 in 1987 to 18,000-20,000 last year.
  • Twenty years ago, becoming a study section member was viewed as a great honor. It was an interactive experience, where members worked with a small number of reviewers for a number of years.
  • Today, most reviewers are ad hoc, some communicating via phone, others attending for only one day. Meetings are large, with very few permanent members.
  • Senior scientists have long felt they have “lifetime immunity” from serving on study sections again after they have served once.
  • As the perception of very limited resources has expanded, applicants have sometimes viewed review groups as adversarial rather than supportive.

The process of doing science has changed as well:

  • The nature of science is much different from 20 years ago.
  • The scope of virtually every application is broader. This capacity to attack problems from a number of different approaches is good news for science, but it complicates the review process.
  • With increased scope has come increased complexity and the need for highly focused expertise.

The review process must evolve and adapt to these changes. Lots of planning and pilots are underway, and it is our opportunity now to include consultations with the broader community. We hope you will be thinking about this and encourage your members to do likewise. We want bold ideas as well as incremental ones. We will not make decisions impulsively.

The following are examples of what we are thinking about:

  • Given that budget concerns and intrinsic conservatism have placed undue focus in the review process on preliminary data and experimental detail, should the application structure change to deemphasize these and place greater emphasis on the quality of ideas?
  • If the consensus-building process has flattened the quality of decision-making toward a status quo, if the study section culture has disappeared under the increased load, and if the needed expertise for a given grant cannot be collected into one room, is the conventional study section still the right vehicle for doing the best peer review?
  • Should there be a greater dependence on farming out just those particular areas where expertise is needed? Will this allow study sections to pay greater attention to the impact and innovation of ideas?
  • What is the need for the traditional study section model and its set number of deadlines? Should NIH move toward more of an editorial board model, where a grant application would be sent out for review by experts for commentary on details, and an editorial board (study section) then considers the impact and innovation of the work?
  • Should NIH develop mechanisms in addition to the current review process to capture truly transformative research? Is there a process or another track through which to evaluate applications with a focus more on an investigator’s capacity for bold thinking and less on the project? 

This committee and NIH in general welcome thinking that moves beyond tactical adjustments to bold strategic visions and revisions.

General Questions from Group

Q: What is really on the table? I had thought the focus was on the peer review process itself, but some of the things you talked about suggest there are broader things on the table.

A (Drs. Yamamoto and Zerhouni): We want to think about this process as broadly as possible; no holds barred.

Q: If the focus is beyond review but also on the system of support, are the following two issues on the table: that certain types of research are not well supported, and that 12% of the NIH budget supports the intramural program?

A (Dr. Zerhouni): It is 9.8% of the total. We have to focus the discussion on what within our mechanisms influences the quality of peer review. The most important thing we need to accomplish from the extramural standpoint is to get the best science and best scientists with the least bureaucratic burden. We want to look at the system as it relates to the effectiveness of peer review. We are not going to question the two levels of review.

Q: Is there a trend toward funding biomedical products vs. clinical research?

A (Dr. Tabak): What occurs is that clinician scientists tend not to put in their first amended application at the same rate as do non-clinician scientists. When the clinician scientist does put in the amended application, there seems to be no difference in their success relative to non-clinician scientists. If there are intrinsic roadblocks in our current application review process, we would be very interested in understanding what those are.

Q: There is a need to capture clinical data from larger practices and networked practices, but there is no infrastructure to do this. Can we brainstorm how to gather these data?

A (Dr. Zerhouni): This is not a review program problem but a systemic program problem that relates to the huge increase in clinical service demands, the way academic health centers have evolved, and the increasing regulatory burden on clinician scientists. All of this relates to the fact that there is not a significant home in which to train young scientists to become effective translational clinical scientists. We need a much bolder approach, which is what the Clinical and Translational Science Awards (CTSAs) are trying to do.

Q: What benchmarks will you use for evaluation, and what kind of surrogates are under consideration?

A (Dr. Tabak): It would depend on the nature of what was being challenged. We would consider the possibility of an additional route to supplement or complement the program-driven application process. We already have done a pilot of this nature (we call them MERIT awards or Javits awards), and we could envision doing retrospective analyses to determine how valuable they have been. One could also look, for example, to the experience of other agencies or foundations that use more of that type of model to derive some insight as to success or failure. We will have to be very broad and creative, but it will depend on the specific question at hand as to what surrogates you might draw upon.

Q: Is there a process by which NIH communicates with specialty societies to help the societies and their members understand its philosophies, perspective, priorities, etc., and to take input from the societies, which would help alleviate a lot of the frustration and distress?

A (Dr. Zerhouni): Each institute uses a multiplicity of processes to connect and interact with scientific and patient-related organizations. We have very diverse advisory councils that come from diverse walks of science. Societies that are new and not yet established, or that represent a different field of science, tend to have more difficulty dealing with NIH. We work diligently to encourage more communications. Please let us know how we can improve.

Q: Does the current study section format and membership incorporate the expertise necessary to foresee, appreciate, and assess the challenges associated with the practical diffusion of care regarding the T2 block?

A (Dr. Zerhouni): If you don’t believe that, let us know, and make recommendations. Our ears are fully open. We cannot dictate science from the top down; it has to come from the bottom up, from people like you who establish a paradigm that is judged by the community, as diverse as it is, as being value-adding to either our knowledge or our ability to provide help. This is a market of ideas, concepts, and talent that needs to evolve on its own.

A (Dr. Yamamoto): This comment reflects the importance of bringing into a study section the expertise needed for an appropriate review.

Q: When the funding levels drop to the point that approximately one out of every 10 grants is getting funded, the peer review process breaks down. When people are putting out all this effort, it undercuts almost everything and takes the fun out of peer review. Peer review doesn’t work at low funding levels. Changes will tweak the system but won’t fix it.

A (Dr. Tabak): You have described a view of how the system is today.

A (Dr. Zerhouni): No peer review system can sustain itself unless we can convince the country that the NIH budget is not a subsidy but an investment in the future. If we do not do that well, all the discussions will not help. Things are not as good as they can be, nor are they as dire as they could be, but it is important to work on the fundamentals of support that the whole country needs to bring to science.

Q: In the trenches, what people are experiencing is that they do not get funded the first time, they may not get funded the second time, and, if they are lucky, they may get funded the third time. The impact is that their time is spent revising and frantically gathering new data to supply needed detail, taking time away from productive work.

A (Drs. Zerhouni and Yamamoto): There is a tremendous amount of wasted energy in the system. The challenge we face is how to change the system to one with the least amount of administrative burden.

Q: Particularly in this funding climate, it is important to use a lot of caution when using a merit-based system. We should continue to avoid cronyism and especially not bias against younger people.

Statements/Proposals from Societies Offering Specific Strategies or Tactics for Enhancing Peer Review and Research Support

Dr. Jeffrey A. Frelinger, American Association of Immunologists

AAI has not yet had the opportunity to focus on the broad issues, which we did not know were on the table, so we focused on tweaks to the system:

  • Allow regular review group members to serve only twice a year.
  • Provide reviewers with more time (an additional 4-6 weeks) to submit their own applications during cycles in which they are serving on review panels.
  • Modify the current unscored triage system so PIs receive information on the relative ranking of their grants. Applicants now have to try to divine this information from the comments.
  • Try to maintain reasonable workloads for reviewers. When loads go up, quality of reviews goes down.
  • We support the idea of shortening grant applications from 25 pages to 15 pages. First-time applicants should be allowed to exceed that by five pages with additional preliminary data.
  • We support giving greater weight to the significance of proposals.
  • It is a mistake to arbitrarily shorten face-to-face meetings to one day. The meetings themselves, and particularly dinner meetings, help build the culture. Try not to increase the use of phone or bulletin-board reviews.
  • Do not allow mandatory, automatic triage of greater than 50% of the grants.
  • Try not to use inexperienced reviewers. It is important to keep seasoned reviewers.

Dr. B. Timothy Walsh, Academy for Eating Disorders

Our primary concern with the NIH peer review process is inadequate representation. We are a relatively small field that emphasizes multidisciplinary, collaborative research; therefore, our investigators are subject to conflict of interest problems. We also have trouble finding reviewers willing to spend the time. As a result, NIH review groups often have none or only one person conversant in the state of the science.

Two ideas might help to address this problem; the first is a tweak, and the second is more innovative:

  • The tweak is to require that each study section have two people with clear expertise in the field. We suggest that perhaps NIH conflict of interest rules be loosened appropriately so they are more akin to journal review, excluding reviewers closely tied to the application and allowing more distant collaborators to participate.
  • A more novel idea is to stage a two-tiered review. In the first step, a brief executive summary of the grant (prepared by the investigators) would be sent to a large sample of investigators studying the condition of interest – perhaps everyone who had received NIH funding in the last 5 years. Reviewers would be asked to submit review scores. A summary score (a median, or the average after elimination of the top and bottom 5%) would be generated. This score would be used to select the most promising proposals to be submitted to study section for the second, and final, stage of review.
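The summary-score arithmetic in the second proposal can be sketched in code. This is purely illustrative, not part of the AED proposal: the function name, the 1-9 scoring scale, and the example scores are all hypothetical assumptions.

```python
# Illustrative sketch of the proposed summary score: either a median of
# reviewer scores, or a mean after trimming the top and bottom 5%.
# (Hypothetical function name and 1-9 scale; not an NIH specification.)
from statistics import mean, median

def summary_score(scores, method="trimmed_mean", trim_fraction=0.05):
    """Return a single summary score for one application's reviewer scores."""
    ordered = sorted(scores)
    if method == "median":
        return median(ordered)
    # Trimmed mean: drop the lowest and highest 5% of scores before averaging.
    k = int(len(ordered) * trim_fraction)
    trimmed = ordered[k:len(ordered) - k] if k else ordered
    return mean(trimmed)

# Example: 20 hypothetical reviewer scores for one executive summary.
scores = [2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 7, 7, 8, 9]
print(summary_score(scores, method="median"))   # middle of the ordered scores
print(summary_score(scores))                    # mean after trimming each tail
```

Either variant damps the influence of a single outlying reviewer, which matters when scores come from a large, loosely supervised pool rather than a seated panel.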

These ideas would ensure that all grants receive scientific review by appropriate experts, distribute the burden of work across many investigators, and reduce reliance on opinions of non-experts or a single expert on the study section.

Some members also expressed the concern that having too many outside-U.S. reviewers could lead to more distortion, as they are not as familiar with the system.

Dr. John Chatham, American Physiological Society

  • One ongoing challenge is how study sections receive and respond to appropriate guidance for the peer review process. For example, some study sections focus critiques on methodology rather than innovations, despite repeated instructions to the contrary.
  • Reviewers seem to approach applications for alternative mechanisms with the same mindset as for R01s.
  • The same applications should be reviewed by the same reviewers on resubmission.
  • Clear guidance is needed regarding the role of reviewers in evaluating scientific merit associated with human/animal components. Study section chairs need to enforce existing policies.
  • The lack of funding opportunities for new and young investigators makes it difficult to retain outstanding scientists. Newly independent investigators are often ineligible for mechanisms. As the peer review process is restructured, APS recommends that NIH increase funding opportunities specifically for newly independent junior faculty investigators.
  • Review the grant mechanisms that have grown over the years, particularly R21s. The confusion among different institutes regarding the purpose of R21s needs to be resolved.

Dr. Gregory A. Petsko, American Society for Biochemistry and Molecular Biology

The current perception of tight funding has altered the culture of the scientific community, which has led to too much nitpicking and conservatism. The way to fix this is to change who reviews grants and which criteria they use.

  • Compel senior people back into the peer review system. Having a grant is a privilege, not a right, and privilege entails responsibility. When you get a grant, your name should go into a pool like a jury pool.
  • Assistant professors should not serve on study sections.
  • Bet on people, not projects. The quality of ideas and the track record matter. Past performance guarantees future returns in science.
  • Redesign the grant application to make it 15 pages maximum, but longer for junior people.
  • The current triage system does not provide enough information to investigators.
  • We should not worry about having expertise in every technical area.
  • It is important to continually monitor the IRGs. Some monitoring does go on, but it needs to be institutionalized. Acquire and act on feedback from members and people whose grants have been reviewed.

[The following appears in the ASBMB written statement; its essence was later repeated during the Open Discussion, with the additional comment that if this list has not been helpful, ASBMB would like to know]:

A shortage of volunteers from the biomedical research community willing to serve on study sections is of course a major problem. ASBMB addressed this with the survey of its membership in the spring of 2006 . . . At that time, over 700 ASBMB members volunteered to serve on NIH study sections if asked. These names were passed along to Dr. Scarpa. To date, we haven’t received any feedback from CSR about whether any of these individuals have actually been asked to serve in the peer review system, but if further “nudging” from the ASBMB leadership is needed, we are happy to do so.

Dr. Gail Cassell, American Society for Microbiology

  • It is important to have the right kind of expertise and the balance of that expertise to cover the disciplines of the grants being reviewed.
  • No one beneath the associate professor level should serve, nor should those who have unsuccessfully competed in peer review. Peers should review grant applications.
  • Because of the funding crunch, NIH should reexamine the financial management of its portfolio to look at mechanisms whereby more investigators could be funded by stretching dollars.
  • Grants such as the R21 and K99 need to be reassessed in terms of whether they are meeting the needs of new investigators.
  • Application length needs to be reduced.
  • The review cycle time is particularly long and needs to be reduced without compromising the review.
  • There should be an ongoing review of the reviewers – perhaps by a standing committee – to examine not only the merit of the reviewers themselves, but also the behavior of study sections.

Dr. Mary Ann McCabe, Society for Research in Child Development

SRCD has the following concerns:

  • Reviewer bias, in terms of both conservatism and over-emphasis of some scientific paradigms.
  • Overemphasis on technical detail and preliminary data.
  • Inadequate scientific expertise and inadequate senior scientist expertise on the reviews of some applications.

SRCD is in favor of pilots to:

  • Reduce the burden on reviewers and encourage reviewers to serve, including shorter applications and distant reviews. However, as we move in this direction, it is necessary to put even more safeguards in place to ensure that applications are reviewed with adequate expertise, because applicants will have less space to justify their questions, methodology, and the context of their studies, and it will be more incumbent on reviewers to make those cases to panels.
  • Secure greater expertise and greater senior expertise by changing the process and schedule, reducing the burden, and deemphasizing technical detail.

SRCD also:

  • Encourages greater education of reviewers.
  • Encourages ways to promote new investigators while also preserving the work of senior investigators. One suggestion is to build in incentives for junior investigators within competing renewals.
  • Recommends that NIH work with departments and universities to reward service on review panels.

Jonathan Cohen, 20/20 GeneSystems, Inc., speaking on the topic of SBIRs

The Small Business Innovation Research (SBIR) program has many serious limitations, particularly with regard to review.

  • It favors low-risk incremental research, rather than highly innovative technology development. This is unfortunate in the case of small businesses, which are good at taking risks and doing innovative work.
  • Reviewers usually are academics, few of whom have any product development experience.
  • There is too much weight on grantsmanship and preliminary data, and too little, if any, on the management team.

Following are three concrete suggestions:

  • NIH needs to break down the barrier in the SBIR program between the review and program management. Neither DOD nor NSF has such a barrier. Program managers need to be involved in the actual review process.
  • NIH should provide mechanisms to encourage companies to bring in outside investors.
  • Applicants should be able to submit an executive summary in advance to get a preliminary read.

Kathy Wilson, The American Society for Cell Biology

ASCB’s recommendations are aimed at streamlining the grant review process for applicants and reviewers alike, enhancing participation in the review process by senior investigators, and enhancing the fairness and consistency of review. A couple of key things are already being piloted, and we support them. Our suggestions include the following:

  • Shorten applications to 8-10 pages.
  • Reduce turnaround time so unfunded applications can be put in without missing a cycle.
  • Routinely give bridge funding – perhaps 30% or 50% of last year’s support for a year – to grants that are close but have missed the funding mark.
  • Award an extra year of funding to study section members who serve at a high density (e.g., 10 times out of 12 in a 4-year period). This would compensate for time away from their own labs and could increase participation and perhaps restore full camaraderie and continuity.
  • Increase the number of R01 grants and reduce ineffective grant mechanisms.

We have several suggestions to increase participation by senior scientists:

  • Allow senior scientists to serve once per year, perhaps within a 3-year instead of 4-year term.
  • Make it official that members can serve only twice per year instead of three times.
  • Severely limit the number of untenured assistant professors who can serve.
  • Reduce the frequency of other NIH administrative duties (advisory panels, etc.) required of senior scientists.

To increase fairness:

  • Do not triage R01 applications from applicants without previous R01s.
  • Review applications of new investigators as a group rather than interspersed with senior applicants.
  • Require full participation by people on the review panel (e.g., do not come for one day only; no phone-ins).

To keep innovation possible, create a rapid mechanism to remove certain study sections and to create new ones.

Open Discussion

Application Length

  • Shorten grant applications.
  • If the application length is shortened to exclude some technical detail, it will be difficult for reviewers to assess quality. To address this, a section could be added whereby the applicant would indicate the ways in which the methodology supports the internal validity of the study as well as threats to internal validity.

Clinical/Translational Research

  • Grant mechanisms are not currently best suited to translational research, specifically the R01 and P grants.
  • The CTSA mechanism is excellent, especially in emphasizing translational research, but there is some concern that applications do not address the whole lifespan. Other mechanisms that would encourage multi-institutional or network grants would be more effective.
  • How to facilitate review of clinical applications? Have clinical research grants that deal with important clinical problems reviewed by separate and distinct panels, composed of basic scientists and clinical researchers who have participated in clinical research grants.
  • Regarding the move to a translational emphasis and the people who can make it a successful process: If we can get to some principles we can all agree on, that would help in the decisionmaking.
  • Translational science should also come from grass-roots programs and applications that may not already be recognized. Science can flow from the grass-roots up.

Cycle/Review Time

  • Shorten the turnaround time of grant reviews.
  • Shorten the cycle time of proposals.
  • If you shorten the cycle to 6 months, that would allow two full cycles per application per year. If that were possible, you could implement a rule: Limit any single investigator to a single application at any one time (per mechanism). Active grants would not count against the application limit.

Dream Team

  • Conduct “dream team” nominations on the Web, especially for cross-disciplinary areas.
  • Members of the Society for Psychotherapy Research would be happy to come up with a list of members willing to serve on review committees.
  • [See also the ASBMB statement above.]

Expertise

  • Getting the needed expertise on study sections might mean that reviewers are not senior scientists.
  • It is important to include panel members who have an awareness of pediatric problems and an understanding of the developmental impacts of disease.
  • Focus on service delivery implementation research.
  • Effectiveness research in behavioral science has not been successful. One reason is that most of the review groups were populated by people who did not believe in effectiveness research.
  • Increase representation of different paradigms in the review of behavioral science applications.
  • Examine the credentials of reviewers assigned to clinical proposals to see whether they have clinical expertise. Make panels more clinical-heavy in those cases where translational and clinical research are being reviewed.
  • Clinical representation is needed on review panels to assess the clinical impact of the research.
  • Peer review panels need to be composed of true peers. Academicians are not peers of translational researchers.
  • A multidisciplinary approach is needed for evaluating systems biology grants on complex diseases, and study section make-up needs to reflect that.

Funding Line/Funding

  • A lot of the problems with the peer review system will vanish if the funding line can be improved.
  • Move the burden for PI salaries back to institutions.

Grant Mechanisms

  • Do a comprehensive review of all grant mechanisms at NIH with the intent, whenever possible, to streamline or consolidate a number of these mechanisms.
  • Shift resources from smaller mechanisms to increase the number of R01s.
  • Find ways to fund more R01s.

Interdisciplinary Research

  • The call for interdisciplinary and team proposals is incredible right now. Study section composition is lagging behind where the intellectual guidance of NIH is going. Instead of focusing panels by disease or by behaviors, have permanent members on the panel who can address interdisciplinary and crosscutting expertise.
  • Interdisciplinary work tends to be reviewed through the disciplinary perspectives and culture of the primary reviewer. Set up study sections whose goal is to review interdisciplinary work and whose members have done the work and can focus on interdisciplinary science. It is hard to get tenure as an interdisciplinary scientist, so don’t exclude junior scientists from the mix.

NIH Staff

  • Grant review staff participation in the peer review process would increase the pace at which applicants get their grants funded, decrease cronyism, and create a more integrated, less siloed approach.
  • Don’t go too far in terms of administrative decisionmaking influence. Maintain the scientific editorial board’s recommendation and the integrity of that recommendation.

Report Cards

  • Report cards are needed to pick out people who are doing a bad job of reviewing.

Review, Monitoring, and Evaluation

  • Consider comprehensive, ongoing review and monitoring of large science projects. There should be an efficacy assessment as they are underway and when they have matured. Consider mechanisms for sunsetting and redistribution of resources once the mission has been achieved.
  • Depending on the type of grant, the evaluation of progress may differ. We should be very careful in a priori _______ we see that our interventions, based on the review process [?], lead to the results we want. [This statement by Dr. Willem Kop was unclear.]

Science of Peer Review

  • Think of NIH as a Fortune 500 company. All strategies to review methodology require expertise in statistical techniques and psychometrics. Take into account new ideas in trend analysis and factor analysis, new ways of determining statistical significance, and many others.
  • Tap into judgment and decisionmaking research.
  • Perhaps some of the knowledge about judgment and decisionmaking as well as the nature of groups and group values can help recreate the culture.

Staffing Panels

  • Require people who hold grants to serve on a review panel.
  • Have at least 10% junior people on each section. Their freshness and honesty can counteract some of the conservatism and self-interest.
  • Have a payback, whereby senior scientists must serve on a panel (similar to the requirement for NRSA awardees).
  • Some aspect of both reward and responsibility will get senior investigators back in the queue.
  • Extend grant time or pay to encourage senior scientists to serve on review panels.

Study Section and Review Models

  • We are wasting a lot of time having to come back twice before getting funded. The goal is to make the science better, not the grant.
  • Have a separate or supplemental review mechanism for scientists seeking their first award, where these applications would be reviewed as a collective, separately and not in competition with other applications in the system.
  • Not all applications need to be reviewed by the same mechanisms. For example, clinical research suffers the same problems as team science and interdisciplinary research, in that you cannot get true peer review. Use different ways of review to account for this.
  • Require some continuity of previous reviewers for A1 and A2 reviews.
  • All the same types of applications (e.g., R01s, R21s) should be reviewed together.
  • Look hard at the NIH model for reviewing internal programs. The quality of internal reviews is at least as good overall as that of external reviews. The fundamental difference is that internal reviews evaluate a laboratory, not individual projects. Adopting this internal, retrospective approach would allow clinical researchers and groups of researchers to be considered together.
  • To make the process more efficient, allow only one resubmission.
  • Limit the number of proposals a PI could submit.
  • The old study section model is a good one. We need to triage, but it is hard to do well. Ask for preliminary grant applications – two to four pages – to be reviewed and scored by a broader number of scientists, then examined by the program, with the best of breed advancing with an expanded application to a constructive study section of the kind that used to exist.
  • Have a mechanism where a concept paper can be submitted and receive feedback.
  • Certain technical aspects of review could be done offline, like some journals. Bring the technical reviews back to a smaller, more old-style study section that has the benefit of that technical expertise but is specifically charged with determining the overall merit and consistency of the project.
  • Consider having targeted review of technologies and methods so study sections can take more time to focus on ideas and individuals. This could restore the culture and save time.

Study Section and Review Models: Editorial Board

  • The editorial board model is a good one. Send applications out for focused comment. Because grants are submitted electronically, this should speed the review.
  • Maintain the sense that editorial board-type review has to be a competition by maintaining a competitive timeline.
  • The editorial board process could mean getting five disparate reviews and somehow figuring out where the truth lies, as opposed to sending the application back for revision.
  • The editorial board style would be extremely burdensome, as there would always be a pile of grants on your desk. Face-to-face review is extremely important if applications are to be productively discussed. This is especially important for grants in the middle.

Reviewer Training

  • Educating heads of study sections needs to be taken more seriously. The goal of the head is to resolve conflicts, and they have to be taught how to do that. Young people coming in need to be taught how to review, and that needs to come from the leaders.
  • Need more formalized member and chair training.

Triage

  • Eliminate triage for people seeking their first award, as this is demoralizing and discouraging.

Other Suggestions

  • Address the unique structure of product development partnerships. Take into account the unique pipeline and the way that partnership develops products.
  • Use a portfolio approach: allow grantees to submit a proposed budget for review along with alternatives.
  • Whatever changes are made to the peer review process, do not let the politicization of science influence how science is reviewed.
  • Perhaps some applications are going to the wrong study sections. Let the investigators choose their study section, or have a cycle where study sections really do address the types of applications that are being proposed.
  • There should be more gender, ethnicity, and racial diversity within panels, without using the same people repeatedly.
  • The academic community alliances concept is being tested. These alliances encourage cooperation between practitioners in academic medical centers. The American Association of Neurological Surgeons and the Congress of Neurological Surgeons offer to be a laboratory to test this concept.
  • Maintaining research relevance is difficult when review committees ignore what the councils put out in public announcements. Suggestions: Industrial representation is needed to dilute over-academic focus; allow NIH staff to have input on whether the public announcements have been largely ignored; ask professional societies to have input in the review process, including asking practitioners what works and what doesn’t.
  • Harmonize the IRB to allow clinicians to participate more fully in peer review and have the ability to write and receive grants.
  • In behavioral science reviews, statisticians, rather than scientists, seem to be the decisionmakers.

Question from Dr. Tabak

Might there be some value added to the system if we were to simplify the cornucopia of grant mechanisms to just large, medium, and small opportunities? Would anyone be willing to trade the number of grants they hold, roll them into one appropriately scaled award, and leave the system that way for a given number of years?

Responses from Participants

  • If someone enters the “super category,” they should be competing with “super-category” scientists who are in the same realm.
  • Modular budgeting may have contributed to the ballooning of grant budgets. More input is needed on budget analysis and how to stretch grant dollars.
  • We are trying to cover a very large community, and a one-size-fits-all selection won't allow those with special ideas to find their niche. Still, it is important to look at reducing the number of mechanisms. Look at large grants and see whether they would be more effective pared down.
  • It would be refreshing not to have to enter every cycle. On the other hand, some of the innovation will be lost in some long-term grants.
  • The idea of consolidating an investigator’s grant into one large package is appealing to the system because it reduces the flood of grants, but it would be very scary to the individual investigator because then it becomes all or none. Cornucopia is not the right word: It is “airline fares,” because in addition to all the different mechanisms and the individual variations in details that each ICD imposes, it is truly difficult to know what to apply for, and difficult for review groups to review fairly. Streamlining is essential; this really is a case where less is more.
  • There is a benefit to having grants reviewed every 3 to 4 years; it is an important part of the current process. It would be difficult for review panels to compare small, medium, and large grants. It is worthwhile looking at superlabs with multiple R01s. Are those R01 dollars generating as much productivity per dollar as labs that have only one or two R01s? Setting limits on total numbers of R01s would be reasonable.

Closing Remarks
Dr. Zerhouni

Today is an extraordinary example of how the outcome is better if a diverse group comes up with their own independent ideas. I will try to summarize the discussion from my point of view. Many of the concerns expressed today confirm many of the things I have heard.

  • The funding line is the ultimate, fundamental problem.
  • High anxiety and the need to apply multiple times is damaging to science and scientists.
  • Low success rates create a “traffic jam” problem that leads to low A0 success and thus increased burden on scientists and reviewers.
  • Perception is reality. The batting average per application is too low. We have to work on smoothing out the ups and downs, using bridge grants, for example, and improving the review process and perhaps grant structure.
  • Review has run into a process vs. quality problem.
  • The training of reviewers and panel chairs needs to be more consistent regarding their ability to judge people and track records vs. minutiae.
  • The process cycle is too long from review to award.
  • Is new science crowded out?
  • There needs to be a balance between basic and translational/clinical research, although it is a dynamic balance, with no single equilibrium point.
  • We need to address the cornucopia of grant mechanisms.
  • We need to assess the portfolio of large grants vs. small grants as a function of effectiveness. The R01 proportion of RPGs is too low.
  • There is a greater need for “science on science” – rigorous research on the process itself and the quality of peer review.
  • We need to examine the review of interdisciplinary and complex science.
  • We need to look at the balance of NIH vs. institutional funding requirements.

I built a scan wheel as people were talking. It reflects that the problems are multifactorial, that their capacity to respond is nonlinear, and that addressing one problem can produce unintended consequences for others. It is important to realize that none of the recommendations in isolation will accomplish the goal.

Thank you again for taking the time to come here. This is a community effort, and NIH is only one piece of the system. Your input is most helpful; we will take it very seriously as we go forward.

I want to thank Drs. Keith Yamamoto and Larry Tabak for their efforts, and Amy Adams and staff for putting this meeting together.

This page was last reviewed on August 23, 2007.