NIH Regional Consultation Meeting on Peer Review

Meeting Summary

October 22, 2007 – Washington, D.C.

Overview
Dr. Raynard Kington
Deputy Director, National Institutes of Health (NIH)

All of us at the agency thank you for taking time to help us address the important issue of peer review. You are here because of a community effort; partnership is the way our agency works. We're asking you to think hard about this topic and share your ideas with us.

The two-tiered NIH peer review system is the cornerstone of the enterprise of biomedical and behavioral research in this country and is emulated and respected worldwide. Let me just briefly describe how it works; if you want to know more, I encourage you to go to the web site for NIH's Office of Extramural Research.

The first level of peer review takes place in scientific review groups (SRGs) composed mainly of scientists from the extramural community. These SRGs, also known as study sections, are managed by scientific review administrators (SRAs), who are located at the Center for Scientific Review (CSR) but also at each institute and center. The study sections in CSR help evaluate investigator-initiated applications. In contrast, the awarding institutes and centers have their own review staffs that manage study sections to help evaluate applications submitted primarily in response to RFAs or other unique programs targeted to a specific institute or center.

The second-level review is carried out by the NIH advisory councils. These councils are composed of extramural scientists and public representatives. They ensure that NIH receives advice from a cross-section of the U.S. population geographically and demographically, and from interested constituencies.

Over 30,000 scientists and members of the public advise us via study sections and advisory committees every year. This extraordinary guidance helps NIH choose the best scientists and best science to address the most important and compelling public health problems. The funds go to over 300,000 scientists at various stages and levels of the system, over 3,000 research institutions in every state, and many countries.

According to A Half Century of Peer Review, 1946-1996, the first study section was devoted entirely to research related to syphilis. Over the next few months, other review panels were set up to look at the important scientific and medical issues of the day. The topics were a reflection both of the public health challenges the country faced in the 1940s and of the scientific opportunities most likely to advance broad biomedical research in this country. In 1949, Dr. Eleanor Darby was appointed as an executive secretary of the study sections, the first female scientist to serve on a study section. It took nearly 20 years for an African-American – Dr. Frank Johnson, an Army pathologist – to be appointed to a study section. Today we devote an extraordinary amount of time to ensuring our study sections include a diverse group of scientists, in terms of demographics and perspectives.

We need to act to ensure that our peer review system keeps up with the ever-changing scientific and public health world before us. How can we find experienced and able reviewers to supply needed expertise? Are we encouraging more applications by offering too many grant mechanisms? What about technical issues? NIH grant applications are thought to be among the longest in the world. Questions have been raised about success rates for first applications. And these are just a few of the questions we keep hearing. Although there have been periodic reviews of the system, the last one was about 7 years ago. The NIH leadership has decided to enlarge the conversation to determine how our peer review system can best work in today's context.

We really care about your opinions, and we'll be listening to you. I want to thank Drs. Larry Tabak and Keith Yamamoto, who have devoted extraordinary time, effort, and thought to this process.

Review of Ongoing Activities

Dr. Lawrence Tabak
Director, National Institute of Dental and Craniofacial Research, NIH; Co-Chair of the Working Group of the Advisory Committee to the NIH Director (ACD) on NIH Peer Review

We are involved in this self-study – in partnership with members of the scientific community and advocates – to strengthen peer review in these times when science has increased in its breadth and complexity. NIH must continue to adapt to rapidly changing science and ever-growing public health challenges.

However we accomplish this, we have to ensure that the processes we employ are both efficient and effective for applicants and reviewers alike. The approach is straightforward: We're seeking input from a broad range of stakeholders, and today we're focusing on voluntary health organizations. We've also sought input from our own staff. Two committees were created to help oversee this program. The first is the Working Group of the Advisory Committee to the Director (ACD) of NIH, which Dr. Yamamoto and I co-chair. A number of people on this committee served on the panel that produced the previous formal report on peer review, the so-called Boundaries Report. The second, equally strong group of individuals comprises the internal NIH Steering Committee Working Group on Peer Review, which I co-chair with Dr. Jeremy Berg, director of the National Institute of General Medical Sciences.

Here is a quick review of how peer review is accomplished at NIH. An investigator comes up with an idea for a proposal. He or she submits this to what we refer to as the first level of scientific review: a group of the individual's scientific peers, who gather to evaluate the merit of the proposal. About 70% of that review is conducted centrally through the CSR at NIH, and the remaining 30% is distributed across the institutes and centers at NIH. The information related to the review is then simultaneously transmitted to the applicant and the program officer in the appropriate NIH institute or center. Together, they review and evaluate the feedback from the first level of peer review. That information is then transmitted to the respective national advisory council of the institute or center in question. This group serves as our Board of Directors, and it conducts the second level of peer review by assessing not only the applications themselves, but also how they stand in the context of the overall mission of the institute and the balance of the scientific portfolio the institute and center support. Then they make their recommendation to the institute leadership, which makes final decisions related to the allocation of funds. Of course, these allocations need to be justified each year to Congress, which appropriates funds for our use in supporting research and training activities around the country.

In our decision-making process, every voice counts. We seek and listen to diverse opinions from many quarters, including the general public, scientists, patients and their advocacy groups, voluntary organizations, and the review committees I just outlined for you. Congress is not shy about letting us know how it feels about certain things, and then we have various advisory boards, health professional organizations, professional societies, industry groups, and so forth.

We are now in the diagnostic phase of this assessment. NIH issued a request for information (RFI), asking stakeholders for their opinions on six specific questions. Respondents were asked to describe what they saw as the challenges to the NIH support system, specific challenges related to peer review, and solutions to these challenges. We then asked questions about the core values of the peer review process, the criteria we use to evaluate grant applications, and the scoring system we use. The final question related to whether the current process is appropriate for investigators at different stages of the career pipeline.

Although the timeline for the RFI has ended, we're still accepting feedback at peerreviewrfi@mail.nih.gov.

Additional activities include the following:

  • Dr. Yamamoto and I held two teleconferences with about 100 deans or their representatives from around the country.
  • A number of scientific liaisons have been identified to serve as a bridge between the ACD Working Group and the members of their respective communities.
  • We are holding a series of regional town meetings, of which this is the fourth. On October 25, in San Francisco, we will meet with the academic community, investigators, and administrators.
  • The internal Steering Committee Working Group has been working with our own staff to solicit information from each institute and center.
  • We are analyzing how agencies worldwide approach peer review.
  • Our efforts will be informed by the National Science Foundation’s recent report on the impact of proposal and award management mechanisms.

What will we do with all this information? NIH leadership will begin to determine next steps, and these likely will include pilot experiments. Armed with the results of the pilots and the evaluations, we will develop an implementation plan. Presumably the subset of pilots that prove most successful will be expanded and extended, ultimately leading to the development of a new peer review policy for NIH.

I’d like to share with you some emerging ideas that might be of particular interest. This is a subset of the many ideas we have heard. These are not prioritized in any way, and they are presented only to facilitate discussion.

  • Review Criteria and Focus: We have received many suggestions about changing the review criteria to increase risk taking, innovation, or the focus on public health. For example, the individual criteria could be weighted with sliding scales. Many suggested that instead of the single-score approach we use now, we should employ a matrix of scores, which would help us evaluate multiple dimensions of an application. There has been much discussion about whether one should review the project as submitted or give more emphasis to the person, and whether we need to place less emphasis on methodology and on the amount of preliminary evidence that the project will succeed.
  • New Models of Review: Many people have embraced the notion that we should have more, rather than fewer, people reviewing applications. Presently there are typically two to three so-called lead reviewers for every application. Many feel that with the use of electronic modalities, we could increase that to five or even more such persons. There's a lot of discussion in the community about whether we should permit applicants the opportunity to correct factual errors that may emerge during the review process. Many have indicated we should use different types of review for different types of science. Many have argued that clinical-based research requires direct involvement of patients or their advocates during the review process. A similar argument has been made that community-based research requires the involvement of community members during the review process.
  • Maximization of Review(er) Quality: Many said we need more in-depth training for our reviewers. Some have called for rating of reviewers. Others have suggested we rate the scientists overseeing the review process. Questions have been raised regarding how much context should be provided to reviewers at that first level of review.
  • Reviewer Mechanisms/Mechanics: Many people have asked that we provide more useful feedback to applicants, with particular emphasis on the situation where an applicant has put in a proposal that simply is not competitive. In those cases, we have been asked to tell applicants unambiguously if the application is not recommended for revision and resubmission.
  • Mechanisms: Many say we should reduce the number of mechanisms. Some have argued that we need to make the NIH system more accessible to the nontraditional, so-called nonacademic organizations.
  • Other Issues: These include whether we should provide support for individual investigators vs. big science, and how many grants are enough for any single investigator.

Goals for the Meeting
Dr. Keith Yamamoto
Executive Vice Dean, School of Medicine, UCSF; Professor, Cellular/Molecular Pharmacology and Biochemistry/Biophysics, UCSF; Co-Chair of the Working Group of the Advisory Committee to the NIH Director (ACD) on NIH Peer Review

The title of my presentation comes from the charge that Dr. Zerhouni gave the Working Group: “Fund the best science, by the best scientists, with the least administrative burden.”

Another quote – "The only possible source for adequate support of our medical schools and medical research is the taxing power of the federal government. . . Such a program must assure complete freedom for the institutions and the individual scientists in developing and conducting their research work" – came from the Surgeon General in December 1945, virtually the month before the peer review system was established at NIH. It was under that framework and that hope that NIH undertook this task of assessing the kind of science that was being done and the ways it would be supported by the federal government.

Against that backdrop, here is a 2005 quote from Tom Cech, Nobel Laureate and head of the Howard Hughes Medical Institute: "Discovery and innovation are to some extent taking place in spite of, rather than because of, the current policies and practices of major biomedical funding agencies.”

Many would agree that peer review is the only system for funding the best science. At the same time, we need to acknowledge that the system has intrinsic conflicts that are not going to go away. The first is reviewer self-interest, in that the very people who make the assessments are active researchers drawing from the same resource pool as the applicants. The second intrinsic conflict is reviewer conservatism. To have the best system, we need to have the best scientists participate in the review process. These scientists created the prevailing paradigms, and so they will defend them.

The doing and reviewing of science have changed dramatically, not just over the last 60 years, but in the last few years. And it continues to be a very strong dynamic that is affecting the way these processes are carried out. The nature of research itself is changing. This means that every investigator can undertake research with a much broader scope than before. Technology continues to be a major driver in research, much more than in times past. This creates more complexity, requiring more expertise on the part of the investigator and more people to get the job done, often in multidisciplinary teams. It has actually become relatively rare that science is carried out as a single-investigator endeavor without other very important collaborators coming into play.

Over recent years, the doubling and then flattening of the budget has driven an explosion in applications. More people were hired during the doubling, buildings were built, new institutes were established, and many people flooded the system. When the budget flattened, everyone had to scramble to find more money, and this meant an increase in applications. Currently, NIH processes and reviews 80,000 applications a year. With these increased applications has come a vast increase in the number of reviewers. Twenty years ago, NIH used 1,800 reviewers; last year, the total was 18,000. A relatively small number of these were senior scientists. The study sections, which are mandated by Congress to have 15 to 20 chartered members each, began to fill up with ad hoc reviewers who might be able to address one point in one grant, but were otherwise unable to bring the kind of perspective that's required. Finally, with this growing concern about resources, reviewers have increasingly taken an adversarial stance in the process.

The peer review system clearly needs to evolve and adapt to these changes. We are looking for bold thinking in all areas. All of the following, and more, are open for discussion:

  • Reviewer Criteria and Focus: Should we pay attention more to investigators than projects? Is NIH set up in a way that it can recognize and support transformational research?
  • Application Structure and Content: NIH grants are the longest in the world. Do we want that kind of detail? Does it get us in trouble to have the experimental detail documented, in terms of what the reviewers end up focusing on?
  • Reviewer Mechanisms and Mechanics: Does the study section system of 15 or 20 members in a given area still work? Is it still relevant to today's science that's so much broader? Are there adjustments that could better align the review system with the way science is currently done?
  • Reviewers and Review Culture: When I was a study section member, it was an honor to be asked to serve. You came away from those meetings feeling like you'd really accomplished something that was good for the way science was being done. Is there a way to recover that kind of attitude?
  • Scoring: The NIH system uses a hard numeric scoring system that goes out to two decimal places. I think everyone acknowledges that we're not quite at that level of resolution of being able to distinguish one grant from another. And one bad score can do an investigator in these days. Are there other ways of looking at our applications and assessing their relative merit that are both more honest and more in keeping with the quality of the applications we see?

To fund the best science by the best scientists with the least administrative burden means we can't do it piecemeal. We need to be rigorous, tough, and fair, and we need to:

  • Acknowledge the special needs of both new and established investigators.
  • Keep the whole community engaged and involved.
  • Continue to support the kinds of innovative research for which NIH is famous. I would argue that we're not good at recognizing and supporting transformative research, and maybe we need to look for ways to do that, as well.
  • Make the process more efficient. Some grants should not be sent in again. Is there a way to build more clarity/honesty into the process so reviewers have a clearer idea of what the assessment really means when they get that review back?

Statements/Proposals from External Scientific Community Offering Specific Strategies or Tactics for Enhancing NIH Peer Review and Research Support

Dr. Marc C. Hochberg, The Arthritis Foundation

My comments represent the views of the research advisory council of the Arthritis Foundation.  They were prepared by Dr. John A. Hardin, the chief scientific officer of the Foundation, who regrets he was unable to be here today.

  • Many of those we surveyed felt the peer review system tends to be conservative and averse to supporting research that might for any reason be risky, even when the potential payoff might be very large. In general, emphasis is placed on identification of the level of risk rather than weighing the risk-to-benefit ratio. When translated into funding decisions, this implies that preliminary supporting data often are required to be so strong that the particular outcome of the research is in fact highly probable before the research actually is done. This consideration is thought to particularly affect early patient-based clinical studies, where it is difficult, at best, to present an approach that is relatively certain in outcome.
  • Some of the senior scientists we surveyed felt that the peer review process works best when the reviewers are both experienced investigators and experienced peer reviewers.
  • The question was raised as to whether there should be more consistency across institutes in terms of grant mechanisms.
  • Should a limited number of grants be awarded to any particular investigator, and should this also translate into a limited number of dollars per investigator? Some members felt there should be a cap on the amount of dollars allocated to any one particular investigator, but that this was a difficult question to deal with by individual institute.
  • An appropriate alternative approach to funding might be along the lines of the Howard Hughes Investigator program and the NIH Merit Awards.  A system might be devised in which investigators progress from junior levels of funding to more senior levels, with incremental gradients of the total dollars committed.  In an ideal system of this type, an individual investigator might submit a single application every 5 years.  It would be challenging to identify the best way to use such a system to promote large-scale program projects and clinical networks.

Dr. F. Owen Black, Director, Neurotology Research, Legacy Clinical Research and Technology Center, Legacy Health System, Portland, Oregon; Vestibular Disorders Association Medical Advisory Board

The NIH peer review process has not benefited vestibular patients. According to NIH, about 30 million people in the United States have hearing losses. Although at least twice that many people have vestibular disorders, the vestibular community receives less than 10% of the NIDCD budget.

From our perspective, the peer review process has deteriorated into an opinion-swapping event, in the sense that many, if not most, of the opinions expressed cannot be factually substantiated. If one reviews the summary statements, most are not backed by peer-reviewed references to the literature, nor can one find a rationale for those comments.

We recommend that a process be developed to qualify reviewers via training in all the areas listed by Dr. Yamamoto. One possible basic model is the airline industry's methods for pilot qualification and re-qualification. These principles have been applied to monitoring anesthesia procedures in the operating room, resulting in a drop in morbidity and mortality from approximately 3% to 5% to less than 1/10 of 1%. Another good example is the Center for Certification of Rehabilitation Institutes, in which the system reviews the reviewers, and the reviewers anonymously review each other.

I'd like to read the statement we agreed upon: It is herewith proposed that the implementation of an objective – that is, a fact- and performance-based – process to qualify NIH reviewers be considered. The simultaneous development of objective measures of acceptable peer review performance, with respect to the goals outlined, for example, in the “NIH Roadmap,” would likely yield a much higher return on the public investment; in our opinion, the NIH mandate to conduct health care research would thereby be better served in the public interest.

Dennis Coleman, Director, NIH Liaison Office

I based my comments on the six issues defined on the web site:

  • Processing the high volume of grant mechanisms should take greater advantage of technology and industrial quality assurance (QA) procedures. Databases, communication technology, expert systems, consensus-building systems, Internet conferencing, even artificial intelligence, might help in this area. Not all decisions are amenable to technology, however; I think there's a need to separate the deterministic decisions from the judgment process, where you need the medical experts. But technology can help.
  • QA-wise, development and application of industrial and government agency methods would help to increase the transparency of the peer review process. A public agency model exists right now: The Administrative Procedures Act very clearly defines how committees can avoid bias, be visible, and avoid conflicts of interest, abuses of discretion, and arbitrary decision making.
  • I'm not sure why grant duration is a problem. But if this issue is volume-related, again, the technology QA remedies apply. If there's no justification for the long duration of some grants, some must die so that others may live, and that's the way business would handle that.
  • Regarding peer review challenges, I think this is a process improvement exercise, and that inherently creates resistance to change. That's because process improvement can change the players and their roles, priorities, influence, and status. But business, industry, and technology enterprises found out years ago that solutions like total quality management can take the personality conflicts out of process improvement. The rationale for change needs articulation, executive commitment, metrics, regular review, and follow-up, just as in product marketing and sales.
  • The core values are different for research applicants and peer reviewers. Peer review requires competence, objectivity, quality, transparency, rationality, and good judgment. The researchers are now very well gauged by the technical criteria that exist. What I find missing is the motivation and drive aspect.
  • I like the existing scoring criteria. Does the research build on past results? Is it an important strategic aspect of the roadmap? Are the facilities high quality? All of that makes perfect sense to me.
  • Regarding criteria, the timing and scheduling of when results are delivered deserves more review when the research warrants a sense of urgency. The development and application of behavioral or social science metrics to the review of research applications will reveal the motivation and drive factors.
  • The fact that insight and perspective can be as fruitful to research as experience argues against having a rigid relationship between career stages and the influence individual reviewers can have on the peer review process.

Amy Comstock Rick, Chief Executive Officer, Parkinson’s Action Network

Our comments address whether there is a sufficient connection between the current peer review system and NIH's mission of promoting research that will help prevent, diagnose, and treat disease. We do not disagree in the least that it is scientists who should be assessing the scientific merit of grant applications, but we do think there are ways to increase efficiencies.

Study sections may have a bias toward basic research, which would then fundamentally interfere with NIH's mission of better diagnosing and treating diseases. For example, more innovative ideas in translational or clinical research that may not have the data to support the hypotheses may have a lower score, simply because of the nature of the research.

I recently served 2 years on a working group for the NINDS Council that assessed 12 NINDS centers. In the course of this review, we encountered some problems with the peer review process and how it funds the centers. There was unanimous agreement in the working group that the study section reviewer didn't have the opportunity to learn about the program and its unique aspects. In the report we presented to the Council for NINDS in August, we recommended that only one study section handle all the grant applications for this particular center program, and that they be trained to understand the applications they review.

Given the limited number of hours that each institute’s council meets each year, it is unrealistic to think the council could adequately assess each grant application in terms of public health relevance. Yet, this is of critical importance to American families facing Parkinson’s and other diseases. We recommend that the NIH peer review process be revamped to ensure an assessment of relevance to the agency mission, importance to the translational and critical research portfolio, and context of the application in terms of disease portfolio and gaps in knowledge.

We recommend a three-tier model, whereby the council continues to assess program priorities and gaps, and a separate body assesses the grant applications in terms of significance and relevance to public health, ensures there isn't redundancy in areas where validation is not needed at this time, and determines whether this is a good use of taxpayer dollars.

The Department of Defense’s Congressionally Directed Medical Research Program (CDMRP) has a system we think works pretty well. The CDMRP panel determines annual priorities by identifying gaps in knowledge of disease, areas that will accelerate the field of research, among other criteria. The annual priorities are then incorporated into program announcements. After grant applications are received, the peer review panel evaluates the science, based on the program announcement’s established evaluation factors as well as the budget of each submission. The programmatic review panel evaluates the proposals on a comparison basis and identifies those with the greatest programmatic relevance, as well as disease relevance, innovation, and other factors.

We also recommend that study sections be targeted by basic vs. translational and clinical research. There are different levels of supporting data, and that level of expertise is important for the study sections to understand.

M. Carolina Hinestrosa, Executive Vice President, Planning and Programs, National Breast Cancer Coalition

Advocates must be knowledgeable and confident in order to participate in the decision-making process of science and medicine. NBCC's Project LEAD is an intensive science course that teaches breast cancer advocates biology, biostatistics, and epidemiology. Graduates also participate in continuing education workshops and in an online community.

NBCC believes that to have a meaningful impact on the research process, advocates involved must be individuals who have been personally affected by breast cancer; in some cases, as in the peer review process, they must be individuals who have had the disease. Nevertheless, the peer review process has traditionally excluded those most affected by breast cancer research.

Additionally, activists must be involved in the programmatic review process, determining priorities for funding and mechanisms. The Department of Defense Peer Review Breast Cancer Research Program has proven that this is an effective and valuable model of scientist/activist collaboration. NBCC has been closely involved throughout the evolution of this program. This collaboration of leading scientists and consumers has brought about a mindset that fosters the generation of new ideas and risk-taking in research and the training of scholars in nontraditional disciplines that could potentially offer new insights into this complex disease.

Innovative funding mechanisms that fill important gaps in the landscape of breast cancer research – such as concept awards, idea awards, synergistic idea awards, and innovator awards – are examples of the many products of this collaboration. NIH adopted many of these mechanisms a couple of years later; they filled a real gap and have been replicated by other funding agencies.

However, we find that the peer review panels often struggle when reviewing proposals and the new mechanisms, particularly in the assessment of unusual criteria such as innovation, synergy, multidisciplinary training, and even impact. While some criteria may be subjective, as is the case with innovation, it is disconcerting how risk-averse scientists can be. The opposite extreme is how often applications that establish techniques in a different cell line, for example, are considered innovative; or how just bringing a biostatistician to a clinical research project is considered multidisciplinary research by some. It is also disconcerting when we look at peer review scores and find that a low-scoring proposal, with more weaknesses than strengths listed, still gets a rating of excellent.

When consumer advocates have been meaningfully involved in the research process, they have changed the culture of research and research programs for the better. This is corroborated by the Institute of Medicine's assessment of the DOD Breast Cancer Research Program, and it is a model we urge this panel to consider.

Jane Holt, President, National Pancreas Foundation

I have chronic pancreatitis, and I'm here as a patient to ask for your help. The National Pancreas Foundation addresses all diseases of the exocrine pancreas. Although our area of interest is specialized, it shares something important with a number of other research areas: It is vastly underfunded in proportion to the burden of the disease. We believe that with the proper level of funding, it is possible to adjust this imbalance to benefit patients and their families, clinicians, hospitals, and research labs.

NIH selects its study section members from among its currently funded researchers. These reviewers have already proven they can meet the rigorous standards of excellence NIH demands of its principal investigators. But this becomes a problem for diseases that are underfunded: the ranks from which NIH can choose its reviewers grow smaller. Fewer reviewers with this expertise make for fewer funded applications, and it becomes a vicious cycle.

There are excellent doctors out there who want to do this research and who also deserve consideration as reviewers on these study sections. Our organization would be more than happy to refer these doctors to NIH for consideration as reviewers.

It's especially important for pancreatic applications to be submitted to reviewers with direct pancreatic expertise. Pancreatic science is notoriously difficult to do. When reviewers from other disciplines are asked to review pancreatic applications, they hold them to the same standards set for their own more sophisticated fields. As a result, the applications get graded down, which hurts our field tremendously. For our field, any advance, no matter how small or seemingly inelegant, can make a huge difference for our patients.

The situation has other repercussions. Doctors who have dedicated their lives to unlocking the mysteries of pancreatic disease now find that in order to procure funding, they may need to alter their ideas in order to even be considered. The science suffers, but most of all, the field suffers. Young fellows see what is going on and turn away from the field, fearing for their future livelihood.

The use of special-interest panels and RFAs helps, but it doesn't do enough. We propose that NIH establish new study sections devoted to specific underfunded areas of research. Fund the good science, advance it to the next level of sophistication, and then review whether the new study sections need to continue to exist.

Susan W. Kayne, Director, Marketing and Communications, National Eating Disorders Association

It is clear to our constituency that while the number of diagnosed cases of eating disorders is rising sharply, support for research to find a cure is sorely limited. We come here today to support the Academy of Eating Disorders' recommendations to improve the NIH peer review process. Our concern, like the Academy's, is that NIH reviews are often conducted by scientists who are not familiar with the state of the science in eating disorders. Review groups often have no one or only one person with eating disorder expertise, which places inordinate value on the opinion of non-experts or a single expert in the field. We suspect this occurs in other small fields as well.

We support two Academy recommendations to improve the peer review process:

  1. Mandate that each proposal reviewed in the study section be looked at by at least two reviewers with expertise in the field.  Given the limited availability of experts, it would be necessary to revise the conflict-of-interest policy to exclude only investigators from the same institution or those who are key personnel in each other's grants.  Experts who are co-authors on manuscripts or who collaborate more distantly should be allowed to review.
  2. Institute a two-stage peer review process.  In the first stage, an executive summary of the grant, prepared by the investigators, would be sent for review to all investigators who study the condition of interest and who have received NIH funding in the past 5 years.  Only reviewers closely tied to the application, key personnel, or those from the same institution would be excluded.  Presumably, this would yield a larger field of reviewers (ideally more than 20).  Reviewers would be asked to submit scores and perhaps brief comments.  A summary score would then be generated and used to select the most promising proposals for submission to a study section for the second and final review stage.

Sue Levi-Pearl, Vice President, Medical & Scientific Programs, Tourette Syndrome Association, Inc.

For over 2 decades, the Tourette Syndrome Association has provided annual seed grants to individual investigators interested in both clinical and basic science projects relevant to Tourette syndrome (TS). From the outset, we have relied on funding recommendations from an expert, conflict-free, multidisciplinary scientific advisory board that uses the NIH peer review system as a model. Our objective has always been to support creative, methodologically sound studies. We have succeeded extraordinarily well, with almost all NIH-funded TS investigators having first received grant awards from our association.

The following are our observations concerning experiences with NIH peer review procedures:

  • Too often, submissions do not receive the knowledgeable reviews they deserve. For so many neurological conditions with small patient populations, the number of interested investigators is small. Often, those in the best position to provide appropriate expertise about a submission are ineligible to serve because of a variety of conflicts with a specific applicant. In cases where cross-disciplinary studies are proposed, the lack of expertise becomes more detrimental to the process.
  • Key to a fair and thorough review is the interest and knowledge of NIH program staffers. The quality of their guidance to investigators and the constraints on their ability to provide input into the review process should be reviewed by the Working Group.
  • In many instances, those associated with individual institutes are better equipped to make review judgments about good science, especially regarding disease-related projects.
  • Especially for rarer neurological conditions, there is inestimable value in having reviewers with a truly broad-based understanding of these disorders. Highly innovative and meaningful incentives for senior investigators to serve would improve the reviews of these submissions.
  • Many advocacy organizations sponsor scientific review boards composed of the most knowledgeable experts in their respective disorders. Often members of these boards are virtually the only professionals with the depth of expertise needed for a sound review of specific applications. When appropriate, why not put in place a mechanism for participation by these experts in the review process? This could enhance the quality of the process, avoid duplication of effort, provide input about protocols those boards have already dismissed, and generally serve to develop a closer NIH collaborative effort with these valuable bodies.

Compilation of Key Points Made during Open Discussion Sessions

Applicant Feedback/Interaction

  • The keyword system used to assign reviewers is outdated. One remedy is a self-organized system, in which reviewers would submit their specializations at three hierarchical levels – for example, Level I: Physics, Mathematics, Biophysics; Level II: Algorithms, Computational Sciences, Computational Modeling, Proteomics, Theory (Experiment); Level III: Mass Spectrometry, Molecular Structure, Protein Structure, DNA Structure.
  • When applications that might have excellent ideas receive written comments that do not convey optimism, that has a lasting negative impact. The system would benefit from constructive comments along with consultant advice to shore up weak aspects of an application. This could be in anticipation of applying in a succeeding review cycle or, as we heard today, possibly in real time to strengthen an application under review. The model I have in mind is used in some business plan competitions.

Applications

  • To follow up on the idea of milestone-driven applications, we recommend that such an approach have built-in flexibility and a portfolio approach.
  • Shorter applications will further handicap junior applicants, who already are severely handicapped.
  • Extensive preliminary data – especially for translational, clinical, and high-risk research – are not needed. If the scientists had all the answers, they wouldn't be applying for the grant. Research design and methodology sections can be reduced as well. What we often do is ask for the methodology section but then include a short section on limitations and alternatives.
  • To address the problem of senior reviewers not wanting to serve on study sections, one of the major proposals that has come forward is shortening the grant application. I'm not sure that alone is going to solve the problem. Some incentive will have to be there other than not having to read as many pages. I also don't think shortening it will focus the reviewers on what's really important. Reviewers must be retrained, and many of them simply replaced.

Criteria for Funding

  • I’m concerned about over-emphasis on the word “innovation.” In science, a lot of the progressive, pioneering work builds on previous research. It's solid, it's methodologically sound, and we weigh those factors very heavily until it comes to innovation, which is one of the areas we have to comment on in peer review. So you can have a very pioneering scientist, who has always been pioneering and always been innovative, who will be docked for not having done something new.

Expertise

  • The American Academy of Physical Medicine and Rehabilitation is not disease oriented, but disability and function oriented. That needs to be properly represented on study sections, which are designed to deal with disease processes and the physiological- and impairment-based activities of disease. This is not an adequate representation of what speaks to the needs and heart of most people in the country, especially the aging population.
  • Study sections need to have people with disease expertise, or who at least know the state of the field: "What is the need?" "What is the urgency?"
  • I want to reiterate the importance of having a panel composed of people with a background in what is being reviewed, especially with respect to neglected tropical diseases. There are so many aspects of those interventions that need to be tailored to the environment in which they're used. If reviewers are not sensitive to that, the products will ultimately not be as useful.
  • Sleep research is a very tiny field, and the spectrum is very broad, from basic genetics and molecular biology through physiology to pre-clinical and translational research. There is a need for experts on review panels who understand where things are in the field, where they are going, and some idea about the progress that's been made.

Funding

  • If funding were milestone-driven, such as NINDS does for some of its translational grants, then one could cut off the funding if it is clear that it is going nowhere, and the study section would see that as less of a risk and would perhaps be more inclined to fund high-risk applications.
  • As far as a funding cap on individuals, in terms of a dollar amount, I think that's a slippery slope, because certainly not all research costs the same amount to do.  So an arbitrary funding cap may have a much more negative impact on translational and clinical work than it would on very basic work.
  • Peer review is very good at making incremental changes and improvements to the status quo.  As others have pointed out, it's not very good at recognizing conceptual breakthroughs and taking risk.  I proposed a number of years ago, only half in jest, the possibility of reserving a percentage of funding – a small percentage, perhaps, initially, as a pilot project – and awarding it to applications that qualify on basically a lottery basis.  A lot of research is based on serendipity, so this would be a way to expand the pool and get out of the mindset of trying to make completely rational decisions where rationality is not the only factor involved.
  • It would be wonderful if the NIH began to address how we deal with under-funded diseases.
  • For funding individual investigators of under-funded diseases, we need to keep the focus on progress that is being made to get something to the patient, and to look at continuous funding for individual investigators with that in mind.

Grant Mechanisms

  • In terms of basic versus translational science for under-funded diseases, we need to make sure we create different mechanisms that include enough flexibility for different diseases.

Junior Investigators

  • I recommend exploring a mechanism with educational institutions to try to better prepare students for competing effectively for grants.
  • The retrospective approaches to honoring or identifying the quality researchers, such as the Pioneer Award, could be adapted with a little thought toward identifying and assisting young professionals toward advancing their careers.
  • The A-T (Ataxia-Telangiectasia) Children's Project provides seed funding for young investigators. It's becoming harder for them to compete for R01s and obtain NIH funding. They say it's a little unfair to be thrown into a weight division in which they really don't belong, where they are in direct competition with more senior investigators for R01s. I recommend separate review groups for these young investigators who have never obtained an R01.
  • There's not so much a crisis with the review process, but with our being able to pay attention to the people pipelined into biomedical research. We are especially concerned about young investigators. Award sizes should be adjusted, especially in basic research, to engage the most talented young investigators and even mid-career and established investigators into the scientific enterprise.
  • The institutes will have to further their efforts to ensure that the first-time applicants get funded – perhaps not by tweaking the application process, but just by mandating that a certain percentage of first-time applicants get funded no matter where their scores fall. Otherwise, we will lose these generations of scientists.

Miscellaneous

  • Allow and encourage existing or new methodologies that would help with the translation issues.
  • Undertakings of research support should, by their very nature, include environments in which graduate students, fellows, and residents all should be a part of the process. This ought to be reflected in grant applications and so noted.
  • The Institute for the Study of Occupation and Health at the American Occupational Therapy Foundation is developing a “shadow process” of its grant review, enabling the panel reviewers to have a shadow panel to mentor. This concept is enabling us to build the capacity of young scientists to review each other’s work and empower them to win NIH funding.
  • Since the inception of the Medicare End-Stage Renal Disease Program in 1972, there have been no meaningful clinical trials addressing the morbidity and mortality of dialysis patients. I urge the panel to consider that.
  • So much of the basic and clinical research today is driven by changes in technology that allow us to do things that weren't possible a number of years ago. Oftentimes these technology platforms have implications far beyond the immediate study being conducted. I don't see peer review necessarily taking into account some of these broader implications and giving them proper emphasis.
  • There may be changes in basic research and clinical research that could have tremendous implications for the cost of research and care delivery, and they seldom are given real consideration in that process. The peer review process has to remember that they're under the aegis of, not the National Institute of Research, but the National Institutes of Health.
  • The Future of Disability in America, recently published by the Institute of Medicine, might be a very valuable resource for many of us, especially for those engaged in re-evaluating peer review.

Reviewer Issues

  • I suggest having a much more targeted role for each reviewer. Currently the primary reviewer is the person who supposedly knows the most. What about if the primary were an expert, perhaps someone even working on the disorder who can educate the others and is assigned to put the application in context? The secondary and tertiary reviewers could be the balance to make sure there is no narrow-mindedness in terms of the approach.
  • The question of how much information the reviewers should have should be resolved soon, because we are not fully independent.
  • Self-assignment is not allowed, but I think reviewer assignments would be better done if reviewers were allowed to see abstracts of the proposals ahead of time and make self-assignments. I'm not sure why there's more of a conflict of interest in this regard than in reviewing the grants.

Scoring/Ranking

  • Regarding priority scoring and the dimensions of how things are scored, look at two dimensions that are sometimes ignored: (1) How does the proposal have potential to improve health? (2) How might this optimize health through practice?

Setting Priorities

  • How do we set priorities for what gets research money? How do we balance the fact that the incidence seems relatively minor, but the mortality is great? For example, with ovarian cancer there are not a lot of advocates because they die pretty quickly. Where does that play in setting priorities?
  • I'm a little bit troubled by the amount of disease specificity in this sort of gathering. I think we're all missing a real opportunity to work together in ways that the NIH can support far more readily than trying to insinuate into the study sections or into a third tier of review some special consideration of our rare or not-so-rare disorder. And I represent an exceedingly rare disorder. I think there's far more promise in crosscutting, multi-institutional, multidisciplinary, collaborative efforts. How do we balance? How do we prioritize research when we have rare disease balanced against diseases with apparently a greater burden of disease? We shouldn't let the loudest squeaky-wheel politics dictate how we spend our precious research dollars, because the breakthroughs will not come where that kind of process devotes the resources. The breakthrough will come from the brilliant scientist who gets the “ah ha.” And as Harold Varmus told Congress, "We roll up the flanks of other diseases by getting that initial breakthrough."
  • Does the fact that a disease is so rare, or research has not been funded in that area for a decade, give it a higher priority that the score wouldn't otherwise merit? The fact that a disease has not had good science in 10 years matters.

Staffing Panels

  • I was recently funded but am unable to afford the time to put into peer review. So I've had to ask the SRA to give me a year to make sure my program is progressing before I can sign on to peer reviews, and I do ad hoc. I have the expertise for several different types of grants in sleep research, but I simply can't afford the time. And so the incentive is not money or status, it's being able to run a mid-size program and also serve competently on peer review.
  • As the pharmaceutical and biotechnology industries retire accomplished scientists who want to continue productive careers, they represent an experienced talent pool for academic research centers and NIH. I recommend that the NIH system be modified to foster the transition of experienced scientists into academic research as consultants to investigators or as new investigators in their own right who can compete successfully for research funding.

Study Section and Review Models

  • Study sections do not and perhaps cannot do a good job of reviewing both basic and translational clinical research, so there should be a split among different review groups. NINDS is already doing this very successfully with a translational review program. Grants go to their own review groups and not through the normal study sections.
  • We should do away with a single-blinded review process and make it completely transparent.
  • I invite you to look into the DOD Breast Cancer Research Program, which uses peer review and programmatic review. The program emphasizes innovation, because we heard loud and clear when we started in 1992 that this kind of research didn't do well in the NIH process. We were trying to fill gaps and allow people to take risks and do new things. Over a decade ago, we created the “Idea Award,” for which no preliminary data are necessary. Peer review didn't like that at all, and so the proposals with preliminary data scored better than those without. We had to create another mechanism, the “Concept Award.” Although no preliminary data are allowed, people still try to send them. You can devise the most brilliant mechanisms and still have problems. We have now added a pre-proposal stage for new mechanisms, so at the programmatic level we look at the pre-proposals and learn what people are trying to do – not necessarily the exact science but the approach – and that has helped us some. In a way, it's like a three-level review. We allow flexibility in the review process to bring back proposals that scored low to see if something is there.
  • I'd like to speak specifically to the concept of a third tier, a third cohort for review. Diseases are certainly medically defined and have a medical context. They also have a social and a cultural context in our society and in global health. A third tier of review would enable individuals who subscribe to the International Classification of Function, for example, to participate and be part of a broader cohort to see the social context of these applications that are disease specific, but also have social implications.
  • To what extent has the peer review process been remodeled to accommodate the kind of forward-looking translational approach in one respect, but also multidisciplinary, crosscutting, multi-institutional approaches? How do we get study sections that can recognize that it's the functional similarities amongst our various diseases that will lead to cures for whole groups of these?
  • Are we finding good ways to restructure study sections to accommodate the kind of panel that can say it recognizes the crosscutting nature of proposals that have relevance across a broad spectrum? [Drs. Tabak and Yamamoto responded that one approach is the editorial board concept, a two-step review process in which the content experts can provide input, either electronically or via other virtual means.]

Training

  • The overall quality of study sections actually has diminished, even though some run quite well. Part of the reason is that SRAs no longer seem to be as well trained or to run the study sections as well. Also, with these large study sections, the quality of reviewer assignments is going down. So one thing that could be addressed is not only the training of the reviewers, but better training of the SRAs.

Closing Remarks

Dr. Kington

We want to thank all of you again for coming here today and giving us the opportunity to listen to you. I'm always impressed by the breadth of comments from advocacy groups who really care about what we do because you want us to be effective.


Dr. Yamamoto

You served two roles here today: to deliver ideas, information, feedback, suggestions, and criticisms; and to listen to each other. As you go back into your constituent communities, I hope you will keep this lively debate going.

I'm going to make a few comments on your comments. These are not meant as criticisms or invalidations, but just to point out how complex and multifaceted these issues are.

Several words have continued to recur in the discussion today. One is the “third-tier” idea, with a particular eye toward being able to assess early on the relevance of a particular area of investigation, the focus of an individual application, and so forth. We do have a wise body looking across the entire spectrum of the 80,000 grant applications that come in. Is there a group that would be able to oversee the whole show in a way in which you would be happy to entrust them? It may be a tough challenge.

Secondly, relevance is very tough to assess. It is not intrinsic, nor does one know what will be relevant tomorrow to your disease area of research.

The bold and innovative research that we're all reaching for depends on giving scientists the freedom the Surgeon General’s quote was about in 1945. Providing that freedom is what attracts the kind of people you want to have working in this field. So there is a certain fragility to this in terms of saying, "No, we'll tell you what to work on, and you just work on it."

“Milestones” and “deliverables” were words that came up early on. These reflect efforts to try to make research more responsible to the initial goals – or, as one speaker said, to give study sections a little more assurance that they can go for a bold idea because there will be milestones in 5 years to show the money won't be wasted. But let me remind you that the other side to this coin is that it will make investigators more conservative in their grant applications.

The need for separate reviews of basic and clinical research was voiced by many of you, and I totally understand that the expertise for your particular focus just isn't there. We all want that. The peer review system depends upon that kind of expertise in the room or established by some other mechanism, like the editorial board system described today. Having the right expertise in the room is critical. But keep in mind that an increasing number of grant applications will come forward because of this new breadth we're enjoying in the field, and these will include basic, clinical, and translational components.

Should all grantees be required to serve on study sections? Should we give them a “free pass” of extended support on their current applications? Maybe. I think wanting to incentivize people to be involved is important. It's not quite clear to me that we want every single NIH grantee to be a study section member. We certainly want the best ones. And if we inserted incentives such as extending the grant period, wouldn't we suddenly have transformed the peer review system into an entitlement system? Is that the kind of incentive we're looking for?

Separate study sections for new investigators is an interesting idea. NIH has been looking for ways to better recognize and support new investigators to help them traverse the difficult transition from post-doc. The average age for a new investigator establishing a laboratory at the Ph.D. level is now 38, and the average age of getting the first independent grant support is 42. For M.D.s, add 2 years onto each of those numbers. So all the numbers are scary, but the scariest to me is the 4 years between getting the job and getting the grant. During that time, a lot of corrosive things can happen to somebody's psyche. So how do we support new investigators? One idea is separate study sections, but the other side of the coin is that you want the right expertise in the room. If you just segregate out new investigators, they're going to cover all the research NIH does. Are we going to be able to have the right expertise in the room for those who might, arguably, need it the most?

Are there ways to shorten grants, and what are the costs of doing that? It’s been said this could strongly disadvantage young/new investigators, but I'd need to be convinced of that. All of us in academia are used to making bets on young people. That's how we hire our new faculty and choose the post-docs who come to our laboratories. We don't make them write 25-page grant applications to do that. How have they climbed the ladder to the point of completing all their training and out-competing hundreds of people for their new jobs? They've done it by writing 2- and 3-page applications for fellowships and support, abstracts for meetings and presentations, and so forth. It's not completely clear to me that 25 pages plays to a young person's strengths; it would literally be the first time they had ever written an application of that length.

Some innovative suggestions include the idea of formalizing a risk-benefit ratio to assess the kind of difficult-to-grasp notion of higher risk or transformative research; the notion of some sort of qualifying or certifying requirement for reviewers to ensure they have the right expertise; and the third tier. Actually, two third-tier proposals were put forth: One would assess significance, relevance, and so forth; the other would send a letter of intent to all the experts in the field and see what they think before moving on to the formal study section review.

Many good ideas were put forth today, and you should feel very good about the service you have done here on behalf of NIH and of the system as a whole. We are very grateful for your participation and want to thank you again for sharing your energy and ideas with us.


 

This page was last reviewed on November 9, 2007.