
Evaluation of research proposals by peer review panels: broader panels for broader assessments?


Rebecca Abma-Schouten, Joey Gijbels, Wendy Reijmerink, Ingeborg Meijer, Evaluation of research proposals by peer review panels: broader panels for broader assessments?, Science and Public Policy, Volume 50, Issue 4, August 2023, Pages 619–632, https://doi.org/10.1093/scipol/scad009


Panel peer review is widely used to decide which research proposals receive funding. Through this exploratory observational study at two large biomedical and health research funders in the Netherlands, we gain insight into how scientific quality and societal relevance are discussed in panel meetings. We explore, in ten review panel meetings of biomedical and health funding programmes, how panel composition and formal assessment criteria affect the arguments used. We observe that more scientific arguments are used than arguments related to societal relevance and expected impact. Also, more diverse panels result in a wider range of arguments, largely for the benefit of arguments related to societal relevance and impact. We discuss how funders can contribute to the quality of peer review by creating a shared conceptual framework that better defines research quality and societal relevance. We also contribute to a further understanding of the role of diverse peer review panels.

Scientific biomedical and health research is often supported by project or programme grants from public funding agencies such as governmental research funders and charities. Research funders primarily rely on peer review, often a combination of independent written review and discussion in a peer review panel, to inform their funding decisions. Peer review panels have the difficult task of integrating and balancing the various assessment criteria to select and rank the eligible proposals. With the increasing emphasis on societal benefit and being responsive to societal needs, the assessment of research proposals ought to include broader assessment criteria, including both scientific quality and societal relevance, and a broader perspective on relevant peers. This results in new practices of including non-scientific peers in review panels ( Del Carmen Calatrava Moreno et al. 2019 ; Den Oudendammer et al. 2019 ; Van den Brink et al. 2016 ). Relevant peers, in the context of biomedical and health research, include, for example, health-care professionals, (healthcare) policymakers, and patients as the (end-)users of research.

Currently, in scientific and grey literature, much attention is paid to what constitutes legitimate criteria and to deficiencies in the peer review process, for example, focusing on the role of chance and the difficulty of assessing interdisciplinary or ‘blue sky’ research ( Langfeldt 2006 ; Roumbanis 2021a ). Our research primarily builds upon the work of Lamont (2009) , Huutoniemi (2012) , and Kolarz et al. (2016) . Their work articulates how the discourse in peer review panels can be understood by giving insight into disciplinary assessment cultures and social dynamics, as well as how panel members define and value concepts such as scientific excellence, interdisciplinarity, and societal impact. At the same time, there is little empirical work on what actually is discussed in peer review meetings and to what extent this is related to the specific objectives of the research funding programme. Such observational work is especially lacking in the biomedical and health domain.

The aim of our exploratory study is to learn what arguments panel members use in a review meeting when assessing research proposals in biomedical and health research programmes. We explore how arguments used in peer review panels are affected by (1) the formal assessment criteria and (2) the inclusion of non-scientific peers in review panels, also called (end-)users of research, societal stakeholders, or societal actors. We add to the existing literature by focusing on the actual arguments used in peer review assessment in practice.

To this end, we observed ten panel meetings across a varied set of eight biomedical and health research programmes at two large research funders in the Netherlands: the governmental research funder The Netherlands Organisation for Health Research and Development (ZonMw) and the charitable research funder the Dutch Heart Foundation (DHF). Our first research question focuses on what arguments panel members use when assessing research proposals in a review meeting. The second examines to what extent these arguments correspond with the formal criteria on scientific quality and societal impact creation, as described in the programme brochure and assessment form. The third question focuses on how arguments used differ between panel members with different perspectives.

2.1 Relation between science and society

To understand the dual focus of scientific quality and societal relevance in research funding, a theoretical understanding and a practical operationalisation of the relation between science and society are needed. The conceptualisation of this relationship affects both who are perceived as relevant peers in the review process and the criteria by which research proposals are assessed.

The relationship between science and society is neither constant over time nor static, and it is much debated. Scientific knowledge can have a huge impact on societies, whether intended or unintended. Vice versa, the social environment and structure in which science takes place influence the rate of development, the topics of interest, and the content of science. However, the second part of this inter-relatedness between science and society generally receives less attention ( Merton 1968 ; Weingart 1999 ).

From a historical perspective, scientific and technological progress contributed to the view that science was valuable on its own account and that science and the scientist stood independent of society. While this protected science from unwarranted political influence, societal disengagement with science resulted in less authority for science and in debate about its contribution to society. This interdependence and mutual influence contributed to a modern view of science in which knowledge development is valued both on its own merit and for its impact on, and interaction with, society. As such, societal factors and problems are important drivers for scientific research. This means that the relations and boundaries between science, society, and politics need to be organised and constantly reinforced and reiterated ( Merton 1968 ; Shapin 2008 ; Weingart 1999 ).

Glerup and Horst (2014) conceptualise the value of science to society and the role of society in science in four rationalities that reflect different justifications for their relation and thus also for who is responsible for (assessing) the societal value of science. The rationalities are arranged along two axes: one relates to the internal or external regulation of science, and the other to whether the process or the outcome of science is the object of steering. The first two rationalities, Reflexivity and Demarcation, focus on internal regulation in the scientific community. Reflexivity focuses on the outcome: central is that science, and thus scientists, should learn from societal problems and provide solutions. Demarcation focuses on the process: science should continuously question its own motives and methods. The latter two rationalities, Contribution and Integration, focus on external regulation. The core of the outcome-oriented Contribution rationality is that scientists do not necessarily see themselves as ‘working for the public good’. Science should thus be regulated by society to ensure that outcomes are useful. The central idea of the process-oriented Integration rationality is that societal actors should be involved in science in order to influence the direction of research.

Research funders can be seen as external or societal regulators of science. They can focus on organising the process of science (Integration) or on scientific outcomes that function as solutions for societal challenges (Contribution). In the Contribution perspective, a funder could enhance outside (societal) involvement in science to ensure that scientists take responsibility for delivering results that are needed and used by society. From Integration it follows that actors from science and society need to work together in order to produce the best results. In this perspective, there is a lack of integration between science and society, and more collaboration and dialogue are needed to develop a new kind of integrative responsibility ( Glerup and Horst 2014 ). This argues for the inclusion of other types of evaluators in research assessment. In reality, these rationalities are not mutually exclusive and not strictly separated. As a consequence, multiple rationalities can be recognised in the reasoning of scientists and in the policies of research funders today.

2.2 Criteria for research quality and societal relevance

The rationalities of Glerup and Horst have consequences for the language used to discuss societal relevance and impact in research proposals. Even though the main ingredients are quite similar, as a consequence of the coexisting rationalities in science, societal aspects can be defined and operationalised in different ways ( Alla et al. 2017 ). In the definition of societal impact by Reed, emphasis is placed on the outcome: the contribution to society. It includes the significance for society, the size of potential impact, and the reach, that is, the number of people or organisations benefiting from the expected outcomes ( Reed et al. 2021 ). Other models and definitions focus more on the process of science and its interaction with society. Spaapen and Van Drooge introduced productive interactions in the assessment of societal impact, highlighting direct contact between researchers and other actors. A key idea is that interaction in different domains leads to impact in different domains ( Meijer 2012 ; Spaapen and Van Drooge 2011 ). Definitions that focus on the process often refer to societal impact as (1) something that can take place in distinguishable societal domains, (2) something that needs to be actively pursued, and (3) something that requires interactions with societal stakeholders (or users of research) ( Hughes and Kitson 2012 ; Spaapen and Van Drooge 2011 ).

Glerup and Horst show that process- and outcome-oriented aspects can be combined in the operationalisation of criteria for assessing research proposals on societal aspects. The funders participating in this study likewise include the outcome (the value created in different domains) and the process (productive interactions with stakeholders) in their formal assessment criteria for societal relevance and impact. Different labels are used for these criteria, such as societal relevance, societal quality, and societal impact ( Abma-Schouten 2017 ; Reijmerink and Oortwijn 2017 ). In this paper, we use societal relevance or societal relevance and impact.

Scientific quality in research assessment frequently refers to all aspects and activities in the study that contribute to the validity and reliability of the research results and that contribute to the integrity and quality of the research process itself. The criteria commonly include the relevance of the proposal for the funding programme, the scientific relevance, originality, innovativeness, methodology, and feasibility ( Abdoul et al. 2012 ). Several studies demonstrated that quality is seen as not only a rich concept but also a complex concept in which excellence and innovativeness, methodological aspects, engagement of stakeholders, multidisciplinary collaboration, and societal relevance all play a role ( Geurts 2016 ; Roumbanis 2019 ; Scholten et al. 2018 ). Another study showed a comprehensive definition of ‘good’ science, which includes creativity, reproducibility, perseverance, intellectual courage, and personal integrity. It demonstrated that ‘good’ science involves not only scientific excellence but also personal values and ethics, and engagement with society ( Van den Brink et al. 2016 ). Noticeable in these studies is the connection made between societal relevance and scientific quality.

In summary, the criteria for scientific quality and societal relevance are conceptualised in different ways, and perspectives on the role of societal value creation and the involvement of societal actors vary strongly. Research funders hence have to pay attention to what the criteria mean to the panel members they recruit, and to navigate and negotiate how the criteria are applied in assessing research proposals. To be able to do so, more insight is needed into which elements of scientific quality and societal relevance are discussed in practice by peer review panels.

2.3 Role of funders and societal actors in peer review

National governments and charities are important funders of biomedical and health research. How this funding is distributed varies per country. Project funding is frequently allocated based on research programming by specialised public funding organisations, such as the Dutch Research Council in the Netherlands and ZonMw for health research. The DHF, the second largest private non-profit research funder in the Netherlands, provides project funding ( Private Non-Profit Financiering 2020 ). Funders, as so-called boundary organisations, can act as key intermediaries between government, science, and society ( Jasanoff 2011 ). Their responsibility is to develop effective research policies connecting societal demands and scientific ‘supply’. This includes setting up and executing fair and balanced assessment procedures ( Sarewitz and Pielke 2007 ). Herein, the role of societal stakeholders is receiving increasing attention ( Benedictus et al. 2016 ; De Rijcke et al. 2016 ; Dijstelbloem et al. 2013 ; Scholten et al. 2018 ).

All charitable health research funders in the Netherlands have, in the last decade, included patients at different stages of the funding process, including in assessing research proposals ( Den Oudendammer et al. 2019 ). To facilitate research funders in involving patients in assessing research proposals, the federation of Dutch patient organisations set up an independent reviewer panel with (at-risk) patients and direct caregivers ( Patiëntenfederatie Nederland, n.d .). Other foundations have set up societal advisory panels including a wider range of societal actors than patients alone. The Committee Societal Quality (CSQ) of the DHF includes, for example, (at-risk) patients and a wide range of cardiovascular health-care professionals who are not active as academic researchers. This model is also applied by the Diabetes Foundation and the Princess Beatrix Muscle Foundation in the Netherlands ( Diabetesfonds, n.d .; Prinses Beatrix Spierfonds, n.d .).

In 2014, the Lancet presented a series of five papers about biomedical and health research known as the ‘increasing value, reducing waste’ series ( Macleod et al. 2014 ). The authors addressed several issues as well as potential solutions that funders can implement. They highlight, among others, the importance of improving the societal relevance of the research questions and including the burden of disease in research assessment in order to increase the value of biomedical and health science for society. A better understanding of and an increasing role of users of research are also part of the described solutions ( Chalmers et al. 2014 ; Van den Brink et al. 2016 ). This is also in line with the recommendations of the 2013 Declaration on Research Assessment (DORA) ( DORA 2013 ). These recommendations influence the way in which research funders operationalise their criteria in research assessment, how they balance the judgement of scientific and societal aspects, and how they involve societal stakeholders in peer review.

2.4 Panel peer review of research proposals

To assess research proposals, funders rely on the services of peer experts to review the thousands or perhaps millions of research proposals seeking funding each year. While often associated with scholarly publishing, peer review also includes the ex ante assessment of research grant and fellowship applications ( Abdoul et al. 2012 ). Peer review of proposals often includes a written assessment of a proposal by an anonymous peer and a peer review panel meeting to select the proposals eligible for funding. Peer review is an established component of professional academic practice, is deeply embedded in the research culture, and essentially consists of experts in a given domain appraising the professional performance, creativity, and/or quality of scientific work produced by others in their field of competence ( Demicheli and Di Pietrantonj 2007 ). The history of peer review as the default approach for scientific evaluation and accountability is, however, relatively young. While the term was unheard of in the 1960s, by 1970, it had become the standard. Since that time, peer review has become increasingly diverse and formalised, resulting in more public accountability ( Reinhart and Schendzielorz 2021 ).

While many studies have been conducted concerning peer review in scholarly publishing, peer review in grant allocation processes has been less discussed ( Demicheli and Di Pietrantonj 2007 ). The most extensive work on this topic has been conducted by Lamont (2009) , who studied peer review panels in five American research funding organisations, including observing three panels. Other examples include Roumbanis’s ethnographic observations of ten review panels at the Swedish Research Council in natural and engineering sciences ( Roumbanis 2017 , 2021a ). Also, Huutoniemi was able to study, but not observe, four panels on environmental studies and social sciences of the Academy of Finland ( Huutoniemi 2012 ). Additionally, Van Arensbergen and Van den Besselaar (2012) analysed peer review through interviews and by analysing the scores and outcomes at different stages of the peer review process in a talent funding programme. Particularly interesting is the study by Luo and colleagues of 164 written panel review reports, showing that the reviews from panels that included non-scientific peers described broader and more concrete impact topics. Mixed panels also more often connected research processes and characteristics of applicants with impact creation ( Luo et al. 2021 ).

While these studies primarily focused on peer review panels in other disciplinary domains or are based on interviews or reports instead of direct observations, we believe that many of the findings are relevant to the functioning of panels in the context of biomedical and health research. From this literature, we learn to have realistic expectations of peer review. It is inherently difficult to predict in advance which research projects will provide the most important findings or breakthroughs ( Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , 2021b ). At the same time, these limitations may not substantiate the replacement of peer review by another assessment approach ( Wessely 1998 ). Many topics addressed in the literature are inter-related and relevant to our study, such as disciplinary differences and interdisciplinarity, social dynamics and their consequences for consistency and bias, and suggestions to improve panel peer review ( Lamont and Huutoniemi 2011 ; Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , b ; Wessely 1998 ).

Different scientific disciplines show different preferences and beliefs about how to build knowledge and thus have different perceptions of excellence. However, panellists are willing to respect and acknowledge other standards of excellence ( Lamont 2009 ). Evaluation cultures also differ between scientific fields. Science, technology, engineering, and mathematics panels might, in comparison with panellists from social sciences and humanities, be more concerned with the consistency of the assessment across panels and therefore with clear definitions and uses of assessment criteria ( Lamont and Huutoniemi 2011 ). However, much remains to be learned about how panellists’ cognitive affiliations with particular disciplines unfold in the evaluation process. The assessment of interdisciplinary research is therefore much more complex than just improving the criteria or procedure, because less explicit repertoires would also need to change ( Huutoniemi 2012 ).

Social dynamics play a role as panellists may differ in their motivation to engage in allocation processes, which could create bias ( Lee et al. 2013 ). Placing emphasis on meeting established standards or thoroughness in peer review may promote uncontroversial and safe projects, especially in a situation where strong competition puts pressure on experts to reach a consensus ( Langfeldt 2001 ,2006 ). Personal interest and cognitive similarity may also contribute to conservative bias, which could negatively affect controversial or frontier science ( Luukkonen 2012 ; Roumbanis 2021a ; Travis and Collins 1991 ). Central in this part of literature is that panel conclusions are the outcome of and are influenced by the group interaction ( Van Arensbergen et al. 2014a ). Differences in, for example, the status and expertise of the panel members can play an important role in group dynamics. Insights from social psychology on group dynamics can help in understanding and avoiding bias in peer review panels ( Olbrecht and Bornmann 2010 ). For example, group performance research shows that more diverse groups with complementary skills make better group decisions than homogenous groups. Yet, heterogeneity can also increase conflict within the group ( Forsyth 1999 ). Therefore, it is important to pay attention to power dynamics and maintain team spirit and good communication ( Van Arensbergen et al. 2014a ), especially in meetings that include both scientific and non-scientific peers.

The literature also provides funders with starting points to improve the peer review process. For example, the explicitness of review procedures positively influences the decision-making processes ( Langfeldt 2001 ). Strategic voting and decision-making appear to be less frequent in panels that rate than in panels that rank proposals. Also, an advisory instead of a decisional role may improve the quality of the panel assessment ( Lamont and Huutoniemi 2011 ).

Despite different disciplinary evaluative cultures, formal procedures, and criteria, panel members with different backgrounds develop shared customary rules of deliberation that facilitate agreement and help avoid situations of conflict ( Huutoniemi 2012 ; Lamont 2009 ). This is a necessary prerequisite for opening up peer review panels to include non-academic experts. When doing so, it is important to realise that panel review is a social, emotional, and interactional process. It is therefore important to also take these non-cognitive aspects into account when studying cognitive aspects ( Lamont and Guetzkow 2016 ), as we do in this study.

In summary, what we learn from the literature is that (1) the specific criteria used to operationalise the scientific quality and societal relevance of research are important, (2) the rationalities from Glerup and Horst predict that not everyone values societal aspects, or involves non-scientists in peer review, to the same extent and in the same way, (3) this may affect the way peer review panels discuss these aspects, and (4) peer review is a challenging group process that could accommodate other rationalities in order to prevent bias towards specific scientific criteria. To disentangle these aspects, we carried out an observational study of a diverse range of peer review panel sessions using a fixed set of criteria focusing on scientific quality and societal relevance.

3.1 Research assessment at ZonMw and the DHF

The peer review approach and the criteria used by both the DHF and ZonMw are largely comparable. Funding programmes at both organisations start with a brochure describing the purposes, goals, and conditions for research applications, as well as the assessment procedure and criteria. Both organisations apply a two-stage process. In the first phase, reviewers are asked to write a peer review. In the second phase, a panel reviews the application based on the advice of the written reviews and the applicants’ rebuttal. The panels advise the board on eligible proposals for funding including a ranking of these proposals.

There are also differences between the two organisations. At ZonMw, the criteria for societal relevance and quality are operationalised in the ZonMw Framework Fostering Responsible Research Practices ( Reijmerink and Oortwijn 2017 ). This contributes to a common operationalisation of both quality and societal relevance at the level of individual funding programmes. Important elements in the criteria for societal relevance are, for instance, stakeholder participation, (applying) holistic health concepts, and the added value of knowledge in practice, policy, and education. The framework was developed to optimise the funding process from the perspective of knowledge utilisation and includes concepts like productive interactions and Open Science. It is part of the ZonMw Impact Assessment Framework aimed at guiding the planning, monitoring, and evaluation of funding programmes ( Reijmerink et al. 2020 ). At ZonMw, panels are set up specifically for each funding programme. They are interdisciplinary in nature, with academics from a wide range of disciplines, and often include non-academic peers such as policymakers, health-care professionals, and patients.

At the DHF, the criteria for scientific quality and societal relevance (there called societal impact) find their origin in the strategy report of the advisory committee CardioVascular Research Netherlands ( Reneman et al. 2010 ). This report forms the basis of the DHF research policy, which focuses on scientific and societal impact by creating national collaborations in thematic, interdisciplinary research programmes (the so-called consortia) connecting preclinical and clinical expertise into one concerted effort. An International Scientific Advisory Committee (ISAC) was established to assess these thematic consortia. This panel consists of international scientists, primarily with expertise in the broad cardiovascular research field. The DHF criteria for societal impact were redeveloped in 2013 in collaboration with its CSQ. This panel assesses and advises on the societal aspects of proposed studies. The societal impact criteria include the relevance of the health-care problem, the expected contribution to a solution, attention to the next step in science and towards implementation in practice, and the involvement of and interaction with (end-)users of research (R.Y. Abma-Schouten and I.M. Meijer, unpublished data). Peer review panels for consortium funding are generally composed of members of the ISAC, members of the CSQ, and ad hoc panel members relevant to the specific programme. CSQ members often have a pre-meeting before the final panel meetings to prepare and empower the CSQ representatives participating in the peer review panel.

3.2 Selection of funding programmes

To compare and evaluate observations between the two organisations, we selected funding programmes that were relatively comparable in scope and aims. The criteria were (1) that the programme had a translational and/or clinical objective and (2) that the selection procedure included review panels responsible for the (final) relevance and quality assessment of grant applications. In total, we selected eight programmes: four at each organisation. At the DHF, two programmes were chosen in which the CSQ did not participate, to better disentangle the role of panel composition. For each programme, we observed the selection process, which varied from one session on one day (taking 2–8 h) to multiple sessions over several days. Ten sessions were observed in total, of which eight were final peer review panel meetings and two were CSQ meetings preparing for the panel meeting.

After management approval for the study in both organisations, we asked programme managers and panel chairpersons of the programmes that were selected for their consent for observation; none refused participation. Panel members were, in a passive consent procedure, informed about the planned observation and anonymous analyses.

To ensure the independence of this evaluation, the selection of the grant programmes and peer review panels observed was at the discretion of the project team of this study. The observations and supervision of the analyses were performed by the senior author, who is not affiliated with the funders.

3.3 Observation matrix

Given the lack of a common operationalisation for scientific quality and societal relevance, we decided to use an observation matrix with a fixed set of detailed aspects as a gold standard to score the brochures, the assessment forms, and the arguments used in panel meetings. The matrix used for the observations of the review panels was based upon and adapted from a ‘grant committee observation matrix’ developed by Van Arensbergen. The original matrix informed a literature review on the selection of talent through peer review and the social dynamics in grant review committees ( Van Arensbergen et al. 2014b ). The matrix includes four categories of aspects: societal relevance, scientific quality, committee-related, and applicant-related (see  Table 1 ). The aspects of scientific quality and societal relevance were adapted to fit the operationalisation of scientific quality and societal relevance of the organisations involved. The aspects concerning societal relevance were derived from the CSQ criteria, and the aspects concerning scientific quality were based on the scientific criteria of the first panel observed. The four argument types related to the panel were kept as they were. This committee-related category reflects statements that are related to the personal experience or preference of a panel member and can be seen as signals for bias. This category also includes statements that compare a project with another project without further substantiation. The three applicant-related arguments in the original observation matrix were extended with a fourth on social skills in communication with society. We added health technology assessment (HTA) because one programme specifically focused on this aspect. We tested our version of the observation matrix in pilot observations.

Table 1. Aspects included in the observation matrix and examples of arguments.
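Purely as an illustration of how the observation matrix is structured, the sketch below represents its categories as a simple Python mapping. The aspect labels are paraphrased from those named in this paper, not the funders' exact wording; some lists are left incomplete where the text does not name all aspects, and the placement of HTA is our assumption.

```python
# Illustrative sketch of the observation matrix described in Section 3.3.
# Aspect labels are paraphrased from the paper's text; they are not the exact
# wording used by ZonMw or the DHF, and some lists are deliberately incomplete.
OBSERVATION_MATRIX = {
    "scientific_quality": [
        "feasibility_of_the_aims",
        "match_between_science_and_problem",
        "plan_of_work",
        "international_competitiveness",
        # the fifth scientific aspect is not named explicitly in the text
    ],
    "societal_relevance": [
        "relevance_of_the_healthcare_problem",
        "contribution_to_a_solution",
        "next_step_in_science_and_towards_implementation",
        "activities_towards_partners",
        "participation_and_diversity",
    ],
    "committee_related": [
        "personal_experience_with_applicant_or_network",
        "personal_preference",
        "comparison_with_other_proposals_without_substantiation",
        # the study keeps four panel-related argument types; only three are named
    ],
    "applicant_related": [
        "background_and_reputation",
        "social_skills_in_communication_with_society",
        # two further applicant aspects from the original matrix are not named
    ],
    # added for one programme with a specific HTA focus; its category is assumed
    "health_technology_assessment": ["hta_aspects"],
}
```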

3.4 Observations

Data were primarily collected through observations. Our observations of review panel meetings were non-participatory: the observer and goal of the observation were introduced at the start of the meeting, without further interactions during the meeting. To aid in the processing of observations, some meetings were audiotaped (sound only). Presentations or responses of applicants were not noted and were not part of the analysis. The observer made notes on the ongoing discussion and scored the arguments while listening. One meeting was not attended in person and only observed and scored by listening to the audiotape recording. Because this made identification of the panel members unreliable, this panel meeting was excluded from the analysis of the third research question on how arguments used differ between panel members with different perspectives.

3.5 Grant programmes and the assessment criteria

We gathered and analysed all brochures and assessment forms used by the review panels in order to answer our second research question on the correspondence of arguments used with the formal criteria. Several programmes consisted of multiple grant calls: in that case, the specific call brochure was gathered and analysed, not the overall programme brochure. Additional documentation (e.g. instructional presentations at the start of the panel meeting) was not included in the document analysis. All included documents were marked using the aforementioned observation matrix. The panel-related arguments were not used because this category reflects the personal arguments of panel members, which are not part of brochures or instructions. To avoid potential differences in scoring methods, each of two authors independently scored half of the documents, and these scores were afterwards checked and validated by the other author. Differences were discussed until a consensus was reached.

3.6 Panel composition

In order to answer the third research question, background information on panel members was collected. We categorised the panel members into five common types: scientific, clinical scientific, health-care professional/clinical, patient, and policy. First, a list of all panel members was composed, including their scientific and professional backgrounds and affiliations. The theoretical notion that reviewers represent different types of users of research, and therefore potential impact domains (academic, social, economic, and cultural), guided the categorisation ( Meijer 2012 ; Spaapen and Van Drooge 2011 ). Because clinical researchers play a dual role, both advancing research as fellow academics and using research output in health-care practice, we divided the academic members into two categories of non-clinical and clinical researchers. Multiple types of professional actors participated in each review panel. These were divided into two groups for the analysis: health-care professionals (without current academic activity) and policymakers in the health-care sector. No representatives of the private sector participated in the observed review panels. From the public domain, (at-risk) patients and patient representatives were part of several review panels. Only publicly available information was used to classify the panel members. Members were assigned to one category only: categorisation took place based on the specific role and expertise for which they were appointed to the panel.
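As a sketch only, the categorisation rule described above (one category per member, with clinical researchers split off from other academics) could be written as follows; the decision order and the attribute names are our own illustration, not the authors' documented procedure.

```python
from dataclasses import dataclass

# The five panel-member categories used in the analysis (Section 3.6).
CATEGORIES = ("scientific", "clinical_scientific", "healthcare_professional",
              "patient", "policy")

@dataclass
class PanelMember:
    name: str
    appointed_as_patient_rep: bool  # (at-risk) patient or patient representative
    appointed_as_policymaker: bool  # policymaker in the health-care sector
    active_academic: bool           # currently active as academic researcher
    active_clinician: bool          # active in clinical / health-care practice

def categorise(member: PanelMember) -> str:
    """Assign exactly one category, based on the role for which the member
    was appointed to the panel (illustrative decision order)."""
    if member.appointed_as_patient_rep:
        return "patient"
    if member.appointed_as_policymaker:
        return "policy"
    if member.active_academic and member.active_clinician:
        return "clinical_scientific"
    if member.active_academic:
        return "scientific"
    return "healthcare_professional"
```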

In two of the four DHF programmes, the assessment procedure included the CSQ. In these two programmes, representatives of this CSQ participated in the scientific panel to articulate the findings of the CSQ meeting during the final assessment meeting. Two grant programmes were assessed by a review panel with solely (clinical) scientific members.

3.7 Analysis

Data were processed using ATLAS.ti 8 and Microsoft Excel 2010 to produce descriptive statistics. All observed arguments were coded and given a randomised identification code for the panel member using that particular argument. The number of times an argument type was observed was used as an indicator for the relative importance of that argument in the appraisal of proposals. With this approach, a practical and reproducible method for research funders to evaluate the effect of policy changes on peer review was developed. If codes or notes were unclear, post-observation validation of codes was carried out based on observation matrix notes. Arguments that were noted by the observer but could not be matched with an existing code were first coded as a ‘non-existing’ code, and these were resolved by listening back to the audiotapes. Arguments that could not be assigned to a panel member were assigned a ‘missing panel member’ code. A total of 4.7 per cent of all codes were assigned a ‘missing panel member’ code.
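The descriptive statistics themselves are straightforward tallies; a minimal sketch of that step is shown below using pandas, whereas the authors worked in ATLAS.ti 8 and Microsoft Excel. The export format, column names, and example rows are hypothetical.

```python
import pandas as pd

# Hypothetical export of the coded observations: one row per observed argument,
# with its aspect and category from the observation matrix, the randomised
# panel-member code and that member's background, and the meeting identifier.
coded = pd.DataFrame([
    {"meeting": "P1", "member_id": "m01", "background": "scientific",
     "category": "scientific_quality", "aspect": "feasibility_of_the_aims"},
    {"meeting": "P1", "member_id": "m07", "background": "patient",
     "category": "societal_relevance", "aspect": "participation_and_diversity"},
    # ... one row per observed argument
])

# Frequency of each aspect, used as a proxy for its relative importance.
aspect_counts = coded["aspect"].value_counts()

# Arguments per panel meeting (the paper reports 49-254 per meeting, mean 126).
per_meeting = coded.groupby("meeting").size()

# Total and mean number of arguments per background category (cf. Figure 2).
total_per_background = coded.groupby("background")["aspect"].count()
mean_per_member = (
    coded.groupby(["background", "member_id"]).size()
         .groupby(level="background").mean()
)

# Share of arguments that could not be attributed to a member (4.7% in the study).
share_missing = (coded["member_id"] == "missing").mean()
```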

After the analyses, two meetings were held to reflect on the results: one with the CSQ and the other with the programme coordinators of both organisations. The goal of these meetings was to improve our interpretation of the findings, disseminate the results derived from this project, and identify topics for further analyses or future studies.

3.8 Limitations

Our study focuses on the final phase of the peer review process of research applications in a real-life setting. Our design, a non-participant observation of peer review panels, introduced several challenges ( Liu and Maitlis 2010 ).

First, the independent review phase or pre-application phase was not part of our study. We therefore could not assess to what extent attention to certain aspects of scientific quality or societal relevance and impact in the review phase influenced the topics discussed during the meeting.

Second, the most important challenge of overt non-participant observation is the observer effect: the danger of causing reactivity in those under study. We believe that the consequences of this effect on our conclusions were limited because panellists are used to external observers in the meetings of these two funders. The observer briefly explained the goal of the study in general terms during the introductory round of the panel, sat as unobtrusively as possible, and avoided reacting to discussions. Similar to previous observations of panels, we experienced that the presence of an observer faded into the background during a meeting ( Roumbanis 2021a ). However, a limited observer effect can never be entirely excluded.

Third, our choice to score only the arguments raised, and not the responses of the applicant or information on the content of the proposals, has its positives and negatives. With this approach, we could assure the anonymity of the grant procedures reviewed, the applicants and proposals, the panels, and the individual panellists. This was an important condition for the funders involved. We took the frequency of arguments used as a proxy for the relative importance of that argument in decision-making, which undeniably also has its caveats. Our data collection approach limits more in-depth reflection on which arguments were decisive in decision-making and on group dynamics during the interaction with the applicants, as non-verbal and non-content-related comments were not captured in this study.

Fourth, despite this being one of the largest observational studies on the peer review assessment of grant applications, with ten panels observed in eight grant programmes, many variables within and beyond our view might explain differences in the arguments used. Examples of ‘confounding’ variables are the many variations in panel composition, the differences in objectives of the programmes, and the range of the funding programmes. Our study should therefore be seen as exploratory, which warrants caution in drawing conclusions.

4.1 Overview of observational data

The grant programmes included in this study reflected a broad range of biomedical and health funding programmes, ranging from fellowship grants to translational research and applied health research. All formal documents available to the applicants and to the review panel were retrieved for both ZonMw and the DHF. In total, eighteen documents corresponding to the eight grant programmes were studied. The number of proposals assessed per programme varied from three to thirty-three. The duration of the panel meetings varied between 2 h and two consecutive days. Together, this resulted in a large spread in the number of total arguments used in an individual meeting and in a grant programme as a whole. In the shortest meeting, 49 arguments were observed versus 254 in the longest, with a mean of 126 arguments per meeting and on average 15 arguments per proposal.

Overall, we found consistency between how criteria were operationalised in the grant programmes’ brochures and in the assessment forms of the review panels. At the same time, because the number of elements included in the observation matrix is limited, there was considerable diversity in the arguments that fell within each aspect (see examples in  Table 1 ). Some of these differences could possibly be explained by differences in the language used and the level of detail in the observation matrix, the brochure, and the panel’s instructions. This was especially the case for the applicant-related aspects, for which the observation matrix was more detailed than the text in the brochure and assessment forms.

In interpreting our findings, it is important to take into account that, even though our data were largely complete and the observation matrix matched well with the description of the criteria in the brochures and assessment forms, there was a large diversity in the type and number of arguments used and in the number of proposals assessed in the grant programmes included in our study.

4.2 Wide range of arguments used by panels: scientific arguments used most

For our first research question, we explored the number and type of arguments used in the panel meetings. Figure 1 provides an overview of the arguments used. Scientific quality was discussed most. The number of times the feasibility of the aims was discussed clearly stands out in comparison to all other arguments. Also, the match between the science and the problem studied and the plan of work were frequently discussed aspects of scientific quality. International competitiveness of the proposal was discussed the least of all five scientific arguments.

Figure 1. The number of arguments used in panel meetings.

Attention was paid to societal relevance and impact in the panel meetings of both organisations. Yet, the language used differed somewhat between organisations. The contribution to a solution and the next step in science were the most often used societal arguments. At ZonMw, the impact of the health-care problem studied and the activities towards partners were less frequently discussed than the other three societal arguments. At the DHF, the five societal arguments were used equally often.

With the exception of the fellowship programme meeting, applicant-related arguments were not often used. The fellowship panel used arguments related to the applicant and to scientific quality about equally often. Committee-related arguments were also rarely used in the majority of the eight grant programmes observed. In three out of the ten panel meetings, one or two arguments were observed that related to personal experience with the applicant or their direct network. In seven out of ten meetings, statements were observed that were not further substantiated or that were explicitly announced as reflecting a personal preference. The frequency varied between one and seven statements (sixteen in total), which is low in comparison to the other arguments used (see  Fig. 1 for examples).

4.3 Use of arguments varied strongly per panel meeting

The balance in the use of scientific and societal arguments varied strongly per grant programme, panel, and organisation. At ZonMw, two meetings had approximately an equal balance in societal and scientific arguments. In the other two meetings, scientific arguments were used twice to four times as often as societal arguments. At the DHF, three types of panels were observed. Different patterns in the relative use of societal and scientific arguments were observed for each of these panel types. In the two CSQ-only meetings the societal arguments were used approximately twice as often as scientific arguments. In the two meetings of the scientific panels, societal arguments were infrequently used (between zero and four times per argument category). In the combined societal and scientific panel meetings, the use of societal and scientific arguments was more balanced.

4.4 Match of arguments used by panels with the assessment criteria

In order to answer our second research question, we looked into the relation between the arguments used and the formal criteria. We observed that a broader range of arguments was often used in comparison to how the criteria were described in the brochure and assessment instruction. However, arguments related to aspects that were consistently included in the brochure and instruction seemed to be discussed more frequently than in programmes where those aspects were not consistently included or were not included at all. Although the match of the science with the health-care problem and the background and reputation of the applicant were not always made explicit in the brochure or instructions, they were discussed in many panel meetings. Supplementary Fig. S1 provides a visualisation of how the arguments used differ between the programmes in which those aspects were, or were not, consistently included in the brochure and instruction forms.

4.5 Two-thirds of the assessment was driven by scientific panel members

To answer our third question, we looked into the differences in arguments used between panel members representing a scientific, clinical scientific, professional, policy, or patient perspective. In each research programme, the majority of panellists had a scientific background (n = 35); thirty-four members had a clinical scientific background, twenty had a health professional/clinical background, eight members represented a policy perspective, and fifteen represented a patient perspective. From the total number of arguments (1,097), two-thirds were made by members with a scientific or clinical scientific perspective. Members with a scientific background engaged most actively in the discussion, with a mean of twelve arguments per member. Clinical scientists and health-care professionals each participated with a mean of nine arguments, and members with a policy or patient perspective put forward the fewest arguments on average, namely seven and eight, respectively. Figure 2 provides a complete overview of the total and mean number of arguments used by the different disciplines in the various panels.

Figure 2. The total and mean number of arguments displayed per subgroup of panel members.

4.6 Diverse use of arguments by panellists, but background matters

In meetings of both organisations, we observed a diverse use of arguments by the panel members. Yet, the use of arguments varied depending on the background of the panel member (see  Fig. 3 ). Those with a scientific and clinical scientific perspective used primarily scientific arguments. As could be expected, health-care professionals and patients used societal arguments more often.

Figure 3. The use of arguments differentiated by panel member background.

Further breakdown of arguments across backgrounds showed clear differences in the use of scientific arguments between the different disciplines of panellists. Scientists and clinical scientists discussed the feasibility of the aims more than twice as often as their second most often uttered element of scientific quality, which was the match between the science and the problem studied . Patients and members with a policy or health professional background put forward fewer but more varied scientific arguments.

Patients and health-care professionals accounted for approximately half of the societal arguments used, despite being a much smaller part of the panel’s overall composition. In other words, members with a scientific perspective were less likely to use societal arguments. The relevance of the health-care problem studied, activities towards partners, and arguments related to participation and diversity were not used often by this group. Patients often used arguments related to patient participation and diversity and activities towards partners, although the frequency of the use of the latter differed per organisation.

The majority of the applicant-related arguments were put forward by scientists, including clinical scientists. Committee-related arguments were very rare and are therefore not differentiated by panel member background, except for comments related to a comparison with other applications. These arguments were mainly put forward by panel members with a scientific background. HTA-related arguments were often used by panel members with a scientific perspective. Panel members with other perspectives scarcely used this argument (see Supplementary Figs S2–S4 for the visual presentation of the differences between panel members on all aspects included in the matrix).

5.1 Explanations for arguments used in panels

Our observations show that most types of arguments for scientific quality were used often. However, except for the feasibility of the aims, the frequency with which arguments were used varied strongly between the meetings and between the individual proposals that were discussed. The fact that most arguments were not consistently used is not surprising given the results from previous studies that showed heterogeneity in grant application assessments and low consistency in comments and scores by independent reviewers ( Abdoul et al. 2012 ; Pier et al. 2018 ). In an analysis of written assessments on nine observed dimensions, no dimension was used in more than 45 per cent of the reviews ( Hartmann and Neidhardt 1990 ).

There are several possible explanations for this heterogeneity. Roumbanis (2021a) described how being responsive to the different challenges in the proposals and to the points of attention arising from the written assessments influenced discussion in panels. Also, when a disagreement arises, more time is spent on discussion ( Roumbanis 2021a ). One could infer that unambiguous, and thus not debated, aspects might remain largely undetected in our study. We believe, however, that the main points relevant to the assessment will not remain entirely unmentioned, because most panels in our study started the discussion with a short summary of the proposal, the written assessment, and the rebuttal. Lamont (2009) , however, points out that opening statements serve more goals than merely decision-making. They can also increase the credibility of the panellist, showing their comprehension and balanced assessment of an application. We can therefore not entirely disentangle whether the arguments observed most often were also found to be most important or decisive, or whether they were simply the topics that led to the most disagreement.

An interesting difference with Roumbanis’ study was the available discussion time per proposal. In our study, most panels handled a limited number of proposals, allowing for longer discussions in comparison with the often 2-min time frame that Roumbanis (2021b) described, potentially contributing to a wider range of arguments being discussed. Limited time per proposal might also limit the number of panellists contributing to the discussion per proposal ( De Bont 2014 ).

5.2 Reducing heterogeneity by improving operationalisation and the consistent use of assessment criteria

We found that the language used for the operationalisation of the assessment criteria in programme brochures and in the observation matrix was much more detailed than in the instruction for the panel, which was often very concise. The exercise also illustrated that many terms were used interchangeably.

This was especially true for the applicant-related aspects. Several panels discussed how talent should be assessed. This confusion is understandable considering the changing values in research and its assessment ( Moher et al. 2018 ) and the fact that the instructions of the funders were very concise. For example, it was not made explicit whether the individual or the team should be assessed. Van Arensbergen et al. (2014b) described how, in grant allocation processes, talent is generally assessed using a limited set of characteristics. More objective and quantifiable outputs often prevailed at the expense of recognising and rewarding a broad variety of skills and traits combining professional, social, and individual capital ( DORA 2013 ).

In addition, committee-related arguments, like personal experiences with the applicant or their institute, were rarely used in our study. Comparisons between proposals were sometimes made without further argumentation, mainly by scientific panel members. This was especially pronounced in one (fellowship) grant programme with a high number of proposals. In this programme, the panel meeting concentrated on quickly comparing the quality of the applicants and of the proposals based on the reviewer’s judgement, instead of a more in-depth discussion of the different aspects of the proposals. Because the review phase was not part of this study, the question of which aspects have been used for the assessment of the proposals in this panel therefore remains partially unanswered. However, weighing and comparing proposals on different aspects and with different inputs is a core element of scientific peer review, both in the review of papers and in the review of grants ( Hirschauer 2010 ). The large role of scientific panel members in comparing proposals is therefore not surprising.

One could anticipate that more consistent language in operationalising the criteria may lead to more clarity for both applicants and panellists and to more consistency in the assessment of research proposals. The trend in our observations was that arguments were used less when the related criteria were not, or not consistently, included in the brochure and panel instruction. It remains, however, challenging to disentangle the influence of the formal definitions of criteria on the arguments used. Previous studies also encountered difficulties in studying the role of the formal instruction in peer review but concluded that this role is relatively limited ( Langfeldt 2001 ; Reinhart 2010 ).

The lack of a clear operationalisation of criteria can contribute to heterogeneity in peer review, as many scholars have found that assessors differ in their conceptualisation of good science and in the importance they attach to various aspects of research quality and societal relevance ( Abdoul et al. 2012 ; Geurts 2016 ; Scholten et al. 2018 ; Van den Brink et al. 2016 ). The large variation and the absence of a gold standard in the interpretation of scientific quality and societal relevance affect the consistency of peer review. As a consequence, it is challenging to systematically evaluate and improve peer review in order to fund the research that contributes most to science and society. To contribute to responsible research and innovation, it is therefore important that funders invest in a more consistent and conscientious peer review process ( Curry et al. 2020 ; DORA 2013 ).

A common conceptualisation of scientific quality and societal relevance and impact could improve the alignment between views on good scientific conduct, programmes’ objectives, and the peer review in practice. Such a conceptualisation could contribute to more transparency and quality in the assessment of research. By involving panel members from all relevant backgrounds, including the research community, health-care professionals, and societal actors, in a better operationalisation of criteria, more inclusive views of good science can be implemented more systematically in the peer review assessment of research proposals. The ZonMw Framework Fostering Responsible Research Practices is an example of an initiative aiming to support standardisation and integration ( Reijmerink et al. 2020 ).

Given the lack of a common definition or conceptualisation of scientific quality and societal relevance, an important choice in our study was to use a fixed set of detailed aspects of these two criteria as a gold standard to score the brochures, the panel instructions, and the arguments used by the panels. This approach proved helpful in disentangling the different components of scientific quality and societal relevance. Having said that, it is important not to oversimplify the causes of heterogeneity in peer review, because these substantive arguments are not independent of non-cognitive, emotional, or social aspects ( Lamont and Guetzkow 2016 ; Reinhart 2010 ).

5.3 Do more diverse panels contribute to a broader use of arguments?

Both funders participating in our study have an outspoken public mission that requires sufficient attention to societal aspects in assessment processes. In reality, as observed in several panels, the main focus of peer review meetings is on scientific arguments. In addition to the possible explanations given earlier, the composition of the panel might play a role in explaining the arguments used in panel meetings. Our results have shown that health-care professionals and patients bring in more societal arguments than scientists, including those who are also clinicians. It is, however, not that simple. In the more diverse panels, panel members, regardless of their backgrounds, used more societal arguments than in the less diverse panels.

Observing ten panel meetings was sufficient to explore differences in the arguments used by panel members with different backgrounds. The pattern of (primarily) scientific arguments being raised by panels with mainly scientific members is not surprising: assessing the scientific content of grant proposals is their main task and fits their competencies. As such, one could argue, depending on how one views the relationship between science and society, that health-care professionals and patients might be better suited to assess the value of research results for potential users. Scientific panel members and clinical scientists in our study used fewer arguments that reflect on opening up science and connecting it directly to others who can take it further (be it industry, health-care professionals, or other stakeholders). Patients filled this gap, as these two types of arguments were the most prevalent ones they put forward. Making an active connection with society apparently requires a broader, more diverse panel before scientists direct their attention to societal arguments. Evident from our observations is that the presence of patients and health-care professionals seemed to increase the attention that all panel members, including scientists, paid to arguments beyond the scientific ones. This conclusion is congruent with the observation that there was a more equal balance between societal and scientific arguments in the scientific panels in which the CSQ participated. This illustrates that opening up peer review panels to non-scientific members creates an opportunity to focus on both the contribution and the integrative rationality ( Glerup and Horst 2014 ) or, in other words, to allow productive interactions between scientific and non-scientific actors. This corresponds with previous research suggesting that, with regard to societal aspects, reviews from mixed panels were broader and richer ( Luo et al. 2021 ). In panels with non-scientific experts, more emphasis was placed on the role of the proposed research process in increasing the likelihood of societal impact than on the causal importance of scientific excellence for broader impacts. This is in line with the finding that panels with more disciplinary diversity, in range and also through the inclusion of generalist experts, applied more versatile styles to reach consensus and paid more attention to relevance and pragmatic value ( Huutoniemi 2012 ).

Our observations further illustrate that patients and health-care professionals were less vocal in panels than (clinical) scientists and were in the minority. This could reflect their social role and lower perceived authority in the panel. Several guides are available to help funders stimulate the equal participation of patients in science, and these guides are also applicable to their involvement in peer review panels. Measures include support and training to prepare patients for deliberations with renowned scientists and explicitly addressing power differences ( De Wit et al. 2016 ). Panel chairs and programme officers have to set and supervise the conditions for the functioning of both the individual panel members and the panel as a whole ( Lamont 2009 ).

5.4 Suggestions for future studies

In future studies, it is important to further disentangle how the operationalisation and appraisal of assessment criteria can reduce heterogeneity in the arguments used by panels. More controlled experimental settings would be a valuable addition to the mainly observational methodologies applied so far, helping to disentangle some of the cognitive and social factors that influence the functioning and argumentation of peer review panels. Reusing the data from the panel observations and the written reports could also provide a starting point for a bottom-up approach to creating a more consistent and shared conceptualisation and operationalisation of assessment criteria.

To further understand the effects of opening up review panels to non-scientific peers, it is valuable to compare the role of diversity and interdisciplinarity in solely scientific panels versus panels that also include non-scientific experts.

In future studies, differences between domains and types of research should also be addressed. We hypothesise that biomedical and health research may be better suited to the inclusion of non-scientific peers in panels than other research domains. For example, it is valuable to better understand whether potentially relevant users can be adequately identified in other research fields and to what extent non-academics can contribute to assessing the possible value of research, especially early-stage or blue-sky research.

The goal of our study was to explore in practice which arguments regarding the main criteria of scientific quality and societal relevance were used by peer review panels of biomedical and health research funding programmes. We showed that there is wide diversity in the number and range of arguments used, but three scientific aspects were discussed most frequently: is the approach feasible, does the science match the problem, and is the work plan scientifically sound? These scientific aspects were nevertheless accompanied by a significant amount of discussion of societal aspects, of which the contribution to a solution was the most prominent. In comparison with scientific panellists, non-scientific panellists, such as health-care professionals, policymakers, and patients, often used a wider range of arguments and more societal arguments. Even more striking was that, even though non-scientific peers were often outnumbered and less vocal in panels, scientists also used a wider range of arguments when non-scientific peers were present.

It is relevant that two health research funders collaborated in the current study to reflect on and improve peer review in research funding. There are few studies published that describe live observations of peer review panel meetings. Many studies focus on alternatives for peer review or reflect on the outcomes of the peer review process, instead of reflecting on the practice and improvement of peer review assessment of grant proposals. Privacy and confidentiality concerns of funders also contribute to the lack of information on the functioning of peer review panels. In this study, both organisations were willing to participate because of their interest in research funding policies in relation to enhancing the societal value and impact of science. The study provided them with practical suggestions, for example, on how to improve the alignment in language used in programme brochures and instructions of review panels, and contributed to valuable knowledge exchanges between organisations. We hope that this publication stimulates more research funders to evaluate their peer review approach in research funding and share their insights.

For a long time, research funders relied solely on scientists for designing and executing the peer review of research proposals, thereby delegating responsibility for the process. Although review panels have discretionary authority, it is important that funders set and supervise the process and its conditions. We argue that one of these conditions should be the diversification of peer review panels and the opening up of panels to non-scientific peers.

Supplementary material is available at Science and Public Policy online.

Details of the data and information on how to request access are available from the first author.

Joey Gijbels and Wendy Reijmerink are employed by ZonMw. Rebecca Abma-Schouten is employed by the Dutch Heart Foundation and is, as an external PhD candidate, affiliated with the Centre for Science and Technology Studies, Leiden University.

A special thanks to the panel chairs and programme officers of ZonMw and the DHF for their willingness to participate in this project. We thank Diny Stekelenburg, an internship student at ZonMw, for her contributions to the project. Our sincerest gratitude to Prof. Paul Wouters, Sarah Coombs, and Michiel van der Vaart for proofreading and their valuable feedback. Finally, we thank the editors and anonymous reviewers of Science and Public Policy for their thorough and insightful reviews and recommendations. Their contributions are recognisable in the final version of this paper.

Abdoul   H. , Perrey   C. , Amiel   P. , et al.  ( 2012 ) ‘ Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices ’, PLoS One , 7 : 1 – 15 .


Abma-Schouten   R. Y. ( 2017 ) ‘ Maatschappelijke Kwaliteit van Onderzoeksvoorstellen ’, Dutch Heart Foundation .

Alla   K. , Hall   W. D. , Whiteford   H. A. , et al.  ( 2017 ) ‘ How Do We Define the Policy Impact of Public Health Research? A Systematic Review ’, Health Research Policy and Systems , 15 : 84.

Benedictus   R. , Miedema   F. , and Ferguson   M. W. J. ( 2016 ) ‘ Fewer Numbers, Better Science ’, Nature , 538 : 453 – 4 .

Chalmers   I. , Bracken   M. B. , Djulbegovic   B. , et al.  ( 2014 ) ‘ How to Increase Value and Reduce Waste When Research Priorities Are Set ’, The Lancet , 383 : 156 – 65 .

Curry   S. , De Rijcke   S. , Hatch   A. , et al.  ( 2020 ) ‘ The Changing Role of Funders in Responsible Research Assessment: Progress, Obstacles and the Way Ahead ’, RoRI Working Paper No. 3, London : Research on Research Institute (RoRI) .

De Bont   A. ( 2014 ) ‘ Beoordelen Bekeken. Reflecties op het Werk van Een Programmacommissie van ZonMw ’, ZonMw .

De Rijcke   S. , Wouters   P. F. , Rushforth   A. D. , et al.  ( 2016 ) ‘ Evaluation Practices and Effects of Indicator Use—a Literature Review ’, Research Evaluation , 25 : 161 – 9 .

De Wit   A. M. , Bloemkolk   D. , Teunissen   T. , et al.  ( 2016 ) ‘ Voorwaarden voor Succesvolle Betrokkenheid van Patiënten/cliënten bij Medisch Wetenschappelijk Onderzoek ’, Tijdschrift voor Sociale Gezondheidszorg , 94 : 91 – 100 .

Del Carmen Calatrava Moreno   M. , Warta   K. , Arnold   E. , et al.  ( 2019 ) Science Europe Study on Research Assessment Practices . Technopolis Group Austria .


Demicheli   V. and Di Pietrantonj   C. ( 2007 ) ‘ Peer Review for Improving the Quality of Grant Applications ’, Cochrane Database of Systematic Reviews , 2 : MR000003.

Den Oudendammer   W. M. , Noordhoek   J. , Abma-Schouten   R. Y. , et al.  ( 2019 ) ‘ Patient Participation in Research Funding: An Overview of When, Why and How Amongst Dutch Health Funds ’, Research Involvement and Engagement , 5 .

Diabetesfonds ( n.d. ) Maatschappelijke Adviesraad < https://www.diabetesfonds.nl/over-ons/maatschappelijke-adviesraad > accessed 18 Sept 2022 .

Dijstelbloem   H. , Huisman   F. , Miedema   F. , et al.  ( 2013 ) ‘ Science in Transition Position Paper: Waarom de Wetenschap Niet Werkt Zoals het Moet, En Wat Daar aan te Doen Is ’, Utrecht : Science in Transition .

Forsyth   D. R. ( 1999 ) Group Dynamics , 3rd edn. Belmont : Wadsworth Publishing Company .

Geurts   J. ( 2016 ) ‘ Wat Goed Is, Herken Je Meteen ’, NRC Handelsblad < https://www.nrc.nl/nieuws/2016/10/28/wat-goed-is-herken-je-meteen-4975248-a1529050 > accessed 6 Mar 2022 .

Glerup   C. and Horst   M. ( 2014 ) ‘ Mapping “Social Responsibility” in Science ’, Journal of Responsible Innovation , 1 : 31 – 50 .

Hartmann   I. and Neidhardt   F. ( 1990 ) ‘ Peer Review at the Deutsche Forschungsgemeinschaft ’, Scientometrics , 19 : 419 – 25 .

Hirschauer   S. ( 2010 ) ‘ Editorial Judgments: A Praxeology of “Voting” in Peer Review ’, Social Studies of Science , 40 : 71 – 103 .

Hughes   A. and Kitson   M. ( 2012 ) ‘ Pathways to Impact and the Strategic Role of Universities: New Evidence on the Breadth and Depth of University Knowledge Exchange in the UK and the Factors Constraining Its Development ’, Cambridge Journal of Economics , 36 : 723 – 50 .

Huutoniemi   K. ( 2012 ) ‘ Communicating and Compromising on Disciplinary Expertise in the Peer Review of Research Proposals ’, Social Studies of Science , 42 : 897 – 921 .

Jasanoff   S. ( 2011 ) ‘ Constitutional Moments in Governing Science and Technology ’, Science and Engineering Ethics , 17 : 621 – 38 .

Kolarz   P. , Arnold   E. , Farla   K. , et al.  ( 2016 ) Evaluation of the ESRC Transformative Research Scheme . Brighton : Technopolis Group .

Lamont   M. ( 2009 ) How Professors Think : Inside the Curious World of Academic Judgment . Cambridge : Harvard University Press .

Lamont   M. and Guetzkow   J. ( 2016 ) ‘How Quality Is Recognized by Peer Review Panels: The Case of the Humanities’, in M.   Ochsner , S. E.   Hug , and H.-D.   Daniel (eds) Research Assessment in the Humanities , pp. 31 – 41 . Cham : Springer International Publishing .

Lamont   M. and Huutoniemi   K. ( 2011 ) ‘Comparing Customary Rules of Fairness: Evaluative Practices in Various Types of Peer Review Panels’, in C.   Camic , N.   Gross , and M.   Lamont (eds) Social Knowledge in the Making , pp. 209–32. Chicago : The University of Chicago Press .

Langfeldt   L. ( 2001 ) ‘ The Decision-making Constraints and Processes of Grant Peer Review, and Their Effects on the Review Outcome ’, Social Studies of Science , 31 : 820 – 41 .

——— ( 2006 ) ‘ The Policy Challenges of Peer Review: Managing Bias, Conflict of Interests and Interdisciplinary Assessments ’, Research Evaluation , 15 : 31 – 41 .

Lee   C. J. , Sugimoto   C. R. , Zhang   G. , et al.  ( 2013 ) ‘ Bias in Peer Review ’, Journal of the American Society for Information Science and Technology , 64 : 2 – 17 .

Liu   F. and Maitlis   S. ( 2010 ) ‘Nonparticipant Observation’, in A. J.   Mills , G.   Durepos , and E.   Wiebe (eds) Encyclopedia of Case Study Research , pp. 609 – 11 . Los Angeles : SAGE .

Luo   J. , Ma   L. , and Shankar   K. ( 2021 ) ‘ Does the Inclusion of Non-academic Reviewers Make Any Difference for Grant Impact Panels? ’, Science and Public Policy , 48 : 763 – 75 .

Luukkonen   T. ( 2012 ) ‘ Conservatism and Risk-taking in Peer Review: Emerging ERC Practices ’, Research Evaluation , 21 : 48 – 60 .

Macleod   M. R. , Michie   S. , Roberts   I. , et al.  ( 2014 ) ‘ Biomedical Research: Increasing Value, Reducing Waste ’, The Lancet , 383 : 101 – 4 .

Meijer   I. M. ( 2012 ) ‘ Societal Returns of Scientific Research. How Can We Measure It? ’, Leiden : Center for Science and Technology Studies, Leiden University .

Merton   R. K. ( 1968 ) Social Theory and Social Structure , Enlarged edn. [Nachdr.] . New York : The Free Press .

Moher   D. , Naudet   F. , Cristea   I. A. , et al.  ( 2018 ) ‘ Assessing Scientists for Hiring, Promotion, And Tenure ’, PLoS Biology , 16 : e2004089.

Olbrecht   M. and Bornmann   L. ( 2010 ) ‘ Panel Peer Review of Grant Applications: What Do We Know from Research in Social Psychology on Judgment and Decision-making in Groups? ’, Research Evaluation , 19 : 293 – 304 .

Patiëntenfederatie Nederland ( n.d. ) Ervaringsdeskundigen Referentenpanel < https://www.patientenfederatie.nl/zet-je-ervaring-in/lid-worden-van-ons-referentenpanel > accessed 18 Sept 2022.

Pier   E. L. , Brauer   M. , Filut   A. , et al.  ( 2018 ) ‘ Low Agreement among Reviewers Evaluating the Same NIH Grant Applications ’, Proceedings of the National Academy of Sciences , 115 : 2952 – 7 .

Prinses Beatrix Spierfonds ( n.d. ) Gebruikerscommissie < https://www.spierfonds.nl/wie-wij-zijn/gebruikerscommissie > accessed 18 Sep 2022 .

Rathenau Instituut ( 2020 ) Private Non-profit Financiering van Onderzoek in Nederland < https://www.rathenau.nl/nl/wetenschap-cijfers/geld/wat-geeft-nederland-uit-aan-rd/private-non-profit-financiering-van#:∼:text=R%26D%20in%20Nederland%20wordt%20gefinancierd,aan%20wetenschappelijk%20onderzoek%20in%20Nederland > accessed 6 Mar 2022 .

Reneman   R. S. , Breimer   M. L. , Simoons   J. , et al.  ( 2010 ) ‘ De toekomst van het cardiovasculaire onderzoek in Nederland. Sturing op synergie en impact ’, Den Haag : Nederlandse Hartstichting .

Reed   M. S. , Ferré   M. , Marin-Ortega   J. , et al.  ( 2021 ) ‘ Evaluating Impact from Research: A Methodological Framework ’, Research Policy , 50 : 104147.

Reijmerink   W. and Oortwijn   W. ( 2017 ) ‘ Bevorderen van Verantwoorde Onderzoekspraktijken Door ZonMw ’, Beleidsonderzoek Online. accessed 6 Mar 2022.

Reijmerink   W. , Vianen   G. , Bink   M. , et al.  ( 2020 ) ‘ Ensuring Value in Health Research by Funders’ Implementation of EQUATOR Reporting Guidelines: The Case of ZonMw ’, Berlin : REWARD|EQUATOR .

Reinhart   M. ( 2010 ) ‘ Peer Review Practices: A Content Analysis of External Reviews in Science Funding ’, Research Evaluation , 19 : 317 – 31 .

Reinhart   M. and Schendzielorz   C. ( 2021 ) Trends in Peer Review . SocArXiv . < https://osf.io/preprints/socarxiv/nzsp5 > accessed 29 Aug 2022.

Roumbanis   L. ( 2017 ) ‘ Academic Judgments under Uncertainty: A Study of Collective Anchoring Effects in Swedish Research Council Panel Groups ’, Social Studies of Science , 47 : 95 – 116 .

——— ( 2021a ) ‘ Disagreement and Agonistic Chance in Peer Review ’, Science, Technology & Human Values , 47 : 1302 – 33 .

——— ( 2021b ) ‘ The Oracles of Science: On Grant Peer Review and Competitive Funding ’, Social Science Information , 60 : 356 – 62 .

VSNU, NFU, KNAW, NWO and ZonMw ( 2019 ) ‘ Ruimte voor ieders talent (Position Paper) ’, Den Haag . < https://www.universiteitenvannederland.nl/recognitionandrewards/wp-content/uploads/2019/11/Position-paper-Ruimte-voor-ieders-talent.pdf >.

DORA ( 2013 ) San Francisco Declaration on Research Assessment . < https://sfdora.org > accessed 2 Jan 2022 .

Sarewitz   D. and Pielke   R. A.  Jr. ( 2007 ) ‘ The Neglected Heart of Science Policy: Reconciling Supply of and Demand for Science ’, Environmental Science & Policy , 10 : 5 – 16 .

Scholten   W. , Van Drooge   L. , and Diederen   P. ( 2018 ) Excellent Is Niet Gewoon. Dertig Jaar Focus op Excellentie in het Nederlandse Wetenschapsbeleid . The Hague : Rathenau Instituut .

Shapin   S. ( 2008 ) The Scientific Life : A Moral History of a Late Modern Vocation . Chicago : University of Chicago Press .

Spaapen   J. and Van Drooge   L. ( 2011 ) ‘ Introducing “Productive Interactions” in Social Impact Assessment ’, Research Evaluation , 20 : 211 – 8 .

Travis   G. D. L. and Collins   H. M. ( 1991 ) ‘ New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System ’, Science, Technology & Human Values , 16 : 322 – 41 .

Van Arensbergen   P. and Van den Besselaar   P. ( 2012 ) ‘ The Selection of Scientific Talent in the Allocation of Research Grants ’, Higher Education Policy , 25 : 381 – 405 .

Van Arensbergen   P. , Van der Weijden   I. , and Van den Besselaar   P. ( 2014a ) ‘ The Selection of Talent as a Group Process: A Literature Review on the Social Dynamics of Decision Making in Grant Panels ’, Research Evaluation , 23 : 298 – 311 .

—— ( 2014b ) ‘ Different Views on Scholarly Talent: What Are the Talents We Are Looking for in Science? ’, Research Evaluation , 23 : 273 – 84 .

Van den Brink , G. , Scholten , W. , and Jansen , T. , eds ( 2016 ) Goed Werk voor Academici . Culemborg : Stichting Beroepseer .

Weingart   P. ( 1999 ) ‘ Scientific Expertise and Political Accountability: Paradoxes of Science in Politics ’, Science & Public Policy , 26 : 151 – 61 .

Wessely   S. ( 1998 ) ‘ Peer Review of Grant Applications: What Do We Know? ’, The Lancet , 352 : 301 – 5 .



Formal Review of Research Proposals

When is Formal Review Required?

Student & Campus Life research projects that will use substantial resources of the Cornell community must be formally reviewed by the committee before they can be initiated. At a minimum, this includes research that draws participants from a major institutional database, for example those maintained by the University Registrar; the Office of the Dean of Students; Fraternity, Sorority and Independent Living; and Class Councils. Regardless of how potential participants are to be identified, research that meets the following criteria will also require formal review by the committee:

  • Involves more than 100 participants for a quantitative data collection method (e.g., survey research) or more than 25 participants for a qualitative data collection method (e.g., focus groups or interviews);
  • Is broader in scope than program evaluation (e.g., asks about more than just program-based experiences or includes individuals who did not participate in the target program or event); and
  • Will require a substantial amount of participants’ time (e.g., protocols that will take more than 10 or 15 minutes to complete, or longitudinal research designs).

Conversely, research projects that are very limited in scope, and research that is conducted exclusively for program evaluation purposes (i.e., research that examines the program-related experiences of students who participate in a specific program or event) will generally be exempt from formal review by the committee.
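Read as a decision rule, the thresholds above can be sketched in code. This is a minimal illustration, not the committee's actual procedure: the function and field names are invented, and treating "substantial time" as more than 15 minutes per participant (or any longitudinal design) is an assumed reading of the guidance.

```python
from dataclasses import dataclass


@dataclass
class StudyPlan:
    """Minimal description of a proposed study (illustrative fields only)."""
    uses_institutional_database: bool   # e.g., drawn from Registrar or Dean of Students lists
    n_participants: int
    method: str                         # "quantitative" or "qualitative"
    broader_than_program_evaluation: bool
    minutes_per_participant: int
    longitudinal: bool


def requires_formal_review(plan: StudyPlan) -> bool:
    """Approximate the committee's stated triggers for formal review."""
    # Drawing participants from a major institutional database always triggers review.
    if plan.uses_institutional_database:
        return True

    # Otherwise, review is required when the project is large, broader than
    # program evaluation, and demanding of participants' time (the criteria
    # above are listed conjunctively).
    large_sample = (
        plan.n_participants > 100 if plan.method == "quantitative"
        else plan.n_participants > 25
    )
    substantial_burden = plan.minutes_per_participant > 15 or plan.longitudinal
    return large_sample and plan.broader_than_program_evaluation and substantial_burden


if __name__ == "__main__":
    survey = StudyPlan(False, 400, "quantitative", True, 20, False)
    print(requires_formal_review(survey))  # True under these assumptions
```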

Submitting a Proposal for Formal Review

The committee meets monthly during the fall, winter and spring semesters to formally review research proposals and conduct related business. At least eight weeks before the anticipated launch date of the project, researchers should submit an SCLRG research proposal form to Leslie Meyerhoff or Marne Einarson. The proposal form asks for information about the purpose and proposed design of the study, as well as draft versions of data collection instruments. Samples of completed research proposals are also available.

The following criteria will be used by the committee to evaluate research proposals:

  • Importance: Does the research address an important issue at Cornell? Will it provide useful information for academic planning or providing services to Cornell students?
  • Content and Design : Does the proposed methodology fit the research question(s)? Are the questions well-constructed and easily understood? Is the instrument of reasonable length? Have the questions been pretested?
  • Population and Sampling Methodology: Who is the target population? Is the sampling methodology appropriate to the research question(s)? Has the same student cohort and/or sample been used in other recent research? Could a smaller sample be drawn to achieve the same objective? How will the researcher(s) gain access to the proposed participants?
  • Timing: Does the proposed timing of the research overlap with or follow closely upon other research directed toward the same population? When were data on this issue last collected at Cornell? Is the data collection period scheduled at a time when students are likely to respond?
  • Data Management and Dissemination: Who will have access to the data? What are the provisions for secure storage of the data? Can data from this research be linked to other data sets? What is the plan for analyzing the data and disseminating the results? How will research results contribute to better decision making? How will research results be shared more broadly?
  • Resources : What resources will be required to conduct this research (e.g., instrument design, Web application development, mail and/or e-mail services, data entry and analysis)? From where will these resources be obtained?
  • Overall Impact: What will be the impact of the study? Are there any conceivable negative impacts on the University? Will the study overburden respondents? Overall, do the expected benefits of the study appear to outweigh the costs?

Based on their evaluation of the research proposal, the committee may decide to:

  • Approve the project as submitted
  • Approve the project with recommendations for changes that must be adopted before the project can be initiated
  • Require revisions and re-submission of the project before approval is granted
  • Reject the project (e.g., the potential benefits of the data do not justify the costs of collection; the research design has weaknesses that cannot be rectified)

IRB Approval

If research results will not be used exclusively for internal purposes (e.g., they will be presented or published beyond Cornell; or used for an undergraduate honors thesis, master’s thesis or doctoral dissertation), researchers may also be required to obtain approval from Cornell’s Institutional Review Board for Human Participants (IRB). IRB approval should be sought after the proposal has been reviewed by the SAS Research Group. The committee should subsequently be informed of the decision of the IRB.



A Review Committee's Guide for Evaluating Qualitative Proposals

Janice Morse

2003, Qualitative Health Research

Although they complain that qualitative proposals are not reviewed fairly when funding agencies use quantitative criteria, qualitative researchers have failed the system by not developing alternative criteria for the evaluation of qualitative proposals. In this article, the author corrects this deficit by presenting criteria to assess the relevance, rigor, and feasibility of qualitative research. These criteria are not a checklist but rather a series of questions that can aid a reviewer, adept in qualitative methods, to comprehensively evaluate and defend qualitative research.




How Do I Review Thee? Let Me Count the Ways: A Comparison of Research Grant Proposal Review Criteria Across US Federal Funding Agencies

While Elizabeth Barrett Browning counted 25 ways in which she loves her husband in her poem, “How Do I Love Thee? Let me Count the Ways,” we identified only eight ways to evaluate the potential for success of a federal research grant proposal. This may be surprising, as it seems upon initial glance of the review criteria used by various federal funding agencies that each has its own distinct set of “rules” regarding the review of grant proposals for research and scholarship. Much of the grantsmanship process is dependent upon the review criteria, which represent the funders’ desired impact of the research. But since most funders that offer research grants share the overarching goals of supporting research that (1) fits within its mission and (2) will bring a strong return on its financial investment, the review criteria used to evaluate research grant proposals are based on a similar set of fundamental questions. In this article, we compare the review criteria of 10 US federal agencies that support research through grant programs, and demonstrate that there are actually only a small and finite number of ways that a grant proposal can be evaluated. Though each funding agency may use slightly different wording, we found that the majority of the agencies’ criteria address eight key questions. Within the highly competitive landscape of research grant funding, new researchers must find support for their research agendas and established investigators and research development offices must consider ways to diversify their funding portfolios, yet all may be discouraged by the apparent myriad of differences in review criteria used by various funding agencies. Guided by research administrators and research development professionals, recognizing that grant proposal review criteria are similar across funding agencies may help lower the barrier to applying for federal funding for new and early career researchers, or facilitate funding portfolio diversification for experienced researchers. Grantmakers are furthermore provided valuable guidance to develop and refine their own proposal review criteria.

Introduction

The research funding landscape in the United States is highly competitive, with flat or shrinking budgets for investigator-initiated research programs at most federal agencies ( American Association for the Advancement of Science (AAAS), 2014 ). Taking biomedical research as an example, in 2014, the National Institutes of Health (NIH) budgeted $15 billion to fund research project grants, an amount that has essentially remained the same since 2003 ( AAAS, 2014 ; Federation of American Societies for Experimental Biology, 2014 ). At the same time, the number of research grant applications has steadily increased, from close to 35,000 in 2003 to 51,000 in 2014. The result has been a stunning 30% drop in funding success rates, from 30.2% in 2003 to 18.8% in 2014. Other federal agencies that fund research, including the National Science Foundation (NSF), Department of Veterans Affairs (VA), and Department of Defense (DoD), are feeling a similar sting from budget restrictions.

Within this tenuous funding environment, it has become essential that investigators and research development offices sustain their research programs by continuing to encourage new researchers to apply for grant support and encouraging established researchers to diversify their funding portfolios. New researchers benefit from clear information about the federal grant process, and experienced researchers benefit from considering funding opportunities from federal funding agencies, national organizations and advocacy groups, state agencies, private philanthropic organizations, regional or local special interest groups, corporations, and internal institutional grant competitions that may not be their typical targets for support. With increasing competition for grant funding, investigators who might be accustomed to one set of rules for preparing grant proposals may become quickly overwhelmed by the prospect of learning entirely new sets of rules for different funding agencies.

Yet this process is not as daunting if we start from the perspective that any funder that offers research grants has essentially the same goal: to support research that fits within its mission and will bring a strong return on its financial investment ( Russell & Morrison, 2015 ). The review criteria used to evaluate research grant proposals reflect the funder’s approach to identifying the most relevant and impactful research to support ( Geever, 2012 ; Gerin & Kapelewski, 2010 ; Kiritz, 2007 ). Thus, planning and preparing a successful grant proposal depends on a clear understanding of the review criteria that will be used. These criteria directly inform how the proposal content should be presented and how much space should be afforded to each section of the proposal, as well as which keywords should be highlighted. It may seem that each funder—federal, state, local, private—has its own distinct set of rules regarding the preparation and review of grant proposals, and that each funder uses specific jargon in its review process. However, because all funders aim to support research that is relevant and impactful, we suggest that the mandatory review criteria used to evaluate research grant proposals are based on a set of fundamental questions, such as: Does this research fit within the funder’s mission? Will the results of this research fill a gap in knowledge or meet an unmet need? Do the investigators have the skills and resources necessary to carry out the research?

In this article, we examine the research grant proposal review criteria used by 10 US federal agencies to demonstrate that there exist only a small and finite number of ways that federal research grant proposals are actually evaluated. Our goal is to help research administrators and research development professionals empower investigators to more confidently navigate funder review criteria, thereby lowering the barrier to first-time applicants or to grant portfolio diversification for more established researchers. Recognizing that research proposal review criteria are aligned across federal funding agencies can also help proposal writers who might be faced with other funding opportunities in which the review criteria are not clearly defined. On the flip side of that equation, understanding that review criteria are based on the same core goals can help grantmakers as they develop and refine review criteria for their funding opportunities.

Observations

We performed an online search of 10 US federal agencies’ (NIH, NSF, VA, Department of Education [ED], DoD, National Aeronautics and Space Administration [NASA], Department of Energy [DOE], United States Department of Agriculture [USDA], National Endowment for the Humanities [NEH], and National Endowment for the Arts [NEA]) websites to identify policies and procedures related to their research grant proposal review processes. The NIH Office of Extramural Research (OER) website provided the greatest detail and transparency with regard to the review criteria and review process used for evaluating research grant proposals ( National Institutes of Health, 2008a ; 2008b ; 2015a ), and served as a starting point for our analysis of the review criteria for the other nine agencies. We developed key questions corresponding to each of the NIH review criteria, and then aligned the review criteria of the remaining nine agencies with these key questions.

Federal grant program guidance and policy changes occur frequently; the links to online resources for research grant proposal policies for each of the various funding agencies included in our analysis were current as of August 10, 2015. Note that our analysis includes information from the National Institute on Disability and Rehabilitation Research (NIDRR) program as administered by ED. On June 1, 2015, the NIDRR was transferred from ED to the Administration for Community Living (ACL) in the US Department of Health and Human Services (DHHS), and is now called the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) Field-Initiated Program. Our analysis of NIDRR was current as of May 4, 2015.

Also note that there is variability between different research grant programs within each federal agency. We included in our analysis review criteria from the DoD Congressionally Directed Medical Research Programs (CDMRP), the USDA National Institute of Food and Agriculture, the NEH Digital Humanities Start-up program, and the NEA ART WORKS program. Criteria for NASA research programs were compiled from numerous NASA Research Announcements.

The NIH review criteria

The NIH criteria emphasize clinical, interdisciplinary, and translational biomedical research ( National Institutes of Health, 2008a ). Reviewers are instructed to evaluate research grant proposals based on how well five core review criteria are met: Significance, Innovation, Approach, Investigator(s), and Environment ( Table 1 ) ( National Institutes of Health, 2015a ; 2015b ). Assigned reviewers consider each of the five core review criteria and assign a separate score for each using a 9-point scale. These ratings are included in a summary statement that is provided to the researcher, whether or not the entire study section ultimately discusses the proposal.

Table 1. The NIH core review criteria for research project grant proposals. NIH, National Institutes of Health.

Each of the five core review criteria can be simplified into a general question. The Significance criterion asks reviewers to consider “Why does the research matter?” Reviewers look for whether the proposed project will address an important problem or critical barrier to progress in the field, and whether the knowledge gained from the proposed research will advance scientific knowledge, technical capacity, or clinical practice to drive the field forward. Innovation translates into “How is the research new?” Reviewers consider how the proposed research challenges current thinking with novel concepts, approaches, tools, or treatments. Approach asks, “How will the research be done?” Reviewers assess the proposed research strategy, methodology, and analyses and determine whether they are appropriate to achieve the aims of the project, and how riskier aspects of the proposal might be handled with alternative approaches. The remaining two core criteria evaluate the context in which the research will be done—defined as the collective set of resources, equipment, institutional support, and facilities available (Environment)—and what is special about the people doing the research (Investigator). For the Environment criterion, reviewers evaluate whether the resources and institutional support available to the investigators are sufficient to ensure successful completion of the research aims, including any unique features such as access to specific subject populations or collaborative arrangements. For the Investigator criterion, reviewers determine whether the primary investigator (PI), other researchers, and any collaborators have the experience and training needed to complete the proposed research, as well as how collaborators will combine their skills and work together.

The five core review criteria ratings, in addition to other proposal-specific criteria, are then used to determine an Overall Impact/Priority Score ( National Institutes of Health, 2015a ; 2015b ). This score reflects the reviewers’ assessment of the “likelihood for the project to exert a sustained, powerful influence on the research field(s) involved.” An application does not need to have exemplary scores in all criteria in order to be judged as likely to have a high overall impact. For example, a project that by its nature is not highly innovative may nevertheless be deemed essential to advance knowledge within a field. A 2011 study by the National Institute of General Medical Sciences (NIGMS) examined the correlation between the core review criteria scores and the Overall Impact score and found that reviewers weighted certain criteria more heavily than others, in the following order: Approach > Significance > Innovation > Investigator > Environment ( Rockey, 2011 ). Thus, the quality of ideas appeared to matter more than investigator reputation, a particularly good finding for new investigators ( Berg, 2010a ; 2010b ; 2010c ). These findings about the relative importance reviewers give to the core review criteria also suggest that, in terms of space, it makes sense for proposers to devote more pages of the proposal narrative to their approach and the research project’s significance than to the environment supporting the project.
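The NIGMS analysis itself is not reproduced here, but the general idea of relating criterion scores to Overall Impact scores can be sketched as a simple regression. The sketch below uses synthetic data and assumes an ordinary least squares model on the 9-point criterion scores; it illustrates how relative weights could be estimated, not the method NIGMS actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

criteria = ["Approach", "Significance", "Innovation", "Investigator", "Environment"]

# Synthetic criterion scores on the NIH 9-point scale (1 = best, 9 = worst).
n_applications = 500
scores = rng.integers(1, 10, size=(n_applications, len(criteria))).astype(float)

# Synthetic overall impact scores, constructed so that Approach and Significance
# carry the most weight -- mimicking the reported ordering, purely for illustration.
true_weights = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
overall = scores @ true_weights + rng.normal(0, 0.5, n_applications)

# Estimate the relative weight of each criterion by least squares
# (an intercept column is appended to the design matrix).
X = np.column_stack([scores, np.ones(n_applications)])
coef, *_ = np.linalg.lstsq(X, overall, rcond=None)

for name, weight in sorted(zip(criteria, coef[:-1]), key=lambda kv: -kv[1]):
    print(f"{name:>12}: estimated weight {weight:.2f}")
```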

Other agencies have formalized systems for weighting grant proposal review criteria. For example, the ED NIDRR standard selection criteria are weighted using a points designation ( US Department of Education, 2014 ): Design of Research Activities (50 pts); Importance of the Problem (15 pts); Project Staff (15 pts); Plan of Evaluation (10 pts); and Adequacy and Accessibility of Resources (10 pts). Similar to NIH reviewers, ED weights research design and the importance of the problem more heavily than staff or resources when evaluating grant proposals ( Committee on the External Evaluation of NIDRR and Its Grantees, National Research Council, Rivard, O’Connell, & Wegman, 2011 ).
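A weighted-points scheme such as the ED NIDRR one combines ratings by simple arithmetic. The sketch below uses the maximum point values listed above; the 0-to-1 per-criterion ratings and the helper function are illustrative assumptions, not part of ED's documented procedure.

```python
# Maximum points per ED NIDRR standard selection criterion, as listed above.
NIDRR_MAX_POINTS = {
    "Design of Research Activities": 50,
    "Importance of the Problem": 15,
    "Project Staff": 15,
    "Plan of Evaluation": 10,
    "Adequacy and Accessibility of Resources": 10,
}


def total_score(ratings: dict) -> float:
    """Combine per-criterion ratings (0.0-1.0, an illustrative scale) into a 100-point total."""
    return sum(NIDRR_MAX_POINTS[criterion] * ratings.get(criterion, 0.0)
               for criterion in NIDRR_MAX_POINTS)


example = {criterion: 0.8 for criterion in NIDRR_MAX_POINTS}  # hypothetical uniform rating
print(total_score(example))  # 80.0 out of a possible 100
```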

How do the NIH review criteria compare to those of other federal agencies?

The most straightforward comparison of research grant review criteria is between the NIH and NSF, which together make up 25% of the research and development budget in the US ( AAAS, 2014 ). The NSF criteria emphasize transformative and interdisciplinary research ( National Science Foundation, 2007 ), and involve three (3) guiding principles , two (2) review criteria , and five (5) review elements ( National Science Foundation, 2014 ). The two review criteria used by the NSF are Intellectual Merit, which encompasses the potential to advance the field, and Broader Impacts, which encompasses the potential to benefit society and contribute to the achievement of specific, desired societal outcomes. Within each of these two review criteria are five review elements ( Figure 1 ). These five review elements line up remarkably well with the NIH core review criteria ( Table 2 ), with both agencies’ criteria addressing a similar set of concepts but using distinct language to describe each criterion.

Figure 1. NSF Merit Review Criteria ( National Science Foundation, 2014 ).

Table 2. Comparison of the NIH and NSF research grant proposal review criteria. NIH, National Institutes of Health; NSF, National Science Foundation; PD, program director; PI, principal investigator.

What about a non-science funding agency like the NEH? While there is some variability between individual NEH grant programs, the NEH application review criteria are: Humanities Significance, Project Feasibility and Work Plan, Quality of Innovation, Project Staff Qualifications, and Overall Value to Humanities Scholarship ( National Endowment for the Humanities, 2015a ; 2015b ). The significance of the project includes its potential to enhance research, teaching, and learning in the humanities. The quality of innovation is evaluated in terms of the idea, approach, method, or digital technology (and the appropriateness of the technology) that will be used in the project. Reviewers also examine the qualifications, expertise, and levels of commitment of the project director and key project staff or contributors. The quality of the conception, definition, organization, and description of the project and the applicant’s clarity of expression, as well as the feasibility of the plan of work are also assessed. Finally, reviewers consider the likelihood that the project will stimulate or facilitate new research of value to scholars and general audiences in the humanities. Table 3 shows the NEH review criteria compared with those used by the NIH and NSF. Though there is not an exact match for the key question “In what context will the research be done?” (i.e., the research environment and available resources), this is evaluated in NEH proposals as part of the Project Feasibility and Work Plan.

Table 3. Comparison of research grant proposal review criteria used by the NIH, NSF, and NEH. NIH, National Institutes of Health; NSF, National Science Foundation; NEH, National Endowment for the Humanities.

Comparing review criteria across federal agencies: Eight key questions

In addition to the core review criteria mentioned above, funding agencies also typically ask reviewers to consider the project budget and the approach that will be used to evaluate project success. When we expanded the comparison of research grant proposal review criteria across 10 US federal agencies, and included the budget and evaluation criteria, we revealed that all of the agencies’ review criteria aligned with a consistent set of eight key questions that reviewers consider when evaluating any type of research proposal ( Table 4 ).

Table 4. Eight key questions considered by reviewers of research grant proposals and the associated review criteria terms used by 10 US federal funding agencies.

The research grant proposal review criteria used by the 10 federal funding agencies are associated with these eight key questions ( Table 5 ). We have already demonstrated that the question, “Why does it matter?”—which addresses the importance or significance of the proposed project— applies to similar review criteria from the NIH (Significance), NSF (Intellectual Merit), and the NEH (Humanities Significance) ( National Endowment for the Humanities, 2015a ; 2015b ; National Institutes of Health, 2015a , 2015b ; National Science Foundation, 2014 ). Likewise, ED evaluates the “Importance of the Problem” ( US Department of Education, 2014 ); the DoD application review criteria includes “Importance” ( Department of Defense, 2015 ); the VA and NASA each evaluate “Significance” ( National Aeronautics and Space Administration, 2015 ; US Department of Veterans Affairs, 2015 ); the DOE looks at “Scientific and Technological Merit” ( US Department of Energy, 2015 ); the USDA evaluates “Project Relevance” ( United States Department of Agriculture, 2015 ); and the NEA assesses “Artistic Excellence” ( National Endowment for the Arts, 2015 ). There are also parallels in the language used by each of the funders as they ask reviewers to assess proposed research project innovation or novelty, the approach or methodology to be used, the investigators or personnel involved, the environment and resources available, and the overall impact or value of the project ( Table 5 ).
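The full cross-agency mapping in Table 5 is not reproduced here, but the alignment for the one key question spelled out in the paragraph above, "Why does it matter?", can be captured in a small lookup structure. The dictionary below uses only criterion labels named in the text; the structure and helper function are illustrative choices.

```python
# How each agency labels the criterion behind the key question "Why does it matter?",
# as enumerated above (one row of the cross-agency comparison).
WHY_DOES_IT_MATTER = {
    "NIH": "Significance",
    "NSF": "Intellectual Merit",
    "NEH": "Humanities Significance",
    "ED": "Importance of the Problem",
    "DoD": "Importance",
    "VA": "Significance",
    "NASA": "Significance",
    "DOE": "Scientific and Technological Merit",
    "USDA": "Project Relevance",
    "NEA": "Artistic Excellence",
}


def criterion_label(agency: str) -> str:
    """Return the label an agency uses for the significance question (lookup helper)."""
    return WHY_DOES_IT_MATTER[agency]


print(criterion_label("USDA"))  # Project Relevance
```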

Table 5. Comparison of research grant proposal review criteria across 10 US federal funding agencies. NIH, National Institutes of Health; NSF, National Science Foundation; VA, Department of Veterans Affairs; ED, Department of Education; DoD, Department of Defense; NASA, National Aeronautics and Space Administration; DOE, Department of Energy; USDA, US Department of Agriculture; NEH, National Endowment for the Humanities; NEA, National Endowment for the Arts; N/A, not applicable.

While all the agencies’ collective review criteria fall within the eight key questions, there is some variability across agencies. For example, the DOE does not have a clear review criterion for evaluating the overall impact or value of a project, equivalent to the key question “What is the return on investment?” Some agencies do not explicitly include the budget as part of their review criteria, such as the NSF, VA, and USDA, while other agencies, including the NIH, VA, DoD, DOE, USDA, and NEH, do not specifically ask for a plan to evaluate the success of the project. Funders may also have unique review criteria. Unlike the other nine agencies evaluated, the DoD uses the review criterion “Application Presentation,” which assesses the writing, clarity, and presentation of the application components. Agencies may also have mission- or program-specific review criteria; for example, for certain applications, the NEA may evaluate the potential to reach underserved populations as part of “Artistic Merit.” Despite these differences, it is clear that for the 10 federal funding agencies examined, the review criteria used to evaluate research grant proposals are extraordinarily aligned.

If we remember that all funding agencies are trying to evaluate research grant proposals to reach the same goals—to determine which projects fit within their mission and will provide a return on their financial investment—it is perhaps not all that surprising that the review criteria that federal funding agencies use are aligned. We further propose that funding announcements from any funder, including state agencies, local groups, and private philanthropic organizations, similarly ask for research grant proposals to answer some, if not all, of the eight key questions that emerged from our analysis of US federal funding agencies. Keeping these key questions in mind can help research administrators and research development offices, as well as proposal writers, decipher research grant proposal review criteria from almost any funding agency, thereby facilitating proposal development.

For this article, we limited our analysis to the review criteria used across different US federal funders to evaluate research grant proposals, and did not include criteria used for other federal funding mechanisms, such as training grants or contract proposals. NIH has compared the review criteria used across its various funding mechanisms, including research grants, grants for conferences and scientific meetings, small business innovation or technology transfer grants, fellowship and career development grants, and training grants, among others (National Institutes of Health, 2014). Again, while there are differences in the language used to describe each core review criterion across the various grant mechanisms, the concepts being reviewed (what is being done, why it is being done, how it is new, who is doing the work, and where it will be done) are essentially the same across each mechanism.

We have demonstrated that research grant proposal review criteria are remarkably aligned across 10 US federal funding agencies, despite the differences in their missions and the terminology each uses for its own review process (Table 5). Moreover, a set of only eight key questions summarizes the collective research grant proposal review criteria across all these federal agencies. While the sheer number of non-federal funding opportunities makes a similar comparative analysis of their review criteria impractical, we suggest that the eight key questions emerging from our analysis provide a starting point for researchers, research administrators, and funders to assess the review criteria used by most, if not all, other research funding opportunities. This is reasonable given that each funder is trying to achieve the same goal during the grant review process: find those research projects that fit the funder's mission and are worth its investment. Through this lens, the review criteria used for research proposals across agencies are easier to understand and address, which may encourage new investigators to apply for funding, and seasoned investigators and research development offices to consider a diversified set of funding sources for their research portfolios. We also hope that this analysis provides guidance to other grantmakers as they develop review criteria for their own funding opportunities. For the 10 US federal agencies included here, we hope that the analysis serves as a starting point for developing even greater consistency across the review criteria used to evaluate federal research grant proposals, perhaps even a single canonical, cross-agency set.

Acknowledgments

The authors would like to thank Amy Lamborg, MS, MTSC, for providing invaluable insights and for reviewing the manuscript.

Author's Note

The work is based on material developed by HJF-K for the Grantsmanship for the Research Professionals course at Northwestern University School of Professional Studies (SCS PHIL_NP 380-0), and was presented in part at the National Organization of Research Development Professionals 7th Annual Research Development Conference in Bethesda, MD, April 29-May 1, 2015.

References

  • American Association for the Advancement of Science Intersociety Working Group. AAAS report XXXIX: Research and development FY 2015. 2014. Retrieved from http://www.aaas.org/page/aaas-report-xxxix-research-and-development-fy-2015
  • Berg J. Even more on criterion scores: Full regression and principal component analysis. NIGMS Feedback Loop Blog. 2010a. Retrieved June 17, 2015, from https://loop.nigms.nih.gov/2010/07/even-more-on-criterion-scores-full-regression-and-principal-component-analyses/
  • Berg J. Model organisms and the significance of significance. NIGMS Feedback Loop Blog. 2010b. Retrieved June 17, 2015, from https://loop.nigms.nih.gov/2010/07/model-organisms-and-the-significance-of-significance/
  • Berg J. More on criterion scores. NIGMS Feedback Loop Blog. 2010c. Retrieved June 17, 2015, from https://loop.nigms.nih.gov/2010/07/more-on-criterion-scores/
  • Committee on the External Evaluation of NIDRR and Its Grantees, National Research Council. Review of disability and rehabilitation research: NIDRR grantmaking processes and products. Rivard JC, O'Connell ME, Wegman DH, editors. Washington, DC: National Academies Press; 2011.
  • Department of Defense. Congressionally directed medical research programs: Funding opportunities. 2015. Retrieved June 17, 2015, from http://cdmrp.army.mil/funding/prgdefault.shtml
  • Federation of American Societies for Experimental Biology (FASEB). NIH research funding trends: FY1995-2014. 2014. Retrieved June 17, 2015, from http://www.faseb.org/Policy-and-Government-Affairs/Data-Compilations/NIH-Research-Funding-Trends.aspx
  • Geever JC. Guide to proposal writing. 6th ed. New York, NY: The Foundation Center; 2012.
  • Gerin W, Kapelewski CH. Writing the NIH grant proposal: A step-by-step guide. 2nd ed. Thousand Oaks, CA: SAGE Publications, Inc.; 2010.
  • Kiritz NJ. Program planning and proposal writing. Los Angeles, CA: Grantsmanship Center; 2007.
  • National Aeronautics and Space Administration. Guidebook for proposers responding to a NASA Research Announcement (NRA) or Cooperative Agreement Notice (CAN). 2015. Retrieved June 17, 2015, from http://www.hq.nasa.gov/office/procurement/nraguidebook/proposer2015.pdf
  • National Endowment for the Arts. ART WORKS guidelines: Application review. 2015. Retrieved June 17, 2015, from http://arts.gov/grants-organizations/art-works/application-review
  • National Endowment for the Humanities. NEH's application review process. 2015a. Retrieved June 17, 2015, from http://www.neh.gov/grants/application-process#panel
  • National Endowment for the Humanities. Office of Digital Humanities: Digital humanities start-up grants. 2015b. Retrieved August 10, 2015, from http://www.neh.gov/files/grants/digital-humanities-start-sept-16-2015.pdf
  • National Institutes of Health. Enhancing peer review: The NIH announces enhanced review criteria for evaluation of research applications received for potential FY2010 funding. 2008a. Retrieved June 17, 2015, from https://grants.nih.gov/grants/guide/notice-files/NOT-OD-09-025.html
  • National Institutes of Health. Enhancing peer review: The NIH announces new scoring procedures for evaluation of research applications received for potential FY2010 funding. 2008b. Retrieved June 17, 2015, from http://grants.nih.gov/grants/guide/notice-files/NOT-OD-09-024.html
  • National Institutes of Health. Review criteria at a glance. 2014. Retrieved June 17, 2015, from https://grants.nih.gov/grants/peer/Review_Criteria_at_a_Glance_MasterOA.pdf
  • National Institutes of Health. Office of Extramural Research Support: Peer review process. 2015a. Retrieved September 10, 2015, from http://grants.nih.gov/grants/peer_review_process.htm
  • National Institutes of Health. Scoring system and procedure. 2015b. Retrieved June 17, 2015, from https://grants.nih.gov/grants/peer/guidelines_general/scoring_system_and_procedure.pdf
  • National Science Foundation. Important notice no. 130: Transformative research. 2007. Retrieved June 17, 2015, from http://www.nsf.gov/pubs/2007/in130/in130.jsp
  • National Science Foundation. Chapter III - NSF proposal processing and review. Grant proposal guide. 2014. Retrieved September 10, 2015, from http://www.nsf.gov/pubs/policydocs/pappguide/nsf15001/gpg_3.jsp#IIIA
  • Rockey S. Correlation between overall impact scores and criterion scores. Rock Talk. 2011. Retrieved June 17, 2015, from http://nexus.od.nih.gov/all/2011/03/08/overall-impact-and-criterion-scores/
  • Russell SW, Morrison DC. The grant application writer's workbook: Successful proposals to any agency. Buellton, CA: Grant Writers' Seminars and Workshops, LLC; 2015.
  • U.S. Department of Agriculture. National Institute of Food and Agriculture (NIFA) peer review process for competitive grant applications. 2015. Retrieved June 17, 2015, from http://nifa.usda.gov/resource/nifa-peer-review-process-competitive-grant-applications
  • U.S. Department of Education, Office of Special Education and Rehabilitative Services. FY 2014 application kit for new grants under the National Institute on Disability and Rehabilitation: Field initiated program (research or development). 2014. Retrieved July 8, 2015, from https://www2.ed.gov/programs/fip/2014-133g1-2.doc
  • U.S. Department of Energy. Merit review guide. 2015. Retrieved June 17, 2015, from http://energy.gov/management/downloads/merit-review-guide
  • U.S. Department of Veterans Affairs, Office of Research & Development. BLR&D/CSR&D merit review program. 2015. Retrieved June 17, 2015, from http://www.research.va.gov/services/shared_docs/merit_review.cfm

Designing the Research Proposal or Interim Report

  • First Online: 25 May 2023


  • Uche M. Mbanaso
  • Lucienne Abrahams
  • Kennedy Chinedu Okafor


This chapter explains what is required for postgraduate student researchers to design and submit the research proposal. In some universities, the student is required to present an interim report. It sets out the key components of the structure of the research proposal, including the research problem statement, research purpose statement, research questions or hypotheses, background to the research problem, literature review and methodology, list of references and in-text referencing. It gives specific attention to a guiding framework for thinking about originality in the research design.


Author information

Authors and Affiliations

Centre for Cybersecurity Studies, Nasarawa State University, Keffi, Nigeria

Uche M. Mbanaso

LINK Centre, University of the Witwatersrand, Johannesburg, South Africa

Lucienne Abrahams

Department of Mechatronics Engineering, Federal University of Technology, Owerri, Nigeria

Kennedy Chinedu Okafor


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Mbanaso, U.M., Abrahams, L., Okafor, K.C. (2023). Designing the Research Proposal or Interim Report. In: Research Techniques for Computer Science, Information Systems and Cybersecurity. Springer, Cham. https://doi.org/10.1007/978-3-031-30031-8_3


Proposal Review Process and Review Committees

Each application for beam time, which is assigned a final number based on one of 12 scientific areas (see point 6), is reviewed by one or more Review Committees (see point 4). The Review Committees assess the proposals for their scientific merit or their technical, technological and/or innovative relevance. The members of the Review Committees are specialists in related areas of science and are appointed by the ESRF management.

The Review Committees meet twice a year, in April and in October, to assess proposals received at the submission deadlines of 1st March and 10th September, respectively.

The beamline(s) requested by the main proposer in the application for beam time determine(s) the Committee(s) which will review the proposal.

Structure of the review process

Beamlines of similar techniques or activities have been grouped together to form 12 review committees (see point 4). Each review committee reviews and assesses all the proposals which request one or several of the beamlines for which it is responsible.

The proposal review process allows all proposals received for a particular beamline to be assessed by the same Review Committee. This gives the Committees flexibility to optimise the selection of proposals to be performed on each beamline. In addition, this structure and associated software allow ESRF to obtain and publish the final decisions much sooner after the Review Committee Meetings of April and October.

It is therefore extremely important that the proposer takes great care when filling in this part of the proposal form in order to have the proposal assessed by the appropriate review committee.


ESRF beamlines are designed for research in areas as diverse as engineering, physics, chemistry, crystallography, earth science, biology and medicine, surface and materials science, and are characterized by specific techniques of investigation.

In the electronic application form, the main proposer may select (see also the sketch after this list):

  • the principal beamline (P1) required and up to two alternative beamlines (A1, A1'): this corresponds to a request for P1 or A1 or A1'.
  • two principal beamlines (P1, P2). If appropriate, possible alternative beamlines for each principal beamline may be selected (as above): this corresponds to a request for P1 and P2.
  • The proposer must give the number of shifts requested on each principal beamline.
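The selection rules above can be pictured as a small data structure. The Python sketch below is only an illustration of those rules, not ESRF software; the beamline names and class names are placeholders.

```python
# Illustrative sketch of a beam-time request as described above: one or two
# principal beamlines, each with up to two alternatives, and the number of
# shifts requested on each principal beamline. Beamline names are placeholders.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PrincipalRequest:
    beamline: str                                           # a P1 or P2 choice
    shifts: int                                             # shifts requested on it
    alternatives: List[str] = field(default_factory=list)   # A1, A1' (at most two)


@dataclass
class BeamTimeRequest:
    principals: List[PrincipalRequest]   # one principal (P1) or two (P1 and P2)

    def validate(self) -> None:
        if not 1 <= len(self.principals) <= 2:
            raise ValueError("Select one or two principal beamlines.")
        for p in self.principals:
            if p.shifts <= 0:
                raise ValueError("Give the shifts requested on each principal beamline.")
            if len(p.alternatives) > 2:
                raise ValueError("At most two alternative beamlines per principal.")


# "BL-A or BL-B or BL-C", plus a second principal beamline "BL-D":
request = BeamTimeRequest([
    PrincipalRequest("BL-A", shifts=9, alternatives=["BL-B", "BL-C"]),
    PrincipalRequest("BL-D", shifts=6),
])
request.validate()
```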

In order to request the appropriate beamline(s) for your research, and therefore to have your proposal assessed by the appropriate review committee, you are kindly requested, in case of doubt:

  • to consult the information web page on the beamlines,
  • to contact the beamline responsible if you need more information
  • to MAKE SURE, before the validation of your proposal, that the status ("principal" or "alternative") of the selected beamline(s) corresponds exactly to what you need.

The 12 Review Committees, each corresponding to a group of beamlines of similar techniques or activities, review and assess all proposals requesting the beamlines of that Committee, whatever their scientific area. They meet twice a year:

  • in April, to review the proposals submitted for the deadline of 1st March,
  • in October, to review the proposals submitted for the deadline of 10th September.

If beamlines from different committees are selected

In the case where a proposal requests beamlines which fall into more than one Committee, each relevant Committee will review the proposal, giving only a recommendation for the beamline(s) for which it is responsible.
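A minimal Python sketch of this routing rule is shown below. It assumes a placeholder beamline-to-committee mapping; the real assignments are in the table referred to below, which is not reproduced here.

```python
# Sketch of the routing rule: each requested beamline maps to the committee
# responsible for it, so a proposal spanning several groups is reviewed by each
# relevant committee for its own beamline(s). The mapping below is a
# placeholder, not the actual ESRF beamline-to-committee table.
BEAMLINE_TO_COMMITTEE = {
    "BL-A": "C01",
    "BL-B": "C01",
    "BL-C": "C07",
}


def committees_for(requested_beamlines):
    """Return the committees that will each give a recommendation."""
    return sorted({BEAMLINE_TO_COMMITTEE[b] for b in requested_beamlines})


print(committees_for(["BL-A", "BL-C"]))   # ['C01', 'C07'] -> two committees review
```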

The table below shows the 12 review committees, identified from C01 to C12, and the beamlines for which each committee will review and assess proposals:

Consult also our information page on Beamlines Status.

The electronic application forms for beam time for a Standard Proposal and for a Long Term Project Proposal allow proposers to select beamlines as described in point 3, and to provide accurate information about the scientific research carried out at the ESRF and its societal impact:

  • for each proposal, the proposer should select the most appropriate "Societal Theme" from the list of seven given;
  • the list of "Scientific Areas" covers the general and specific fields of research which are studied at ESRF. The scientific area will determine the proposal number assigned.

The electronic application forms for a BAG proposal and a Rolling non-BAG proposal must ONLY be used for applications for beam time for Structural Biology Experiments to be carried out on the Structural Biology beamlines.

The complete list of available electronic application forms for beam time can be found in the user account: tab "Proposals/Experiments" -> "Proposals" -> tab "New proposal".

Guidelines for electronic submission per type of proposal

Societal Themes

The research proposed in an application for beam time should be identified within the framework of a "Societal Theme" which must be registered by the main proposer in the electronic registration form.

The following "Societal Themes" are proposed:

  • Earth and Environment
  • Information & Communication Technology (ICT)
  • Other Functional Materials
  • Other Fundamental Science (other than that included in the themes already given above)
  • Other (if none of the proposed societal themes is relevant; in this case, keywords must be given)

The information is mandatory and ONLY one societal theme can be selected.

Please note that the proposers must identify whether the proposal is fundamental / applied / industrially relevant science elsewhere on the proposal form. The societal theme section should not be used for this. Please therefore ONLY select "Fundamental Science" if the theme is Fundamental Science not already covered by the other themes proposed.

Scientific areas

Each application for beam time is assigned to one of 12 scientific areas as follows (see also the sketch after this list):

  • CH (Chemistry)
  • ES (Earth Science)
  • EV (Environment)
  • HC (Hard Condensed Matter Science)
  • HG (Cultural Heritage)
  • LS (Life Sciences)
  • MA (Applied Material Science)
  • MD (Medicine)
  • ME (Engineering)
  • MI (Methods and Instrumentation)
  • MX (Structural Biology) - only for MX BAG, Rolling applications and LTP proposals.
  • SC (Soft Condensed Matter Science)
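For quick reference, the Python sketch below (our illustration) encodes this list as a lookup table. Since the way the final proposal number is derived from the area is not specified here, the sketch stops at validating and naming a code.

```python
# Illustrative lookup of the 12 ESRF scientific-area codes listed above.
SCIENTIFIC_AREAS = {
    "CH": "Chemistry",
    "ES": "Earth Science",
    "EV": "Environment",
    "HC": "Hard Condensed Matter Science",
    "HG": "Cultural Heritage",
    "LS": "Life Sciences",
    "MA": "Applied Material Science",
    "MD": "Medicine",
    "ME": "Engineering",
    "MI": "Methods and Instrumentation",
    "MX": "Structural Biology",   # only for MX BAG, Rolling and LTP proposals
    "SC": "Soft Condensed Matter Science",
}


def area_name(code: str) -> str:
    """Return the full name for a scientific-area code, or raise if unknown."""
    try:
        return SCIENTIFIC_AREAS[code]
    except KeyError:
        raise ValueError(f"Unknown scientific area code: {code}") from None


print(area_name("LS"))   # Life Sciences
```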

For queries or questions, please contact the User Office.



University of Louisville Research Handbook

CHAPTER FOUR: Proposal Review, Approval, and Submission

4.1 Review and Approval Responsibilities

4.2 Institutional Cover Sheet iRIS eProposal / Proposal Clearance Form (PCF) / Transmit for Review and Initial Assessment (TRIA)-Multi-Institutional Research Application (MIRA)

4.3 Other Pre-Submission Approvals and Requirements

4.4 Proposal Review

4.5 Signature Authority

4.6 Unfunded Proposals

Proposals must be reviewed and have the appropriate approvals prior to submission to an external funding agency. The review process includes review by the department chair/unit head, the respective dean or designee and potential review and approval by University compliance offices/committees such as the Human Subjects Protection Program/Institutional Review Board (IRB) and Animal Use/Institutional Animal Care and Use Committee (IACUC). Review and approval by the Office of Sponsored Programs Administration is the final step in the process prior to proposal submission.

All proposals submitted to external funding agencies must list the “University of Louisville Research Foundation, Inc.” as the award recipient unless the submission is required to be from the University.  At least five (5) business days prior to submission to the Sponsor, the proposal must be submitted to OSPA for final review and approval.

Submission to OSPA is via the Integrated Research Information System (iRIS), by either the eProposal or the Proposal Short Form. The system’s eProposal is a comprehensive electronic packet which incorporates information required on the institutional cover sheet, proposal documents and sponsor forms. Alternately, the iRIS system offers a Proposal Short Form to which a conventional cover sheet and proposal documents can be uploaded.  In this case, a completed and signed Proposal Clearance Form (PCF) or Transmit for Review and Initial Assessment (TRIA) – Multi-Institutional Research Application (MIRA) must accompany proposal submission to OSPA.

The institutional cover sheet - whether in the form of the eProposal, the Proposal Clearance Form (PCF) or the Multi-Institutional Research Application (TRIA-MIRA) - enables review of administrative, policy, and fiscal issues related to the proposal. The eProposal/PCF/TRIA-MIRA consists of a series of informational items and questions to assist the Principal Investigator/Project Director (PI/PD) and University reviewers in assessing potential risks and obligations should the proposal be funded. In addition to the signatures of the PI/PD and co-Investigators, signatures of the department chair/unit head and respective dean or designee are typically required.

The TRIA-MIRA or PCF Clinical Attachment is utilized for all clinical trials and sponsored research requiring approval of the Biomedical IRB and any other project/study that uses hospital facilities (for example, Jewish Hospital & St. Mary’s Healthcare Services, Norton Healthcare, or University of Louisville Hospital) or resources to conduct the research. The information on the PCF/TRIA-MIRA is shared with the respective hospital/study site and is used by the respective hospital/study site to grant approval for the research study to be conducted at their facility.

When University personnel from more than one academic department are participating in a proposed project, all appropriate department chairs/unit heads and deans must provide approval (by signing the PCF/MIRA) prior to submission of the proposal to OSPA for institutional approval.

If an award is received for which no proposal was submitted, an eProposal/PCF/TRIA-MIRA must be completed, signed, and submitted to OSPA prior to award establishment in PeopleSoft.

PIs/PDs are required to conduct research and manage the financial and regulatory aspects of sponsored projects in compliance with University policy, Federal and state law and Sponsor requirements.  PIs/PDs must ensure that they and members of their research team(s) meet all compliance requirements, including any necessary disclosure(s) and training requirements.

Several areas of regulatory compliance may need to be considered when submitting proposals to an external funding agency.  Examples include:

1) Conflict of Interest (COI);

2) Human Subjects Protection;

3) Animal Subjects Protection;

4) Biosafety and Radiation Safety;

5) Export and Secure Research Control.

See Chapter Nine of the Research Handbook for additional information on these and other research regulations.

If a proposed project requires UofL participants to interact with or handle human subjects, animals, or agents impacting environmental health and safety (e.g., recombinant DNA; pathogenic organisms; human blood, tissues, cell lines, or other potentially infectious materials [OPIM]), a proposal must be submitted to the appropriate committee(s) for internal review and approval prior to activation of an award.  External Sponsors have different policies regarding the status of regulatory approvals at the time of proposal submission; while most will accept “pending review” or “pending approval,” some require full regulatory approval prior to submission.  PIs/PDs should review the regulatory requirements of Sponsors when developing proposals for external funding.

Documentation of institutional approval (e.g., an approval letter) for actions “pending” at the time of proposal must be provided to OSPA prior to activation of an award (chartfield establishment).  In limited situations OSPA may establish a chartfield prior to regulatory approval.  As an example, for clinical trials, a chartfield may be established for site initiation visits and other limited startup activities prior to receiving final Institutional Review Board (IRB) approval [NOTE: no human subjects may be consented/enrolled into the trial until the IRB has granted formal approval].

All applications and proposals for external funding must be reviewed and approved by OSPA for consistency with Sponsor and Federal guidelines and University policies prior to submission. OSPA also reviews the budget for accuracy and proper format, ensures the correct application of fringe benefit and Facilities and Administrative Cost (aka F&A or indirect cost) rates, and verifies University cost-sharing commitments.

Draft copies of all documents, including the draft budget may be submitted to OSPA for preliminary review and comment. This is particularly helpful for complex proposals, such as those with multi-year budgets, subagreements, and/or cost sharing commitments.  It should be noted that preliminary review and comment does not constitute official approval for submission.

OSPA must receive the following items for review prior to granting approval for proposal submission (a minimal checklist sketch follows the list):

1) eProposal, or completed and signed PCF and PCF Clinical Attachment (if applicable).  Completed and signed TRIA-MIRA may be used for non-federal clinical proposals;

2) Proposal, including abstract, budget, budget justification, and Sponsor forms. A final copy of the proposal for OSPA files should be submitted at the time of proposal review (if electronic) or within two weeks following submission;

3) For proposals involving subrecipients, completed  Subrecipient Commitment Form , budget, budget justification, and scope of work;

4) Indication of regulatory approval status (e.g., IRB, IACUC, DEHS);

5) Confirmation that all participants on the project, regardless of role, have completed the  Attestation and Disclosure Form ; and

6) Approval in writing of all committed University cost sharing or matching obligations.
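The six items above lend themselves to a simple checklist. The following Python sketch is illustrative only (it is not a University of Louisville or OSPA tool), and the item wordings are shortened paraphrases of the list above.

```python
# Illustrative pre-submission checklist mirroring the six items OSPA requires.
REQUIRED_ITEMS = [
    "eProposal, or signed PCF with Clinical Attachment / TRIA-MIRA (if applicable)",
    "Proposal: abstract, budget, budget justification, and Sponsor forms",
    "Subrecipient Commitment Form, budget, justification, scope of work (if any)",
    "Regulatory approval status (IRB, IACUC, DEHS)",
    "Attestation and Disclosure Form confirmation for all participants",
    "Written approval of committed cost sharing or matching",
]


def missing_items(provided):
    """Return the required items not yet provided to OSPA."""
    return [item for item in REQUIRED_ITEMS if item not in provided]


# Usage: check what still needs gathering before the five-business-day review.
provided_so_far = {REQUIRED_ITEMS[0], REQUIRED_ITEMS[1]}
for item in missing_items(provided_so_far):
    print("Still needed:", item)
```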

Only specific designees within OSPA have been granted signature authority by the President of the University. Under no circumstance is a PI/PD to sign a proposal to an external funding agency on behalf of the University and/or University of Louisville Research Foundation, Inc. without the prior approval of an Authorized Institutional Official.

Upon receipt of notification that a proposal will not be funded, the PI/PD should inform OSPA. OSPA retains copies of unfunded proposals for one year following notification that the proposal will not be funded.


UConn Health, Office of the Vice President for Research

Scientific Review Committee
The Scientific Review Committee (SRC) ensures that the scientific question being asked within a protocol is relevant and that the design of the protocol is appropriate to answer that question. A guide for using iRIS to access the material to be reviewed is available by clicking the Help button within iRIS.

The SRC review will primarily focus on the elements of good scientific study design. Proposals will be evaluated for the following criteria:

  • clarity of the research question,
  • appropriateness and efficiency of design,
  • rigor and feasibility of methods,
  • qualifications and expertise of the research team,
  • scholarship and pertinence of background material and rationale,
  • adequacy of sample size and relevance of controls,
  • and the validity of the statistical analysis plan.

In addition, the Committee may desire to comment on the proposal’s scientific relevance or compelling ethical or patient safety issues. The SRC will submit a summary of their evaluation to the IRB and report their final recommendation as (1) recommend approval without revision, (2) recommend approval pending acceptable revision, or (3) recommend rejection. Recommendations will be based on consensus. The Scientific Review Committee may call upon a consultant if additional expertise is needed to conduct a review. The opinion of the consultant will be taken into consideration when developing the final report. The fact that a consultant was utilized will be noted in the letter to the IRB.
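As an illustration of how such an evaluation could be recorded (this is our Python sketch, not UConn Health's actual system), the criteria and the three possible recommendations map naturally onto a small data structure:

```python
# Illustrative record of an SRC evaluation: free-text comments per criterion
# and one of the three consensus recommendations reported to the IRB.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict


class Recommendation(Enum):
    APPROVAL_WITHOUT_REVISION = "recommend approval without revision"
    APPROVAL_PENDING_REVISION = "recommend approval pending acceptable revision"
    REJECTION = "recommend rejection"


CRITERIA = (
    "clarity of the research question",
    "appropriateness and efficiency of design",
    "rigor and feasibility of methods",
    "qualifications and expertise of the research team",
    "scholarship and pertinence of background material and rationale",
    "adequacy of sample size and relevance of controls",
    "validity of the statistical analysis plan",
)


@dataclass
class SRCReport:
    protocol_id: str
    recommendation: Recommendation
    comments: Dict[str, str] = field(default_factory=dict)  # keyed by criterion
    consultant_used: bool = False   # noted in the letter to the IRB when True


report = SRCReport(
    "PROTO-001",
    Recommendation.APPROVAL_PENDING_REVISION,
    {"adequacy of sample size and relevance of controls": "justify n=40"},
)
print(report.recommendation.value)
```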

The IRB requires scientific review for all studies reviewed by the convened board that have not already undergone review by another body. The IRB also reserves the right to send any study to the Scientific Review Committee for evaluation. The Scientific Review Committee will generally meet the week before the scheduled IRB meeting to discuss the research protocol.

A member will not review any protocol in which s/he has an interest in the study. If needed, the IRB may call upon the Scientific Advisory Committee of the General Clinical Research Center for assistance.

Scientific Review Committee Membership:

  • Dr. Julie Wagner, Chair, Professor, Oral Health and Diagnostic Sciences
  • Dr. Sheila Alessi, Associate Professor, Calhoun Cardiology Center
  • Dr. Biree Andemariam, Professor, Neag Comprehensive Cancer Center
  • Dr. Lance Bauer, Professor, Psychiatry
  • Dr. Molly Brewer, Professor, Obstetrics & Gynecology
  • Dr. Kevin Claffey, Professor, Cell Biology
  • Dr. Justin Cotney, Associate Professor, Genetics and Genomic Sciences
  • Dr. Jonathan Covault, Professor, Psychiatry
  • Dr. Richard Fortinsky, Professor, Center on Aging
  • Dr. Ivo Kalajzic, Professor, Center for Regenerative Medicine and Skeletal Development
  • Dr. Jayesh Kamath, Professor, Psychiatry
  • Dr. Carla Rash, Associate Professor, Calhoun Cardiology
  • Dr. Tannin Schmidt, Associate Professor, Biomedical Engineering

Cornell Research Site


Proposal Review Guidelines

The proposal review guidelines define the level of review performed by OSP based on the date the proposal was submitted to OSP vs. when the proposal is due to be submitted to the sponsor.

Your science/technical content does not have to be complete when you submit to OSP. You can keep working while OSP reviews the rest of the proposal.

* A full business day is an official Cornell workday between 8:30 a.m. and 5:00 p.m.

**Adherence to Sponsor Guidelines includes, among other things: length, margins, line spacing, font size, file name and type, required information provided (e.g., Broader Impacts Statement), and eligibility criteria.
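Since the level of review turns on how many full business days OSP has before the sponsor deadline, a rough Python sketch of that arithmetic is shown below (our illustration; it counts weekdays only and ignores university holidays and the 8:30 a.m. to 5:00 p.m. cut-off).

```python
# Rough sketch: count whole weekdays strictly between the date a proposal is
# submitted to OSP and the sponsor deadline. University holidays and the
# 8:30 a.m.-5:00 p.m. workday boundaries are not modelled.
from datetime import date, timedelta


def full_business_days(submitted_to_osp: date, sponsor_deadline: date) -> int:
    days = 0
    current = submitted_to_osp + timedelta(days=1)
    while current < sponsor_deadline:
        if current.weekday() < 5:        # Monday = 0 ... Friday = 4
            days += 1
        current += timedelta(days=1)
    return days


print(full_business_days(date(2024, 5, 6), date(2024, 5, 13)))   # 4
```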


INSPIRE

Scientific Review Committee (SRC)

The INSPIRE Scientific Review Committee (SRC) is responsible for:

  • Management of the project review process for research proposals, manuscripts and grants. These reviews formally occur twice per year, at the IMSH and INSPIRE Virtual meetings, as part of the ALERT presentation process.
  • Ongoing updates and revisions of the INSPIRE scientific review process (e.g. ALERT presentation templates, submission processes, rating/feedback processes).
  • Dissemination of INSPIRE research expectations and templates (e.g. INSPIRE poster templates, acknowledgments of INSPIRE in a poster/manuscript, INSPIRE bylines).
  • Development of educational processes intended to enhance the research knowledge and skillsets of those affiliated with INSPIRE (e.g. research design, project implementation, writing skills).

COMMITTEE CO-CHAIRS


Priti Jani, MD, MPH

University of Chicago, Comer Children’s Hospital, IL, USA


Nancy M. Tofil, MD, MEd

University of Alabama at Birmingham, AL, USA

COMMITTEE MEMBERS:

  • Christopher M Kennedy, MD – University of Kansas School of Medicine, KS, USA
  • Linda Brown, MD MSCE – Brown University, RI, USA
  • Akira Nishisaki, MD MSCE – Children’s Hospital of Philadelphia, PA, USA
  • Frank Overly, MD – Brown University, RI, USA
  • Aaron W Calhoun, MD – University of Louisville, KY, USA
  • Tarek R. Hazwani, MD – King Saud Bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
  • Kamal M Abulebda, MD – Indiana University, IN
  • Karen A. Mangold, MD, MEd – Northwestern University, IL, USA
  • Arun Bansal, MD – Virginia Hospital Center, VA, USA
  • Benjamin T Kerrey, MD – University of Cincinnati, OH, USA
  • Wendy Van Ittersum, MD – Northeast Ohio Medical University, OH, USA
  • Tali Capua, MD – Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
  • Jabeen Fayyaz, MD FCPS, MCPS, DCH, MHPE, PhD – The Hospital for Sick Kids, Toronto, CA

As INSPIRE increases its membership and sites, we receive more research proposals and collaborative interest. To continue meeting our objective of providing ongoing feedback and collaboration, we recruit interested and expert simulationists to review proposals and help provide ongoing feedback throughout the life of each study. If you are interested, please contact us at [email protected].


Important Changes to NSF Proposal Submission Requirements

The National Science Foundation has released an updated Proposal & Award Policies & Procedures Guide (PAPPG), which includes revised requirements for proposals due or submitted on or after May 20, 2024. While NSF summarizes all of the changes, the following are the most important for investigators at Ohio State.

Biographical Sketch

  • Must be prepared in SciENcv
  • Page limit has been removed
  • Synergistic Activities section has been moved

Synergistic Activities Document (New)

  • Each individual identified as key/senior personnel must include a 1-page document listing up to five distinct activities that demonstrate the individual’s broader impact. The document is uploaded as part of the senior/key personnel documents in research.gov.

Malign Foreign Talent Recruitment Programs

  • Each individual identified as key/senior personnel must certify on both their Biographical Sketch and Current and Pending forms that they are not participating in a malign foreign talent recruitment program. Participants in such programs cannot participate as key/senior personnel on NSF proposals and awards.

Mentoring Plan

  • A mentoring plan is required if the proposal includes support for either a graduate student or a postdoctoral researcher. The plan is still limited to one page for the entire proposal, including collaborative and linked collaborative proposals.

Individual Development Plans for Postdoctoral Scholars or Graduate Students (New)

  • Each graduate student and/or postdoctoral researcher receiving at least one month of support from the award must have an individual development plan (IDP) that maps their educational goals, career exploration and professional development.
  • The plans are not part of the proposal, nor are they submitted to NSF. However, the PI must certify in each annual and final report that IDPs are in place for the relevant postdocs/grad students (see the sketch after this list).
  • OSP is working with the Grad School and Office of Postdoctoral Affairs in developing resources to help PIs meet this requirement.
  • Additional information will be provided by late summer, before the first awards subject to the requirement are issued.
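A minimal Python sketch of these two personnel-related checks follows (our illustration, not an NSF or OSP tool; the role labels and thresholds follow the bullets above).

```python
# Illustrative check of two PAPPG rules summarized above: a mentoring plan is
# needed when a graduate student or postdoc is supported, and an IDP must be
# certified for anyone in those roles receiving at least one month of support.
from dataclasses import dataclass
from typing import List

TRAINEE_ROLES = ("graduate student", "postdoctoral researcher")


@dataclass
class SupportedPerson:
    name: str
    role: str                  # e.g. "graduate student" or "postdoctoral researcher"
    months_of_support: float
    has_idp: bool = False


def mentoring_plan_required(people: List[SupportedPerson]) -> bool:
    return any(p.role in TRAINEE_ROLES for p in people)


def people_needing_idps(people: List[SupportedPerson]) -> List[str]:
    """Names whose IDPs must be certified in annual and final reports."""
    return [p.name for p in people
            if p.role in TRAINEE_ROLES and p.months_of_support >= 1 and not p.has_idp]


team = [SupportedPerson("A. Student", "graduate student", months_of_support=12)]
print(mentoring_plan_required(team), people_needing_idps(team))   # True ['A. Student']
```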

Foreign Organization Eligibility

  • Clarification of the information required to justify providing funds to a foreign organization by a subaward or to a foreign individual through a consulting agreement

Tribal Nation Approval for Proposals that May Impact Tribal Resources or Interests (New)

  • Proposals that may impact the resources of a Tribal Nation must seek and obtain approval from the Tribal Nation prior to award.

New and updated post-award requirements

  • If a project that did not include a mentoring plan adds a graduate student or a postdoctoral researcher after an award is made, the required plan must be submitted to the NSF program officer and reported on annually.
  • If a project that did not have a safe and inclusive working environment plan subsequently adds an off-campus or off-site component, a plan must be developed and maintained according to NSF program guidelines.

Research and Creative Activity Support

Office of Research

The office offers extensive support to individuals seeking outside funding, and it provides asynchronous platforms so faculty can complete required training modules in a timely manner.

Required Research Training and Resources

Asynchronous and ongoing

University of South Carolina offers a variety of training and educational opportunities for those involved in all aspects of research. Training requirements and resources are explained for Human Subjects Protection, Good Clinical Practice, Animal Care and Use, Responsible Conduct of Research, and Export Controls.

  • Training and Resources
  • Propel Research Mentorship Program

Application (nomination)

Supporting new and early-career faculty is one of our key missions. The Propel Research Mentorship Program provides participating faculty nine months of face-to-face intensive mentorship, education, editing support and more as they plan, draft, finalize and submit a proposal for a federal grant from either the National Institutes of Health (NIH) or the National Science Foundation (NSF).

Research Computing Workshops and Tutorials

The Research Computing (RC) program works with researchers to improve their project performance and to secure computing resources not only at the university level but beyond, at National Labs and other high-profile HPC facilities. Resources and topics include the Hyperion High Performance Computing (HPC) supercomputer cluster, the Linux computing environment, machine learning and deep learning, scientific computing programming languages, and quantum computing.

  • Research Computing

Propel AI (in development)

The vision for Propel AI includes faculty from all USC colleges, with an emphasis on the arts, humanities and professional colleges such as law and business. By including faculty from a broad array of disciplines, we aim to foster creativity and collaboration as faculty learn together how to make AI a partner in scholarship of all kinds, not just scientific research. This program is in development in the Office of Research.

A link will be posted once the program has been made available.

Thomas Cooper Library Workshops

The library offers a number of workshops throughout the year. Many of these are designed to support faculty research. Topics include: Data Management, Text Analysis, Publishing in Scholarly Journals, Publishing and Digital Scholarship, Text and Data Mining & Analysis, Data Visualization, Artificial Intelligence, and Citation Management Tools.

For more information and to view a calendar of workshops, visit the library website.



  24. Important Changes to NSF Proposal Submission Requirements

    The National Science Foundation has released an updated Proposal & Award Policies & Procedures Guide (PAPPG), which includes revised requirements for proposals due or submitted on or after May 20, 2024. While all the changes are summarized, the following are the most important for investigators at Ohio State.Biographical SketchMust be prepared in SciENcvPage limit has been removedSynergistic ...

  25. Research and Creative Activity Support

    Supporting new and early-career faculty is one of our key missions. The Propel Research Mentorship Program provides participating faculty nine months of face-to-face intensive mentorship, education, editing support and more as they plan, draft, finalize and submit a proposal for a federal grant from either the National Institutes of Health (NIH ...

  26. Proposal Review Panel for Materials Research; Notice of Meeting

    In accordance with the Federal Advisory Committee Act (Pub. L. 92-463, as amended), the National Science Foundation (NSF) announces the following meeting: Name and Committee Code: Proposal Review Panel for Materials Research—Science and Technology Center (STC) Site Visit University of Washington (#1203).

  27. PDF Federal Register/Vol. 89, No. 98/Monday, May 20, 2024/Notices

    Proposal Review Panel for Computing & Communication Foundations; Notice of Meeting In accordance with the Federal Advisory Committee Act (Pub. L. 92- 463, as amended), the National Science Foundation (NSF) announces the following meeting: Name and Committee Code: Proposal Review Panel for Computing & Communication Foundations (#1192)—

  28. PDF Supplemental Information on the EPA's Update of PM2.5 Data from T640

    Research, 13(4), 101374, 2022. 2. ... During the scientific review of EPA's Policy Assessment for the Reconsideration of the PM NAAQS two years ago, the Clean Air Scientific Advisory Committee (CASAC) provided advice ... Standards for Particulate Matter a proposal to calibrate PM FEMs using routinely operated PM FRMs from state, local and ...

  29. King Leads Bipartisan Letter Urging the Appropriations Committee to

    WASHINGTON, D.C.— Today, U.S. Senator Angus King (I-ME) is calling on the leaders of the Appropriations Committee to prioritize funding for traumatic brain injury (TBI) research in the FY2025 spending bill. In a letter to Defense Subcommittee Chairman Jon Tester (D-MT) and Ranking Member Susan Collins (R-ME), the Senator led a bipartisan group of his colleagues to urge the Appropriations ...