What is quality research? A guide to identifying the key features and achieving success

Every researcher worth their salt strives for quality. But in research, what does quality mean?

Simply put, quality research is thorough, accurate, original and relevant. And to achieve this, you need to follow specific standards. You need to make sure your findings are reliable and valid. And when you know they're quality assured, you can share them with absolute confidence.

You’ll be able to draw accurate conclusions from your investigations and contribute to the wider body of knowledge in your field.

Importance of quality research

Quality research helps us better understand complex problems. It enables us to make decisions based on facts and evidence. And it empowers us to solve real-world issues. Without quality research, we can't advance knowledge or identify trends and patterns. We also can’t develop new theories and approaches to solving problems.

With rigorous and transparent research methods, you’ll produce reliable findings that other researchers can replicate. This leads to the development of new theories and interventions. On the other hand, low-quality research can hinder progress by producing unreliable findings that can’t be replicated, wasting resources and impeding advancements in the field.

In all cases, quality control is critical. It ensures that decisions are based on evidence rather than gut feeling or bias.

Standards for quality research

Over the years, researchers, scientists and authors have come to a consensus about the standards used to check the quality of research. Determined through empirical observation, theoretical underpinnings and philosophy of science, these include:

1. Having a well-defined research topic and a clear hypothesis

This is essential to verify that the research is focused and the results are relevant and meaningful. The research topic should be well-scoped, and the hypothesis should be clearly stated and falsifiable.

For example, in a quantitative study about the effects of social media on behavior, a well-defined research question could be: "Does the use of TikTok reduce attention span in American adolescents?"

This is good because:

  • The research topic focuses on a particular social media platform (TikTok) and a specific group of people (American adolescents).
  • The research question is clear and straightforward, making it easier to design the study and collect relevant data.
  • You can test the hypothesis, and a research team can evaluate it easily, using methods such as survey research, experiments or observational studies.
  • The hypothesis focuses on a specific, measurable outcome (attention span) that can be compared against control groups or previous research studies, as the illustrative sketch below shows.
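To make this concrete, here is a minimal sketch of how such a hypothesis could be tested statistically. It is illustrative only: the data are simulated, and the group sizes, means and spreads are invented assumptions, not findings from any real study.

```python
# Illustrative sketch only: simulated data, not results from a real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical attention-span scores (e.g., seconds on a sustained-attention
# task). Group sizes, means, and spreads are invented for illustration.
tiktok_users = rng.normal(loc=42.0, scale=8.0, size=120)
control_group = rng.normal(loc=46.0, scale=8.0, size=120)

# One-sided two-sample t-test: do TikTok users score *lower* on average?
t_stat, p_value = stats.ttest_ind(tiktok_users, control_group, alternative="less")

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Evidence consistent with reduced attention span (reject the null).")
else:
    print("No significant difference detected (fail to reject the null).")
```

Because the hypothesis is falsifiable, a pre-specified test like this can clearly support or contradict it.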

2. Ensuring transparency

Transparency is crucial when conducting research. You need to be upfront about the methods you used, such as:

  • How you recruited the participants.
  • How you communicated with them.
  • How you incentivized them.

You also need to explain how you analyzed the data, so other researchers can replicate your results if necessary. Pre-registering your study is a great way to make your research as transparent as possible. This involves publicly documenting your study design, methods and analysis plan before conducting the research, which reduces the risk of selective reporting and increases the credibility of your findings.

3. Using appropriate research methods

Depending on the topic, some research methods are better suited than others for collecting data. To use our TikTok example, a quantitative research approach, such as a behavioral test that measures the participants' ability to focus on tasks, might be the most appropriate.

On the other hand, for topics that require a more in-depth understanding of individuals' experiences or perspectives, a qualitative research approach, such as interviews or focus groups, might be more suitable. These methods can provide rich and detailed information that you can’t capture through quantitative data alone.

4. Assessing limitations and the possible impact of systematic bias

When you present your research, it’s important to consider how the limitations of your study could affect the results. These could include systematic bias in the sampling procedure or data analysis, for instance. Let’s say you only study a small sample of participants from one school district. This would limit the generalizability (external validity) of your findings.

5. Conducting accurate reporting

This is an essential aspect of any research project. You need to be able to clearly communicate the findings and implications of your study. Also, provide citations for any claims made in your report. When you present your work, it’s vital that you accurately describe the variables involved in your study and how you measured them.

Curious to learn more? Read our Data Quality eBook.

How to identify credible research findings

To determine whether a published study is trustworthy, consider the following:

  • Peer review: If a study has been peer-reviewed by recognized experts, that’s a strong indicator it’s a reliable source of information. Peer review means that other scholars have read and scrutinized the study before publication.
  • Researcher's qualifications: If they're an expert in the field, that’s a good sign that you can trust their findings. However, if they aren't, it doesn’t necessarily mean that the study's information is unreliable. It simply means that you should be extra cautious about accepting its conclusions as fact.
  • Study design: The design of a study can make or break its reliability. Consider factors like sample size and methodology.
  • Funding source: Studies funded by organizations with a vested interest in a particular outcome may be less credible than those funded by independent sources.
  • Statistical significance: You've heard the phrase "numbers don't lie," right? That's what statistical significance is all about. It refers to how likely it is that a study's results occurred by chance alone. Statistically significant results are unlikely to be mere flukes, which makes them more credible.

Achieve quality research with Prolific

Want to ensure your research is high-quality? Prolific can help.

Our platform gives you access to a carefully vetted pool of participants. We make sure they're attentive, honest, and ready to provide rich and detailed answers where needed. This helps to ensure that the data you collect through Prolific is of the highest quality.

With Prolific, you can streamline your research process and feel confident in the results you receive. Our minimum pay threshold and commitment to fair compensation motivate participants to provide valuable responses and give their best effort. This ensures the quality of your research and helps you get the results you need. Sign up as a researcher today to get started!

Child Care and Early Education Research Connections

Assessing research quality

This page presents information and tools to help evaluate the quality of a research study, as well as information on the ethics of research.

The quality of social science and policy research can vary considerably. It is important that consumers of research keep this in mind when reading the findings from a research study or when considering whether or not to use data from a research study for secondary analysis.

Key Questions to Ask

This section outlines key questions to ask in assessing the quality of research.

Research Assessment Tools

This section provides resources related to quantitative and qualitative assessment tools.

Ethics of Research

This section provides an overview of three basic ethical principles.

Brian M. Belcher, Katherine E. Rasmussen, Matthew R. Kemshaw, Deborah A. Zornes, Defining and assessing research quality in a transdisciplinary context, Research Evaluation , Volume 25, Issue 1, January 2016, Pages 1–17, https://doi.org/10.1093/reseval/rvv025

Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient—there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, proposed principles of research quality, proposed criteria for research quality assessment, proposed indicators and measures of research quality, and proposed processes for evaluating TDR. We used the information from the review and our own experience in two research organizations that employ TDR approaches to develop a prototype TDR quality assessment framework, organized as an evaluation rubric. We provide an overview of the relevant literature and summarize the main aspects of TDR quality identified there. Four main principles emerge: relevance, including social significance and applicability; credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigor; legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and effectiveness, with criteria that assess actual or potential contributions to problem solving and social change.

1. Introduction

Contemporary research in the social and environmental realms places strong emphasis on achieving ‘impact’. Research programs and projects aim to generate new knowledge but also to promote and facilitate the use of that knowledge to enable change, solve problems, and support innovation ( Clark and Dickson 2003 ). Reductionist and purely disciplinary approaches are being augmented or replaced with holistic approaches that recognize the complex nature of problems and that actively engage within complex systems to contribute to change ‘on the ground’ ( Gibbons et al. 1994 ; Nowotny, Scott and Gibbons 2001 , Nowotny, Scott and Gibbons 2003 ; Klein 2006 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Emerging fields such as sustainability science have developed out of a need to address complex and urgent real-world problems ( Komiyama and Takeuchi 2006 ). These approaches are inherently applied and transdisciplinary, with explicit goals to contribute to real-world solutions and strong emphasis on context and social engagement ( Kates 2000 ).

While there is an ongoing conceptual and theoretical debate about the nature of the relationship between science and society (e.g. Hessels 2008 ), we take a more practical starting point based on the authors’ experience in two research organizations. The first author has been involved with the Center for International Forestry Research (CIFOR) for almost 20 years. CIFOR, as part of the Consultative Group on International Agricultural Research (CGIAR), began a major transformation in 2010 that shifted the emphasis from a primary focus on delivering high-quality science to a focus on ‘…producing, assembling and delivering, in collaboration with research and development partners, research outputs that are international public goods which will contribute to the solution of significant development problems that have been identified and prioritized with the collaboration of developing countries.’ ( CGIAR 2011 ). It was always intended that CGIAR research would be relevant to priority development and conservation issues, with emphasis on high-quality scientific outputs. The new approach puts much stronger emphasis on welfare and environmental results; research centers, programs, and individual scientists now assume shared responsibility for achieving development outcomes. This requires new ways of working, with more and different kinds of partnerships and more deliberate and strategic engagement in social systems.

Royal Roads University (RRU), the home institute of all four authors, is a relatively new (created in 1995) public university in Canada. It is deliberately interdisciplinary by design, with just two faculties (Faculty of Social and Applied Science; Faculty of Management) and strong emphasis on problem-oriented research. Faculty and student research is typically ‘applied’ in the Organization for Economic Co-operation and Development (2012) sense of ‘original investigation undertaken in order to acquire new knowledge … directed primarily towards a specific practical aim or objective’.

An increasing amount of the research done within both of these organizations can be classified as transdisciplinary research (TDR). TDR crosses disciplinary and institutional boundaries, is context specific, and problem oriented ( Klein 2006 ; Carew and Wickson 2010 ). It combines and blends methodologies from different theoretical paradigms, includes a diversity of both academic and lay actors, and is conducted with a range of research goals, organizational forms, and outputs ( Klein 2006 ; Boix-Mansilla 2006a ; Erno-Kjolhede and Hansson 2011 ). The problem-oriented nature of TDR and the importance placed on societal relevance and engagement are broadly accepted as defining characteristics of TDR ( Carew and Wickson 2010 ).

The experience developing and using TDR approaches at CIFOR and RRU highlights the need for a parallel evolution of principles and criteria for evaluating research quality in a TDR context. Scientists appreciate and often welcome the need and the opportunity to expand the reach of their research, to contribute more effectively to change processes. At the same time, they feel the pressure of added expectations and are looking for guidance.

In any activity, we need principles, guidelines, criteria, or benchmarks that can be used to design the activity, assess its potential, and evaluate its progress and accomplishments. Effective research quality criteria are necessary to guide the funding, management, ongoing development, and advancement of research methods, projects, and programs. The lack of quality criteria to guide and assess research design and performance is seen as hindering the development of transdisciplinary approaches ( Bergmann et al. 2005 ; Feller 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2008 ; Carew and Wickson 2010 ; Jahn and Keil 2015 ). Appropriate quality evaluation is essential to ensure that research receives support and funding, and to guide and train researchers and managers to realize high-quality research ( Boix-Mansilla 2006a ; Klein 2008 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ).

Traditional disciplinary research is built on well-established methodological and epistemological principles and practices. Within disciplinary research, quality has been defined narrowly, with the primary criteria being scientific excellence and scientific relevance ( Feller 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Disciplines have well-established (often implicit) criteria and processes for the evaluation of quality in research design ( Erno-Kjolhede and Hansson 2011 ). TDR that is highly context specific, problem oriented, and includes nonacademic societal actors in the research process is challenging to evaluate ( Wickson, Carew and Russell 2006 ; Aagaard-Hansen and Svedin 2009 ; Andrén 2010 ; Carew and Wickson 2010 ; Huutoniemi 2010 ). There is no one definition or understanding of what constitutes quality, nor a set guide for how to do TDR ( Lincoln 1995 ; Morrow 2005 ; Oberg 2008 ; Andrén 2010 ; Huutoniemi 2010 ). When epistemologies and methods from more than one discipline are used, disciplinary criteria may be insufficient and criteria from more than one discipline may be contradictory; cultural conflicts can arise as a range of actors use different terminology for the same concepts or the same terminology for different concepts ( Chataway, Smith and Wield 2007 ; Oberg 2008 ).

Current research evaluation approaches as applied to individual researchers, programs, and research units are still based primarily on measures of academic outputs (publications and the prestige of the publishing journal), citations, and peer assessment ( Boix-Mansilla 2006a ; Feller 2006 ; Erno-Kjolhede and Hansson 2011 ). While these indicators of research quality remain relevant, additional criteria are needed to address the innovative approaches and the diversity of actors, outputs, outcomes, and long-term social impacts of TDR. It can be difficult to find appropriate outlets for TDR publications simply because the research does not meet the expectations of traditional discipline-oriented journals. Moreover, a wider range of inputs and of outputs means that TDR may result in fewer academic outputs. This has negative implications for transdisciplinary researchers, whose performance appraisals and long-term career progression are largely governed by traditional publication and citation-based metrics of evaluation. Research managers, peer reviewers, academic committees, and granting agencies all struggle with how to evaluate and how to compare TDR projects ( ex ante or ex post ) in the absence of appropriate criteria to address epistemological and methodological variability. The extent of engagement of stakeholders in the research process will vary by project, from information sharing through to active collaboration ( Brandt et al. 2013 ), but at any level, the involvement of stakeholders adds complexity to the conceptualization of quality. We need to know what ‘good research’ is in a transdisciplinary context.

As Tijssen ( 2003 : 93) put it: ‘Clearly, in view of its strategic and policy relevance, developing and producing generally acceptable measures of “research excellence” is one of the chief evaluation challenges of the years to come’. Clear criteria are needed for research quality evaluation to foster excellence while supporting innovation: ‘A principal barrier to a broader uptake of TD research is a lack of clarity on what good quality TD research looks like’ ( Carew and Wickson 2010 : 1154). In the absence of alternatives, many evaluators, including funding bodies, rely on conventional, discipline-specific measures of quality which do not address important aspects of TDR.

There is an emerging literature that reviews, synthesizes, or empirically evaluates knowledge and best practice in research evaluation in a TDR context and that proposes criteria and evaluation approaches ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Klein 2008 ; Carew and Wickson 2010 ; ERIC 2010; de Jong et al. 2011 ; Spaapen and Van Drooge 2011 ). Much of it comes from a few fields, including health care, education, and evaluation; little comes from the natural resource management and sustainability science realms, despite these areas needing guidance. National-scale reviews have begun to recognize the need for broader research evaluation criteria but have had difficulty dealing with it and have made little progress in addressing it ( Donovan 2008 ; KNAW 2009 ; REF 2011 ; ARC 2012 ; TEC 2012 ). A summary of the national reviews that we reviewed in the development of this research is provided in Supplementary Appendix 1. While there are some published evaluation schemes for TDR and interdisciplinary research (IDR), there is ‘substantial variation in the balance different authors achieve between comprehensiveness and over-prescription’ ( Wickson and Carew 2014 : 256) and still a need to develop standardized quality criteria that are ‘uniquely flexible to provide valid, reliable means to evaluate and compare projects, while not stifling the evolution and responsiveness of the approach’ ( Wickson and Carew 2014 : 256).

There is a need and an opportunity to synthesize current ideas about how to define and assess quality in TDR. To address this, we conducted a systematic review of the literature that discusses the definitions of research quality as well as the suggested principles and criteria for assessing TDR quality. The aim is to identify appropriate principles and criteria for defining and measuring research quality in a transdisciplinary context and to organize those principles and criteria as an evaluation framework.

The review question was: What are appropriate principles, criteria, and indicators for defining and assessing research quality in TDR?

This article presents the method used for the systematic review and our synthesis, followed by key findings. Theoretical concepts about why new principles and criteria are needed for TDR, along with associated discussions about evaluation process are presented. A framework, derived from our synthesis of the literature, of principles and criteria for TDR quality evaluation is presented along with guidance on its application. Finally, recommendations for next steps in this research and needs for future research are discussed.

2.1 Systematic review

Systematic review is a rigorous, transparent, and replicable methodology that has become widely used to inform evidence-based policy, management, and decision making ( Pullin and Stewart 2006 ; CEE 2010). Systematic reviews follow a detailed protocol with explicit inclusion and exclusion criteria to ensure a repeatable and comprehensive review of the target literature. Review protocols are shared and often published as peer reviewed articles before undertaking the review to invite critique and suggestions. Systematic reviews are most commonly used to synthesize knowledge on an empirical question by collating data and analyses from a series of comparable studies, though methods used in systematic reviews are continually evolving and are increasingly being developed to explore a wider diversity of questions ( Chandler 2014 ). The current study question is theoretical and methodological, not empirical. Nevertheless, with a diverse and diffuse literature on the quality of TDR, a systematic review approach provides a method for a thorough and rigorous review. The protocol is published and available at http://www.cifor.org/online-library/browse/view-publication/publication/4382.html. A schematic diagram of the systematic review process is presented in Fig. 1.

Figure 1. Search process.

2.2 Search terms

Search terms were designed to identify publications that discuss the evaluation or assessment of quality or excellence of research that is done in a TDR context. Search terms are listed online in Supplementary Appendices 2 and 3. The search strategy favored sensitivity over specificity to ensure that we captured the relevant information.

2.3 Databases searched

ISI Web of Knowledge (WoK) and Scopus were searched between 26 June 2013 and 6 August 2013. The combined searches yielded 15,613 unique citations. Additional searches to update the first searches were carried out in June 2014 and March 2015, for a total of 19,402 titles scanned. Google Scholar (GS) was searched separately by two reviewers during each search period. The first reviewer’s search was done on 2 September 2013 (Search 1) and 3 September 2013 (Search 2), yielding 739 and 745 titles, respectively. The second reviewer’s search was done on 19 November 2013 (Search 1) and 25 November 2013 (Search 2), yielding 769 and 774 titles, respectively. A third search done on 17 March 2015 by one reviewer yielded 98 new titles. Reviewers found high redundancy between the WoK/Scopus searches and the GS searches.

2.4 Targeted journal searches

Highly relevant journals, including Research Evaluation, Evaluation and Program Planning, Scientometrics, Research Policy, Futures, American Journal of Evaluation, Evaluation Review, and Evaluation, were comprehensively searched using broader, more inclusive search strings that would have been unmanageable for the main database search.

2.5 Supplementary searches

References in included articles were reviewed to identify additional relevant literature. td-net’s ‘Tour d’Horizon of Literature’ lists important inter- and transdisciplinary publications collected through an invitation to experts in the field to submit publications ( td-net 2014 ). Six additional articles were identified through these supplementary searches.

2.6 Limitations of coverage

The review was limited to English-language published articles and material available through internet searches. There was no systematic way to search the gray (unpublished) literature, but relevant material identified through supplementary searches was included.

2.7 Inclusion of articles

This study sought articles that review, critique, discuss, and/or propose principles, criteria, indicators, and/or measures for the evaluation of quality relevant to TDR. As noted, this yielded a large number of titles. We then selected only those articles with an explicit focus on the meaning of IDR and/or TDR quality and how to achieve, measure or evaluate it. Inclusion and exclusion criteria were developed through an iterative process of trial article screening and discussion within the research team. Through this process, inter-reviewer agreement was tested and strengthened. Inclusion criteria are listed in Tables 1 and 2.

Table 1. Inclusion criteria for title and abstract screening

Table 2. Inclusion criteria for abstract and full article screening

Article screening was done in parallel by two reviewers in three rounds: (1) title, (2) abstract, and (3) full article. In cases of uncertainty, papers were included to the next round. Final decisions on inclusion of contested papers were made by consensus among the four team members.
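Inter-reviewer agreement of the kind described above is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The authors do not state which statistic, if any, they used, so the sketch below is only one plausible approach; the two reviewers' include/exclude decisions are invented for illustration.

```python
# Hypothetical sketch: quantifying two reviewers' screening agreement with
# Cohen's kappa. The decision lists are invented, not the study's data.

def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) rating lists of equal length."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa, pb = sum(a) / n, sum(b) / n              # each reviewer's 'include' rate
    p_e = pa * pb + (1 - pa) * (1 - pb)          # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# 1 = include in the next screening round, 0 = exclude
reviewer_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reviewer_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # 1.0 = perfect
```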

2.8 Critical appraisal

In typical systematic reviews, individual articles are appraised to ensure that they are adequate for answering the research question and to assess the methods of each study for susceptibility to bias that could influence the outcome of the review (Petticrew and Roberts 2006). Most papers included in this review are theoretical and methodological papers, not empirical studies. Most do not have explicit methods that can be appraised with existing quality assessment frameworks. Our critical appraisal considered four criteria adapted from Spencer et al. (2003): (1) relevance to the review question, (2) clarity and logic of how information in the paper was generated, (3) significance of the contribution (are new ideas offered?), and (4) generalizability (is the context specified; do the ideas apply in other contexts?). Disagreements were discussed to reach consensus.

2.9 Data extraction and management

The review sought information on: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, principles of research quality, criteria for research quality assessment, indicators and measures of research quality, and processes for evaluating TDR. Four reviewers independently extracted data from selected articles using the parameters listed in Supplementary Appendix 4 .

2.10 Data synthesis and TDR framework design

Our aim was to synthesize ideas, definitions, and recommendations for TDR quality criteria into a comprehensive and generalizable framework for the evaluation of quality in TDR. Key ideas were extracted from each article and summarized in an Excel database. We classified these ideas into themes and ultimately into overarching principles and associated criteria of TDR quality organized as a rubric ( Wickson and Carew 2014 ). Definitions of each principle and criterion were developed and rubric statements formulated based on the literature and our experience. These criteria (adjusted appropriately to be applied ex ante or ex post ) are intended to be used to assess a TDR project. The reviewer should consider whether the project fully satisfies, partially satisfies, or fails to satisfy each criterion. More information on application is provided in Section 4.3 below.

We tested the framework on a set of completed RRU graduate theses that used transdisciplinary approaches, with an explicit problem orientation and intent to contribute to social or environmental change. Three rounds of testing were done, with revisions after each round to refine and improve the framework.

3.1 Overview of the selected articles

Thirty-eight papers satisfied the inclusion criteria. A wide range of terms are used in the selected papers, including: cross-disciplinary; interdisciplinary; transdisciplinary; methodological pluralism; mode 2; triple helix; and supradisciplinary. Eight included papers specifically focused on sustainability science or TDR in natural resource management, or identified sustainability research as a growing TDR field that needs new forms of evaluation ( Cash et al. 2002 ; Bergmann et al. 2005 ; Chataway, Smith and Wield 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Andrén 2010 ; Carew and Wickson 2010 ; Lang et al. 2012 ; Gaziulusoy and Boyle 2013 ). Carew and Wickson (2010) build on the experience in the TDR realm to propose criteria and indicators of quality for ‘responsible research and innovation’.

The selected articles are written from three main perspectives. One set is primarily interested in advancing TDR approaches. These papers recognize the need for new quality measures to encourage and promote high-quality research and to overcome perceived biases against TDR approaches in research funding and publishing. A second set of papers is written from an evaluation perspective, with a focus on improving evaluation of TDR. The third set is written from the perspective of qualitative research characterized by methodological pluralism, with many characteristics and issues relevant to TDR approaches.

The majority of the articles focus at the project scale, some at the organization level, and some do not specify. Some articles explicitly focus on ex ante evaluation (e.g. proposal evaluation), others on ex post evaluation, and many are not explicit about the project stage they are concerned with. The methods used in the reviewed articles include authors’ reflection and opinion, literature review, expert consultation, document analysis, and case study. Summaries of report characteristics are available online ( Supplementary Appendices 5–8 ). Eight articles provide comprehensive evaluation frameworks and quality criteria specifically for TDR and research-in-context. The rest of the articles discuss aspects of quality related to TDR and recommend quality definitions, criteria, and/or evaluation processes.

3.2 The need for quality criteria and evaluation methods for TDR

Many of the selected articles highlight the lack of widely agreed principles and criteria of TDR quality. They note that, in the absence of TDR quality frameworks, disciplinary criteria are used ( Morrow 2005 ; Boix-Mansilla 2006a , b ; Feller 2006 ; Klein 2006 , 2008 ; Wickson, Carew and Russell 2006 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Oberg 2008 ; Erno-Kjolhede and Hansson 2011 ), and evaluations are often carried out by reviewers who lack cross-disciplinary experience and do not have a shared understanding of quality ( Aagaard-Hansen and Svedin 2009 ). Quality is discussed by many as a relative concept, developed within disciplines, and therefore defined and understood differently in each field ( Morrow 2005 ; Klein 2006 ; Oberg 2008 ; Mitchell and Willets 2009 ; Huutoniemi 2010 ; Hellstrom 2011 ). Jahn and Keil (2015) point out the difficulty of creating a common set of quality criteria for TDR in the absence of a standard agreed-upon definition of TDR. Many of the selected papers argue the need to move beyond narrowly defined ideas of ‘scientific excellence’ to incorporate a broader assessment of quality which includes societal relevance ( Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ). This shift includes greater focus on research organization, research process, and continuous learning, rather than primarily on research outputs ( Hemlin and Rasmussen 2006 ; de Jong et al. 2011 ; Wickson and Carew 2014 ; Jahn and Keil 2015 ). This responds to and reflects societal expectations that research should be accountable and have demonstrated utility ( Cloete 1997 ; Defila and Di Giulio 1999 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Stige 2009 ).

A central aim of TDR is to achieve socially relevant outcomes, and TDR quality criteria should demonstrate accountability to society ( Cloete 1997 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; de Jong et al. 2011 ). Integration and mutual learning are a core element of TDR; it is not enough to transcend boundaries and incorporate societal knowledge but, as Carew and Wickson ( 2010 : 1147) summarize: ‘…the TD researcher needs to put effort into integrating these potentially disparate knowledges with a view to creating useable knowledge. That is, knowledge that can be applied in a given problem context and has some prospect of producing desired change in that context’. The inclusion of societal actors in the research process, the unique and often dispersed organization of research teams, and the deliberate integration of different traditions of knowledge production all fall outside of conventional assessment criteria ( Feller 2006 ).

Not only does the range of criteria need to be updated, expanded, agreed upon, and made explicit in its assumptions ( Boix-Mansilla 2006a ; Klein 2006 ; Scott 2007 ) but, given the specific problem orientation of TDR, reviewers beyond disciplinary academic peers need to be included in the assessment of quality ( Cloete 1997 ; Scott 2007 ; Spaapen et al. 2007 ; Klein 2008 ). Several authors discuss the lack of reviewers with strong cross-disciplinary experience ( Aagaard-Hansen and Svedin 2009 ) and the lack of common criteria, philosophical foundations, and language for use by peer reviewers ( Klein 2008 ; Aagaard-Hansen and Svedin 2009 ). Peer review of TDR could be improved with explicit TDR quality criteria and with appropriate processes in place to ensure clear dialog between reviewers.

Finally, there is the need for increased emphasis on evaluation as part of the research process ( Bergmann et al. 2005 ; Hemlin and Rasmussen 2006 ; Meyrick 2006 ; Chataway, Smith and Wield 2007 ; Stige, Malterud and Midtgarden 2009 ; Hellstrom 2011 ; Lang et al. 2012 ; Wickson and Carew 2014 ). This is particularly true in large, complex, problem-oriented research projects. Ongoing monitoring of the research organization and process contributes to learning and adaptive management while research is underway and so helps improve quality. As stated by Wickson and Carew ( 2014 : 262): ‘We believe that in any process of interpreting, rearranging and/or applying these criteria, open negotiation on their meaning and application would only positively foster transformative learning, which is a valued outcome of good TD processes’.

3.3 TDR quality criteria and assessment approaches

Many of the papers provide quality criteria and/or describe constituent parts of quality. Aagaard-Hansen and Svedin (2009) define three key aspects of quality: societal relevance, impact, and integration. Meyrick (2006) states that quality research is transparent and systematic. Boaz and Ashby (2003) describe quality in four dimensions: methodological quality, quality of reporting, appropriateness of methods, and relevance to policy and practice. Although each article deconstructs quality in different ways and with different foci and perspectives, there is significant overlap and recurring themes in the papers reviewed. There is a broadly shared perspective that TDR quality is a multidimensional concept shaped by the specific context within which research is done ( Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ), making a universal definition of TDR quality difficult or impossible ( Huutoniemi 2010 ).

Huutoniemi (2010) identifies three main approaches to conceptualizing quality in IDR and TDR: (1) using existing disciplinary standards adapted as necessary for IDR; (2) building on the quality standards of disciplines while fundamentally incorporating ways to deal with epistemological integration, problem focus, context, stakeholders, and process; and (3) radical departure from any disciplinary orientation in favor of external, emergent, context-dependent quality criteria that are defined and enacted collaboratively by a community of users.

The first approach is prominent in current research funding and evaluation protocols. Conservative approaches of this kind are criticized for privileging disciplinary research and for failing to provide guidance and quality control for transdisciplinary projects. The third approach would ‘undermine the prevailing status of disciplinary standards in the pursuit of a non-disciplinary, integrated knowledge system’ ( Huutoniemi 2010 : 313). No predetermined quality criteria are offered, only contextually embedded criteria that need to be developed within a specific research project. To some extent, this is the approach taken by Spaapen, Dijstelbloem and Wamelink (2007) and de Jong et al. (2011). Such a sui generis approach cannot be used to compare across projects. Most of the reviewed papers take the second approach, and recommend TDR quality criteria that build on a disciplinary base.

Eight articles present comprehensive frameworks for quality evaluation, each with a unique approach, perspective, and goal. Two of these build comprehensive lists of criteria with associated questions to be chosen based on the needs of the particular research project ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ). Wickson and Carew (2014) develop a reflective heuristic tool with questions to guide researchers through ongoing self-evaluation. They also list criteria for external evaluation and for comparison between projects. Spaapen, Dijstelbloem and Wamelink (2007) design an approach that evaluates a research project against its own goals; it is not meant to compare between projects. Wickson and Carew (2014) also developed a comprehensive rubric for the evaluation of Research and Innovation that builds on their extensive previous work in TDR. Finally, Lang et al. (2012), Mitchell and Willets (2009), and Jahn and Keil (2015) develop criteria checklists that can be applied across transdisciplinary projects.

Bergmann et al. (2005) and Carew and Wickson (2010) organize their frameworks into managerial elements of the research project, concerning problem context, participation, management, and outcomes. Lang et al. (2012) and Defila and Di Giulio (1999) focus on the chronological stages in the research process and identify criteria at each stage. Mitchell and Willets (2009), with a focus on doctoral studies, adapt standard dissertation evaluation criteria to accommodate broader, pluralistic, and more complex studies. Spaapen, Dijstelbloem and Wamelink (2007) focus on evaluating ‘research-in-context’. Wickson and Carew (2014) created a rubric based on criteria that span the research process, its stages, and all actors involved. Jahn and Keil (2015) organized their quality criteria into three categories: quality of the research problems, quality of the research process, and quality of the research results.

The remaining papers highlight key themes that must be considered in TDR evaluation. Dominant themes include: engagement with problem context, collaboration and inclusion of stakeholders, heightened need for explicit communication and reflection, integration of epistemologies, recognition of diverse outputs, the focus on having an impact, and reflexivity and adaptation throughout the process. The focus on societal problems in context and the increased engagement of stakeholders in the research process introduces higher levels of complexity that cannot be accommodated by disciplinary standards ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ).

Finally, authors discuss process ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Boix-Mansilla 2006b ; Spaapen, Dijstelbloem and Wamelink 2007 ) and utilitarian values ( Hemlin 2006 ; Ernø-Kjølhede and Hansson 2011 ; Bornmann 2013 ) as essential aspects of quality in TDR. Common themes include: (1) the importance of formative and process-oriented evaluation ( Bergmann et al. 2005 ; Hemlin 2006 ; Stige 2009 ); (2) emphasis on the evaluation process itself (not just criteria or outcomes) and reflexive dialog for learning ( Bergmann et al. 2005 ; Boix-Mansilla 2006b ; Klein 2008 ; Oberg 2008 ; Stige, Malterud and Midtgarden 2009 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ; Huutoniemi 2010 ); (3) the need for peers who are experienced and knowledgeable about TDR for fair peer review ( Boix-Mansilla 2006a , b ; Klein 2006 ; Hemlin 2006 ; Scott 2007 ; Aagaard-Hansen and Svedin 2009 ); (4) the inclusion of stakeholders in the evaluation process ( Bergmann et al. 2005 ; Scott 2007 ; Andrén 2010 ); and (5) the importance of evaluations that are built in-context ( Defila and Di Giulio 1999 ; Feller 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; de Jong et al. 2011 ).

While each reviewed approach offers helpful insights, none adequately fulfills the need for a broad and adaptable framework for assessing TDR quality. Wickson and Carew ( 2014 : 257) highlight the need for quality criteria that achieve balance between ‘comprehensiveness and over-prescription’: ‘any emerging quality criteria need to be concrete enough to provide real guidance but flexible enough to adapt to the specificities of varying contexts’. Based on our experience, such a framework should be:

Comprehensive: It should accommodate the main aspects of TDR, as identified in the review.

Time/phase adaptable: It should be applicable across the project cycle.

Scalable: It should be useful for projects of different scales.

Versatile: It should be useful to researchers and collaborators as a guide to research design and management, and to internal and external reviews and assessors.

Comparable: It should allow comparison of quality between and across projects/programs.

Reflexive: It should encourage and facilitate self-reflection and adaptation based on ongoing learning.

4. Synthesis

In this section, we synthesize the key principles and criteria of quality in TDR that were identified in the reviewed literature. Principles are the essential elements of high-quality TDR. Criteria are the conditions that need to be met in order to achieve a principle. We conclude by providing a framework for the evaluation of quality in TDR ( Table 3 ) and guidance for its application.

Table 3. Transdisciplinary research quality assessment framework

a Research problems are the particular topic, area of concern, question to be addressed, challenge, opportunity, or focus of the research activity. Research problems are related to the societal problem but take on a specific focus, or framing, within a societal problem.

b Problem context refers to the social and environmental setting(s) that gives rise to the research problem, including aspects of: location; culture; scale in time and space; social, political, economic, and ecological/environmental conditions; resources and societal capacity available; uncertainty, complexity, and novelty associated with the societal problem; and the extent of agency that is held by stakeholders ( Carew and Wickson 2010 ).

c Words such as ‘appropriate’, ‘suitable’, and ‘adequate’ are used deliberately to allow for quality criteria to be flexible and specific enough to the needs of individual research projects ( Oberg 2008 ).

d Research process refers to the series of decisions made and actions taken throughout the entire duration of the research project and encompassing all aspects of the research project.

e Reflexivity refers to an iterative process of formative, critical reflection on the important interactions and relationships between a research project’s process, context, and product(s).

f In an ex ante evaluation, ‘evidence of’ would be replaced with ‘potential for’.

4.1 Principles of TDR quality

There is a strong trend in the reviewed articles to recognize the need for appropriate measures of scientific quality (usually adapted from disciplinary antecedents), but also to consider broader sets of criteria regarding the societal significance and applicability of research, and the need for engagement and representation of stakeholder values and knowledge. Cash et al. (2002) nicely conceptualize three key aspects of effective sustainability research as: salience (or relevance), credibility, and legitimacy. These are presented as necessary attributes for research to successfully produce transferable, useful information that can cross boundaries between disciplines, across scales, and between science and society. Many of the papers also refer to the principle that high-quality TDR should be effective in terms of contributing to the solution of problems. These four principles are discussed in the following sections.

4.1.1 Relevance

Relevance is the importance, significance, and usefulness of the research project's objectives, process, and findings to the problem context and to society. This includes the appropriateness of the timing of the research, the questions being asked, the outputs, and the scale of the research in relation to the societal problem being addressed. Good-quality TDR addresses important social/environmental problems and produces knowledge that is useful for decision making and problem solving ( Cash et al. 2002 ; Klein 2006 ). As Erno-Kjolhede and Hansson ( 2011 : 140) explain, quality ‘is first and foremost about creating results that are applicable and relevant for the users of the research’. Researchers must demonstrate an in-depth knowledge of and ongoing engagement with the problem context in which their research takes place ( Wickson, Carew and Russell 2006 ; Stige, Malterud and Midtgarden 2009 ; Mitchell and Willets 2009 ). From the early steps of problem formulation and research design through to the appropriate and effective communication of research findings, the applicability and relevance of the research to the societal problem must be explicitly stated and incorporated.

4.1.2 Credibility

Credibility refers to whether or not the research findings are robust and the knowledge produced is scientifically trustworthy. This includes clear demonstration that the data are adequate, with well-presented methods and logical interpretations of findings. High-quality research is authoritative, transparent, defensible, believable, and rigorous. This is the traditional purview of science, and traditional disciplinary criteria can be applied in TDR evaluation to an extent. Additional and modified criteria are needed to address the integration of epistemologies and methodologies and the development of novel methods through collaboration, the broad preparation and competencies required to carry out the research, and the need for reflection and adaptation when operating in complex systems. Having researchers actively engaged in the problem context and including extra-scientific actors as part of the research process helps to achieve relevance and legitimacy of the research; it also adds complexity and heightened requirements of transparency, reflection, and reflexivity to ensure objective, credible research is carried out.

Active reflexivity is a criterion of credibility of TDR that may seem to contradict more rigid disciplinary methodological traditions ( Carew and Wickson 2010 ). Practitioners of TDR recognize that credible work in these problem-oriented fields requires active reflexivity, epitomized by ongoing learning, flexibility, and adaptation to ensure the research approach and objectives remain relevant and fit-to-purpose ( Lincoln 1995 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Mitchell and Willets 2009 ; Andrén 2010 ; Carew and Wickson 2010 ; Wickson and Carew 2014 ). Changes made during the research process must be justified and reported transparently and explicitly to maintain credibility.

The need for critical reflection on potential bias and limitations becomes more important to maintain credibility of research-in-context ( Lincoln 1995 ; Bergmann et al. 2005 ; Mitchell and Willets 2009 ; Stige, Malterud and Midtgarden 2009 ). Transdisciplinary researchers must ensure they maintain a high level of objectivity and transparency while actively engaging in the problem context. This point demonstrates the fine balance between different aspects of quality, in this case relevance and credibility, and the need to be aware of tensions and to seek complementarities ( Cash et al. 2002 ).

4.1.3 Legitimacy

Legitimacy refers to whether the research process is perceived as fair and ethical by end-users. In other words, is it acceptable and trustworthy in the eyes of those who will use it? This requires the appropriate inclusion and consideration of diverse values, interests, and the ethical and fair representation of all involved. Legitimacy may be achieved in part through the genuine inclusion of stakeholders in the research process. Whereas credibility refers to technical aspects of sound research, legitimacy deals with sociopolitical aspects of the knowledge production process and products of research. Do stakeholders trust the researchers and the research process, including funding sources and other sources of potential bias? Do they feel represented? Legitimate TDR ‘considers appropriate values, concerns, and perspectives of different actors’ ( Cash et al. 2002 : 2) and incorporates these perspectives into the research process through collaboration and mutual learning ( Bergmann et al. 2005 ; Chataway, Smith and Wield 2007 ; Andrén 2010 ; Huutoniemi 2010 ). A fair and ethical process is important to uphold standards of quality in all research. However, there are additional considerations that are unique to TDR.

Because TDR happens in-context and often in collaboration with societal actors, the disclosure of researcher perspective and a transparent statement of all partnerships, financing, and collaboration is vital to ensure an unbiased research process ( Lincoln 1995 ; Defila and Di Giulio 1999 ; Boaz and Ashby 2003 ; Barker and Pistrang 2005 ; Bergmann et al. 2005 ). The disclosure of perspective has both internal and external aspects, on one hand ensuring the researchers themselves explicitly reflect on and account for their own position, potential sources of bias, and limitations throughout the process, and on the other hand making the process transparent to those external to the research group who can then judge the legitimacy based on their perspective of fairness ( Cash et al. 2002 ).

TDR includes the engagement of societal actors along a continuum of participation from consultation to co-creation of knowledge ( Brandt et al. 2013 ). Regardless of the depth of participation, all processes that engage societal actors must ensure that inclusion/engagement is genuine, roles are explicit, and processes for effective and fair collaboration are present ( Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Hellstrom 2012 ). Important considerations include: the accurate representation of those involved; explicit and agreed-upon roles and contributions of actors; and adequate planning and procedures to ensure all values, perspectives, and contexts are adequately and appropriately incorporated. Mitchell and Willets (2009) consider cultural competence as a key criterion that can support researchers in navigating diverse epistemological perspectives. This is similar to what Morrow terms ‘social validity’, a criterion that asks researchers to be responsive to and critically aware of the diversity of perspectives and cultures influenced by their research. Several authors highlight that in order to develop this critical awareness of the diversity of cultural paradigms that operate within a problem situation, researchers should practice responsive, critical, and/or communal reflection ( Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Mitchell and Willets 2009 ; Carew and Wickson 2010 ). Reflection and adaptation are important quality criteria that cut across multiple principles and facilitate learning throughout the process, which is a key foundation to TD inquiry.

4.1.4 Effectiveness

We define effective research as research that contributes to positive change in the social, economic, and/or environmental problem context. Transdisciplinary inquiry is rooted in the objective of solving real-world problems ( Klein 2008 ; Carew and Wickson 2010 ) and must have the potential to ( ex ante ) or actually ( ex post ) make a difference if it is to be considered of high quality ( Erno-Kjolhede and Hansson 2011 ). Potential research effectiveness can be indicated and assessed at the proposal stage and during the research process through: a clear and stated intention to address and contribute to a societal problem, the establishment of the research process and objectives in relation to the problem context, and the continuous reflection on the usefulness of the research findings and products to the problem ( Bergmann et al. 2005 ; Lahtinen et al. 2005 ; de Jong et al. 2011 ).

Assessing research effectiveness ex post remains a major challenge, especially in complex transdisciplinary approaches. Conventional and widely used measures of ‘scientific impact’ count outputs such as journal articles and other publications and citations of those outputs (e.g. H index; i10 index). While these are useful indicators of scholarly influence, they are insufficient and inappropriate measures of research effectiveness where research aims to contribute to social learning and change. We need to also (or alternatively) focus on other kinds of research and scholarship outputs and outcomes and the social, economic, and environmental impacts that may result.
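For context, the H index mentioned above is straightforward to compute: it is the largest h such that a researcher has h outputs each cited at least h times. A minimal sketch follows, with invented citation counts.

```python
# Minimal sketch of the conventional h-index; citation counts are invented.

def h_index(citations):
    """Largest h such that h outputs each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # prints 3
```

The passage's point stands: nothing in this calculation reflects whether any of those outputs contributed to social learning or change.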

For many authors, contributing to learning and building of societal capacity are central goals of TDR ( Defila and Di Giulio 1999 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Carew and Wickson 2010 ; Erno-Kjolhede and Hansson 2011 ; Hellstrom 2011 ), and so are considered part of TDR effectiveness. Learning can be characterized as changes in knowledge, attitudes, or skills and can be assessed directly, or through observed behavioral changes and network and relationship development. Some evaluation methodologies (e.g. Outcome Mapping ( Earl, Carden and Smutylo 2001 )) specifically measure these kinds of changes. Other evaluation methodologies consider the role of research within complex systems and assess effectiveness in terms of contributions to changes in policy and practice and resulting social, economic, and environmental benefits ( ODI 2004 , 2012 ; White and Phillips 2012 ; Mayne et al. 2013 ).

4.2 TDR quality criteria

TDR quality criteria and their definitions (explicit or implicit) were extracted from each article and summarized in an Excel database. These criteria were classified into themes corresponding to the four principles identified above, sorted and refined to develop sets of criteria that are comprehensive, mutually exclusive, and representative of the ideas presented in the reviewed articles. Within each principle, the criteria are organized roughly in the sequence of a typical project cycle (e.g. with research design following problem identification and preceding implementation). Definitions of each criterion were developed to reflect the concepts found in the literature, tested and refined iteratively to improve clarity. Rubric statements were formulated based on the literature and our own experience.

The complete set of principles, criteria, and definitions is presented as the TDR Quality Assessment Framework ( Table 3 ).

4.3 Guidance on the application of the framework

4.3.1 Timing

Most criteria can be applied at each stage of the research process, ex ante, mid-term, and ex post, using appropriate interpretations at each stage. Ex ante (i.e. proposal) assessment should focus on a project's explicitly stated intentions and approaches to addressing the criteria. Mid-term indicators will focus on the research process and whether or not it is being implemented in a way that will satisfy the criteria. Ex post assessment should consider whether the research has been done appropriately for the purpose and whether the desired results have been achieved.

4.3.2 New meanings for familiar terms

Many of the terms used in the framework are extensions of disciplinary criteria and share the same or similar names, though with nuanced differences in meaning. The principles and criteria used here extend beyond their disciplinary antecedents and include new concepts and understandings that encapsulate the unique characteristics and needs of TDR, allowing quality in TDR to be defined and evaluated. This is especially true of the criteria related to credibility, which are analogous to traditional disciplinary criteria but place much stronger emphasis on grounding in both the scientific and the social/environmental contexts. We urge readers to pay close attention to the definitions provided in Table 3 as well as the detailed descriptions of the principles in Section 4.1.

4.3.3 Using the framework

The TDR quality framework (Table 3) is designed to assess transdisciplinary research according to a project's purpose; i.e. the criteria must be interpreted with respect to the context and goals of an individual research activity. The framework lists the main criteria synthesized from the literature and our experience, organized within the principles of relevance, credibility, legitimacy, and effectiveness. Within each principle, the criteria are ordered to approximate a typical process of identifying a research problem and designing and implementing research. We recognize that the actual process in any given project will be iterative and will not necessarily follow this sequence, but it provides a logical flow. A concise definition in the second column explains each criterion. A rubric statement in the third column is phrased to be applied once the research has been completed. In most cases, the same statement can be used at the proposal stage with a simple tense change or other minor grammatical revision, except for the criteria relating to effectiveness. As discussed above, assessing effectiveness in terms of outcomes and/or impact requires evaluation research; at the proposal stage, it is only possible to assess potential effectiveness.

Many rubrics offer a set of statements for each criterion that represent progressively higher levels of achievement; the evaluator is asked to select the best match. In practice, this often results in vague and relative statements of merit that are difficult to apply. We have opted instead to present a single rubric statement in absolute terms for each criterion. The assessor then rates how well a project satisfies each criterion on a simple three-point Likert scale. If a project fully satisfies a criterion—that is, if there is evidence that the criterion has been addressed in a way that is coherent, explicit, sufficient, and convincing—it is scored 2: the evaluator is persuaded that the project addressed the criterion in an intentional, appropriate, explicit, and thorough way. A score of 1 is given when there is some evidence that the criterion was considered, but the treatment is incomplete, unintentional, or otherwise unsatisfactory; for example, when a criterion is explicitly discussed but poorly addressed, or when there is some indication that it has been considered and partially addressed but not explicitly, thoroughly, or adequately. A score of 0 indicates that there is no evidence that the criterion was addressed, or that it was addressed in a way that was misguided or inappropriate.
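To make the scoring scheme concrete, the sketch below encodes a hypothetical assessment as plain data and totals the scores per principle. The criterion names are illustrative placeholders (the authoritative list is the framework in Table 3), and summing scores is only one simple way to aggregate; the framework itself leaves interpretation to the evaluator and the project context.

```python
# A minimal sketch of the 0-1-2 rubric scoring described above.
# Criterion names are placeholders; see Table 3 for the real framework.

SCORE_MEANINGS = {
    0: "no evidence, or addressed in a misguided/inappropriate way",
    1: "some evidence, but incomplete or not explicit/thorough",
    2: "coherent, explicit, sufficient, and convincing evidence",
}

def summarize_assessment(assessment: dict) -> dict:
    """Validate 0/1/2 scores and total them per principle."""
    summary = {}
    for principle, criteria in assessment.items():
        for criterion, score in criteria.items():
            if score not in SCORE_MEANINGS:
                raise ValueError(f"{principle}/{criterion}: score must be 0, 1, or 2")
        summary[principle] = sum(criteria.values())
    return summary

# Hypothetical example assessment of a completed project:
assessment = {
    "relevance": {"clear societal problem": 2, "appropriate scope": 1},
    "credibility": {"adequate methods": 2, "transparency": 2},
    "legitimacy": {"genuine engagement": 1, "explicit roles": 0},
    "effectiveness": {"contribution to change": 1},
}

print(summarize_assessment(assessment))
# {'relevance': 3, 'credibility': 4, 'legitimacy': 1, 'effectiveness': 1}
```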

It is critical that the evaluation be done in context, keeping in mind the purpose, objectives, and resources of the project, as well as other contextual information, such as the intended purpose of grant funding or relevant partnerships. Each project is unique in its complexities; what is sufficient or adequate for one criterion in one research project may be insufficient or inappropriate in another. Words such as 'appropriate', 'suitable', and 'adequate' are used deliberately to encourage application of the criteria to suit the needs of individual research projects (Öberg 2008). Evaluators must take the objectives of the research project and the problem context within which it is carried out as the benchmark for evaluation. For example, we tested the framework with RRU master's theses. These are typically small projects with limited scope, carried out by a single researcher. Expectations for 'effective communication', 'competencies', or 'effective collaboration' are very different in such projects than in a multi-year, multi-partner CIFOR project. All criteria should be evaluated through the lens of the stated research objectives, research goals, and context.

The systematic review identified relevant articles from a diverse literature that nonetheless share a strong central focus. Collectively, they highlight the complexity of contemporary social and environmental problems and emphasize that addressing such issues requires combinations of new knowledge and innovation, action, and engagement. Traditional disciplinary research has often failed to provide solutions because it cannot adequately cope with complexity. New forms of research are proliferating, crossing disciplinary and academic boundaries, integrating methodologies, and engaging a broader range of research participants, as a way to make research more relevant and effective. In theory, such approaches offer great potential to contribute to transformative change. However, because they are new, multidimensional, complex, and often unique, it has been difficult to know what works, how, and why. In the absence of the kinds of methodological and quality standards that guide disciplinary research, there are no generally agreed criteria for evaluating such research.

Criteria are needed to guide and help ensure that TDR is of high quality, to inform the teaching and learning of new researchers, and to encourage and support the further development of transdisciplinary approaches. The lack of a standard and broadly applicable framework for the evaluation of quality in TDR is perceived to cause an implicit or explicit devaluation of high-quality TDR, or may prevent quality TDR from being done at all. There is a demonstrated need for an operationalized understanding of quality that addresses the characteristics, contributions, and challenges of TDR. The reviewed articles approach the topic from different perspectives and fields of study, using different terminology for similar concepts, or the same terminology for different concepts, and with unique ways of organizing and categorizing the dimensions and criteria of quality. We have synthesized and organized these concepts as key TDR principles and criteria in a TDR Quality Framework, presented as an evaluation rubric. We have tested the framework on a set of master's theses and found it to be broadly applicable, usable, and useful both for analyzing individual projects and for comparing projects within the set. We anticipate that further testing with a wider range of projects will help refine and improve the definitions and rubric statements. We found that the three-point Likert scale (0–2) offered sufficient variability for our purposes, and that rating is less subjective than with relative rubric statements. Adding more points to the scale could increase rating precision and sensitivity, for example when comparing proposals within a particular grant call.

Many of the articles we reviewed emphasize the importance of the evaluation process itself. The formative, developmental role of evaluation in TDR is seen as essential to the goals of mutual learning as well as to ensure that research remains responsive and adaptive to the problem context. In order to adequately evaluate quality in TDR, the process, including who carries out the evaluations, when, and in what manner, must be revised to be suitable to the unique characteristics and objectives of TDR. We offer this review and synthesis, along with a proposed TDR quality evaluation framework, as a contribution to an important conversation. We hope that it will be useful to researchers and research managers to help guide research design, implementation and reporting, and to the community of research organizations, funders, and society at large. As underscored in the literature review, there is a need for an adapted research evaluation process that will help advance problem-oriented research in complex systems, ultimately to improve research effectiveness.

This work was supported by funding from the Canada Research Chairs program. Funding support from the Canadian Social Sciences and Humanities Research Council (SSHRC) and technical support from the Evidence Based Forestry Initiative of the Centre for International Forestry Research (CIFOR), funded by UK DfID are also gratefully acknowledged.

Supplementary data are available online.

The authors thank Barbara Livoreil and Stephen Dovers for valuable comments and suggestions on the protocol and Gillian Petrokofsky for her review of the protocol and a draft version of the manuscript. Two anonymous reviewers and the editor provided insightful critique and suggestions in two rounds that have helped to substantially improve the article.

Conflict of interest statement . None declared.

1. ‘Stakeholders’ refers to individuals and groups of societal actors who have an interest in the issue or problem that the research seeks to address.

2. The terms ‘quality’ and ‘excellence’ are often used in the literature with similar meaning. Technically, ‘excellence’ is a relative concept, referring to the superiority of a thing compared to other things of its kind. Quality is an attribute or a set of attributes of a thing. We are interested in what these attributes are or should be in high-quality research. Therefore, the term ‘quality’ is used in this discussion.

3. The terms ‘science’ and ‘research’ are not always clearly distinguished in the literature. We take the position that ‘science’ is a more restrictive term that is properly applied to systematic investigations using the scientific method. ‘Research’ is a broader term for systematic investigations using a range of methods, including but not restricted to the scientific method. We use the term ‘research’ in this broad sense.

Aagaard-Hansen, J. and Svedin, U. (2009) 'Quality Issues in Cross-disciplinary Research: Towards a Two-pronged Approach to Evaluation', Social Epistemology, 23/2: 165–76. DOI: 10.1080/02691720902992323

Andrén, S. (2010) 'A Transdisciplinary, Participatory and Action-Oriented Research Approach: Sounds Nice but What Do You Mean?' [unpublished working paper]. Human Ecology Division, Lund University, 1–21. <https://lup.lub.lu.se/search/publication/1744256>

Australian Research Council (ARC) (2012) ERA 2012 Evaluation Handbook: Excellence in Research for Australia. Australia: ARC. <http://www.arc.gov.au/pdf/era12/ERA%202012%20Evaluation%20Handbook_final%20for%20web_protected.pdf>

Balsiger, P. W. (2004) 'Supradisciplinary Research Practices: History, Objectives and Rationale', Futures, 36/4: 407–21.

Bantilan, M. C. et al. (2004) 'Dealing with Diversity in Scientific Outputs: Implications for International Research Evaluation', Research Evaluation, 13/2: 87–93.

Barker, C. and Pistrang, N. (2005) 'Quality Criteria under Methodological Pluralism: Implications for Conducting and Evaluating Research', American Journal of Community Psychology, 35/3–4: 201–12.

Bergmann, M. et al. (2005) Quality Criteria of Transdisciplinary Research: A Guide for the Formative Evaluation of Research Projects. Central report of Evalunet – Evaluation Network for Transdisciplinary Research. Frankfurt am Main, Germany: Institute for Social-Ecological Research. <http://www.isoe.de/ftp/evalunet_guide.pdf>

Boaz, A. and Ashby, D. (2003) Fit for Purpose? Assessing Research Quality for Evidence Based Policy and Practice.

Boix-Mansilla, V. (2006a) 'Symptoms of Quality: Assessing Expert Interdisciplinary Work at the Frontier: An Empirical Exploration', Research Evaluation, 15/1: 17–29.

Boix-Mansilla, V. (2006b) 'Conference Report: Quality Assessment in Interdisciplinary Research and Education', Research Evaluation, 15/1: 69–74.

Bornmann, L. (2013) 'What is Societal Impact of Research and How Can It Be Assessed? A Literature Survey', Journal of the American Society for Information Science and Technology, 64/2: 217–33.

Brandt, P. et al. (2013) 'A Review of Transdisciplinary Research in Sustainability Science', Ecological Economics, 92: 1–15.

Cash, D., Clark, W. C., Alcock, F., Dickson, N. M., Eckley, N. and Jäger, J. (2002) Salience, Credibility, Legitimacy and Boundaries: Linking Research, Assessment and Decision Making. KSG Working Paper Series RWP02-046. <http://ssrn.com/abstract=372280>

Carew, A. L. and Wickson, F. (2010) 'The TD Wheel: A Heuristic to Shape, Support and Evaluate Transdisciplinary Research', Futures, 42/10: 1146–55.

Collaboration for Environmental Evidence (CEE) (2013) Guidelines for Systematic Review and Evidence Synthesis in Environmental Management, Version 4.2. <www.environmentalevidence.org/Documents/Guidelines/Guidelines4.2.pdf>

Chandler, J. (2014) Methods Research and Review Development Framework: Policy, Structure, and Process. <http://methods.cochrane.org/projects-developments/research>

Chataway, J., Smith, J. and Wield, D. (2007) 'Shaping Scientific Excellence in Agricultural Research', International Journal of Biotechnology, 9/2: 172–87.

Clark, W. C. and Dickson, N. (2003) 'Sustainability Science: The Emerging Research Program', PNAS, 100/14: 8059–61.

Consultative Group on International Agricultural Research (CGIAR) (2011) A Strategy and Results Framework for the CGIAR. <http://library.cgiar.org/bitstream/handle/10947/2608/Strategy_and_Results_Framework.pdf?sequence=4>

Cloete, N. (1997) 'Quality: Conceptions, Contestations and Comments', African Regional Consultation Preparatory to the World Conference on Higher Education, Dakar, Senegal, 1–4 April 1997.

Defila, R. and Di Giulio, A. (1999) 'Evaluating Transdisciplinary Research', Panorama: Swiss National Science Foundation Newsletter, 1: 4–27. <www.ikaoe.unibe.ch/forschung/ip/Specialissue.Pano.1.99.pdf>

Donovan, C. (2008) 'The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research', in Reforming the Evaluation of Research, New Directions for Evaluation, 118: 47–60.

Earl, S., Carden, F. and Smutylo, T. (2001) Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa, ON: International Development Research Centre.

Ernø-Kjølhede, E. and Hansson, F. (2011) 'Measuring Research Performance during a Changing Relationship between Science and Society', Research Evaluation, 20/2: 130–42.

Feller, I. (2006) 'Assessing Quality: Multiple Actors, Multiple Settings, Multiple Criteria: Issues in Assessing Interdisciplinary Research', Research Evaluation, 15/1: 5–15.

Gaziulusoy, A. İ. and Boyle, C. (2013) 'Proposing a Heuristic Reflective Tool for Reviewing Literature in Transdisciplinary Research for Sustainability', Journal of Cleaner Production, 48: 139–47.

Gibbons, M. et al. (1994) The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage Publications.

Hellstrom, T. (2011) 'Homing in on Excellence: Dimensions of Appraisal in Center of Excellence Program Evaluations', Evaluation, 17/2: 117–31.

Hellstrom, T. (2012) 'Epistemic Capacity in Research Environments: A Framework for Process Evaluation', Prometheus, 30/4: 395–409.

Hemlin, S. and Rasmussen, S. B. (2006) 'The Shift in Academic Quality Control', Science, Technology & Human Values, 31/2: 173–98.

Hessels, L. K. and Van Lente, H. (2008) 'Re-thinking New Knowledge Production: A Literature Review and a Research Agenda', Research Policy, 37/4: 740–60.

Huutoniemi, K. (2010) 'Evaluating Interdisciplinary Research', in Frodeman, R., Klein, J. T. and Mitcham, C. (eds) The Oxford Handbook of Interdisciplinarity, pp. 309–20. Oxford: Oxford University Press.

de Jong, S. P. L. et al. (2011) 'Evaluation of Research in Context: An Approach and Two Cases', Research Evaluation, 20/1: 61–72.

Jahn, T. and Keil, F. (2015) 'An Actor-Specific Guideline for Quality Assurance in Transdisciplinary Research', Futures, 65: 195–208.

Kates, R. (2000) 'Sustainability Science', World Academies Conference: Transition to Sustainability in the 21st Century, 18 May 2000, Tokyo, Japan.

Klein, J. T. (2006) 'Afterword: The Emergent Literature on Interdisciplinary and Transdisciplinary Research Evaluation', Research Evaluation, 15/1: 75–80.

Klein, J. T. (2008) 'Evaluation of Interdisciplinary and Transdisciplinary Research: A Literature Review', American Journal of Preventive Medicine, 35/2 (Suppl.): S116–23. DOI: 10.1016/j.amepre.2008.05.010

Royal Netherlands Academy of Arts and Sciences, Association of Universities in the Netherlands, Netherlands Organization for Scientific Research (KNAW) (2009) Standard Evaluation Protocol 2009–2015: Protocol for Research Assessment in the Netherlands. Netherlands: KNAW. <www.knaw.nl/sep>

Komiyama, H. and Takeuchi, K. (2006) 'Sustainability Science: Building a New Discipline', Sustainability Science, 1: 1–6.

Lahtinen, E. et al. (2005) 'The Development of Quality Criteria for Research: A Finnish Approach', Health Promotion International, 20/3: 306–15.

Lang, D. J. et al. (2012) 'Transdisciplinary Research in Sustainability Science: Practice, Principles, and Challenges', Sustainability Science, 7/S1: 25–43.

Lincoln, Y. S. (1995) 'Emerging Criteria for Quality in Qualitative and Interpretive Research', Qualitative Inquiry, 1/3: 275–89.

Mayne, J. and Stern, E. (2013) Impact Evaluation of Natural Resource Management Research Programs: A Broader View. Canberra: Australian Centre for International Agricultural Research.

Meyrick, J. (2006) 'What is Good Qualitative Research? A First Step towards a Comprehensive Approach to Judging Rigour/Quality', Journal of Health Psychology, 11/5: 799–808.

Mitchell, C. A. and Willetts, J. R. (2009) 'Quality Criteria for Inter- and Trans-Disciplinary Doctoral Research Outcomes', prepared for ALTC Fellowship: Zen and the Art of Transdisciplinary Postgraduate Studies. Sydney: Institute for Sustainable Futures, University of Technology.

Morrow, S. L. (2005) 'Quality and Trustworthiness in Qualitative Research in Counseling Psychology', Journal of Counseling Psychology, 52/2: 250–60.

Nowotny, H., Scott, P. and Gibbons, M. (2001) Re-Thinking Science. Cambridge: Polity.

Nowotny, H., Scott, P. and Gibbons, M. (2003) ''Mode 2' Revisited: The New Production of Knowledge', Minerva, 41: 179–94.

Öberg, G. (2008) 'Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground', Higher Education, 57/4: 405–15.

Ozga, J. (2007) 'Co-production of Quality in the Applied Education Research Scheme', Research Papers in Education, 22/2: 169–81.

Ozga, J. (2008) 'Governing Knowledge: Research Steering and Research Quality', European Educational Research Journal, 7/3: 261–72.

OECD (2012) Frascati Manual, 6th edn. <http://www.oecd.org/innovation/inno/frascatimanualproposedstandardpracticeforsurveysonresearchandexperimentaldevelopment6thedition>

Overseas Development Institute (ODI) (2004) 'Bridging Research and Policy in International Development: An Analytical and Practical Framework', ODI Briefing Paper. <http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/198.pdf>

Overseas Development Institute (ODI) (2012) RAPID Outcome Assessment Guide. <http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/7815.pdf>

Pullin, A. S. and Stewart, G. B. (2006) 'Guidelines for Systematic Review in Conservation and Environmental Management', Conservation Biology, 20/6: 1647–56.

Research Excellence Framework (REF) (2011) Research Excellence Framework 2014: Assessment Framework and Guidance on Submissions, Reference REF 02.2011. UK: REF. <http://www.ref.ac.uk/pubs/2011-02/>

Scott, A. (2007) 'Peer Review and the Relevance of Science', Futures, 39/7: 827–45.

Spaapen, J., Dijstelbloem, H. and Wamelink, F. (2007) Evaluating Research in Context: A Method for Comprehensive Assessment. Netherlands: Consultative Committee of Sector Councils for Research and Development. <http://www.qs.univie.ac.at/fileadmin/user_upload/qualitaetssicherung/PDF/Weitere_Aktivit%C3%A4ten/Eric.pdf>

Spaapen, J. and Van Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211–18.

Stige, B., Malterud, K. and Midtgarden, T. (2009) 'Toward an Agenda for Evaluation of Qualitative Research', Qualitative Health Research, 19/10: 1504–16.

td-net (2014) td-net bibliography. <www.transdisciplinarity.ch/e/Bibliography/new.php>

Tertiary Education Commission (TEC) (2012) Performance-Based Research Fund: Quality Evaluation Guidelines 2012. New Zealand: TEC. <http://www.tec.govt.nz/Documents/Publications/PBRF-Quality-Evaluation-Guidelines-2012.pdf>

Tijssen, R. J. W. (2003) 'Quality Assurance: Scoreboards of Research Excellence', Research Evaluation, 12: 91–103.

White, H. and Phillips, D. (2012) 'Addressing Attribution of Cause and Effect in Small n Impact Evaluations: Towards an Integrated Framework', Working Paper 15. New Delhi: International Initiative for Impact Evaluation.

Wickson, F. and Carew, A. (2014) 'Quality Criteria and Indicators for Responsible Research and Innovation: Learning from Transdisciplinarity', Journal of Responsible Innovation, 1/3: 254–73.

Wickson, F., Carew, A. and Russell, A. W. (2006) 'Transdisciplinary Research: Characteristics, Quandaries and Quality', Futures, 38/9: 1046–59.


Quality research needs good working conditions

Rima-Maria Rahal, Susann Fiedler, Adeyemi Adetula, Ronnie P.-A. Berntsson, Ulrich Dirnagl, Gordon B. Feld, Christian J. Fiebach, Samsad Afrin Himi, Aidan J. Horner, Tina B. Lonsdorf, Felix Schönbrodt, Miguel Alejandro A. Silan, Michael Wenzler and Flávio Azevedo

Nature Human Behaviour, 7: 164–167 (2023). Published 8 February 2023. DOI: https://doi.org/10.1038/s41562-022-01508-2

High-quality research requires appropriate employment and working conditions for researchers. However, many academic systems rely on short-term employment contracts, biased selection procedures and misaligned incentives, which hinder research quality and progress. We discuss ways to redesign academic systems, emphasizing the role of permanent employment.


A Step-To-Step Guide to Write a Quality Research Article

Amit Kumar Tyagi, Rohit Bansal, Anshu and Sathian Dananjayan

Conference paper in Abraham, A., Pllana, S., Casalino, G., Ma, K. and Bajaj, A. (eds) Intelligent Systems Design and Applications (ISDA 2022), Lecture Notes in Networks and Systems, vol. 717, pp. 374–383. Springer, Cham. First published 1 June 2023. DOI: https://doi.org/10.1007/978-3-031-35510-3_36

Publishing research articles has become a worldwide expectation at almost every university: millions of articles are published annually in thousands of journals across sectors such as medicine, engineering and science. Yet few researchers follow sound, fundamental criteria for writing a quality research article, and many published articles simply duplicate existing information, wasting available resources. This happens because many authors do not know, or do not follow, a correct approach to writing a valid and influential paper. Motivated by these issues facing both new and existing researchers, this article presents a systematic approach that can help researchers produce a quality research article, whether for international conferences (such as CVPR, ICML or NeurIPS), high-impact international journals, or white papers. Publishing good articles improves a researcher's profile, and future researchers can build on such work by citing it. This article therefore provides sufficient information for researchers to write a simple, effective and impressive research article in their area of interest.


Scope of the Work

As the authors come from the computer science stream, they have tried to make this article useful for all streams; however, most of the examples (situations, languages, datasets, etc.) relate to computer science disciplines. This work can be used as a reference for writing good quality papers for international conferences and journals.

Disclaimer. Links and papers provided in the work are given only as examples. Any omitted citation or link is not intentional.


Creating a Culture of Quality

by Ashwin Srinivasan and Bryan Kurey

Financial incentives don’t reduce errors. Employees must be passionate about eliminating mistakes.

In most industries, quality has never mattered more. New technologies have empowered customers to seek out and compare an endless array of products from around the globe. Shoppers can click to find objective data compiled by experts at organizations such as Consumer Reports and J.D. Power and go online to read user-generated reviews at sites such as Amazon; together, these sources provide an early warning system that alerts the public to quality problems. And when customers are unhappy with a product or service, they can use social media to broadcast their displeasure. In surveys, 26% of consumers say they have used social media to air grievances about a company and its products. And this issue isn’t limited to the consumer space—75% of B2B customers say they rely on word of mouth, including social media, when making purchase decisions.

Ashwin Srinivasan is a managing director, and Bryan Kurey is a senior director, at CEB.



A guide from the UKCIS Evidence Group

Research comes in many shapes and forms. It employs qualitative, quantitative and/or mixed methods depending on the research question being asked. Data also comes in many shapes and forms and not all of it qualifies as ‘research’ or ‘evidence’. Without trying to summarise the vast published literature on the nature, quality, conduct and uses of research, we note key points that research users should have in mind when faced with a new report or article. Our focus is on research relevant to the UK Council for Internet Safety. We have in mind research with children, although other populations may also be studied.

Good quality research provides evidence that is robust, ethical, stands up to scrutiny and can be used to inform policy making. It should adhere to principles of professionalism, transparency, accountability and auditability.

Design and data collection 

  • Ensure that the research design and methods are appropriate for the research topic or question.
  • Ensure that sampling is fit for purpose – e.g. for a survey, use a sufficiently large and representative sample to enable meaningful statistical analysis (a worked sample-size check follows this list); do not draw quantitative conclusions from qualitative data; and ensure that qualitative interviewees are purposively selected to reflect the range and diversity of the population of interest.
  • Employ researchers trained to work with children (or vulnerable groups), as appropriate, and ensure that sensitive issues are ethically addressed.
  • Be aware of the strengths and limitations of the methodology, data collected or findings reported.
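As flagged in the sampling point above, the sketch below gives a minimal worked example of how sample size limits the precision of survey percentages, under the usual simple-random-sampling assumption with maximum variance (p = 0.5). The sample sizes are illustrative, not a recommendation.

```python
# Approximate 95% margin of error for a proportion estimated from a
# simple random sample (illustrative figures, not from any real survey).
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(n, round(100 * margin_of_error(n), 1))
# 100  9.8   <- a headline percentage could be off by ~10 points
# 400  4.9
# 1000 3.1
```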

Reporting 

  • Report the actual questions asked of children and the exact dates and circumstances of data collection.
  • Publish the findings of the research along with an account of its methodology (sufficient for another researcher to replicate the study).
  • If key percentages are to be released to the press, this should be linked to publication of the report (and not precede it).
  • Report findings (e.g. percentages) with a clear statement of the sample or subsample (e.g. children aged 9-16; girls who use the internet aged 9-16).
  • Qualitative reports should present the range and diversity of views expressed.
  • Report comparisons and trends conscientiously so as not to exaggerate minor differences (e.g. use statistical analysis) or confuse correlation with causation (e.g. higher internet use plus lower grades may not mean internet use reduces grades) or other interpretative errors.
  • Avoid describing findings in headline-grabbing, exaggerated or panicky ways.

Accountability

  • Disclose any funding sources and any potential conflicts of interest.
  • Ensure the full research report is accessible and/or provide contact details for interested parties to follow up with the researchers and ask about the research conduct or data analysis.




Quality versus quantity: assessing individual research performance

José-Alain Sahel

1 Institut de la Vision, INSERM U968, Université Pierre et Marie Curie - Paris VI, CNRS UMR 7210, 17 rue Moreau, 75012 Paris, France

2 CIC Quinze-Vingts, INSERM CIC503, CHNO des Quinze-Vingts, 28 rue de Charenton, 75012 Paris, France

3 Fondation Ophtalmologique Adolphe de Rothschild, 75019 Paris, France

4 Institute of Ophthalmology, University College London (UCL), UK

Evaluating individual research performance is a complex task that ideally examines productivity, scientific impact, and research quality, a task that metrics alone have been unable to achieve. In January 2011, the French Academy of Sciences published a report on current bibliometric (citation metric) methods for evaluating individual researchers, as well as recommendations for the integration of quality assessment. Here, we draw on key issues raised by this report and comment on the suggestions for improving existing research evaluation practices.

BALANCING QUANTITY AND QUALITY

Evaluating individual scientific performance is an essential component of research assessment, and outcomes of such evaluations can play a key role in institutional research strategies, including funding schemes, hiring, firing, and promotions. However, there is little consensus and no internationally accepted standards by which to measure scientific performance objectively. Thus, the evaluation of individual researchers remains a notoriously difficult process with no standard solution. Marcus Tullius Cicero once wrote, "Non enim numero haec iudicantur, sed pondere" (1). Translation: the number does not matter, the quality does. In line with Cicero's outlook on quality versus quantity, the French Academy of Sciences analyzed current bibliometric (citation metric) methods for evaluating individual researchers and made recommendations in January 2011 for the integration of quality assessment (2). The essence of the report is discussed in this Commentary.

Evaluation by experts in the field has been the primary means of assessing a researcher's performance, although it can be biased by subjective factors, such as conflicts of interest, disciplinary or local favoritism, insufficient competence in the research area, or superficial examination. To make expert evaluation more objective, a quantitative analytical tool known as bibliometry (science metrics or citation metrics) has gradually been integrated into evaluation processes (Fig. 1). Bibliometry started with the idea of an impact factor, first mentioned in Science in 1955 (3), and has evolved to weigh several aspects of published work, including journal impact factor, total number of citations, average number of citations per paper, average number of citations per author, average number of citations per year, the number of authors per paper, Hirsch's h-index, Egghe's g-index, and the contemporary h-index. The development of science metrics has accelerated recently with the availability of online databases used to calculate bibliometric indicators, such as the Thomson Reuters Web of Science (http://thomsonreuters.com/), Scopus (http://www.scopus.com/home.url), and Google Scholar (http://scholar.google.com/). Within the past decade, metrics have secured a foothold in the evaluation of individual, team, and institutional research because their use appears easier and faster than qualitative assessment by experts. Because of this ease of use, however, bibliometry tends to be applied in excessive and even incorrect ways, especially as a standalone analysis.

[Figure 1. Can individual research performance be summarized by numbers? Image courtesy of D. Frangov.]

The French Academy of Sciences (FAS) is concerned that some current evaluation practices, in particular the uncritical use of publication metrics, might be inadequate for evaluating individual scientific performance. In its recent review (2), the FAS addressed the advantages and limitations of the main existing quantitative indicators, stressed that judging the quality of a scientific work in terms of the conceptual and technological innovation of the research is essential, and reaffirmed its position on the decisive role that experts must play in research assessment (2, 4). It also strongly recommended that additional criteria be taken into consideration when assessing individual research performance. These criteria include teaching, mentoring, participation in collective tasks, and collaboration-building, in addition to quantitative parameters that are not measured by bibliometrics, such as the number of patents, speaker invitations, international contracts, distinctions, and technology transfers. The best course of action appears to be a balanced combination of the qualitative (experts) and the quantitative (bibliometrics).

BIBLIOMETRICS: INDICATORS OR NOT?

Bibliometrics use mathematical and statistical methods to measure scientific output; thus, they provide a quantitative—not a qualitative—assessment of individual research performance. The most commonly used bibliometric indicators, as well as their strengths and weaknesses, are described below.

Impact factor

The impact factor, a major quantitative indicator of the quality and popularity of a journal, is defined as the average number of citations, over a given period, of the articles published in that journal. A journal's impact factor is calculated by dividing the number of current-year citations to items published in the journal during the previous two years by the number of such items (5). According to the FAS, the impact factor of the journals in which a researcher has published is a useful but highly controversial indicator of individual performance (2). The most common issue is variation among subject areas; in general, a basic science journal will have a higher average impact factor than journals in specialized or applied areas. Nor does a journal's impact factor reflect the quality of an individual article within it, because citations of an individual paper can be much higher or lower than what might be expected on the basis of the journal's impact factor (2, 6, 7). In addition, self-citations are not corrected for when calculating the impact factor (6). On account of these limitations, the FAS considers the tendency of certain researchers to organize their work and publication policy around the journal in which they intend to publish to be a dangerous practice. In extreme situations, such journal-centric behavior can trigger scientific misconduct. The FAS notes an increase in the practice of using journal impact factors to evaluate individual researchers for career advancement in some European countries, such as France, and in certain disciplines, such as biology and medicine (2).
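As a toy illustration of the two-year calculation just described (the figures below are invented, not real journal data):

```python
# A worked example of the two-year impact factor calculation described above.
citations_in_2010_to_2008_2009_items = 420   # citations received in the census year
items_published_2008_2009 = 150              # citable items in the two prior years

impact_factor_2010 = citations_in_2010_to_2008_2009_items / items_published_2008_2009
print(round(impact_factor_2010, 2))  # 2.8
```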

Number of citations

The number of times an author has been cited is an important bibliometric indicator; however, it has several important limitations. First, the citation count depends on the quality of the database used. Second, it does not consider where the author is located in the author list. Third, articles can sometimes accumulate a considerable number of citations for reasons unrelated to the quality or importance of their scientific content. Fourth, articles published in prestigious journals are privileged compared with articles of equal quality published in journals of average notoriety. Fifth, depending on cultural factors, advantage can be given to citations of scientists from the same country, to scientists from other countries (in particular Americans, as is often the case in France), or to articles written in English rather than in French, for example (2). For these cultural reasons, novel and important papers may attract little attention for several years after their publication. Lastly, citation counts also tend to be greater for review articles than for original research articles. Self-citations do not reflect the impact of a publication and should therefore not be included in a citation analysis intended to assess the scientific achievement of a scientist (8).
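One of these caveats, excluding self-citations, is easy to make concrete. The sketch below uses a deliberately simplified definition (a citation counts as a self-citation if the citing and cited papers share any author); real analyses depend on the coverage and author disambiguation of the underlying bibliographic database, and the names here are invented.

```python
# Counting citations to a paper while skipping self-citations,
# defined here (simplistically) as any citing paper that shares
# an author with the cited paper.

def count_non_self_citations(cited_authors, citing_author_lists):
    """citing_author_lists holds one author collection per citing paper."""
    cited = set(cited_authors)
    return sum(1 for citers in citing_author_lists if not (cited & set(citers)))

# Hypothetical example: 4 papers cite the target paper; 1 shares an author.
cited_authors = {"Dupont", "Martin"}
citing = [{"Lee"}, {"Dupont", "Chen"}, {"Okafor"}, {"Silva", "Ng"}]
print(count_non_self_citations(cited_authors, citing))  # 3
```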

New indicators (h-index, g-index)

Recently, new bibliometric indicators born of databases that index articles and their citations were introduced to address the need to evaluate individual researchers objectively. In 2005, Jorge Hirsch proposed the h-index as a tool for quantifying the scientific impact of an individual researcher ( 9 ). The h-index of a scientist is the number of papers co-authored by the researcher with at least h citations each; for example, an h-index of 20 means that the researcher has co-authored 20 papers that have each been cited at least 20 times. This index has the major advantage of simultaneously measuring the scientist’s productivity (number of papers published over the years) and the cumulative impact of the scientist’s output (the number of citations each paper receives). Although the h-index is preferable to other standard single-number criteria (such as the total number of papers, total number of citations, or number of citations per paper), it has several disadvantages. First, it varies across scientific fields; h-indices in the life sciences, for example, are much higher than in physics ( 9 ). Second, it favors senior researchers because it never decreases with age, even if an individual discontinues scientific research ( 10 ). Third, citation databases yield different h-indices for the same author at the same time as a result of differences in coverage ( 11 , 12 ). Fourth, the h-index does not consider the context of citations (such as negative findings or retracted works). Fifth, it is strongly affected by the total number of papers, which may undervalue scientists with short careers and scientists who have published only a few, albeit notable, papers. The h-index also integrates every publication of an individual researcher, regardless of his or her role in authorship, and does not single out articles of pathbreaking or especially influential scientific impact. The contemporary h-index (hc-index), suggested by Sidiropoulos et al. ( 10 ), takes into account the age of each article and weights recently published work more heavily. As such, the hc-index may offer a fairer comparison between junior and senior academics than the regular h-index ( 13 ).
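
A minimal sketch of the h-index computation may help fix the definition (illustrative Python, not code from Hirsch’s paper; the sample citation counts are invented):

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least
    h citations each (Hirsch's definition)."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers each have >= rank citations
        else:
            break
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4 -> four papers cited at least 4 times
```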

The g-index was introduced ( 14 ) to better reflect quality by giving more weight to highly cited articles. The g-index of a scientist is the highest number g of articles (ordered by decreasing citation count) that together received at least g² citations; for example, a g-index of 20 means that the researcher’s 20 most cited publications have received a total of at least 400 citations. Egghe pointed out that the g-index value will always be at least as high as the h-index value, making it easier to differentiate the performance of authors. If Researcher A has published 10 articles and each has received 4 citations, the researcher’s h-index is 4. If Researcher B has also written 10 articles and 9 of them have received 4 citations each, that researcher’s h-index is also 4, regardless of how many citations the tenth article has received. However, if the tenth article has received 20 citations, the g-index of Researcher B would be 6; for 50 citations, the g-index would be 9 ( 15 ). Thus, one or several highly cited articles can raise the final g-index of an individual researcher, thereby highlighting the impact of authors with a few highly influential papers.
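
Egghe’s worked example can be checked with a similarly minimal sketch (again illustrative Python rather than code from the cited sources), reusing the h_index function from the sketch above:

```python
def g_index(citations):
    """Largest g such that the g most-cited papers together received
    at least g**2 citations (Egghe's definition)."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites  # citations accumulated by the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

researcher_a = [4] * 10        # 10 papers, 4 citations each
researcher_b = [4] * 9 + [20]  # 9 papers with 4 citations, one with 20

print(h_index(researcher_a), h_index(researcher_b))  # 4 4 (a tie)
print(g_index(researcher_b))                         # 6
print(g_index([4] * 9 + [50]))                       # 9
```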

CHOOSING AN INDICATOR

Bibliometric indicators are easy to use because they rely on simple calculations. However, it is important to realize that purely bibliometric approaches are inadequate because no single indicator can summarize the quality of a researcher’s scientific performance. Using a set of metrics (such as number of citations, h-index, or g-index) gives a more accurate estimate of a researcher’s scientific impact. At the same time, metrics should not be made too complex, because they can become a source of conceptual errors that are then difficult to identify. The FAS discourages the use of metrics as a standalone evaluation tool, the use of only one bibliometric indicator, the use of a journal’s impact factor to evaluate the quality of an individual article, the neglect of differences between scientific fields and subfields, and the disregard of author placement in the case of multiple authorship ( 2 ).

In 2004, INSERM (the French National Institute of Health and Medical Research) introduced bibliometrics as part of its research assessment procedures. Bibliometric analysis is based on publication indicators that are validated by the evaluated researchers and are at the disposal of the evaluation committees. In addition to the basic indicators (citation numbers and journal impact factor), the measures used by INSERM include the number of publications in the top 10% of journals ranked by decreasing impact factor in a given field (top 10% impact factor, according to Thomson Reuters Journal Citation Reports) and the number of publications from an individual researcher that fall within the top 10% of articles (ranked by total citations) in annual cohorts from each of the 22 disciplines defined by Thomson Reuters Essential Science Indicators. All indicators take into account the research field, the year of publication, and the author’s position in the author list, assigning an index of 1 to the first or last author, 0.5 to the second or next-to-last author, and 0.25 to all other author positions. Notably, this author index can be used only in biomedical research, because in other fields the ordering of authors may follow different rules; in physics, for example, authors are listed in alphabetical order.
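
The author-position weighting described above is simple enough to state directly in code. The sketch below assumes the rule exactly as summarized in the text (it is an illustration, not INSERM’s actual implementation):

```python
def author_weight(position, n_authors):
    """Author-position index as described for INSERM's biomedical
    indicators: 1 for first or last author, 0.5 for second or
    next-to-last, 0.25 for every other position."""
    if position in (1, n_authors):
        return 1.0
    if position in (2, n_authors - 1):
        return 0.5
    return 0.25

# Hypothetical six-author paper, positions 1 through 6:
print([author_weight(p, 6) for p in range(1, 7)])
# [1.0, 0.5, 0.25, 0.25, 0.5, 1.0]
```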

Bibliometric indicator interpretation requires competent expert knowledge of metrics, and in order to ensure good practice, INSERM trains members of its evaluation committees in state-of-the-art scientometric methods. INSERM has noted that the correlation between the scores given by members of evaluation committees and any single bibliometric indicator is rather low. For example, the articles of all teams received citations irrespective of the journal in which they were published, with only a low correlation between the journal impact factor and the number of times each publication was cited. No correlation was found between the journal impact factor and individual publication citations, or between the “Top 1%” publications and the impact factor ( 16 ). The INSERM analysis emphasizes that each indicator has its advantages and limitations, and care must be taken not to treat any of them alone as a “surrogate” marker of team performance. Several indicators must be taken into account when evaluating the overall output of a research team. The use of bibliometric indicators requires great vigilance; but, according to the INSERM experience, metrics enrich the evaluation committees’ debates about the scientific quality of team performance ( 16 ).

As reported by the FAS, bibliometric practices vary considerably from country to country. A worldwide Nature survey ( 17 ) found that 70% of the interviewed academic scientists, department heads, and other administrators believe that bibliometrics are used for recruitment and promotions, and 63% of them consider the use of these measures to be inadequate. Many Anglo-Saxon countries use bibliometrics to evaluate the performance of universities and research organizations, whereas for hiring and promotions the curriculum vitae, interview process, and letters of recommendation “count” more than the bibliometric indicators ( 2 ). In contrast, metrics are used for recruiting in Chinese universities and in Asian universities more generally, although a movement toward the use of letters of recommendation is currently underway ( 2 ). In France, extensive use of publication metrics for individual and institutional evaluations has been noted in the biomedical sciences ( 2 ).

Research evaluation practices also vary by field and subfield, owing in part to the large disparities in community sizes and in the literature coverage provided by citation databases. As reviewed by the FAS, evaluation of individual researchers in the mechanical sciences, computing, and applied mathematics takes into account both the quality and the number of publications, as well as scientific awards, invitations to speak at conferences, software, patents, and technology transfer agreements. Organization of scientific meetings and editorial responsibilities are also taken into consideration. Younger researchers are evaluated by experts during interviews and while they give seminars. In these fields, publication does not always play a leading role in transferring knowledge; thus, over a long professional career, metrics give a rather weak and inaccurate estimate of research performance. Bibliometrics are therefore used only as a decision-making aid, not as a main tool for evaluation.

In physics and its subfields, evaluation methods vary. In general, a combination of quantitative measures (number of publications, h-index) and qualitative measures (keynote and invited speeches, mentoring programs) plays a decisive role in the evaluation of senior scientists only. In astrophysics, metrics are used widely for evaluation, recruiting, promotions, and funding allocations. In chemistry, the main bibliometric indicators (h-index, total number of citations, and number of citations per article) are taken into consideration when discussing the careers of senior researchers (those with more than 10 to 12 years of research activity). In recruiting young researchers, experts interview the candidate to examine the candidate’s ability to present and discuss the subject matter proficiently; the individual’s publication record is also considered. However, the national committees for chemistry of the French scientific and university institutions [Centre National de la Recherche Scientifique (CNRS) and Conseil National des Universités (CNU), respectively] usually avoid bibliometrics altogether in evaluating individuals.

In economics, evaluation by experts in the field plays the most important role in recruitment and promotion, but bibliometric indicators are used to support this decision-making. For the humanities and social sciences (philosophy, history, law, sociology, psychology, languages, political sciences, and art) and for mathematics, the existing databases do not cover the literature sufficiently; as a consequence, these fields cannot make proper use of bibliometrics. In contrast, in biology and medicine the quantitative indicators, in particular the journal impact factor, are widely used for evaluating individual researchers ( 2 ).

STRATEGIES AND RECOMMENDATIONS

The FAS acknowledged that bibliometrics can be a very useful evaluation tool when handled by experts in the field. According to its recommendations, bibliometrics should play a nondecisive role for monodisciplinary juries, because the experts on these evaluation committees know the candidates well enough to compare their individual performance precisely and objectively. For pluridisciplinary (interdisciplinary) juries, bibliometrics can be used successfully, but only if the experts consider the differences between scientific fields and subfields (as discussed above). For this purpose, the choice of indicators and the methodology for evaluating the full spectrum of a scientist’s research activity should be validated beforehand. As emphasized by the FAS, bibliometrics should not be used for deciding which young scientists to recruit. In addition, the set of bibliometric indicators should be chosen according to the purpose of the evaluation: recruitment, promotion, funding allocation, or distinction. Calculations should not be left to nonspecialists (such as administrators who could use the readily accessible data in a biased way) because the number of potential errors in judgment and assessment is too large. Frequent errors to be avoided include homonyms (distinct researchers who share a name), variations in the use of name initials, and the use of incomplete databases. It is important that the complete list of publications be checked by the researcher concerned. Researchers could even be asked to produce their own indicators (if provided with appropriate guidelines for calculation); these calculations should subsequently be verified and approved. The evaluation process must be transparent and replicable, with clearly defined targets, context, and purpose of the assessment.

To improve the use of bibliometrics, the FAS has reached a consensus ( 2 ) to perform a series of studies and to evaluate various methodological approaches, including (i) retrospective studies comparing decisions made by experts and evaluating committees with the results that would have been obtained by bibliometrics; (ii) studies to refine the existing indicators and bibliometric standards; (iii) clarification of authorship; (iv) development of standards for originality and innovation; (v) discussion of citation discrepancies arising from geographical or field localism; (vi) monitoring of the bibliometric indicators of outstanding researchers (a category reserved for those who have made important and lasting research contributions to their specific field and who have obtained international recognition); (vii) examination of the prospective value of the indicators for researchers who changed their field orientation over time; (viii) examination of the indicators of researchers who have received major awards such as the Nobel Prize, the Fields Medal, and medals from renowned academies and institutions; (ix) studies of how bibliometrics affect the scientific behavior of researchers; and (x) establishment of standards of good practice in the use of bibliometrics for analyzing individual research performance.

FIXING THE FLAWS

Assessing research performance is important for recognizing productivity, innovation, and novelty, and it plays a major role in academic appointment and promotion. However, the means of assessment, namely bibliometrics, are often flawed. Bibliometrics have enormous potential to assist the qualitative evaluation of individual researchers; however, no bibliometric indicator alone, nor even a set of them, allows an acceptable and well-balanced evaluation of a researcher’s activity. The use of bibliometrics should continue to evolve through in-depth discussion of what the metrics mean and how they can best be interpreted by experts in the given scientific field.

Acknowledgments

The author thanks K. Marazova (Institut de la Vision) for her major help in preparing this commentary and N. Haeffner-Cavaillon (INSERM) for critical reading and insights.

Oxford University Press

“Unparalleled research quality”: An interview with Tanya Laplante, Head of Product Platforms

  • By Tanya Laplante
  • April 24th 2024

As part of our Publishing 101 blog series, we are interviewing “hidden” figures at Oxford University Press: colleagues who our authors would not typically work with but who make a crucial contribution to the success of their books.

Tanya explains how, as research behaviours have changed, we use digital platforms to ensure that our authors’ books reach readers worldwide.

What is your role at OUP?

I am Head of Product Platforms in the Academic Division of Oxford University Press. I oversee the strategic development for the platforms, such as Oxford Academic, that host our book and journal products and services.

What is the difference between the product and editorial departments at OUP?

While editorial focuses on individual works like books and journals, product is responsible for the overall success and health of OUP’s digital portfolio. Many departments contribute to our digital products and oversee various aspects of product creation and maintenance that are central to success: editorial, sales, marketing, data strategy, content operations, royalties, etc. Product’s role is to ensure that all of those aspects are working together in as seamless a way as possible to deliver high-quality, author-driven products to the people that read our content, namely researchers.

How did Oxford Academic become our home for academic books?

Research behaviours have changed over the past two decades, as we have steadily seen sales shift away from print towards online. Oxford Academic allows us to better connect the online version of our books into a wider aggregated library of content that is easily found on search engines like Google, which can dramatically enhance the reach and impact of our authors’ work. Digital dissemination allows us to put scholarship into the hands of researchers and learners worldwide who might never have access to a library print copy. 

Oxford Academic came out of a desire within the Press to create a single gateway into Academic’s content to streamline the research journeys of those searching for content in our books and journals. It launched with journals in 2017, with research books following in 2022. Digital-first publishing is a priority for OUP as it increases the discoverability and accessibility of our research, thereby magnifying its reach and impact for our authors.

What opportunities does Oxford Academic offer our authors?

Oxford Academic offers authors a modern, mobile-friendly, accessible, search engine optimized platform upon which to publish research. In practice, this means that it’s easier for researchers to find our authors’ work online. With journals and books on the same platform, reader journeys across the two formats are not only possible but can be informed by AI-driven recommendation widgets, so readers are recommended other relevant research.

Some of the key benefits for readers include being able to:

  • Quickly find books, journals, and images by building powerful searches, or using our robust links to related content  
  • Access content from their preferred device, and experience a modern platform with updated features and functionality  
  • Understand research quickly with our graphical or video abstracts, non-textual outputs such as images, multimedia, data, and code, and text shown with tables or images side-by-side

Our Insight Working Group is constantly reviewing reader behaviors to recommend development that will drive the use of our books and journals content and, consequently, the impact of authors’ research.

What do you enjoy most about your job?

I enjoy working across departments with various stakeholders to find innovative platform solutions for customers, readers, and authors. It can be a challenge to narrow down the areas of the platform that we should invest in. But I, and the stakeholders I work with, have developed a great deal of expertise that informs where investment is best placed. We want to invest in areas that will have the greatest impact for the greatest number of people that use platforms such as Oxford Academic, including authors, customers, and society partners.

How do you see the digital publishing landscape evolving over the next few years?

New publishing models (Open Access) and technological changes (AI) are both impacting scholarly publishing. With the expansion of Open Access and a changing funder landscape, OUP needs to demonstrate the value of what we, as a non-profit publisher, bring to the research ecosystem. We need to better educate funders, authors, and customers about the critical role we play in the publishing process: overseeing peer review, managing distribution, applying metadata, maximizing discoverability on digital platforms, and more.

In terms of AI and other technological changes, we need to optimize the benefits while minimizing the risks. We are entering a period of great change in digital publication, from review to submission to publication and discoverability. The quality of the research we publish on behalf of authors is unparalleled. We need to harness that quality and use AI as a way to enrich the user experience and increase discoverability of the content, while ensuring we are still driving users to a trusted version of record on OUP’s platforms.

In both of these spaces, we will also need to consider unique services and capabilities we can provide for key stakeholders in this space, including authors, customers, and society partners. What can OUP bring to the table to make their role in the publication process a more seamless one?

Feature image by Marius Masalar, via Unsplash.

Tanya Laplante, Head of Product Platforms, Oxford University Press.


Air Quality Research Center

Job opening: Postdoctoral Research Associate in the Air Quality Research Center

  • by Camille Elise Hammond
  • April 22, 2024

Quick Summary

  • A postdoctoral research position is available at the Air Quality Research Center (AQRC) at the University of California, Davis (UC Davis). Position will start June 1, 2024.

Responsibilities: A postdoctoral research position is available at the Air Quality Research Center (AQRC) at the University of California, Davis (UC Davis). The selected Postdoctoral Scholar will primarily work on a project that aims to evaluate the effectiveness of two leading methane emissions control strategies: landfill cover methane oxidation and automated gas collection systems. Additionally, the project will involve collecting site-specific parameters in the field for testing, which will inform simulations in the California Landfill Methane Inventory Model (CALMIM). The successful candidate will join a research team, headed by Dr. Ramin Yazdani, to conduct on-site measurements of methane oxidation and other field parameters at three selected California sites across different seasons, as part of a two-year research endeavor sponsored by the California Air Resources Board. This scholar may also contribute to other research projects related to landfill emissions measurements.

Field data and methane oxidation data will be collected on three distinct cover types at three landfill sites over one year, focusing on areas under gas vacuum and uncontrolled sections. Analyses will compare emission estimates from the model using default and site-specific input parameters, quantitatively assess emissions across varying cover types and materials, and propose model enhancements for better alignment with field conditions. The project will also gather and scrutinize landfill gas well data, monitor barometric pressure changes, measure the gas pressure gradient at the interface of waste and cover soil, and document emissions measurements to assess the effectiveness of automated gas collection systems in reducing emissions.

The appointed individual will be primarily responsible for setting up equipment, such as flux chambers, collecting soil and gas samples for on-site and laboratory analyses, recording field measurements, and maintaining field instruments. They will transport equipment to sites, design and execute laboratory and field experiments aligned with the project's objectives, calibrate gas analyzers and monitoring sensors, and compile research data. They will process and interpret this data and are expected to produce project reports, technical publications, and presentations.

Qualifications: Candidates should hold a Ph.D. in Atmospheric Sciences, Environmental Engineering, Agricultural Sciences, or a related field, and should have no more than three years of post-degree research experience. The individual must have the capability to independently carry out air emission measurements and modeling. They should also have a robust background in waste management technologies, including anaerobic digestion and aerobic composting, as well as expertise in data processing and statistical analysis.

Additional essential qualifications include outstanding oral and written English communication skills, a collaborative attitude, a track record of high-quality peer-reviewed scientific publications, and the ability to meet deadlines and work autonomously with excellence. The candidate must exhibit strong time management and record-keeping abilities, and must have or be able to obtain a valid U.S. driver's license.

Preference will be given to applicants with hands-on experience with instrumentation for measuring gas emissions using either static or dynamic flux chambers, proficiency in MATLAB and LabVIEW software, and experience in emission modeling.

Hiring Process: Application reviews will commence immediately and continue until the position is filled. Initial interviews will be conducted via videoconference.

Compensation: We provide a competitive wage plus extensive benefits, such as health insurance, retirement plans, paid vacation and sick days, and supportive career development in scientific research. Selected candidates will be expected to disseminate their research findings at scientific conferences and in professional circles.

The University of California, Davis is a proactive affirmative action and equal opportunity institution dedicated to fostering an environment that upholds equality of opportunity and values diversity.

How to Apply:  Please submit a cover letter, curriculum vitae, samples of pertinent publications, and the names and contact details for three references via email.

Contact: Dr. Ramin Yazdani, [email protected], Air Quality Research Center, University of California, Davis, 1 Shields Ave., Davis, CA 95616

Start Date: The position will start on June 1, 2024, or thereafter.

Length of Appointment: The term may last up to 2 years, contingent upon performance and funding.

Salary Range: The UC postdoctoral salary scale dictates the base salary according to the level of experience at the time of appointment. For the current salary scale of this position, refer to the following link:  https://ucdavis.app.box.com/s/d2cv7dqvg2moyw5ymr1gokliyn3ddfqz . The present entry-level salary for this role ranges from $64,480 to $71,908. Salaries above this base may be negotiated to match market conditions.


Hannah Kenagy and Melissa Ramirez join Department of Chemistry

Headshot photographs of Melissa Ramirez and Hannah Kenagy on a maroon and gold polka-dot background.

MINNEAPOLIS / ST. PAUL (04/22/2024) – The Department of Chemistry will welcome Dr. Hannah Kenagy and Dr. Melissa Ramirez to the faculty in January 2025. Both chemists will enter the department as Assistant Professors. 

Hannah S. Kenagy will join the department in January 2025 after completing her postdoctoral training at the Massachusetts Institute of Technology (MIT), where she currently works as an NSF AGS Postdoctoral Fellow with Prof. Jesse Kroll and Prof. Colette Heald. Prior to her current position at MIT, Kenagy completed her PhD at the University of California, Berkeley in 2021 with Ronald Cohen and her BS in Chemistry at the University of Chicago in 2016.

At the University of Minnesota, the Kenagy research group will focus on atmospheric chemistry. Kenagy’s research explores how emissions into the atmosphere get physically and chemically transformed into gases and particles with impacts on air quality and climate. “We will use an integrated toolset for thinking about these questions, including lab experiments, field observations, and multi-scale modeling,” Kenagy says. “In particular, we’ll focus on questions regarding how atmospheric chemistry and composition are changing as we reduce our reliance on fossil fuel combustion and as temperatures continue to rise with climate change. Integrating measurements and models together will enable us to push forward our understanding of this changing chemistry.”

Kenagy is passionate about integrating environmental chemistry learning opportunities in her classrooms to make real-world connections for students. “Because so much of my research is relevant to air quality and climate – things that impact people’s daily lives, often inequitably – outreach is a really key component of my group’s work,” Kenagy says. She also engages in ongoing efforts to make science more accessible, and to ensure all students have the resources they need to thrive and develop a sense of belonging in science.

The UMN Department of Chemistry’s strong focus on environmental chemistry and the opportunities to engage in interdisciplinary research make the move to Minnesota particularly exciting for Kenagy. “I’m looking forward to joining a university with atmospheric scientists in a variety of departments across both the Minneapolis and St. Paul campuses. I also plan to make some measurements of urban chemistry across the Twin Cities, a unique environment that is impacted by agricultural and biogenic emissions in addition to more typical urban emissions. This mix of emissions makes the Twin Cities an interesting place to study the air!”

When she’s not busy in the office and lab, Kenagy loves being outside, hiking and swimming. She also loves music (she plays piano and sings) and cooking.

Melissa Ramirez will also make her move to Minnesota in January of 2025. Currently, Ramirez is an NIH K99/R00 MOSAIC Scholar, NSF MPS-Ascend Fellow, and Caltech Presidential Postdoctoral Scholar in the laboratory of Prof. Brian Stoltz at the California Institute of Technology, where her research focuses on enantioselective quaternary center formation using experiments and computations. Before her postdoctoral position, Ramirez completed her PhD in Organic Chemistry at the University of California, Los Angeles with Prof. Ken Houk and Prof. Neil Garg in 2021 and her BA in Chemistry at the University of Pennsylvania in 2016. 

The Ramirez laboratory at UMN will develop experimental and computational approaches to address challenges associated with efficiency in the synthesis of pharmaceutically relevant small molecules. “The mission of my research program will be to establish synthetic methods in the areas of main group catalysis, asymmetric organocatalysis, and transition metal photochemistry with the aid of computations,” Ramirez writes. “Students trained in my lab will develop strong skills in synthetic and computational organic chemistry with a focus on reaction development. This synergistic skillset in synthesis and computations will also give rise to a range of opportunities for collaboration with the broader scientific community.” Ramirez aims to bridge synthesis and catalysis research with computational chemistry at UMN.

Ramirez says an important goal for her as a professor will be to challenge students, support them, and make them feel connected to the classroom regardless of their background. “Throughout my academic career, some of the most effective teachers I have had are those who believed in my potential even when I experienced self-doubt or failure,” Ramirez says. She is also looking forward to collaborating with the Chemistry Diversity, Equity, and Inclusion Committee to explore ways to better connect students with resources to help remove barriers to their science education and career. “I am excited to help recruit a diverse student body by helping organize the  CheMNext session and by continuing my close relationship with organizations such as the Alliance for Diversity in Science and Engineering and Científico Latino, which I have served on the organizational board for during my postdoc,” Ramirez says.

When she’s not on campus, Ramirez enjoys staying active. She’s an avid runner, loves Peloton, and likes taking high-intensity interval training (HIIT) classes.

The hiring of Kenagy and Ramirez follows the recent announcement of Dr. Jan-Niklas Boyn and Dr. Kade Head-Marsden joining the faculty in Fall 2024. These four incoming Gophers will bring the Department of Chemistry’s total of new faculty hires to nine over the past three years. We are excited for these outstanding chemists to join our community and be part of the ongoing growth of the College of Science and Engineering on the UMN-TC campus.

