A Comprehensive Guide to Methodology in Research

Sumalatha G

Research methodology plays a crucial role in any study or investigation. It provides the framework for collecting, analyzing, and interpreting data, ensuring that the research is reliable, valid, and credible. Understanding the importance of research methodology is essential for conducting rigorous and meaningful research.

In this article, we'll explore the various aspects of research methodology, from its types to best practices, ensuring you have the knowledge needed to conduct impactful research.

What is Research Methodology?

Research methodology refers to the system of procedures, techniques, and tools used to carry out a research study. It encompasses the overall approach, including the research design, data collection methods, data analysis techniques, and the interpretation of findings.

Research methodology sets the foundation for any study. It provides researchers with a structured framework to ensure that their investigations are conducted in a systematic and organized manner. By following a well-defined methodology, researchers can ensure that their findings are reliable, valid, and meaningful.

When defining research methodology, one of the first steps is to identify the research problem. This involves clearly understanding the issue or topic that the study aims to address. By defining the research problem, researchers can narrow down their focus and determine the specific objectives they want to achieve through their study.

How to Define Research Methodology

Once the research problem is identified, researchers move on to defining the research questions. These questions serve as a guide for the study, helping researchers to gather relevant information and analyze it effectively. The research questions should be clear, concise, and aligned with the overall goals of the study.

After defining the research questions, researchers need to determine how data will be collected and analyzed. This involves selecting appropriate data collection methods, such as surveys, interviews, observations, or experiments. The choice of data collection methods depends on various factors, including the nature of the research problem, the target population, and the available resources.

Once the data is collected, researchers need to analyze it using appropriate data analysis techniques. This may involve statistical analysis, qualitative analysis, or a combination of both, depending on the nature of the data and the research questions. The analysis of data helps researchers to draw meaningful conclusions and make informed decisions based on their findings.

Role of Methodology in Research

Methodology is central to research because it ensures that the study is conducted in a systematic and organized manner. It provides a clear roadmap for researchers to follow, ensuring that the research objectives are met effectively. By following a well-defined methodology, researchers can minimize bias, errors, and inconsistencies in their study, thus enhancing the reliability and validity of their findings.

In addition to providing a structured approach, research methodology also helps in establishing the reliability and validity of the study. Reliability refers to the consistency and stability of the research findings, while validity refers to the accuracy and truthfulness of the findings. By using appropriate research methods and techniques, researchers can ensure that their study produces reliable and valid results, which can be used to make informed decisions and contribute to the existing body of knowledge.

Steps in Choosing the Right Research Methodology

Choosing the appropriate research methodology for your study is a critical step in ensuring the success of your research. Let's explore some steps to help you select the right research methodology:

Identifying the Research Problem

The first step in choosing the right research methodology is to clearly identify and define the research problem. Understanding the research problem will help you determine which methodology will best address your research questions and objectives.

Identifying the research problem involves a thorough examination of the existing literature in your field of study. This step allows you to gain a comprehensive understanding of the current state of knowledge and identify any gaps that your research can fill. By identifying the research problem, you can ensure that your study contributes to the existing body of knowledge and addresses a significant research gap.

Once you have identified the research problem, you need to consider the scope of your study. Are you focusing on a specific population, geographic area, or time frame? Understanding the scope of your research will help you determine the appropriate research methodology to use.

Reviewing Previous Research

Before finalizing the research methodology, it is essential to review previous research conducted in the field. This will allow you to identify gaps, determine the most effective methodologies used in similar studies, and build upon existing knowledge.

Reviewing previous research involves conducting a systematic review of relevant literature. This process includes searching for and analyzing published studies, articles, and reports that are related to your research topic. By reviewing previous research, you can gain insights into the strengths and limitations of different methodologies and make informed decisions about which approach to adopt.

During the review process, it is important to critically evaluate the quality and reliability of the existing research. Consider factors such as the sample size, research design, data collection methods, and statistical analysis techniques used in previous studies. This evaluation will help you determine the most appropriate research methodology for your own study.

Formulating Research Questions

Once the research problem is identified, formulate specific and relevant research questions. These questions will guide your methodology selection process by helping you determine what type of data you need to collect and how to analyze it.

Formulating research questions involves breaking down the research problem into smaller, more manageable components. These questions should be clear, concise, and measurable. They should also align with the objectives of your study and provide a framework for data collection and analysis.

When formulating research questions, consider the different types of data that can be collected, such as qualitative or quantitative data. Depending on the nature of your research questions, you may need to employ different data collection methods, such as interviews, surveys, observations, or experiments. By carefully formulating research questions, you can ensure that your chosen methodology will enable you to collect the necessary data to answer your research questions effectively.

Implementing the Research Methodology

After choosing the appropriate research methodology, it is time to implement it. This stage involves collecting data using various techniques and analyzing the gathered information. Let's explore two crucial aspects of implementing the research methodology:

Data Collection Techniques

Data collection techniques depend on the chosen research methodology. They can include surveys, interviews, observations, experiments, or document analysis. Selecting the most suitable data collection techniques will ensure accurate and relevant data for your study.

Data Analysis Methods

Data analysis is a critical part of the research process. It involves interpreting and making sense of the collected data to draw meaningful conclusions. Depending on the research methodology, data analysis methods can include statistical analysis, content analysis, thematic analysis, or grounded theory.

Ensuring the Validity and Reliability of Your Research

In order to ensure the validity and reliability of your research findings, it is important to address these two key aspects:

Understanding Validity in Research

Validity refers to the accuracy and soundness of a research study. It is crucial to ensure that the research methods used effectively measure what they intend to measure. Researchers can enhance validity by using proper sampling techniques, carefully designing research instruments, and ensuring accurate data collection.

Ensuring Reliability in Your Study

Reliability refers to the consistency and stability of the research results. It is important to ensure that the research methods and instruments used yield consistent and reproducible results. Researchers can enhance reliability by using standardized procedures, ensuring inter-rater reliability, and conducting pilot studies.

A comprehensive understanding of research methodology is essential for conducting high-quality research. By selecting the right research methodology, researchers can ensure that their studies are rigorous, reliable, and valid. It is crucial to follow the steps in choosing the appropriate methodology, implement the chosen methodology effectively, and address validity and reliability concerns throughout the research process. By doing so, researchers can contribute valuable insights and advances in their respective fields.

Open access | Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461), Daeria O. Lawson, Livia Puljak, David B. Allison & Lehana Thabane

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies, such as what they are and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: Is it necessary to publish a study protocol? How should relevant research reports and databases be selected for a methodological study? What approaches to data extraction and statistical analysis should be considered? What are potential threats to validity, and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [1, 2, 3]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has proliferated before the development of reporting guidance. This was the case, for example, with randomized trials, for which risk of bias tools and reporting guidelines were only developed much later, after many trials had been published and noted to have limitations [4, 5]; the same was true for systematic reviews [6, 7, 8]. In the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [9, 10]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.
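
The yearly counts behind a trend like the one in Fig. 1 can be retrieved programmatically. The sketch below queries NCBI's public E-utilities esearch endpoint with the two search terms restricted to title/abstract; the year loop and field tags are illustrative assumptions, not the authors' exact strategy. A script like this also doubles as a replicable, time-stamped record of the search.

```python
# Sketch: count PubMed records per year for the two terms behind Fig. 1.
# Uses the public NCBI E-utilities endpoint; add an API key for heavy use.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
QUERY = ('"methodological review"[Title/Abstract] OR '
         '"meta-epidemiological study"[Title/Abstract]')

def count_for_year(year: int) -> int:
    params = {
        "db": "pubmed",
        "term": QUERY,
        "mindate": str(year),
        "maxdate": str(year),
        "datetype": "pdat",  # restrict by publication date
        "rettype": "count",
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

for year in range(2010, 2020):
    print(year, count_for_year(year))
```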

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items of Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [23, 24]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [25]; Knol et al. investigated the reporting of p-values in baseline tables of randomized trials published in high impact journals [26]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [27]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts over time [28]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [5].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
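
As an illustration of these sampling options, the sketch below draws a simple random sample and a stratified sample (equal numbers per stratum) from a list of candidate reports; the list and its review_type field are hypothetical placeholders.

```python
# Sketch: simple random vs. stratified sampling of research reports.
# 'reports' and its 'review_type' field are hypothetical placeholders.
import random
from collections import defaultdict

random.seed(2020)  # fixed seed so the sample is reproducible

reports = [{"id": i, "review_type": t}
           for i, t in enumerate(["cochrane"] * 80 + ["non_cochrane"] * 920)]

# Simple random sample: every report has the same selection probability,
# so the small 'cochrane' stratum may be underrepresented.
simple_sample = random.sample(reports, k=100)

# Stratified sample: draw equally from each stratum to create equal groups.
strata = defaultdict(list)
for r in reports:
    strata[r["review_type"]].append(r)
stratified_sample = [r for group in strata.values()
                     for r in random.sample(group, k=50)]
```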

Q: How many databases should I search?

A: The number of databases one should search depends on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time-stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [35]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and because they help avoid duplication of efforts [36]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [9]. In the Cochrane Library, there are 15 protocols for methodological reviews (as of 21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [37]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [38]. Key concerns include selection bias, poor comparability of groups, and errors in the ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in the assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
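
For a single proportion, a confidence-interval-based justification of this kind can be sketched with the normal approximation n = z² p(1 − p) / d². The inputs below (expected proportion 0.5, ±5% margin) are illustrative and not taken from El Dib et al.

```python
# Sketch: articles needed to estimate a proportion with a given precision,
# using the normal approximation n = z^2 * p * (1 - p) / d^2.
import math
from scipy.stats import norm

def n_for_proportion(p: float, margin: float, confidence: float = 0.95) -> int:
    z = norm.ppf(1 - (1 - confidence) / 2)  # two-sided critical value
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Worst-case variance (p = 0.5), +/-5% margin, 95% confidence -> 385 articles.
print(n_for_proportion(0.5, 0.05))
```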

Q: What should I call my study?

A: Other terms that have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [32]. Any of these meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [41, 42]. The term “survey” is also in line with the approaches used in many methodological studies [9], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
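
A minimal sketch of this approach, assuming statsmodels and simulated data: a logistic GEE with an exchangeable working correlation for articles clustered within journals. All variable names are hypothetical.

```python
# Sketch: logistic GEE with articles clustered within journals.
# Data are simulated; 'adequate', 'year' and 'journal' are hypothetical fields.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "journal": rng.integers(0, 20, size=400),    # 20 journal clusters
    "year": rng.integers(2010, 2020, size=400),
    "adequate": rng.integers(0, 2, size=400),    # adequate reporting (0/1)
})

model = sm.GEE.from_formula(
    "adequate ~ year",
    groups="journal",                            # cluster variable
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),     # within-journal correlation
)
result = model.fit()
print(result.summary())  # reports robust (sandwich) standard errors
```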

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [19]. Data extraction errors in turn affect the effect estimate [46], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid new advances, with machine learning and natural language processing technologies supporting researchers with screening and data extraction [47, 48]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [46, 49].
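
Agreement between duplicate extractors can also be quantified before discrepancies are reconciled, for example with Cohen's kappa; the item-level codes below are invented for illustration.

```python
# Sketch: chance-corrected agreement between two data extractors.
# The 0/1 vectors are hypothetical item-level codes (e.g. "blinding reported?").
from sklearn.metrics import cohen_kappa_score

extractor_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
extractor_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(extractor_a, extractor_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```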

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but one whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [50], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [51].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies report better [56, 57], while others have found no such association [53, 58]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [59]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [60]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [61]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [62].

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [71]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [72]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make the sample differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
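
The statistical-adjustment option can be sketched as a logistic regression in which the exposure (here, industry funding) is adjusted for a potential confounder (journal endorsement of a guideline). The data below are simulated so that endorsement drives both funding and complete reporting; all variable names are hypothetical.

```python
# Sketch: crude vs. confounder-adjusted logistic regression.
# Simulated data; endorsement confounds the funding-reporting association.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
endorses = rng.integers(0, 2, size=500)
df = pd.DataFrame({
    "endorses_consort": endorses,
    "industry_funded": rng.binomial(1, 0.2 + 0.5 * endorses),  # linked to endorsement
    "complete": rng.binomial(1, 0.3 + 0.4 * endorses),         # driven by endorsement
})

crude = smf.logit("complete ~ industry_funded", data=df).fit(disp=False)
adjusted = smf.logit("complete ~ industry_funded + endorses_consort",
                     data=df).fit(disp=False)

# The crude coefficient is inflated; adjustment moves it toward zero.
print(crude.params["industry_funded"], adjusted.params["industry_funded"])
```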

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine (n = 6) [75]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM (n = 4) [76]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM (n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [77]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [78]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

To inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or on how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [80]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [81]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [82].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Methodological studies that are analytical

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
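
A comparison like the one by Tricco et al. reduces to a test of two proportions; the sketch below uses a two-proportion z-test with invented counts (not the actual study data).

```python
# Sketch: two-proportion z-test for positive conclusion statements,
# Cochrane vs. non-Cochrane reviews. Counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

positives = [45, 72]  # reviews with positive conclusion statements
totals = [120, 130]   # reviews examined in each group

stat, p_value = proportions_ztest(count=positives, nobs=totals)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```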

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.
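
Systematic sampling, the last option mentioned, selects every k-th record from an ordered sampling frame after a random start; a minimal sketch:

```python
# Sketch: systematic sampling - every k-th record after a random start.
import random

def systematic_sample(frame: list, n: int) -> list:
    """Select n items from an ordered sampling frame at fixed intervals."""
    k = len(frame) // n          # sampling interval
    start = random.randrange(k)  # random start within the first interval
    return frame[start::k][:n]

records = list(range(1, 1001))               # e.g. 1000 search results
sample = systematic_sample(records, n=100)   # every 10th record
```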

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig. 2.

Figure 2. A proposed framework for methodological studies.

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items of Systematic reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–1195.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Google Scholar  

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

CAS   Google Scholar  

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

CAS   PubMed   PubMed Central   Google Scholar  

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA: Assessing the Quality of Reporting of Harms in Randomized Controlled Trials Published in High Impact Cardiovascular Journals. Eur Heart J Qual Care Clin Outcomes 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Download references


Cite this article: Mbuagbaw, L., Lawson, D.O., Puljak, L., et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7


Defining the Conceptual Framework

What is it?

  • The researcher’s understanding (or hypothesis, or exploration) of either an existing framework/model, or of how existing concepts come together to inform a particular problem. It shows the reader how different elements come together to facilitate research and a clear understanding of results.
  • Informs the research questions and methodology (the problem statement drives the framework, which drives the research questions, which drive the methodology).
  • A tool (linked concepts) to help facilitate the understanding of the relationships among concepts or variables in relation to the real world. Each concept is linked to frame the project in question.
  • Sits within a larger theoretical framework (the theoretical framework explains the why and how of a particular phenomenon within a particular body of literature).
  • Can be a graphic or a narrative, but should always be explained and cited.
  • Can be made up of theories and concepts.

What does it do?

  • Explains or predicts the way key concepts/variables will come together to inform the problem/phenomenon
  • Gives the study direction/parameters
  • Helps the researcher organize ideas and clarify concepts
  • Introduces your research and how it will advance your field of practice. A conceptual framework should include concepts applicable to the field of study; these can come from your field or from neighboring fields, as long as important details are captured and the framework is relevant to the problem (this is alignment).

What should be in it?

  • Variables, concepts, theories, and/or parts of other existing frameworks

Making a Conceptual Framework

How to make a conceptual framework:

  • With a topic in mind, go to the body of literature and start identifying the key concepts used by other studies. Figure out what has been done by other researchers and what still needs to be done (either find a specific call to action outlined in the literature, or make sure your proposed problem has yet to be studied in your specific setting). Use what you find needs to be done either to support a pre-identified problem or to craft a general problem for study. Rely only on scholarly sources for this part of your research.
  • Begin to pull out the variables, concepts, theories, and existing frameworks explained in the relevant literature.
  • If you’re building a framework, start thinking about how some of those variables, concepts, theories, and facets of existing frameworks come together to shape your problem. The problem could be a situational condition that requires a scholar-practitioner approach, the result of a practical need, or an opportunity to further an applied study, project, or piece of research. Remember: if the answer to your specific problem already exists, you don’t need to conduct the study.
  • The actionable research you’d like to conduct will help shape what you include in your framework. Sketch the flow of your Applied Doctoral Project from start to finish and decide which variables are truly the best fit for your research.
  • Create a graphic representation of your framework (this part is optional, but it often helps readers understand the flow of your research; see the sketch after this list). Even if you do a graphic, first write out how the variables could influence your Applied Doctoral Project and introduce your methodology. Remember to use APA formatting in separating the sections of your framework to create a clear understanding of the framework for your reader.
  • As you move through your study, you may need to revise your framework.
  • Note for qualitative/quantitative research: if you are doing a qualitative study, make sure your framework doesn’t include arrow lines, which could imply causal or correlational linkages.
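As referenced above, one way to produce the optional graphic is to generate it from a small script, so the diagram stays in sync with the concepts you settle on. Below is a minimal sketch in Python, assuming hypothetical placeholder concept names; it emits Graphviz DOT text that any Graphviz tool can render, and the directed flag reflects the note above about omitting arrows in qualitative designs. This is an illustration, not a required part of the process.

```python
# Minimal sketch: emit a Graphviz DOT description of a conceptual framework.
# The concept names below are hypothetical placeholders; substitute the
# concepts identified in your own literature review.

concepts = ["Concept A", "Concept B", "Concept C"]  # drawn from the literature
outcome = "Problem of interest"                     # the focal problem
links = [(c, outcome) for c in concepts]            # proposed linkages

def to_dot(links, directed=True):
    """Return DOT text. Use directed=False for qualitative frameworks,
    where arrows could wrongly imply causal or correlational links."""
    kind, edge = ("digraph", "->") if directed else ("graph", "--")
    lines = [f"{kind} framework {{", "  rankdir=LR;"]
    for a, b in links:
        lines.append(f'  "{a}" {edge} "{b}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(links))  # save as framework.dot, then: dot -Tpng framework.dot -o framework.png
```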

Conceptual Framework for DMFT Students

  • Conceptual and Theoretical Framework for DMFT Students This document is specific to DMFT students working on a conceptual or theoretical framework for their applied project.

Conceptual Framework Guide

  • Conceptual Framework Guide Use this guide to determine the guiding framework for your applied dissertation research.

Example Frameworks

Let’s say I’ve just taken a job as manager of a failing restaurant. Throughout the first week, I notice the few customers they have are leaving unsatisfied. I need to figure out why and turn the establishment into a thriving restaurant. I get permission from the owner to do a study to figure out exactly what we need to do to raise levels of customer satisfaction. Since I have a specific problem and want to make sure my research produces valid results, I go to the literature to find out what others are finding about customer satisfaction in the food service industry. This particular restaurant is vegan-focused, and my search of the literature doesn’t say anything specific about how to increase customer satisfaction in a vegan atmosphere, so I know this research needs to be done.

I find out there are different types of satisfaction across other genres of the food service industry, and the one I’m interested in is cumulative customer satisfaction. Based on what I’m seeing in the literature, I then decide that my definition of customer satisfaction is the way perception, evaluation, and the psychological reaction to the perception and evaluation of both tangible and intangible elements of the dining experience come together to inform customer expectations. Essentially, customer expectations inform customer satisfaction.

I then find across the literature that many variables could be significant in determining customer satisfaction. Because the following keep appearing, they are the ones I choose to include in my framework: price, service, branding (branched out to include physical environment and promotion), and taste. I also learn from the literature that satisfaction can vary between genders, so I want to make sure to collect demographic information in my survey as well. Gender, age, profession, and number of children are a few demographic variables I understand would be helpful to include, based on my extensive literature review.

Note: this is a quantitative study. I’m including all of these variables in the study, and the variables I am testing are my independent variables. Here I’m working to see how each of the independent variables influences (or not) my dependent variable, customer satisfaction. If you are interested in a qualitative study, read on for an example of how to make the same framework qualitative in nature.
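To make that design concrete, here is a minimal analysis sketch in Python, assuming simulated survey data with hypothetical column names that mirror the framework’s variables (it uses pandas and statsmodels). Each regression coefficient estimates one proposed link from an independent variable to the dependent variable, customer satisfaction; it is an illustration of the quantitative logic, not a prescribed analysis plan.

```python
# Minimal sketch: test the framework's proposed links with a regression.
# The data below are simulated stand-ins for real survey responses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "price": rng.integers(1, 6, n),      # hypothetical 1-5 Likert ratings
    "service": rng.integers(1, 6, n),
    "branding": rng.integers(1, 6, n),
    "taste": rng.integers(1, 6, n),
    "gender": rng.choice(["f", "m"], n), # one of the demographic variables
})
# Simulated outcome so the example runs end to end; replace with survey data
df["satisfaction"] = (
    0.3 * df["price"] + 0.4 * df["service"] + 0.2 * df["branding"]
    + 0.5 * df["taste"] + rng.normal(0, 1, n)
)

# One coefficient per framework link, with gender as a demographic control
model = smf.ols(
    "satisfaction ~ price + service + branding + taste + C(gender)", data=df
).fit()
print(model.summary())
```

A statistically significant coefficient would support the corresponding link in the framework; a qualitative version of the study would drop this step and instead use the concepts to guide interviews and coding.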

Also note: when you create your framework, you’ll need to cite each facet of it. Tell the reader where you got everything you’re including. Not only is this in compliance with APA formatting, but it also raises your credibility as a researcher. Once you’ve built the narrative around your framework, you may also want to create a visual for your reader.

See below for one example of how to illustrate your framework:

[Figure: example visual of a quantitative conceptual framework]

If you’re interested in a qualitative study, be sure to omit arrows and other notations implying statistical analysis. The only time it would be inappropriate to include a framework in a qualitative study is in a grounded theory study, which is not something you’ll do in an applied doctoral study.

A visual example of a qualitative framework is below:

[Figure: example visual of a qualitative conceptual framework]

Additional Framework Resources

Some additional helpful resources in constructing a conceptual framework for study:

  • McGaghie, W. C., Bordage, G., & Shea, J. A. (2001). Problem statement, conceptual framework, and research question. Retrieved January 5, 2015, from http://goo.gl/qLIUFg
  • Building a Conceptual Framework: Philosophy, Definitions, and Procedure
  • https://www.scribbr.com/dissertation/conceptual-framework/
  • https://www.projectguru.in/developing-conceptual-framework-in-a-research-paper/

Conceptual Framework Research

A conceptual framework is a synthesis of interrelated components and variables that help in solving a real-world problem. It is the final lens used for viewing the deductive resolution of an identified issue (Imenda, 2014). The development of a conceptual framework begins with a deductive assumption that a problem exists, and the application of processes, procedures, a functional approach, models, or theory may be used for problem resolution (Zackoff et al., 2019). The application of theory in traditional theoretical research is to understand, explain, and predict phenomena (Swanson, 2013). In applied research, the application of theory in problem solving focuses on how theory, in conjunction with practice (applied action) and procedures (functional approach), frames vision, thinking, and action towards problem resolution. The inclusion of theory in a conceptual framework is not focused on the validation or devaluation of applied theories. A concise way of viewing the conceptual framework is as a list of understood, fact-based conditions that presents the researcher’s prescribed thinking for solving the identified problem. These conditions provide a methodological rationale of interrelated ideas and approaches for beginning, executing, and defining the outcome of problem resolution efforts (Leshem & Trafford, 2007).

The terms conceptual framework and theoretical framework are often, and erroneously, used interchangeably (Grant & Osanloo, 2014). Just as a theory in traditional research does not, and cannot be expected to, explain all phenomenal conditions, a conceptual framework is not a random identification of disparate ideas meant to encase a problem. Instead, it is a means of identifying and constructing, for researcher and reader alike, an epistemological mindset and a functional worldview approach to the identified problem.

Grant, C., & Osanloo, A. (2014). Understanding, selecting, and integrating a theoretical framework in dissertation research: Creating the blueprint for your “house.” Administrative Issues Journal: Connecting Education, Practice, and Research, 4(2), 12–26.

Imenda, S. (2014). Is there a conceptual difference between theoretical and conceptual frameworks? Sosyal Bilimler Dergisi/Journal of Social Sciences, 38(2), 185.

Leshem, S., & Trafford, V. (2007). Overlooking the conceptual framework. Innovations in Education & Teaching International, 44(1), 93–105. https://doi.org/10.1080/14703290601081407

Swanson, R. (2013). Theory building in applied disciplines. San Francisco, CA: Berrett-Koehler Publishers.

Zackoff, M. W., Real, F. J., Klein, M. D., Abramson, E. L., Li, S.-T. T., & Gusic, M. E. (2019). Enhancing educational scholarship through conceptual frameworks: A challenge and roadmap for medical educators. Academic Pediatrics, 19(2), 135–141. https://doi.org/10.1016/j.acap.2018.08.003


From: Destigmatisation of People Living with HIV/AIDS in China, pp. 39–44

Research Framework and Research Methodology

Xiaoping Wang (Shanxi Normal University, Taiyuan, China)

First Online: 27 January 2022. Part of the book series: A Sociological View of AIDS.

The author has been studying AIDS since September 2004, applying qualitative research as the main means of collecting information in view of the specific circumstances of this group.



Cite this chapter: Wang, X. (2022). Research Framework and Research Methodology. In: Destigmatisation of People Living with HIV/AIDS in China. A Sociological View of AIDS. Springer, Singapore. https://doi.org/10.1007/978-981-16-8534-7_3


Short report | Open access | Published: 12 April 2024

A modified action framework to develop and evaluate academic-policy engagement interventions

Petra Mäkelä, Annette Boaz & Kathryn Oliver

Implementation Science, volume 19, Article number: 31 (2024)


There has been a proliferation of frameworks with a common goal of bridging the gap between evidence, policy, and practice, but few aim to specifically guide evaluations of academic-policy engagement. We present the modification of an action framework for the purpose of selecting, developing and evaluating interventions for academic-policy engagement.

We build on the conceptual work of an existing framework known as SPIRIT (Supporting Policy In Health with Research: an Intervention Trial), developed for the evaluation of strategies intended to increase the use of research in health policy. Our aim was to modify SPIRIT (i) to be applicable beyond health policy contexts, for example encompassing social, environmental, and economic policy impacts, and (ii) to address broader dynamics of academic-policy engagement. We used an iterative approach through literature reviews and consultation with multiple stakeholders from Higher Education Institutions (HEIs) and policy professionals working at different levels of government and across geographical contexts in England, alongside our evaluation activities in the Capabilities in Academic-Policy Engagement (CAPE) programme.

Our modifications expand upon Redman et al.’s original framework, for example adding a domain of ‘Impacts and Sustainability’ to capture continued activities required in the achievement of desirable outcomes. The modified framework fulfils the criteria for a useful action framework, having a clear purpose, being informed by existing understandings, being capable of guiding targeted interventions, and providing a structure to build further knowledge.

The modified SPIRIT framework is designed to be meaningful and accessible for people working across varied contexts in the evidence-policy ecosystem. It has potential applications in how academic-policy engagement interventions might be developed, evaluated, facilitated and improved, to ultimately support the use of evidence in decision-making.


Contributions to the literature

There has been a proliferation of theories, models and frameworks relating to translation of research into practice. Few specifically relate to engagement between academia and policy.

Challenges of evidence-informed policy-making are receiving increasing attention globally. There is a growing number of academic-policy engagement interventions but a lack of published evaluations.

This article contributes a modified action framework that can be used to guide how academic-policy engagement interventions might be developed, evaluated, facilitated, and improved, to support the use of evidence in policy decision-making.

Our contribution demonstrates the potential for modification of existing, useful frameworks instead of creating brand-new frameworks. It provides an exemplar for others who are considering when and how to modify existing frameworks to address new or expanded purposes while respecting the conceptual underpinnings of the original work.

Academic-policy engagement refers to ways that Higher Education Institutions (HEIs) and their staff engage with institutions responsible for policy at national, regional, county or local levels. Academic-policy engagement is intended to support the use of evidence in decision-making and, in turn, to improve its effectiveness and inform the identification of barriers and facilitators in policy implementation [1, 2, 3]. Challenges of evidence-informed policy-making are receiving increasing attention globally, including the implications of differences in cultural norms and mechanisms across national contexts [4, 5]. Although the challenges faced by researchers and policy-makers have been well documented [6, 7], there has been less focus on actions at the engagement interface. Pragmatic guidance for the development, evaluation or comparison of structured responses to the challenges of academic-policy engagement is currently lacking [8, 9].

Academic-policy engagement exists along a continuum of approaches, from linear (pushing evidence out from academia or pulling evidence into policy) and relational (promoting mutual understandings and partnerships) to systems approaches (addressing identified barriers and facilitators) [4]. Each approach is underpinned by sets of beliefs, assumptions and expectations, and each raises questions for implementation and evaluation. Little is known about which academic-policy engagement interventions work in which settings, with scarce empirical evidence to inform decisions about which interventions to use, when, with whom, or why, and how organisational contexts can affect motivation and capabilities for such engagement [10]. A deeper understanding through the evaluation of engagement interventions will help to identify inhibitory and facilitatory factors, which may or may not transfer across contexts [11].

The intellectual technologies [12] of implementation science have proliferated in recent decades, including models, frameworks and theories that address research translation and acknowledge difficulties in closing the gap between research, policy and practice [13]. Frameworks may serve overlapping purposes of describing or guiding processes of translating knowledge into practice (e.g. the Quality Implementation Framework [14]); or helping to explain influences on implementation outcomes (e.g. the Theoretical Domains Framework [15]); or guiding evaluation (e.g. the RE-AIM framework [16, 17]). Frameworks can offer an efficient way to look across diverse settings and to identify implementation differences [18, 19]. However, the abundance of options raises its own challenges when seeking a framework for a particular purpose, and the use of a framework may mean that more weight is placed on certain aspects, leading to a partial understanding [13, 17].

‘Action frameworks’ are predictive models that intend to organise existing knowledge and enable a logical approach for the selection, implementation and evaluation of intervention strategies, thereby facilitating the expansion of that knowledge [20]. They can guide change by informing and clarifying practical steps to follow. As flexible entities, they can be adapted to accommodate new purposes. Framework modification may include the addition of constructs or changes in language to expand applicability to a broader range of settings [21].

We sought to identify one organising framework for evaluation activities in the Capabilities in Academic-Policy Engagement (CAPE) programme (2021–2023), funded by Research England. The CAPE programme aimed to understand how best to support effective and sustained engagement between academics and policy professionals across the higher education sector in England [22]. We first searched the literature and identified an action framework originally developed between 2011 and 2013 to underpin a trial known as SPIRIT (Supporting Policy In Health with Research: an Intervention Trial) [20, 23]. This trial evaluated strategies intended to increase the use of research in health policy and to identify modifiable points for intervention.

We selected the SPIRIT framework due to its potential suitability as an initial ‘road map’ for our evaluation of academic-policy interventions in the CAPE programme. The key elements of the original framework are catalysts, organisational capacity, engagement actions, and research use. We wished to build on the framework’s embedded conceptual work, derived from literature reviews and semi-structured interviews, to identify policymakers’ views on factors that assist policy agencies’ use of research [20]. The SPIRIT framework developers defined its “locus for change” as the policy organisation ([20], p. 151). They proposed that it could offer the beginning of a process to identify and test pathways in policy agencies’ use of evidence.

Our goal was to modify SPIRIT to accommodate a different locus for change: the engagement interface between academia and policy. Instead of imagining a linear process in which knowledge comes from researchers and is transmitted to policy professionals, we intended to extend the framework to multidirectional relational and system interfaces. We wished to include processes and influences at individual, organisational and system levels, to be relevant for HEIs and their staff, policy bodies and professionals, funders of engagement activities, and facilitatory bodies. Ultimately, we seek to address a gap in understanding how engagement strategies work, for whom, how they are facilitated, and to improve the evaluation of academic-policy engagement.

We aimed to produce a conceptually guided action framework to enable systematic evaluation of interventions intending to support academic-policy engagement.

We used a pragmatic combination of processes for framework modification during our evaluation activities in the CAPE programme [22]. The CAPE programme included a range of interventions: seed funding for collaboration between academics and policy professionals on policy-focused projects, fellowships for academic placements in policy settings or for policy professionals with HEI staff, training for policy professionals, and a range of knowledge exchange events for HEI staff and policy professionals. We modified the SPIRIT framework through the iterative processes shown in Table 1, including reviews of literature; consultations with HEI staff and policy professionals across a range of policy contexts and geographic settings in England, through the CAPE programme; and piloting, refining and seeking feedback from stakeholders in academic-policy engagement.

A number of characteristics of the original SPIRIT framework could be applied to academic-policy engagement. While keeping the core domains, we modified the framework to capture dynamics of engagement at multiple academic and policy levels (individuals, organisations and systems), extending beyond the original unidirectional focus on policy agencies’ use of research. Components of the original framework, the need for modifications, and their corresponding action-oriented implications are shown in Table 2. We added a new domain, ‘Impacts and Sustainability’, to consider transforming and enduring aspects at the engagement interface. The modified action framework is shown in Fig. 1.

Figure 1: SPIRIT Action Framework Modified for Academic-Policy Engagement Interventions (SPIRIT-ME), adapted with permission from the Sax Institute. The framework acknowledges that elements in each domain may influence other elements through mechanisms of action, and that these do not necessarily flow through the framework in a ‘pipeline’ sequence. Mechanisms of action are processes through which engagement strategies operate to achieve desired outcomes; they might rely on influencing factors, catalysts, an aspect of an intervention action, or a combination of elements.

Identifying relevant theories or models for missing elements

Catalysts and capacity

Within our evaluation of academic-policy interventions, we identified a need to develop the original domain of catalysts beyond ‘policy/programme need for research’ and ‘new research with potential policy relevance’. Redman et al. characterised a catalyst in the original framework as “a need for information to answer a particular problem in policy or program design, or to assist in supporting a case for funding” (p. 149). We expanded this “need for information” to a perceived need for engagement, by either HEI staff or policy professionals, linked to the potential value they perceived in engaging. Specifically, there was a need to consider catalysts at the level of individual engagement, for example HEI staff wanting research to have real-world impact, or policy professionals’ desire to improve decision-making in policy, where productive interactions between academic and policy stakeholders are “necessary interim steps in the process that lead to societal impact” ([24], p. 214). The catalyst domain expands the original emphasis on a need for research to take account of challenges to be overcome by both the academic and policy communities in knowing how, and with whom, to engage and collaborate [25].

We used a model proposing that there are three components for any behaviour: capability, opportunity and motivation, known as the COM-B model [26]. Informed by CAPE evaluation activities and our discussions with stakeholders, we mapped the opportunity and motivation constructs into the ‘catalysts’ domain of the original framework. Opportunity is an attribute of the system that can facilitate engagement. It may be a tangible factor, such as the availability of seed funding, or a perceived social opportunity, such as institutional support for engagement activities. Opportunity can act at the macro level of systems and organisational structures. Motivation acts at the micro level, deriving from an individual’s mental processes that stimulate and direct their behaviours; in this case, taking part in academic-policy engagement actions. The COM-B model distinguishes between reflective motivation, through conscious planning, and automatic motivation, which may be instinctive or affective [26].

We presented an early application of the COM-B model to catalysts for engagement at an academic conference, enabling an informal exploration of attendees’ subjective views on its clarity and appropriateness while the framework was being developed. This application introduces possibilities for intervention development and support by highlighting ‘opportunities’ and ‘motivations’ as key catalysts in the modified framework.

Within the ‘capacity’ domain, we retained the original levels of individuals, organisations and systems. We introduced individual capability as a construct from the COM-B model, describing knowledge, skills and abilities to generate behaviour change as a precursor of academic-policy engagement. This reframing extends the applicability to HEI staff as well as policy professionals. It brings attention to different starting conditions for individuals, such as capabilities developed through previous experience, which can link with social opportunity (for example, through training or support) as a catalyst.
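To illustrate how this mapping might be operationalised during an evaluation, the sketch below tags observed factors with a COM-B construct, the level at which they act, and the framework domain they fall into. It is a hypothetical illustration in Python; the Factor class and the example entries are our own illustrative assumptions, not part of SPIRIT-ME itself.

```python
# Minimal sketch: tag evaluation observations with COM-B constructs as
# mapped into the modified framework's 'catalysts' and 'capacity' domains.
# All example entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Factor:
    description: str
    construct: str  # "capability" | "opportunity" | "motivation"
    level: str      # "individual" | "organisation" | "system"
    domain: str     # "catalysts" | "capacity"

observations = [
    Factor("Seed funding call open to HEI-policy pairs",
           "opportunity", "system", "catalysts"),
    Factor("Researcher wants work to have real-world impact",
           "motivation", "individual", "catalysts"),
    Factor("Prior placement experience in a policy team",
           "capability", "individual", "capacity"),
]

# Group tagged factors by domain to see where an intervention might act
for domain in ("catalysts", "capacity"):
    print(domain.upper())
    for f in (x for x in observations if x.domain == domain):
        print(f"  [{f.construct} / {f.level}] {f.description}")
```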

Engagement actions

We identified a need to modify the original domain of ‘engagement actions’ to extend the focus beyond the use of research. We added three categories of engagement actions described by Best and Holmes [27]: linear, relational, and systems. These categories were further specified through a systematic mapping of international organisations’ academic-policy engagement activities [5]. This framework modification expands the domain to encompass: (i) the linear ‘push’ of evidence from academia or ‘pull’ of evidence into policy agencies; (ii) relational approaches focused on academic-policy-maker collaboration; and (iii) systems strategies to facilitate engagement, for example through strategic leadership, rewards or incentives [5].

We retained the elements in the original framework’s ‘outcomes’ domain (instrumental, tactical, conceptual and imposed), which we found could apply to outcomes of engagement as well as research use. For example, discussions between a policy professional and a range of academics could lead to a conceptual outcome by considering an issue through different disciplinary lenses. We expanded these elements by drawing on the literature on engagement outcomes [28] and through sense-checking with stakeholders in CAPE. We added capacity-building (changes to skills and expertise), connectivity (changes to the number and quality of relationships), and changes in organisational culture or attitudes towards engagement.

Impacts and sustainability

The original framework contained endpoints described as ‘Better health system and health outcomes’ and ‘Research-informed health policy and policy documents’. For modification beyond health contexts, and to encompass broader intentions of academic-policy engagement, we replaced these elements with a new domain of ‘Impacts and sustainability’. This domain captures the continued activities required in the achievement of desirable outcomes [29]. The modification allows consideration of sustainability in relation to previous stages of engagement interventions, through the identification of beneficial effects that are sustained (or not), in which ways, and for whom. Following Borst [30], we propose a shift away from the expectation that ‘sustainability’ will be a fixed endpoint. Instead, we emphasise the maintenance work needed over time to sustain productive engagement.

Influences and facilitators

We modified the overarching ‘Policy influences’ (such as public opinion and media) in the original framework, to align with factors influencing academic-policy engagement beyond policy agencies’ use of research. We included influences at the level of the individual (for example, individual moral discretion [31]), the organisation (for example, managerial practices [31]) and the system (for example, career incentives [32]). Each of these processes takes place in the broader context of social, policy and financial environments (that is, potential sources of funding for engagement actions) [29].

We modified the domain ‘Reservoir of relevant and reliable research’ underpinning the original framework, replacing it with ‘Reservoir of people skills’, to emphasise intangible facilitatory work at the engagement interface, in place of concrete research outputs. We used the ‘Promoting Action on Research Implementation in Health Services’ (PARiHS) framework [33, 34], which gives explicit consideration to facilitation mechanisms for researchers and policy-makers [13]. Here, facilitation expertise includes mechanisms that focus on particular goals (task-oriented facilitation) or enable changes in ways of working (holistic-oriented facilitation). Task-oriented facilitation skills might include, for example, the provision of contacts, practical help or project management skills, while holistic-oriented facilitation involves building and sustaining partnerships or supporting skills development across a range of capabilities. These conceptualisations aligned with our consultations with facilitators of engagement in CAPE. We further extended these to include aspects identified in our evaluation activities: strategic planning, contextual awareness and entrepreneurial orientation.

Piloting and refining the modified framework through stakeholder engagement

We piloted an early version of the modified framework to develop a survey for all CAPE programme participants. During this pilot stage, we sought feedback from the CAPE delivery team members across HEI and policy contexts in England. CAPE delivery team members are based at five collaborating universities with partners in the Parliamentary Office for Science and Technology (POST) and Government Office for Science (GO-Science), and Nesta (a British foundation that supports innovation). The HEI members include academics and professional services knowledge mobilisation staff, responsible for leading and coordinating CAPE activities. The delivery team comprised approximately 15–20 individuals (with some fluctuations according to individual availabilities).

We assessed appropriateness and utility, refined terminology, added domain elements and explored nuances. For example, stakeholders considered the multi-layered possibilities within the domain ‘capacity’, where some HEI or policy departments may demonstrate a belief that it is important to use research in policy, but this might not be the perception of the organisation as a whole. We also sought stakeholders’ views on the utility of the new domains, for example, the identification of facilitator expertise such as acting as a knowledge broker or intermediary; providing training, advice or guidance; facilitating engagement opportunities; creating engagement programmes; and sustainability of engagement that could be conceptualised at multiple levels: personally, in processes or through systems.

Testing against the criteria for a useful action framework

The modified framework fulfils the properties of a useful action framework [20]:

It has a clearly articulated purpose: development and evaluation of academic-policy engagement interventions through linear, relational and/or system approaches. It has identified loci for change, at the level of the individual, the organisation or system.

It has been informed by existing understandings, including conceptual work of the original SPIRIT framework, conceptual models identified from the literature, published empirical findings, understandings from consultation with stakeholders, and evaluation activities in CAPE.

It can be applied to the development, implementation and evaluation of targeted academic-policy engagement actions, the selection of points for intervention and identification of potential outcomes, including the work of sustaining them and unanticipated consequences.

It provides a structure to build knowledge by guiding the generation of hypotheses about mechanisms of action in academic-policy engagement interventions, or by adapting the framework further through application in practice.

The proliferation of frameworks to articulate processes of research translation reveals a need for their adaptation when applied in specific contexts. The majority of models in implementation science relate to the translation of research into practice. By contrast, our focus was on engagement between academia and policy. There is a growing number of academic-policy engagement interventions but a lack of published evaluations [10].

Our framework modification provides an exemplar for others who are considering how to adapt existing conceptual frameworks to address new or expanded purposes. Field et al. identified the multiple, idiosyncratic ways that the Knowledge to Action Framework has been applied in practice, demonstrating its ‘informal’ adaptability to different healthcare settings and topics [35]. Others have reported on specific processes for framework refinement or extension. Wiltsey Stirman et al. adapted a framework that characterised forms of intervention modification, using a “pragmatic, multifaceted approach” ([36], p. 2). The authors later used the modified version as a foundation to build a further framework to encompass implementation strategies in a range of settings [21]. Ouimet et al. took the approach of borrowing from a different disciplinary field for framework adaptation, using a model of absorptive capacity from management science to develop a conceptual framework for civil servants’ absorption of research knowledge [37].

We also took the approach of “adapting the tools we think with” ([38], p. 305) during our evaluation activities on the CAPE programme. Our conceptual modifications align with the literature on motivation and entrepreneurial orientation in determining policy-makers’ and researchers’ intentions to carry out engagement in addition to their ‘usual’ roles [39, 40]. Our framework offers an enabler for academic-policy engagement endeavours by providing a structure for approaches beyond the linear transfer of information, emphasising the role of multidirectional relational activities and the importance of their facilitation and maintenance. The framework emphasises the relationship between individuals’ and groups’ actions and the social contexts in which these are embedded. It offers additional value by capturing the organisational and system-level factors that influence evidence-informed policymaking, incorporating the dynamic features of contexts shaping engagement and research use.

Conclusions

Our modifications extend the original SPIRIT framework’s focus on policy agencies’ use of research, to encompass dynamic academic-policy engagement at the levels of individuals, organisations and systems. Informed by the knowledge and experiences of policy professionals, HEI staff and knowledge mobilisers, it is designed to be meaningful and accessible for people working across varied contexts and functions in the evidence-policy ecosystem. It has potential applications in how academic-policy engagement interventions might be developed, evaluated, facilitated and improved, and it fulfils Redman et al.’s criteria as a useful action framework [ 20 ].

We are testing the ‘SPIRIT-Modified for Engagement’ framework (SPIRIT-ME) through our ongoing evaluation of academic-policy engagement activities. Further empirical research is needed to explore how the framework may capture ‘additionality’, that is, to identify what is achieved through engagement actions in addition to what would have happened anyway, including long-term changes in strategic behaviours or capabilities [ 41 , 42 , 43 ]. Application of the modified framework in practice will highlight its strengths and limitations, to inform further iterative development and adaptation.

Availability of data and materials

Not applicable.

References

1. Stewart R, Dayal H, Langer L, van Rooyen C. Transforming evidence for policy: do we have the evidence generation house in order? Humanit Soc Sci Commun. 2022;9(1):1–5.

2. Sanderson I. Complexity, ‘practical rationality’ and evidence-based policy making. Policy Polit. 2006;34(1):115–32.

3. Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gülmezoglu M, et al. Using Qualitative Evidence in Decision Making for Health and Social Interventions: An Approach to Assess Confidence in Findings from Qualitative Evidence Syntheses (GRADE-CERQual). PLOS Med. 2015;12(10):e1001895.

4. Bonell C, Meiksin R, Mays N, Petticrew M, McKee M. Defending evidence informed policy making from ideological attack. BMJ. 2018;362:k3827.

5. Hopkins A, Oliver K, Boaz A, Guillot-Wright S, Cairney P. Are research-policy engagement activities informed by policy theory and evidence? 7 challenges to the UK impact agenda. Policy Des Pract. 2021;4(3):341–56.

6. Head BW. Toward More “Evidence-Informed” Policy Making? Public Adm Rev. 2016;76(3):472–84.

7. Walker LA, Lawrence NS, Chambers CD, Wood M, Barnett J, Durrant H, et al. Supporting evidence-informed policy and scrutiny: A consultation of UK research professionals. PLoS ONE. 2019;14(3):e0214136.

8. Graham ID, Tetroe J, KT Theories Research Group. Planned action theories. In: Knowledge Translation in Health Care. John Wiley and Sons, Ltd; 2013. p. 277–87. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118413555.ch26 (cited 2023 Nov 1).

9. Davies HT, Powell AE, Nutley SM. Mobilising knowledge to improve UK health care: learning from other countries and other sectors – a multimethod mapping study. Southampton (UK): NIHR Journals Library; 2015. (Health Services and Delivery Research). Available from: http://www.ncbi.nlm.nih.gov/books/NBK299400/ (cited 2023 Nov 1).

10. Oliver K, Hopkins A, Boaz A, Guillot-Wright S, Cairney P. What works to promote research-policy engagement? Evid Policy. 2022;18(4):691–713.

11. Nelson JP, Lindsay S, Bozeman B. The last 20 years of empirical research on government utilization of academic social science research: a state-of-the-art literature review. Adm Soc. 2023;28:00953997231172923.

12. Bell D. Technology, nature and society: the vicissitudes of three world views and the confusion of realms. Am Sch. 1973;42:385–404.

13. Milat AJ, Li B. Narrative review of frameworks for translating research evidence into policy and practice. Public Health Res Pract. 2017. Available from: https://apo.org.au/sites/default/files/resource-files/2017-02/apo-nid74420.pdf (cited 2023 Nov 1).

14. Meyers DC, Durlak JA, Wandersman A. The quality implementation framework: a synthesis of critical steps in the implementation process. Am J Community Psychol. 2012;50(3–4):462–80.

15. Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7(1):37.

16. Glasgow RE, Battaglia C, McCreight M, Ayele RA, Rabin BA. Making implementation science more rapid: use of the RE-AIM framework for mid-course adaptations across five health services research projects in the veterans health administration. Front Public Health. 2020;8. Available from: https://www.frontiersin.org/articles/10.3389/fpubh.2020.00194 (cited 2023 Jun 13).

17. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4406164/ (cited 2020 May 4).

18. Sheth A, Sinfield JV. An analytical framework to compare innovation strategies and identify simple rules. Technovation. 2022;115:102534.

19. Birken SA, Powell BJ, Shea CM, Haines ER, Alexis Kirk M, Leeman J, et al. Criteria for selecting implementation science theories and frameworks: results from an international survey. Implement Sci. 2017;12(1):124.

20. Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, et al. The SPIRIT Action Framework: A structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med. 2015;136:147–55.

21. Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16(1):36.

22. CAPE: Capabilities in Academic Policy Engagement. 2021. Available from: https://www.cape.ac.uk/ (cited 2021 Aug 3).

23. CIPHER Investigators. Supporting policy in health with research: an intervention trial (SPIRIT)—protocol for a stepped wedge trial. BMJ Open. 2014;4(7):e005293.

24. Spaapen J, Van Drooge L. Introducing ‘productive interactions’ in social impact assessment. Res Eval. 2011;20(3):211–8.

25. Williams C, Pettman T, Goodwin-Smith I, Tefera YM, Hanifie S, Baldock K. Experiences of research-policy engagement in policymaking processes. Public Health Res Pract. 2023. Online early publication. https://doi.org/10.17061/phrp33232308.

26. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):42.

27. Best A, Holmes B. Systems thinking, knowledge and action: towards better models and methods. Evid Policy. 2010;6(2):145–59.

28. Edwards DM, Meagher LR. A framework to evaluate the impacts of research on policy and practice: A forestry pilot study. For Policy Econ. 2020;114:101975.

29. Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101(11):2059–67.

30. Borst RAJ, Wehrens R, Bal R, Kok MO. From sustainability to sustaining work: What do actors do to sustain knowledge translation platforms? Soc Sci Med. 2022;296:114735.

31. Zacka B. When the state meets the street: public service and moral agency. Harvard University Press; 2017. Available from: https://books.google.co.uk/books?hl=en&lr=&id=3KdFDwAAQBAJ&oi=fnd&pg=PP1&dq=zacka+when+the+street&ots=x93YEHPKhl&sig=9yXKlQiFZ0XblHrbYKzvAMwNWT4 (cited 2023 Nov 28).

32. Torrance H. The research excellence framework in the United Kingdom: processes, consequences, and incentives to engage. Qual Inq. 2020;26(7):771–9.

33. Rycroft-Malone J. The PARIHS framework—a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19(4):297–304.

34. Stetler CB, Damschroder LJ, Helfrich CD, Hagedorn HJ. A guide for applying a revised version of the PARIHS framework for implementation. Implement Sci. 2011;6(1):99.

35. Field B, Booth A, Ilott I, Gerrish K. Using the knowledge to action framework in practice: a citation analysis and systematic review. Implement Sci. 2014;9(1):172.

36. Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.

37. Ouimet M, Landry R, Ziam S, Bédard PO. The absorption of research knowledge by public civil servants. Evid Policy. 2009;5(4):331–50.

38. Martin D, Spink MJ, Pereira PPG. Multiple bodies, political ontologies and the logic of care: an interview with Annemarie Mol. Interface - Comun Saúde Educ. 2018;22:295–305.

39. Sajadi HS, Majdzadeh R, Ehsani-Chimeh E, Yazdizadeh B, Nikooee S, Pourabbasi A, et al. Policy options to increase motivation for improving evidence-informed health policy-making in Iran. Health Res Policy Syst. 2021;19(1):91.

40. Athreye S, Sengupta A, Odetunde OJ. Academic entrepreneurial engagement with weak institutional support: roles of motivation, intention and perceptions. Stud High Educ. 2023;48(5):683–94.

41. Bamford D, Reid I, Forrester P, Dehe B, Bamford J, Papalexi M. An empirical investigation into UK university–industry collaboration: the development of an impact framework. J Technol Transf. 2023. https://doi.org/10.1007/s10961-023-10043-9 (cited 2023 Dec 20).

42. McPherson AH, McDonald SM. Measuring the outcomes and impacts of innovation interventions: assessing the role of additionality. Int J Technol Policy Manag. 2010;10(1–2):137–56.

43. Hind J. Additionality: a useful way to construct the counterfactual qualitatively? Eval J Australas. 2010;10(1):28–35.


Acknowledgements

We are very grateful to the CAPE Programme Delivery Group members, for many discussions throughout this work. Our thanks also go to the Sax Institute, Australia (where the original SPIRIT framework was developed), for reviewing and providing helpful feedback on the article. We also thank our reviewers who made very constructive suggestions, which have strengthened and clarified our article.

Funding

The evaluation of the CAPE programme, referred to in this report, was funded by Research England. The funding body had no role in the design of the study, analysis, interpretation or writing of the manuscript.

Author information

Authors and Affiliations

Department of Health Services Research and Policy, Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, Kings Cross, London, WC1H 9SH, UK

Petra Mäkelä & Kathryn Oliver

Health and Social Care Workforce Research Unit, The Policy Institute, Virginia Woolf Building, Kings College London, 22 Kingsway, London, WC2B 6LE, UK

Annette Boaz


Contributions

PM conceptualised the modification of the framework reported in this work. All authors made substantial contributions to the design of the work. PM drafted the initial manuscript. AB and KO contributed to revisions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Petra Mäkelä.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was granted for the overarching CAPE evaluation by the London School of Hygiene and Tropical Medicine Research Ethics Committee (reference 26347).

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Mäkelä, P., Boaz, A. & Oliver, K. A modified action framework to develop and evaluate academic-policy engagement interventions. Implementation Sci 19, 31 (2024). https://doi.org/10.1186/s13012-024-01359-7


Received: 09 January 2024

Accepted: 20 March 2024

Published: 12 April 2024

DOI: https://doi.org/10.1186/s13012-024-01359-7


Keywords

  • Evidence-informed policy
  • Academic-policy engagement
  • Framework modification


Open access | Published: 22 April 2024

The design and evaluation of gamified online role-play as a telehealth training strategy in dental education: an explanatory sequential mixed-methods study

  • Chayanid Teerawongpairoj 1 ,
  • Chanita Tantipoj 1 &
  • Kawin Sipiyaruk 2  

Scientific Reports volume 14, Article number: 9216 (2024)


Subjects: Health care, Health services, Public health

To evaluate user perceptions and the educational impact of gamified online role-play in teledentistry, and to construct a conceptual framework for designing this interactive learning strategy, this research employed an explanatory sequential mixed-methods design. Participants were requested to complete self-perceived assessments of confidence and awareness in teledentistry before and after participating in a gamified online role-play. They were also asked to complete a satisfaction questionnaire and participate in an in-depth interview about their learning experience. The data were analyzed using descriptive statistics, paired sample t-tests, one-way analysis of variance, and framework analysis. Eighteen participants completed the self-perceived assessments and satisfaction questionnaire, of whom 12 participated in a semi-structured interview. There were statistically significant increases in self-perceived confidence and awareness after participating in the gamified online role-play (P < 0.001). In addition, participants were likely to be satisfied with this learning strategy: usefulness was perceived as the most positive aspect with a score of 4.44 out of 5, followed by ease of use (4.40) and enjoyment (4.03). The conceptual framework constructed from the qualitative findings revealed five key elements in designing a gamified online role-play: learner profile, learning settings, pedagogical components, interactive functions, and educational impact. The gamified online role-play demonstrated its potential for improving self-perceived confidence and awareness in teledentistry, and the conceptual framework developed in this research could be used to design and implement a gamified online role-play in dental education. This research provides valuable evidence on the educational impact of gamified online role-play in teledentistry and how it could be designed and implemented, which should support dental instructors and educators considering implementing teledentistry training in their practice.


Introduction

Telehealth has gained significant attention from various organizations due to its potential to improve healthcare quality and accessibility 1 . It can support several aspects of healthcare, including medical and nursing services, by enhancing continuous monitoring and follow-up 2 . Its adoption increased substantially during the COVID-19 pandemic, aiming to provide convenient healthcare services 3 . Even though the COVID-19 outbreak has passed, many patients still perceive telehealth as an effective tool for reducing the number of visits and enhancing access to healthcare services 4 , 5 . This supports the use of telehealth in the post-COVID-19 era.

Teledentistry, a form of telehealth specific to dentistry, has been employed to improve access to dental services 6 . This system offers benefits including online history taking, oral diagnosis, treatment monitoring, and interdisciplinary communication among dental professionals, enabling comprehensive and holistic treatment planning for patients 7 . Teledentistry can also reduce the travel time and costs associated with dental appointments 8 , 9 , 10 . There is evidence that teledentistry serves as a valuable tool for enhancing access to dental care 11 . Additionally, in the context of long-term patient management, telehealth has contributed to patient-centered care by enhancing patients’ surrounding environments 12 . Therefore, teledentistry should be emphasized as one form of digital dentistry that can enhance treatment quality.

Despite the benefits of teledentistry, available evidence demonstrates challenges and concerns in the implementation of telehealth. A lack of awareness and knowledge in the use of telehealth can hinder its adoption 13 . Legal issues and privacy concerns also emerge as significant challenges in telehealth use 14 . Moreover, online communication skills and technology literacy, including competency in using technological tools and applications, have frequently been reported as challenges in teledentistry 15 , 16 . Concerns regarding the limitations stemming from the lack of physical examination are also significant 17 . These challenges and complexities may affect the accuracy of diagnosis and the security and confidentiality of patient information. Therefore, telehealth training for dental professionals emerges as an essential prerequisite for effectively navigating the use of teledentistry, fostering confidence and competence in remote oral healthcare delivery.

The feasibility and practicality of telehealth in dental education present ongoing challenges and concerns. Given the limitations of teledentistry compared to face-to-face appointments, areas of training should encompass the telehealth system, online communication, technical issues, confidentiality concerns, and legal compliance 18 . However, there is currently no educational strategy that effectively demonstrates the importance and application of teledentistry 19 . Role-play can be considered a teaching strategy in which learners play a role that closely resembles real-life scenarios. Well-organized storytelling allows learners to manage problematic situations, leading to the development of problem-solving skills 20 , 21 . Compared to traditional lecture-based learning, learners can also enhance their communication skills through conversations with simulated patients 22 , 23 . In addition, they can express their thoughts and emotions during a role-play through experiential learning 20 , 24 , 25 . Role-play through video teleconference can therefore serve as a distance learning tool for training dental professionals to use teledentistry effectively.

While there have been studies supporting online role-play as an effective learning tool owing to its flexibility, engagement, and anonymity 26 , 27 , no evidence has yet been reported on whether this learning strategy has potential for teledentistry training. Given the complicated issues in telehealth, role-play for teledentistry training should incorporate different learning aspects compared to face-to-face communication with patients. In addition, game components have proven supportive in dental education 28 , 29 . Consequently, this research aimed to evaluate user perceptions and the educational impact of gamified online role-play in enhancing learner competence and awareness in using teledentistry, as well as to construct a conceptual framework highlighting how to design and implement this interactive learning strategy. This research would introduce and promote the design and implementation of gamified online role-play as a learning tool for teledentistry training. To achieve the aim, specific objectives were established as follows:

1. To design a gamified online role-play for teledentistry training.

2. To investigate learner perceptions regarding their confidence and awareness in the use of teledentistry after completing the gamified online role-play.

3. To explore user satisfaction with the gamified online role-play.

4. To develop a conceptual framework for designing and implementing a gamified online role-play for teledentistry training.

Materials and methods

Research design

This research employed an explanatory sequential mixed-methods design, in which a quantitative phase was performed first, followed by a qualitative phase 30 , 31 . The quantitative phase was conducted as pre-experimental research using a one-group pretest–posttest design. Participants were requested to complete self-perceived assessments of confidence and awareness in the use of teledentistry before and after participating in a gamified online role-play. They were also asked to complete a satisfaction questionnaire on using the gamified online role-play for teledentistry training. The qualitative phase was subsequently conducted to explore in-depth information through semi-structured interviews, in order to enhance understanding of the quantitative findings and to develop a conceptual framework for designing and implementing an online role-play for teledentistry training.

A gamified online role-play for training teledentistry

A gamified online role-play was designed and developed by the author team. To ensure a meaningful educational impact, the expected learning outcomes were formulated based on insights gathered from a survey of experienced instructors from the Department of Advanced General Dentistry, Faculty of Dentistry, Mahidol University. These learning outcomes covered online communication skills, technical issues, patients’ technology literacy, limitations of physical examination, and privacy concerns regarding personal information. The learning scenario and instructional content were subsequently designed to support learners in achieving the expected learning outcomes, with their alignment validated by three experts in dental education. A professional actress underwent training to role-play a patient with a dental problem requesting a virtual consultation via teledentistry. Before data collection, the simulated patient was required to undergo a training and adjustment process with a pilot group under the supervision of two experts in advanced general dentistry and dental education who had experience with teledentistry, to ensure the realism and completeness of the learning content.

According to the role-play scenario, the actress was assigned to portray a 34-year-old female with chief complaints of pain around both ears, accompanied by difficulties in chewing food due to tooth loss. She was instructed to express anxiety and nervousness about addressing these issues. Additionally, it was specified that she could not take a day off from work during this period. Despite this constraint, she required a dental consultation to receive advice for initial self-care, as her symptoms significantly impacted her daily life. Furthermore, she was scripted to encounter difficulties with the technological use of the teledentistry platform.

The game components were implemented into the online role-play to enhance motivation and engagement. As challenge and randomness appear to be game elements 32 , 33 , five challenge cards were designed and embedded into the online role-play, where each participant was asked to randomly select one of them before interacting with the simulated patient. The challenging situations were potential technical concerns that can occur frequently during video conferencing, including network problems (e.g., internet disconnection and poor connection) and audiovisual quality issues. The participants were blinded to the selected card, which was revealed only to the simulated patient. The challenging conditions were mimicked by the organizers and the simulated patient, allowing learners to deal with difficulties. Both challenge and randomness were therefore implemented into this learning intervention, not only to create learning situations but also to enhance engagement, as sketched below.
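To make the blinded draw concrete, the following minimal Python sketch simulates the card mechanic under stated assumptions: the card labels are hypothetical paraphrases of the technical issues described above, as the full card wording is not published.

```python
import random

# Hypothetical card labels, paraphrasing the technical issues described
# in the paper; the actual card wording was not published.
CHALLENGE_CARDS = [
    "internet disconnection mid-consultation",
    "persistently poor network connection",
    "degraded video quality on the patient's camera",
    "distorted or intermittent audio",
    "patient unable to share images via the platform",
]

def draw_challenge(rng: random.Random) -> str:
    """Randomly select one challenge card for a session.

    The drawn card is revealed only to the simulated patient and the
    organizers; the learner remains blinded until the scenario unfolds.
    """
    return rng.choice(CHALLENGE_CARDS)

if __name__ == "__main__":
    card = draw_challenge(random.Random())
    print(f"Card for simulated patient (hidden from learner): {card}")
```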

A feedback system was carefully considered and implemented into the gamified online role-play. Immediate feedback appears to be a key feature of interactive learning environments 29 . Formative feedback was instantly delivered to learners through the verbal and non-verbal communication of the simulated patient, including words (content), tone of voice, facial expressions, and gestures. This type of feedback allowed participants to reflect on whether their inputs were appropriate, enabling them to learn from their mistakes, the so-called role of failure 34 . Summative feedback was also provided at the end of the role-play through a reflection from the simulated patient and suggestions from an instructor.

Learners interacted with the simulated patient in an online meeting room using Cisco Webex. According to the research setting (Fig. 1 ), a learner was asked to participate in the role-play activity using a laptop computer in a soundproof room, while the simulated patient was arranged in a prepared location showing her residential environment. The researcher and instructor also joined the online meeting room and observed the interaction between the simulated patient and the learner during the role-play activity, checking whether all necessary information was accurately obtained. The role-play activity took around 30 minutes.

Figure 1. A diagram demonstrating the setting of the gamified online role-play.

Research participants

Quantitative phase

The participants in this research were postgraduate students from the Residency Training Program in Advanced General Dentistry at Mahidol University Faculty of Dentistry in the academic year 2022, recruited through volunteer sampling. This program was selected because its objective was to develop graduates capable of integrating competencies from various dental disciplines to provide comprehensive dental care for both normal patients and those with special needs; teledentistry should therefore be a supportive component of their service. The recruitment procedure involved posting a recruiting message in the residents’ group chat. Those interested in participating were informed to contact us directly to request more information, and they could subsequently decide whether they would like to participate, ensuring that participation was voluntary. Although this non-probability sampling technique carries a risk of non-response bias 35 , it was considered appropriate for this study: participants were willing to contribute to the learning activity, which supported accurate and reliable research findings with no dropout 36 .

Inclusion and exclusion criteria were established to determine the eligibility of prospective participants. This study included postgraduate students from Years 1 to 3 of the Residency Training Program in Advanced General Dentistry at Mahidol University Faculty of Dentistry, enrolled during the academic year 2022. They were also required to have completed at least the first semester to ensure familiarity with comprehensive dental care. However, they were excluded if they had previously been involved in the pilot testing of the gamified online role-play or if they were not fluent in the Thai language. The sample size was determined using a formula for comparing means between two dependent samples 37 . To detect a difference in self-perceived confidence and awareness between pre- and post-assessments at a power of 90% and a statistical significance level of 1%, five participants were required. With an assumed dropout rate of 20%, the number of residents per year (Years 1–3) was set at six. Therefore, 18 residents were required for this research.
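As an illustration of this calculation, the minimal Python sketch below reproduces the reported minimum of five participants; note that the standardized effect size (d = 1.8) is an assumption chosen for illustration, as the paper does not report the expected effect size it used.

```python
from math import ceil
from scipy.stats import norm

def paired_sample_size(effect_size: float, power: float = 0.90,
                       alpha: float = 0.01, dropout: float = 0.20) -> tuple[int, int]:
    """Sample size for comparing means between two dependent samples.

    effect_size: expected mean pre/post difference divided by the SD of
    the paired differences (standardized effect, Cohen's d for paired
    data). Returns (minimum n, n inflated for the assumed dropout rate).
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance level
    z_beta = norm.ppf(power)
    n = ceil(((z_alpha + z_beta) / effect_size) ** 2)
    return n, ceil(n * (1 + dropout))

# d = 1.8 is an illustrative assumption: it yields the five participants
# reported, and the 20% dropout allowance gives six, matching the six
# residents recruited per year (18 in total).
print(paired_sample_size(1.8))  # -> (5, 6)
```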

Qualitative phase

The participants from the quantitative phase were selected for semi-structured interviews using purposive sampling. This sampling method involved selecting information-rich participants based on criteria relevant to the research objective, to ensure a diverse representation of perspectives and experiences within the sample group 38 . In this research, the information considered for purposive sampling included demographic data (e.g., sex and year of study) along with self-perceived assessment scores. By incorporating perceptions from a variety of participants, a broad spectrum of insights from different experiences in comprehensive dental practice and diverse levels of improvement in self-perceived confidence and awareness could inform the design and implementation of the training program effectively. The sample size for this phase was determined by data saturation, with interviews continuing until no new information or themes emerged. This ensured thorough exploration of the research topic and maximized the richness of the qualitative data obtained.

Outcome assessments

To evaluate the gamified online role-play, a triangulation approach was employed, enabling the researchers to compare outcomes across different assessment methods. In this research, self-perceived assessments (confidence and awareness) in teledentistry, satisfaction with the gamified online role-play, and learner experience were assessed to assure the quality and feasibility of the gamified online role-play.

Self-perceived confidence and awareness toward teledentistry

All participants were requested to rate their perceptions of teledentistry before and after participating in the gamified online role-play (Supplementary material 1 ). The self-perceived assessment was developed based on previous literature 39 , 40 , 41 , 42 . The assessment scores would indicate whether participants improved their self-perceived confidence and awareness through the learning activity. The assessment consisted of two parts: (1) self-perceived confidence and (2) self-perceived awareness. Each part contained six items, which were identical between the pre- and post-assessments. All items were designed using a 5-point Likert scale, with 1 being ‘strongly disagree’ and 5 being ‘strongly agree’.

Satisfaction with the gamified online role-play

All participants were asked to complete the satisfaction questionnaire after participating in the gamified online role-play, to investigate whether they felt satisfied with their learning (Supplementary material 2 ). The questionnaire was developed based on previous literature on gamification and role-play 41 , 42 , 43 , 44 . Most of the items were designed using a 5-point Likert scale, with 1 being ‘very dissatisfied’ and 5 being ‘very satisfied’. The items were grouped into three aspects: (1) perceived usefulness, (2) perceived ease of use, and (3) perceived enjoyment.

Learner experiences within the gamified online role-play

Semi-structured interviews were conducted with the purposively selected participants to gather in-depth information regarding their learning experiences within the gamified online role-play. This technique allowed the researchers to probe additional topics of interest raised in participants’ responses. A topic guide for the interviews was constructed based on the findings of previous literature 45 , 46 , 47 . Each interview was conducted in a private room by a researcher trained in conducting qualitative research, including interviews. The interview sessions took approximately 45–60 minutes, and all responses were recorded using a digital audio recorder with participants’ permission. The recorded audio was transcribed verbatim by a transcription service under a confidentiality agreement.

Validity and reliability of data collection tools

To enhance the quality of the self-perceived assessment and satisfaction questionnaire, both were piloted and revised to assure their validity and reliability. For content validity, three experts in advanced general dentistry were asked to evaluate the questionnaire, and problematic items were iteratively revised until they achieved an index of item-objective congruence (IOC) higher than 0.5. To assess test–retest reliability, the validated versions of both instruments were then piloted with residents from other programs, and the data were analyzed using the intraclass correlation coefficient (ICC), with values of 0.7 or greater for all items. The data from the first pilot completion of both data collection tools were analyzed using Cronbach’s alpha to ensure the internal consistency of all constructs. Problematic items were deleted to achieve a coefficient alpha of 0.7 or greater for all constructs, which was considered acceptable internal consistency.
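For readers unfamiliar with these psychometric checks, the short Python sketch below implements the two simplest ones, Cronbach's alpha and the IOC, from their standard formulas; the demonstration data are invented for illustration and do not come from the study.

```python
import numpy as np

def cronbach_alpha(item_scores) -> float:
    """Cronbach's alpha for one construct.

    item_scores: array-like of shape (n_respondents, n_items) holding
    Likert ratings. Values of 0.7 or greater were treated as acceptable
    internal consistency in this study.
    """
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def ioc(expert_ratings) -> float:
    """Index of item-objective congruence for a single item.

    Each expert rates the item -1 (not congruent), 0 (unsure), or
    +1 (congruent); items scoring above 0.5 were retained.
    """
    return sum(expert_ratings) / len(expert_ratings)

# Invented ratings: six respondents answering a six-item construct.
demo = np.array([[4, 5, 4, 4, 5, 4],
                 [3, 4, 4, 3, 4, 3],
                 [5, 5, 5, 4, 5, 5],
                 [2, 3, 3, 2, 3, 2],
                 [4, 4, 5, 4, 4, 4],
                 [3, 3, 4, 3, 4, 3]])
print(round(cronbach_alpha(demo), 2))  # consistent raters -> high alpha
print(round(ioc([1, 1, 0]), 2))        # 0.67 -> item retained
```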

Data analysis

The quantitative data retrieved from the self-perceived assessment and satisfaction questionnaire were analyzed with the Statistical Package for the Social Sciences (SPSS, version 29, IBM Corp.). Descriptive statistics were computed to present an overview of the data. The scores from the pre- and post-assessments were analyzed using a paired sample t-test to evaluate whether participants’ self-perceived confidence and awareness in teledentistry improved after participating in the gamified online role-play. One-way analysis of variance (ANOVA) was conducted to test for statistically significant differences in self-perceived assessment and satisfaction scores among the three academic years.
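The same two tests are available outside SPSS; as a minimal sketch, the snippet below runs them with SciPy on invented Likert-scale means (the study's individual-level data are not published).

```python
import numpy as np
from scipy import stats

# Invented per-participant mean scores on a 5-point Likert scale;
# the study's individual-level data are not published.
pre  = np.array([3.2, 3.5, 3.0, 3.8, 3.4, 3.6])
post = np.array([4.1, 4.3, 4.0, 4.5, 4.2, 4.4])

# Paired sample t-test: did self-perceived scores improve pre -> post?
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# One-way ANOVA: do pre-post score gains differ across years of study?
gains = post - pre
year1, year2, year3 = gains[:2], gains[2:4], gains[4:]
f_stat, p_anova = stats.f_oneway(year1, year2, year3)
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
```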

The qualitative data retrieved from the semi-structured interviews were analyzed using framework analysis, a procedure involving transcription, familiarization with the interview data, coding, developing an analytical framework, indexing, charting, and interpreting the qualitative findings 48 . In this research, the initial codes were pre-defined from previous literature and subsequently adjusted following the analysis of each transcript to develop an analytical framework (themes and subthemes), requiring several iterations until no additional codes emerged. The established categories and codes were then applied consistently across all transcripts (indexing). The data from each transcript were charted to develop a matrix, facilitating the management and summarization of the qualitative findings. This method enabled the researchers to compare and contrast differences within the data and to identify connections between categories, thereby exploring their relationships and informing data interpretation.

The framework analysis procedure necessitated a transparent process for data management and interpretation of emerging themes to ensure the robustness of the research 49 . The transparency of this analytic approach enabled two researchers (C.Te. and K.S.) to analyze the qualitative data independently, after which the emerging themes were discussed to reach consensus among the researchers. This technique can be considered a triangulation approach to assure the intercoder reliability and internal validity of this research. The transparent process also allowed an external expert in dental education to verify the accuracy of the analysis. All emerging themes and the decision on data saturation were based on discussion among all researchers until agreement was reached. NVivo (version 14, QSR International) was used to perform the qualitative data analysis. Subsequently, a conceptual framework was constructed to demonstrate the emerging themes and subthemes together with their relationships.
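As a minimal sketch of the charting step described above, the snippet below builds a participant-by-theme matrix with pandas; the theme labels follow the paper, but the participant identifiers and summaries are placeholders, not study data.

```python
import pandas as pd

# Placeholder indexed excerpts; in practice each row would be a coded
# interview passage summarized against the analytical framework.
indexed = [
    {"participant": "P04", "theme": "Interactive functions",
     "summary": "challenge cards kept the conversation engaging"},
    {"participant": "P06", "theme": "Learning settings",
     "summary": "30 min role-play plus 10-15 min discussion"},
    {"participant": "P11", "theme": "Educational impact",
     "summary": "felt more confident communicating online"},
]

# Charting: pivot the indexed data into a participant-by-theme matrix,
# joining multiple summaries per cell when they occur.
matrix = (pd.DataFrame(indexed)
            .groupby(["participant", "theme"])["summary"]
            .apply("; ".join)
            .unstack(fill_value=""))
print(matrix)
```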

Ethical consideration

Ethical approval for the study was granted by the Institutional Review Board of the Faculty of Dentistry and Faculty of Pharmacy, Mahidol University, on 29 September 2022 (approval number MU-DT/PY-IRB 2022/049.2909). All methods were performed in accordance with the relevant guidelines and regulations. Although the data were not anonymous in nature, as they contained identifiable information, they were coded prior to analysis to assure the confidentiality of participants.

Informed consent

Informed consent was obtained from all participants.

Results

There were 18 residents from Years 1 to 3 of the Residency Training Program in Advanced General Dentistry who participated in this research (six from each year); 14 were female and four were male. There was no participant dropout, as all of them completed all required tasks, including the pre- and post-perceived assessments, the gamified online role-play, and the satisfaction questionnaire. Following the purposive sampling, participants from the quantitative phase were selected for semi-structured interviews by considering sex, year of study, and self-perceived assessment scores. Twelve students (ten females and two males) participated in the semi-structured interviews; their characteristics are presented in Table 1 .

Internal consistency of all constructs

The data collected from the research participants, in addition to the pilot samples, were analyzed with Cronbach’s alpha to confirm internal consistency. The coefficient alphas of all constructs demonstrated high internal consistency, as shown in Table 2 .

Self-perceived assessments toward confidence and awareness of teledentistry

There were statistically significant increases in the assessment scores of self-perceived confidence and awareness after participating in the gamified online role-play ( P  < 0.001). According to Table 3 , there was an increase in self-perceived confidence from 3.38 (SD = 0.68) for the pre-assessment to 4.22 (SD = 0.59) for the post-assessment ( P  < 0.001). The findings of self-perceived awareness also showed score improvement from 4.16 (SD = 0.48) to 4.55 (SD = 0.38) after interacting with the simulated patient ( P  < 0.001).

According to Fig. 2 , participants demonstrated higher self-perceived scores for both confidence and awareness in all aspects after participating in the gamified online role-play for teledentistry training.

Figure 2. Self-perceived assessments toward confidence and awareness of teledentistry.

When comparing the self-perceived assessment scores for confidence and awareness in the use of teledentistry among the three years of study (Years 1–3), there were no statistically significant differences in pre-assessment scores, post-assessment scores, or score differences (Table 4 ).

Satisfaction with the use of the gamified online role-play

According to Fig. 3 , participants exhibited high levels of satisfaction with the use of the gamified online role-play across all three aspects. The aspect of usefulness received the highest satisfaction rating with a score of 4.44 (SD = 0.23) out of 5, followed by ease of use and enjoyment, scoring 4.40 (SD = 0.23) and 4.03 (SD = 0.21), respectively. In particular, participants expressed the highest satisfaction regarding the usefulness of the gamified online role-play for identifying their role (Mean = 4.72, SD = 0.46) and developing problem-solving skills associated with teledentistry (Mean = 4.61, SD = 0.50). They also reported satisfaction with the learning sequence presented in the gamified online role-play (Mean = 4.61, SD = 0.50). However, participants did not strongly perceive that the format of the gamified online role-play could engage them with the learning task for an extended period (Mean = 3.72, SD = 0.83).

Figure 3. Satisfaction with the use of the gamified online role-play.

When comparing the satisfaction levels perceived by participants from different academic years (Table 5 ), no statistically significant differences were observed among the three groups for all three aspects ( P  > 0.05).

Following the framework analysis of the qualitative data, five themes emerged: (1) learner profile, (2) learning settings of the gamified online role-play, (3) pedagogical components, (4) interactive functions, and (5) educational impact.

Theme 1: Learner profile

Learner experience and preferences appeared to influence how the participants perceived the use of gamified online role-play for teledentistry training. When learners preferred role-play or recognized the benefits of teledentistry, they were likely to support this learning intervention. In addition, they were able to form an overall picture of the assigned tasks before participating in this research.

“I had experience with a role-play activity when I was a dental undergraduate, and I like this kind of learning where someone role-plays a patient with specific personalities in various contexts. This could be a reason why I felt interested to participate in this task (the gamified online role-play). I also believed that it would be supportive for my clinical practice.” Participant 12, Year 1, Female

“Actually, I have seen several videos (about teledentistry), where dentists were teaching patients to perform self-examinations, such as checking their own mouth and taking pictures for consultations. Therefore, I could have thought about what I would experience during the activity (within the gamified online role-play).” Participant 8, Year 2, Female

Theme 2: Learning settings of the gamified online role-play

Subtheme 2.1: Location

Participants agreed that the location for conducting a gamified online role-play should be a private room without any disturbances, enabling learners to focus on the simulated patient. This would allow them to communicate effectively and understand the patient’s needs, leading to a better grasp of the lesson content. In addition, the environments of both the learners and the simulated patient should be authentic, to enhance the learning quality.

“The room should be a private space without any disturbances. This will make us feel confident and engage in conversations with the simulated patient.” Participant 10, Year 1, Female

“… simulating a realistic environment can engage me to interact with the simulated patient more effectively ...” Participant 8, Year 2, Female

Subtheme 2.2: Time allocated for the gamified online role-play

The time allocated for the gamified online role-play in this research was considered appropriate, as participants believed that a 30-minute period should be suitable for gathering information and then giving advice to the patient. In addition, a 10-minute discussion of how they interacted with the patient could support participants in enhancing their competencies in the use of teledentistry.

“… it would probably take about 20 minutes because we would need to gather a lot of information … it might need some time to request and gather various information … maybe another 10-15 minutes to provide some advice.” Participant 7, Year 1, Female

“I think during the class … we could allocate around 30 minutes for role-play, … we may have discussion of learner performance for 10-15 minutes ... I think it should not be longer than 45 minutes in total.” Participant 6, Year 2, Female

Subtheme 2.3: Learning sequence within a postgraduate curriculum

Most participants suggested that the gamified online role-play in teledentistry should be arranged in the first year of their postgraduate program. This could maximize the effectiveness of the online role-play, as they would be able to implement teledentistry in their clinical practice from the beginning of their training. However, some participants suggested that this learning approach could instead be arranged in either the second or third year of the program: as they would already have experience in clinical practice, the gamified online role-play would reinforce their competence in teledentistry.

"Actually, it would be great if this session could be scheduled in the first year … I would feel more comfortable when dealing with my patients through an online platform." Participant 11, Year 2, Male "I believe this approach should be implemented in the first year because it allows students to be trained in teledentistry before being exposed to real patients. However, if this approach is implemented in either the second or third year when they have already had experience in patient care, they would be able to better learn from conversations with simulated patients." Participant 4, Year 3, Male

Theme 3: Pedagogical components

Subtheme 3.1: Learning content

Learning content appeared to be an important pedagogical component, as it informed what participants should learn from the gamified online role-play. Based on the interview data, participants reported that they learned how to use a video teleconference platform for teledentistry. The conditions of the simulated patient embedded in the online role-play also allowed them to recognize the advantages of teledentistry. In addition, the dental problems assigned to the simulated patient revealed the limitations of teledentistry to participants.

“The learning tasks (within the gamified online role-play) let me know how to manage patients through the teleconference.” Participant 5, Year 2, Female

“… there seemed to be limitations (of teledentistry) … there could be a risk of misdiagnosis … the poor quality of video may lead to diagnostic errors … it is difficult for patients to capture their oral lesions.” Participant 3, Year 2, Female

Subtheme 3.2: Feedback

During the online role-play, the simulated patient could provide formative feedback to participants through facial expressions and tone of voice, enabling participants to observe and learn to adjust their inquiries more accurately. In addition, at the completion of the gamified online role-play, summative feedback provided by the instructor summarized participants’ performance, leading to further improvements in the implementation of teledentistry.

“I knew (whether or not I interacted correctly) from the gestures and emotions of the simulated patient between the conversation. I could have learnt from feedback provided during the role-play, especially from the facial expressions of the patient.” Participant 11, Year 2, Male

“The feedback provided at the end let me know how well I performed within the learning tasks.” Participant 2, Year 1, Female

Theme 4: Interactive functions

Subtheme 4.1: The authenticity of the simulated patient

Most participants believed that a simulated patient with high acting performance could enhance the flow of the role-play, allowing learners to experience realistic consequences. An appropriate level of authenticity could engage learners with the learning activity, as they would have less awareness of time passing in a state of flow. Therefore, they could learn better from the gamified online role-play.

"It was so realistic. ... This allowed me to talk with the simulated patient naturally ... At first, when we were talking, I was not sure how I should perform … but afterwards I no longer had any doubts and felt like I wanted to explain things to her even more." Participant 3, Year 2, Female "At first, I believed that if there was a factor that could influence learning, it would probably be a simulated patient. I was impressed by how this simulated patient could perform very well. It made the conversation flow smoothly and gradually." Participant 9, Year 3, Female

Subtheme 4.2: Entertaining features

Participants were likely to be satisfied with the entertaining features embedded in the gamified online role-play. They felt excited when exposed to the unrevealed challenge they had randomly selected. In addition, participants suggested having more learning scenarios or simulated patients to select from at random, to enhance randomness and excitement.

“It was a playful experience while communicating with the simulated patient. There are elements of surprise from the challenge cards that make the conversation more engaging, and I did not feel bored during the role-play.” Participant 4, Year 3, Male

“I like the challenge card we randomly selected, as we had no idea what we would encounter … more scenarios like eight choices and we can randomly choose to be more excited. I think we do not need additional challenge cards, as some of them have already been embedded in patient conditions.” Participant 5, Year 2, Female

Subtheme 4.3: Level of difficulty

Participants suggested that the gamified online role-play should offer various levels of difficulty, so that learners could select a level suitable for their competence. The difficulty could be represented through patient conditions (e.g., systemic diseases or socioeconomic status), personal health literacy, and emotional tendencies.

“The patient had hidden their information, and I needed to bring them out from the conversation.” Participant 12, Year 1, Female

“Patients' emotions could be more sensitive to increase level of challenges. This can provide us with more opportunities to enhance our management skills in handling patient emotions.” Participant 11, Year 2, Male

“… we can gradually increase the difficult level, similar to playing a game. These challenges could be related to the simulated patient, such as limited knowledge or difficulties in communication, which is likely to occur in our profession.” Participant 6, Year 2, Female

Theme 5: Educational impact

Subtheme 5.1: Self-perceived confidence in teledentistry

Communication skills

Participants were likely to perceive that they learned from the gamified online role-play and felt more confident in the use of teledentistry. This educational impact was mostly achieved through the online conversation within the role-play activity, where participants could improve their communication skills via a video teleconference platform.

“I feel like the online role-play was a unique form of learning. I believe that I gained confidence from the online communication with the simulated patient. I could develop skills to communicate effectively with real patients.” Participant 11, Year 2, Male

“I believe it supports us to train communication skills ... It allowed us to practice both listening and speaking skills more comprehensively.” Participant 4, Year 3, Male

Critical thinking and problem-solving skills

In addition to communication skills, participants reported that the challenges embedded in the role-play allowed them to enhance critical thinking and problem-solving skills, a set of skills required to deal with potential problems in the use of teledentistry.

"It was a way of training before experiencing real situations … It allowed us to think critically whether or not what we performed with the simulated patients was appropriate." Participant 7, Year 1, Female “It allowed us to learn how to effectively solve the arranged problems in simulated situation. We needed to solve problems in order to gather required information from the patient and think about how to deliver dental advice through teledentistry.” Participant 11, Year 2, Male

Subtheme 5.2: Self-perceived awareness in teledentistry

Participants believed that they came to recognize the necessity of teledentistry through the gamified online role-play. The storytelling and patient conditions allowed learners to understand how teledentistry could provide both physical and psychological support for dental patients.

“From the activity, I would consider teledentistry as a convenient tool for communicating with patients, especially if a patient cannot go to a dental office.” Participant 5, Year 2, Female

“I learned about the benefits of teledentistry, particularly in terms of follow-up. The video conference platform could support information sharing, such as drawing images or presenting treatment plans, to patients.” Participant 8, Year 2, Female

A conceptual framework of learning experience within a gamified online role-play

Based on the qualitative findings, a conceptual framework was developed in which the gamified online role-play is conceptualized as a learning strategy supporting learners in implementing teledentistry in their clinical practice (Fig. 4 ).

Figure 4. The conceptual framework of key elements in designing a gamified online role-play.

The conceptual framework reveals the key elements to be considered in designing a gamified online role-play. Learner profile, learning settings, pedagogical components, and interactive functions are influential factors in the user experience within the gamified online role-play. A well-designed learning activity will support learners in achieving the expected learning outcomes, which constitute the educational impact of the gamified online role-play. The contributions of these five key elements to the design of a gamified online role-play were interpreted as follows:

Learner profile: Tailoring the design of gamified online role-plays for teledentistry training involves considering the background knowledge, skills, and experiences of target learners to ensure relevance and engagement.

Learning settings: Planning gamified online role-plays in teledentistry training involves selecting appropriate contexts, such as location and timing, to enhance accessibility and achieve learning outcomes effectively.

Pedagogical components: This element emphasizes the alignment between learning components and learning outcomes within gamified online role-plays, ensuring that the content, together with effective feedback design, supports learners in improving their competencies by learning from their mistakes.

Interactive functions: This element highlights the interactivity features integrated into gamified online role-plays, such as authenticity and entertaining components to enhance immersion and engagement, together with game difficulty calibrated for optimal flow. These features should engage learners with the learning activities until the learning outcomes are achieved.

Educational impact: This element represents the expected learning outcomes, which will inform the design of learning content and activities within gamified online role-plays. In addition, this element could be considered to evaluate the efficacy of gamified online role-plays, reflecting how well learning designs align with the learning outcomes.

A gamified online role-play can be considered a learning strategy for teledentistry on the basis of its educational impact. This pedagogical approach can mimic real-life practice, allowing dental learners to gain experience in the use of teledentistry in simulated situations before interacting with actual patients. Role-play provides learners with opportunities to develop required competencies, especially communication and real-time decision-making skills, in a predictable and safe learning environment 20 , 23 , 46 . Potential obstacles can also be arranged for learners to deal with, leading to the enhancement of problem-solving skills 50 . In addition, the recognition of teledentistry's benefits can raise awareness and encourage its adoption and implementation, which can be explained by the technology acceptance model 51 . Therefore, a gamified online role-play with a robust design and implementation appears to have potential for enhancing self-perceived confidence and awareness in the use of teledentistry.

The pedagogical components comprised learning content complemented by assessment and feedback. Learners could develop their competence and engagement through the learning content, gamified through the storytelling of the online role-play 52, 53. Immediate feedback, conveyed through the facial expressions and voice tone of the simulated patients, allowed participants to learn from failure, a key feature of game-based learning 29, 45. The discussion of summative feedback provided by an instructor at the end of the role-play activity supported a debriefing process that enabled participants to reflect on their learning experience, an important aspect of simulation-based games 54. These considerations should inform the design of a gamified online role-play from the outset.

The interactive functions can be considered another key component for designing and evaluating a gamified online role-play 45. Several participants enjoyed the learning process within the gamified online role-play and suggested adding more learning scenarios. In other words, this tool could engage learners with the instructional process, leading to the achievement of learning outcomes 29, 45. As challenge and randomness are recognized game elements 32, 33, this learning intervention provided a set of cards with obstacle tasks for learners to pick up at random before interacting with simulated patients, which participants perceived as a feature that made the role-play more challenging and engaging. This is consistent with previous research, in which challenging content for simulated patients made learners more engaged with the learning process 55. However, task challenges and learner competencies must be balanced when designing learning activities 56, 57. The authenticity of the simulated patient and immediate feedback could also affect the game flow, enhancing learner engagement 45. Together, these elements engaged participants with the learning process, strengthening the educational impact.

The educational settings for implementing a gamified online role-play in the dental curriculum are another consideration, an aspect recognized as significant in the existing evidence 45. As this research found no significant differences in any aspect among the three groups of learners, the intervention demonstrated the potential to be implemented at any point in the postgraduate dental curriculum. This argument is supported by previous evidence that role-play is adaptable for learning at any time, as it requires a short learning period while providing learners with valuable experience prior to exposure to real-life scenarios 58. This strategy also provides opportunities for learners who have questions or concerns to seek advice or guidance from their instructors 59. Although the gamified online role-play can be arranged in the program at any time, the first academic year should be considered, so that dental learners become confident in implementing teledentistry early in their clinical practice.

While a gamified online role-play demonstrated its strengths as an interactive learning strategy for teledentistry, a couple of potential drawbacks need to be addressed. The requirement for synchronous participation could limit the flexibility of access time for learners (synchronous interactivity limitation). With only one learner able to engage with a simulated patient at a time (limited participants), more simulated patients would be required for larger groups of learners; otherwise, learners would need to wait for their turn. Considerable time and resources are required to prepare simulated patients 60. Despite the use of trained and calibrated professional actors and actresses, inauthenticity may be perceived during role-plays, and achieving both interactional and clinical authenticity requires significant effort 46. Future research could investigate asynchronous learning approaches in which a non-player character (NPC) controlled by an artificial intelligence system acts as the simulated patient. This setup would give multiple learners the flexibility to engage with the material at their own pace and at times convenient to them 29. While there are potential concerns about using gamified online role-plays, this interactive learning intervention offers opportunities for dental professionals to enhance their teledentistry competency in a safe and engaging environment.

Despite the robust design and data collection tools used to assure the reliability, validity, and transparency of this study, a few limitations remain that point to further research. As this research recruited only postgraduate students to evaluate the feasibility of gamified online role-play in teledentistry training, further research should include not only experienced dental practitioners but also undergraduate students to confirm its potential in participants with different learner profiles. Additional learning scenarios from other dental specialties should also be included to validate its effectiveness, as different specialties may present unique limitations and variations. A robustly designed randomized controlled trial is required to compare the effectiveness of gamified online role-play with that of other approaches to training in the use of teledentistry.

Conclusions

This research supports the design and implementation of gamified online role-play in dental education, as dental learners developed self-perceived confidence and awareness and reported satisfaction. A well-designed gamified online role-play is necessary to support learners in achieving the expected learning outcomes, and the conceptual framework developed in this research can serve as guidance for designing and implementing this interactive learning strategy in dental education. However, further robustly designed research is required to validate and ensure the educational impact of gamified online role-play in dental education. Additionally, efforts should be made to develop asynchronous gamified online role-plays to enhance the flexibility of learning activities.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request. The data are not publicly available because they contain information that could compromise the privacy of research participants.

References

1. Van Dyk, L. A review of telehealth service implementation frameworks. Int. J. Environ. Res. Public Health 11(2), 1279–1298 (2014).
2. Bartz, C. C. Nursing care in telemedicine and telehealth across the world. Soins 61(810), 57–59 (2016).
3. Lin, G. S. S., Koh, S. H., Ter, K. Z., Lim, C. W., Sultana, S. & Tan, W. W. Awareness, knowledge, attitude, and practice of teledentistry among dental practitioners during COVID-19: A systematic review and meta-analysis. Medicina (Kaunas) 58(1), 130 (2022).
4. Wolf, T. G., Schulze, R. K. W., Ramos-Gomez, F. & Campus, G. Effectiveness of telemedicine and teledentistry after the COVID-19 pandemic. Int. J. Environ. Res. Public Health 19(21), 13857 (2022).
5. Gajarawala, S. N. & Pelkowski, J. N. Telehealth benefits and barriers. J. Nurse Pract. 17(2), 218–221 (2021).
6. Jampani, N. D., Nutalapati, R., Dontula, B. S. & Boyapati, R. Applications of teledentistry: A literature review and update. J. Int. Soc. Prev. Community Dent. 1(2), 37–44 (2011).
7. Khan, S. A. & Omar, H. Teledentistry in practice: Literature review. Telemed. J. E. Health 19(7), 565–567 (2013).
8. Baheti, M. J. B. S., Toshniwal, N. G. & Misal, A. Teledentistry: A need of the era. Int. J. Dent. Med. Res. 1(2), 80–91 (2014).
9. Datta, N., Derenne, J., Sanders, M. & Lock, J. D. Telehealth transition in a comprehensive care unit for eating disorders: Challenges and long-term benefits. Int. J. Eat. Disord. 53(11), 1774–1779 (2020).
10. Bursell, S. E., Brazionis, L. & Jenkins, A. Telemedicine and ocular health in diabetes mellitus. Clin. Exp. Optom. 95(3), 311–327 (2012).
11. da Costa, C. B., Peralta, F. D. S. & Ferreira de Mello, A. L. S. How has teledentistry been applied in public dental health services? An integrative review. Telemed. J. E. Health 26(7), 945–954 (2020).
12. Heckemann, B., Wolf, A., Ali, L., Sonntag, S. M. & Ekman, I. Discovering untapped relationship potential with patients in telehealth: A qualitative interview study. BMJ Open 6(3), e009750 (2016).
13. Pérez-Noboa, B., Soledispa-Carrasco, A., Padilla, V. S. & Velasquez, W. Teleconsultation apps in the COVID-19 pandemic: The case of Guayaquil City, Ecuador. IEEE Eng. Manag. Rev. 49(1), 27–37 (2021).
14. Wamsley, C. E., Kramer, A., Kenkel, J. M. & Amirlak, B. Trends and challenges of telehealth in an academic institution: The unforeseen benefits of the COVID-19 global pandemic. Aesthetic Surg. J. 41(1), 109–118 (2020).
15. Jonasdottir, S. K., Thordardottir, I. & Jonsdottir, T. Health professionals’ perspective towards challenges and opportunities of telehealth service provision: A scoping review. Int. J. Med. Inform. 167, 104862 (2022).
16. Tan, S. H. X., Lee, C. K. J., Yong, C. W. & Ding, Y. Y. Scoping review: Facilitators and barriers in the adoption of teledentistry among older adults. Gerodontology 38(4), 351–365 (2021).
17. Minervini, G. et al. Teledentistry in the management of patients with dental and temporomandibular disorders. BioMed Res. Int. 2022, 7091153 (2022).
18. Edirippulige, S. & Armfield, N. Education and training to support the use of clinical telehealth: A review of the literature. J. Telemed. Telecare 23(2), 273–282 (2017).
19. Mariño, R. & Ghanim, A. Teledentistry: A systematic review of the literature. J. Telemed. Telecare 19(4), 179–183 (2013).
20. Armitage-Chan, E. & Whiting, M. Teaching professionalism: Using role-play simulations to generate professionalism learning outcomes. J. Vet. Med. Educ. 43(4), 359–363 (2016).
21. Spyropoulos, F., Trichakis, I. & Vozinaki, A.-E. A narrative-driven role-playing game for raising flood awareness. Sustainability 14(1), 554 (2022).
22. Jiang, W. K. et al. Role-play in endodontic teaching: A case study. Chin. J. Dent. Res. 23(4), 281–288 (2020).
23. Vizeshfar, F., Zare, M. & Keshtkaran, Z. Role-play versus lecture methods in community health volunteers. Nurse Educ. Today 79, 175–179 (2019).
24. Nestel, D. & Tierney, T. Role-play for medical students learning about communication: Guidelines for maximising benefits. BMC Med. Educ. 7, 3 (2007).
25. Gelis, A. et al. Peer role-play for training communication skills in medical students: A systematic review. Simul. Healthc. 15(2), 106–111 (2020).
26. Cornelius, S., Gordon, C. & Harris, M. Role engagement and anonymity in synchronous online role play. Int. Rev. Res. Open Distrib. Learn. 12(5), 57–73 (2011).
27. Bell, M. Online role-play: Anonymity, engagement and risk. Educ. Media Int. 38(4), 251–260 (2001).
28. Sipiyaruk, K., Gallagher, J. E., Hatzipanagos, S. & Reynolds, P. A. A rapid review of serious games: From healthcare education to dental education. Eur. J. Dent. Educ. 22(4), 243–257 (2018).
29. Sipiyaruk, K., Hatzipanagos, S., Reynolds, P. A. & Gallagher, J. E. Serious games and the COVID-19 pandemic in dental education: An integrative review of the literature. Computers 10(4), 42 (2021).
30. Morse, J. M. & Niehaus, L. Mixed Method Design: Principles and Procedures (2016).
31. Creswell, J. W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches 3rd edn. (SAGE Publications, 2009).
32. Cheng, V. W. S., Davenport, T., Johnson, D., Vella, K. & Hickie, I. B. Gamification in apps and technologies for improving mental health and well-being: Systematic review. JMIR Ment. Health 6(6), e13717 (2019).
33. Gallego-Durán, F. J. et al. A guide for game-design-based gamification. Informatics 6(4), 49 (2019).
34. Gee, J. P. Learning and games. In The Ecology of Games: Connecting Youth, Games, and Learning (ed. Salen, K.) 21–40 (MIT Press, 2008).
35. Cheung, K. L., ten Klooster, P. M., Smit, C., de Vries, H. & Pieterse, M. E. The impact of non-response bias due to sampling in public health studies: A comparison of voluntary versus mandatory recruitment in a Dutch national survey on adolescent health. BMC Public Health 17(1), 276 (2017).
36. Murairwa, S. Voluntary sampling design. Int. J. Adv. Res. Manag. Soc. Sci. 4(2), 185–200 (2015).
37. Chow, S.-C., Shao, J., Wang, H. & Lokhnygina, Y. Sample Size Calculations in Clinical Research (CRC Press, 2017).
38. Palinkas, L. A. et al. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm. Policy Ment. Health Ment. Health Serv. Res. 42(5), 533–544 (2015).
39. McIlvried, D. E., Prucka, S. K., Herbst, M., Barger, C. & Robin, N. H. The use of role-play to enhance medical student understanding of genetic counseling. Genet. Med. 10(10), 739–744 (2008).
40. Schlegel, C., Woermann, U., Shaha, M., Rethans, J.-J. & van der Vleuten, C. Effects of communication training on real practice performance: A role-play module versus a standardized patient module. J. Nurs. Educ. 51(1), 16–22 (2012).
41. Manzoor, I. M. F. & Hashmi, N. R. Medical students’ perspective about role-plays as a teaching strategy in community medicine. J. Coll. Physicians Surg. Pak. 22(4), 222–225 (2012).
42. Cornes, S., Gelfand, J. M. & Calton, B. Foundational telemedicine workshop for first-year medical students developed during a pandemic. MedEdPORTAL 17, 11171 (2021).
43. King, J., Hill, K. & Gleason, A. All the world’s a stage: Evaluating psychiatry role-play based learning for medical students. Australas. Psychiatry 23(1), 76–79 (2015).
44. Arayapisit, T. et al. An educational board game for learning orofacial spaces: An experimental study comparing collaborative and competitive approaches. Anat. Sci. Educ. 16(4), 666–676 (2023).
45. Sipiyaruk, K., Hatzipanagos, S., Vichayanrat, T., Reynolds, P. A. & Gallagher, J. E. Evaluating a dental public health game across two learning contexts. Educ. Sci. 12(8), 517 (2022).
46. Pilnick, A. et al. Using conversation analysis to inform role play and simulated interaction in communications skills training for healthcare professionals: Identifying avenues for further development through a scoping review. BMC Med. Educ. 18(1), 267 (2018).
47. Lane, C. & Rollnick, S. The use of simulated patients and role-play in communication skills training: A review of the literature to August 2005. Patient Educ. Couns. 67(1), 13–20 (2007).
48. Gale, N. K., Heath, G., Cameron, E., Rashid, S. & Redwood, S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med. Res. Methodol. 13(1), 117 (2013).
49. Ritchie, J., Lewis, J., Nicholls, C. M. & Ormston, R. Qualitative Research Practice: A Guide for Social Science Students and Researchers (Sage, 2014).
50. Chen, J. C. & Martin, A. R. Role-play simulations as a transformative methodology in environmental education. J. Transform. Educ. 13(1), 85–102 (2015).
51. Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–340 (1989).
52. Novak, E., Johnson, T. E., Tenenbaum, G. & Shute, V. J. Effects of an instructional gaming characteristic on learning effectiveness, efficiency, and engagement: Using a storyline for teaching basic statistical skills. Interact. Learn. Environ. 24(3), 523–538 (2016).
53. Marchiori, E. J. et al. A narrative metaphor to facilitate educational game authoring. Comput. Educ. 58(1), 590–599 (2012).
54. Luctkar-Flude, M. et al. Effectiveness of debriefing methods for virtual simulation: A systematic review. Clin. Simul. Nurs. 57, 18–30 (2021).
55. Joyner, B. & Young, L. Teaching medical students using role play: Twelve tips for successful role plays. Med. Teach. 28(3), 225–229 (2006).
56. Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience (HarperCollins, 1990).
57. Buajeeb, W., Chokpipatkun, J., Achalanan, N., Kriwattanawong, N. & Sipiyaruk, K. The development of an online serious game for oral diagnosis and treatment planning: Evaluation of knowledge acquisition and retention. BMC Med. Educ. 23(1), 830 (2023).
58. Littlefield, J. H., Hahn, H. B. & Meyer, A. S. Evaluation of a role-play learning exercise in an ambulatory clinic setting. Adv. Health Sci. Educ. Theory Pract. 4(2), 167–173 (1999).
59. Alkin, M. C. & Christie, C. A. The use of role-play in teaching evaluation. Am. J. Eval. 23(2), 209–218 (2002).
60. Lovell, K. L., Mavis, B. E., Turner, J. L., Ogle, K. S. & Griffith, M. Medical students as standardized patients in a second-year performance-based assessment experience. Med. Educ. Online 3(1), 4301 (1998).


Acknowledgements

The authors would like to express our sincere gratitude to participants for their contributions in this research. We would also like to thank the experts who provided their helpful suggestions in the validation process of the data collection tools.

This research project was funded by the Faculty of Dentistry, Mahidol University. The APC was funded by Mahidol University.

Author information

Authors and affiliations

Department of Advanced General Dentistry, Faculty of Dentistry, Mahidol University, Bangkok, Thailand

Chayanid Teerawongpairoj & Chanita Tantipoj

Department of Orthodontics, Faculty of Dentistry, Mahidol University, Bangkok, Thailand

Kawin Sipiyaruk


Contributions

Conceptualization, C.Te., C.Ta., and K.S.; methodology, C.Te., C.Ta., and K.S.; validation, C.Te., C.Ta., and K.S.; investigation, C.Te. and K.S.; formal analysis, C.Te., C.Ta., and K.S.; resources, C.Te., C.Ta., and K.S.; data curation, C.Ta. and K.S.; writing-original draft preparation, C.Te., C.Ta., and K.S.; writing-review and editing, C.Te., C.Ta., and K.S. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Kawin Sipiyaruk .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Information 1. Supplementary Information 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Teerawongpairoj, C., Tantipoj, C. & Sipiyaruk, K. The design and evaluation of gamified online role-play as a telehealth training strategy in dental education: an explanatory sequential mixed-methods study. Sci Rep 14 , 9216 (2024). https://doi.org/10.1038/s41598-024-58425-9


Received: 30 September 2023

Accepted: 28 March 2024

Published: 22 April 2024

DOI: https://doi.org/10.1038/s41598-024-58425-9


Keywords

  • Dental education
  • Distance learning
  • Game-based learning
  • Gamification





Research design: the methodology for interdisciplinary research framework

1 Biometris, Wageningen University and Research, PO Box 16, 6700 AA Wageningen, The Netherlands

Jarl K. Kampen

2 Statua, Dept. of Epidemiology and Medical Statistics, Antwerp University, Venusstraat 35, 2000 Antwerp, Belgium

Many of today’s global scientific challenges require the joint involvement of researchers from different disciplinary backgrounds (social sciences, environmental sciences, climatology, medicine, etc.). Such interdisciplinary research teams face many challenges resulting from differences in training and scientific culture. Interdisciplinary education programs are required to train truly interdisciplinary scientists with respect to the critical factor of skills and competences. For that purpose this paper presents the Methodology for Interdisciplinary Research (MIR) framework. The MIR framework was developed to help cross disciplinary borders, especially those between the natural sciences and the social sciences. The framework has been specifically constructed to facilitate the design of interdisciplinary scientific research, and can be applied in an educational program, as a reference for monitoring the phases of interdisciplinary research, and as a tool to design such research in a process approach. It is suitable for research projects of different sizes and levels of complexity, and it allows for a range of method combinations (case study, mixed methods, etc.). The different phases of designing interdisciplinary research in the MIR framework are described and illustrated by real-life applications in teaching and research. We further discuss the framework’s utility for research design in landscape architecture and mixed methods research, and provide an outlook on the framework’s potential in inclusive interdisciplinary research and, last but not least, research integrity.

Introduction

Current challenges, e.g., energy, water, food security, one world health and urbanization, involve the interaction between humans and their environment. A (mono)disciplinary approach, be it a psychological, economic or technical one, is too limited to capture any one of these challenges. The study of the interaction between humans and their environment requires knowledge, ideas and research methodology from different disciplines (e.g., ecology or chemistry in the natural sciences, psychology or economics in the social sciences). Collaboration between the natural and social sciences is therefore called for (Walsh et al. 1975).

Over the past decades, different forms of collaboration have been distinguished although the terminology used is diverse and ambiguous. For the present paper, the term interdisciplinary research is used for (Aboelela et al. 2007 , p. 341):

any study or group of studies undertaken by scholars from two or more distinct scientific disciplines. The research is based upon a conceptual model that links or integrates theoretical frameworks from those disciplines, uses study design and methodology that is not limited to any one field, and requires the use of perspectives and skills of the involved disciplines throughout multiple phases of the research process.

Scientific disciplines (e.g., ecology, chemistry, biology, psychology, sociology, economy, philosophy, linguistics, etc.) are categorized into distinct scientific cultures: the natural sciences, the social sciences and the humanities (Kagan 2009 ). Interdisciplinary research may involve different disciplines within a single scientific culture, and it can also cross cultural boundaries as in the study of humans and their environment.

A systematic review of the literature on natural–social science collaboration (Fischer et al. 2011) confirmed the general impression that this collaboration is challenging: the nearly 100 papers in the analytic set mentioned more instances of barriers than of opportunities (72 and 46, respectively). Four critical factors for success or failure in natural–social science collaboration were identified: the paradigms or epistemologies of the current (mono-disciplinary) sciences, the skills and competences of the scientists involved, the institutional context of the research, and the organization of collaborations (Fischer et al. 2011). The so-called “paradigm war” between neopositivists and constructivists within the social and behavioral sciences (Onwuegbuzie and Leech 2005) may further complicate pragmatic collaboration.

It has been argued that interdisciplinary education programs are required to train truly interdisciplinary scientists with respect to the critical factor of skills and competences (Frischknecht 2000), and some interdisciplinary programs have since been developed (Baker and Little 2006; Spelt et al. 2009). The overall effect of interdisciplinary programs can be expected to be small, as most programs are mono-disciplinary and based on a single paradigm (positivist–constructivist, qualitative–quantitative; see e.g., Onwuegbuzie and Leech 2005). In our methodology teaching, consultancy and research practice with heterogeneous groups of students and staff, we saw that most had received mono-disciplinary training, a minority had received multidisciplinary training, and few had been trained across paradigms. During our teaching and consultancy for heterogeneous groups of students and staff aimed at designing interdisciplinary research, we built the Methodology for Interdisciplinary Research (MIR) framework. With the MIR framework, we aspire to contribute to the critical factors of skills and competences (Fischer et al. 2011) for collaboration between the social and natural sciences. Note that the scale of the interdisciplinary research projects we have in mind may vary from comparatively modest ones (e.g., finding a link between noise reducing asphalt and quality of life; Vuye et al. 2016) to very large projects (finding a link between anthropogenic greenhouse gas emissions, climate change, and food security; IPCC 2015).

In the following section of this paper we describe the MIR framework and elaborate on its components. The third section gives two examples of the application of the MIR framework. The paper concludes with a discussion of the MIR framework in the broader contexts of mixed methods research, inclusive research, and other promising strains of research.

The methodology in interdisciplinary research framework

Research as a process in the methodology in interdisciplinary research framework.

The Methodology for Interdisciplinary Research (MIR) framework was built on the process approach (Kumar 1999), in which the research question or hypothesis guides all decisions in the various stages of research. This helps the MIR framework put the common goal of the researchers at the center, rather than the diversity of their respective backgrounds. The MIR framework also introduces an agenda: the research team needs to think carefully through the different parts of the design of their study before starting its execution (Fig. 1). First, the team discusses the conceptual design of the study, which contains the ‘why’ and ‘what’ of the research. Second, the team discusses the technical design, which contains the ‘how’ of the research. Only after the team agrees that the complete research design is sufficiently crystallized does the execution of the work (including fieldwork) start.

Figure 1. The Methodology of Interdisciplinary Research framework.

Whereas the conceptual and technical designs are by definition interdisciplinary team work, the respective team members may do their (mono)disciplinary parts of the fieldwork and data analysis on a modular basis (see Bruns et al. 2017: p. 21). Finally, when all evidence is collected, an interdisciplinary synthesis of the analyses follows, whose conclusions are input for the final report. This implies that the MIR framework allows for a range of scales of research projects, e.g., a mixed methods project and its smaller qualitative and quantitative modules, or a multi-national sustainability project and its national sociological, economic and ecological modules.

The conceptual design

Interdisciplinary research design starts with the “conceptual design”, which addresses the ‘why’ and ‘what’ of a research project at a conceptual level to ascertain the common goals pivotal to interdisciplinary collaboration (Fischer et al. 2011). The conceptual design mostly includes activities such as thinking, exchanging interdisciplinary knowledge, reading and discussing. The product of the conceptual design is the “conceptual framework”, which comprises the research objective (what is to be achieved by the research), the theory or theories that are central to the research project, the research questions (what knowledge is to be produced), and the (partial) operationalization of the constructs and concepts that will be measured or recorded during execution. While the members of the interdisciplinary team and the commissioner of the research must reach a consensus about the research objective (the ‘why’), the focus in research design must be on producing the knowledge required to achieve that objective (the ‘what’).

With respect to the ‘why’ of a research project, an interdisciplinary team typically starts with a general aim as requested by the commissioner or funding agency, and a set of theories to formulate a research objective. This role of theory is not always obvious to students from the natural sciences, who tend to think in terms of ‘models’ with directly observable variables. On the other hand, students from the social sciences tend to think in theories with little attention to observable variables. In the MIR framework, models as simplified descriptions or explanations of what is studied in the natural sciences play the same role in informing research design, raising research questions, and informing how a concept is understood, as do theories in social science.

Research questions concern concepts, i.e. general notions or ideas based on theory or common sense that are multifaceted and not directly visible or measurable. For example, neither food security (with its many different facets) nor a person’s attitude towards food storage may be directly observed. The operationalization of concepts, the transformation of concepts into observable indicators, in interdisciplinary research requires multiple steps, each informed by theory. For instance, in line with particular theoretical frameworks, sustainability and food security may be seen as the composite of a social, an economic and an ecological dimension (e.g., Godfray et al. 2010 ).

As the concept of interest is multi-disciplinary and multi-dimensional, the interdisciplinary team will need to read, discuss and decide on how these dimensions and their indicators are weighted to measure the composite interdisciplinary concept and obtain the required interdisciplinary measurements. The resulting measure or measures for the interdisciplinary concept may be at the nominal, ordinal, interval or ratio level, or a combination thereof. This operationalization procedure is known as the portfolio approach to widely defined measurements (Tobi 2014). Only after the research team has finalized the operationalization of the concepts under study can the research questions and hypotheses be made operational. For example, a module with descriptive research questions may now be turned into an operational one such as: what are the means and variances of X1, X2, and X3 in a given population? A causal research question may take the form: is X (a composite of X1, X2 and X3) a plausible cause for the presence or absence of Y? A typical qualitative module could ask: how do people talk about X1, X2 and X3 in their everyday lives?
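As a concrete illustration of this portfolio approach, the sketch below shows how a composite concept could be assembled from standardized indicators, and how the descriptive question about means and variances then becomes directly computable. The indicator names, values and weights are hypothetical, chosen only to mirror the X1–X3 example above.

```python
# Illustrative sketch only: operationalizing a composite concept as a
# weighted portfolio of indicators (hypothetical names, values, weights).
import pandas as pd

data = pd.DataFrame({
    "x1_social":     [3.2, 4.1, 2.8, 3.9],      # e.g., survey-based score
    "x2_economic":   [0.61, 0.72, 0.55, 0.68],  # e.g., affordability index
    "x3_ecological": [12.0, 9.5, 14.2, 11.1],   # e.g., field measurement
})

# Standardize the indicators so dimensions measured on different scales
# become comparable before weighting.
z = (data - data.mean()) / data.std(ddof=1)

# Weights the interdisciplinary team agreed on (hypothetical values).
weights = {"x1_social": 0.4, "x2_economic": 0.3, "x3_ecological": 0.3}
data["composite"] = sum(w * z[col] for col, w in weights.items())

# Operational descriptive question: means and variances of X1, X2, X3.
print(data[["x1_social", "x2_economic", "x3_ecological"]].agg(["mean", "var"]))
```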

The technical design

Members of an interdisciplinary team usually have had different training with respect to research methods, which makes discussing and deciding on the technical design more challenging but also potentially more creative than in a mono-disciplinary team. The technical design addresses the issues ‘how, where and when will research units be studied’ (study design), ‘how will measurement proceed’ (instrument selection or design), ‘how and how many research units will be recruited’ (sampling plan), and ‘how will collected data be analyzed and synthesized’ (analysis plan). The MIR framework provides the team a set of topics and their relationships to one another and to generally accepted quality criteria (see Fig.  1 ), which helps in designing this part of the project.

Interdisciplinary teams need to be pragmatic, as the research questions agreed on are leading in decisions on the data collection set-up (e.g., a cross-sectional study of inhabitants of a region, a laboratory experiment, a cohort study, a case control study, etc.), the so-called “study design” (e.g., Kumar 2014; De Vaus 2001; Adler and Clark 2011; Tobi and van den Brink 2017), instead of traditional ‘pet’ approaches. The typical study design for descriptive research questions and research questions on associations is the cross-sectional study design. Longitudinal study designs are required to investigate development over time, and cause–effect relationships are ideally studied in experiments (e.g., Kumar 2014; Shipley 2016). Phenomenological questions concern a phenomenon about which little is known and which has to be studied in the environment where it takes place, which calls for a case study design (e.g., Adler and Clark 2011: p. 178). For each module, the study design is to be further explicated by the number of data collection waves, the level of control by the researcher, and its reference period (e.g., Kumar 2014) to ensure the team’s common understanding.

Then, decisions are to be made about the way data will be collected, e.g., by means of certified instruments, observation, interviews, questionnaires, queries on existing data bases, or a combination of these. It is especially important to discuss the role of the observer (researcher), as this is often a source of misunderstanding in interdisciplinary teams. In the natural sciences, the observer is usually considered a neutral outsider when reading a standardized measurement instrument (e.g., a pyranometer to measure incoming solar radiation). In contrast, in the social sciences, the observer may be (part of) the measurement instrument, for example in participant observation or when doing in-depth interviews. After all, in participant observation the researcher observes from a member’s perspective and influences what is observed owing to the researcher’s participation (Flick 2006: p. 220). Similarly in interviews, by which we mean “a conversation that has a structure and a purpose determined by the one party—the interviewer” (Kvale 2007: p. 7), the interviewer and the interviewee are part of the measurement instrument (Kvale and Brinkmann 2009: p. 2). In online and mail questionnaires the interviewer is eliminated as part of the instrument by standardizing the questions and answer options. Queries on existing data bases refer to the use of secondary data or secondary analysis. Different disciplines tend to use different bibliographic data bases (e.g., CAB Abstracts, ABI/INFORM or ERIC) and different data repositories (e.g., the European Social Survey at europeansocialsurvey.org or the International Council for Science data repository hosted by www.pangaea.de).

Depending on whether the available measurement instruments tally with the interdisciplinary operationalizations from the conceptual design, the research team may or may not need to design new instruments. Note that in some cases the social scientists’ instinct may be to rely on a questionnaire, whereas collaboration with another discipline may open up more objective possibilities (e.g., compare asking people what they do with surplus medication versus measuring chemical components in their input into the sewer system). Instrument design may take on different forms, such as the design of a device (e.g., a pyranometer), a questionnaire (Dillman 2007) or a part thereof (e.g., a scale; see DeVellis 2012; Danner et al. 2016), an interview guide with topics or questions for the interviewees, or a data extraction form in the context of secondary analysis and literature review (e.g., the Cochrane Collaboration aiming at the health and medical sciences or the Campbell Collaboration aiming at evidence based policies).

Researchers from different disciplines are inclined to think of different research objects (e.g., animals, humans or plots), which is where the (specific) research questions come in, as these identify the (possibly different) research objects unambiguously. In general, research questions that aim at making an inventory, whether of biodiversity or of lodging, call for a random sampling design. In both the biodiversity and the lodging example, one may opt for random sampling of geographic areas by means of a list of coordinates. Studies that aim to explain a particular phenomenon in a particular context call for a purposive sampling design (non-random selection). Because studies of biodiversity and housing obey the same laws in terms of the appropriate sampling design for similar research questions, individual students and researchers are sensitized to the commonalities of their respective (mono)disciplines. For example, a research team interested in the effects of landslides on a socio-ecological system may select for their study one village that suffered from landslides and one village that did not, with other characteristics in common (e.g., kind of soil, land use, land property legislation, family structure, income distribution, et cetera).
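The two sampling logics in this paragraph can be made concrete in a few lines of code. The sketch below is purely illustrative (the bounding box and village names are invented): random coordinates operationalize the inventory-type question, while the matched villages illustrate purposive selection.

```python
# Illustrative sketch: random spatial sampling versus purposive selection.
# The bounding box and village names are hypothetical.
import random

random.seed(42)  # make the draw reproducible

def random_coordinates(n, lat_range, lon_range):
    """Draw n random (lat, lon) points within a bounding box."""
    return [(random.uniform(*lat_range), random.uniform(*lon_range))
            for _ in range(n)]

# Inventory-type question (biodiversity, lodging): random sample of plots.
plots = random_coordinates(10, lat_range=(51.9, 52.1), lon_range=(5.5, 5.8))

# Explanatory question: purposive selection of matched cases, e.g. one
# village with landslides and one without, similar on other characteristics.
cases = {"exposed": "village_A", "control": "village_B"}
print(plots[:3], cases)
```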

The data analysis plan describes how data will be analysed, for each of the separate modules and for the project at large. In the context of a multi-disciplinary quantitative research project, the data analysis plan will list the intended uni-, bi- and multivariate analyses such as measures for distributions (e.g., means and variances), measures for association (e.g., Pearson Chi square or Kendall Tau) and data reduction and modelling techniques (e.g., factor analysis and multiple linear regression or structural equation modelling) for each of the research modules using the data collected. When applicable, it will describe interim analyses and follow-up rules. In addition to the plans at modular level, the data analysis plan must describe how the input from the separate modules, i.e. different analyses, will be synthesized to answer the overall research question. In case of mixed methods research, the particular type of mixed methods design chosen describes how, when, and to what extent the team will synthesize the results from the different modules.
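To show what such a pre-specified plan can look like for a quantitative module, the sketch below runs the kinds of uni- and bivariate analyses named above on simulated data; all variable names and values are hypothetical, and the multivariate steps are only indicated by a comment.

```python
# Illustrative sketch of a modular quantitative analysis plan (simulated
# data; variable names are hypothetical).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "x1": rng.normal(50, 10, 200),
    "x2": rng.normal(0, 1, 200),
    "group": rng.choice(["a", "b"], 200),
    "outcome": rng.choice(["low", "high"], 200),
})

# 1. Univariate: distributions (means and variances) of the indicators.
print(df[["x1", "x2"]].agg(["mean", "var"]))

# 2. Bivariate: association between two categorical variables (Pearson chi-square).
chi2, p, dof, _ = stats.chi2_contingency(pd.crosstab(df["group"], df["outcome"]))

# 3. Bivariate: rank association between two numeric variables (Kendall tau).
tau, p_tau = stats.kendalltau(df["x1"], df["x2"])

# 4. Multivariate steps (factor analysis, multiple regression, SEM) and the
#    cross-module synthesis would be specified here in the same way.
print(f"chi2 = {chi2:.2f} (p = {p:.3f}); tau = {tau:.2f} (p = {p_tau:.3f})")
```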

Unfortunately, in our experience, when some of the research modules rely on a qualitative approach, teams tend to refrain from designing a data analysis plan before starting the fieldwork. While the absence of a data analysis plan may be regarded as acceptable in fields that rely exclusively on qualitative research (e.g., ethnography), failure to communicate how data will be analysed and what potential evidence will be produced deals a deathblow to interdisciplinarity. For many researchers unfamiliar with qualitative research, the black box presented as “qualitative data analysis” is a big hurdle, and a transparent and systematic plan is a sine qua non for any scientific collaboration. The absence of a data analysis plan for all modules results in an absence of synthesis of the perspectives and skills of the disciplines involved, and in separate (disciplinary) research papers or separate chapters in the research report without an answer to the overall research question. So, although researchers may find it hard to write the data analysis plan for qualitative data, it is pivotal in interdisciplinary research teams.

Similar to the quantitative data analysis plan, the qualitative data analysis plan describes how the researcher will become acquainted with the data collected (e.g., by constructing a narrative summary per interviewee or a paired comparison of essays). Additionally, the rules for deciding on data saturation need to be presented. Finally, the types of qualitative analyses are to be described in the data analysis plan. Because there is little or no standardized terminology in qualitative data analysis, it is important to include a precise description as well as references to the works that describe the intended method (e.g., domain analysis as described by Spradley 1979; or grounded theory by means of constant comparison as described by Boeije 2009).

Integration

To benefit optimally from the research being interdisciplinary, the modules need to be brought together in the integration stage. The modules may be mono- or interdisciplinary and may rely on quantitative, qualitative or mixed methods approaches. The MIR framework thus fits the view that distinguishes three multimethod approaches (quali–quali, quanti–quanti, and quali–quanti).

Although the MIR framework was not designed with the intention to promote mixed methods research, it is suitable for the design of mixed methods research as the kind of research that calls for both quantitative and qualitative components (Creswell and Plano Clark 2011). Indeed, just like the pioneers in mixed methods research (Creswell and Plano Clark 2011: p. 2), the MIR framework deconstructs the package deals of paradigm and data to be collected. The synthesis of the different mono- or interdisciplinary modules may benefit from research done on “the unique challenges and possibilities of integration of qualitative and quantitative approaches” (Fetters and Molina-Azorin 2017: p. 5). We distinguish (sub)sets of modules designed as convergent, sequential or embedded (adapted from mixed methods design, e.g., Creswell and Plano Clark 2011: pp. 69–70). Convergent modules, whether mono- or interdisciplinary, may be done in parallel and are integrated after completion. Sequential modules are done one after another, and the earlier modules inform the later ones (this includes transformative and multiphase mixed methods designs). Embedded modules are intertwined: they depend on one another for data collection and analysis, and synthesis may be planned both during and after completion of the embedded modules.

Scientific quality and ethical considerations in the design of interdisciplinary research

A minimum set of jargon related to the assessment of the scientific quality of research (e.g., triangulation, validity, reliability, saturation, etc.) can be found scattered in Fig. 1. Some terms are reserved by particular paradigms; others may be seen in several paradigms with more or less subtle differences in meaning. In the latter case, it is important that team members are prepared to explain and share ownership of the term and respect the different meanings. By paying explicit attention to the quality concepts, researchers from different disciplines learn to appreciate each other’s concerns for good quality research and recognize commonalities. For example, the team may discuss the measurement validity of both a standardized quantitative instrument and an interview, and discover that the calibration of the machine serves a purpose similar to the confirmation of the guarantee of anonymity at the start of an interview.

Throughout the process of research design, ethics require explicit discussion among all stakeholders in the project. Ethical issues run through all components of the MIR framework in Fig. 1. Where social and medical scientists may be more sensitive to ethical issues related to humans (e.g., the 1979 Belmont Report criteria of beneficence, justice, and respect), others may be more sensitive to issues related to animal welfare, ecology, legislation, the funding agency (e.g., implications for policy), data and information sharing (e.g., open access publishing), sloppy research practices, or the long-term consequences of the research. This is why ethics are an issue for the entire interdisciplinary team and cannot be discussed at the project-module level only.

The MIR framework in practice: two examples

Teaching research methodology to heterogeneous groups of students: institutional context and background of the MIR framework

Wageningen University and Research (WUR) advocates in its teaching and research an interdisciplinary approach to the study of global issues related to its motto “To explore the potential of nature to improve the quality of life.” Wageningen University’s student population is multidisciplinary and international (e.g., Tobi and Kampen 2013). Traditionally, this challenge of diversity in one classroom was met by covering a wide range of methodological topics and examples from different disciplines. However, when students of various programmes received methodological education in mixed classes, students of some disciplines would regard the methods and techniques of the other disciplines with disinterest or even disdain. Different disciplines, especially from the qualitative and quantitative traditions in the social sciences (Onwuegbuzie and Leech 2005: p. 273), claim certain study designs, methods of data collection and analysis as their territory, a claim reflected in many textbooks. We found that students from a qualitative tradition would not be interested in, and would not even study, content like the design of experiments and quantitative data collection, while students from a quantitative tradition would ignore case study design and qualitative data collection. These students assumed they did not need any knowledge about ‘the other tradition’ for their future careers, despite the call for interdisciplinarity.

To enhance interdisciplinarity, WUR provides an MSc course, mandatory for most students, in which multi-disciplinary teams do research for a commissioner. Students reported difficulties similar to the ones found in the literature: miscommunication due to talking different scientific languages, and feelings of distrust and disrespect due to prejudice. This suggested that research methodology courses ought to help prepare for interdisciplinary collaboration by introducing a single methodological framework that (1) creates sensitivity to the benefits and challenges of interdisciplinary research by means of a common vocabulary and fosters respect for other disciplines, (2) starts from the research questions as pivotal in decision-making on research methods, instead of tradition or ontology, and (3) allows available methodologies and methods to be potentially applicable to any scientific research problem.

Teaching with MIR—the conceptual framework

As a first step, we replaced the textbooks with ones that reject the idea that any scientific tradition has exclusive ownership of any methodological approach or method. The MIR framework further guides our methodology teaching in two ways. First, it presents a logical sequence of topics (first conceptual design, then technical design; first research question(s) or hypotheses, then study design; etc.). Second, it allows for a conceptual separation of topics (e.g., study design from instrument design). Educational programmes at Wageningen University and Research consistently stress the vital importance of good research design. In fact, 50% of the mark in most BSc and MSc courses in research methodology is based on the assessment of a research proposal that students design in small (2–4 students) and heterogeneous (discipline, gender and nationality) groups. The research proposal must describe a project that can be executed in practice and whose limitations (measurement, internal, and external validity) are carefully discussed.

Groups start by selecting a general research topic. Together they discuss previously completed courses from a range of programs to identify personal and group interests, with the aim of reaching an initial research objective and a general research question as input for the conceptual design. Often, their initial research objective and research question are too broad to be researchable (e.g., Kumar 2014: p. 64; Adler and Clark 2011: p. 71). In plenary sessions, the (basics of) critical assessment of empirical research papers is taught, with special attention to the ‘what’ and ‘why’ sections of research papers. During tutorials, students generate research questions until the group agrees on a research objective, with one general research question that consists of a small set of specific research questions. Each of the specific research questions may stem from a different discipline, whereas answering the general research question requires integrating the answers to all specific research questions.

The group then identifies the key concepts in their research questions, while exchanging thoughts on possible attributes based on what they have learnt from previous courses (theories) and literature. When doing so they may judge the research question as too broad, in which case they will turn to the question strategies toolbox again. Once they agree on the formulation of the research questions and the choice of concepts, tasks are divided. In general, each student turns to the literature he/she is most familiar with or interested in, for the operationalization of the concept into measurable attributes and writes a paragraph or two about it. In the next meeting, the groups read and discuss the input and decide on the set-up and division of tasks with respect to the technical design.

Teaching with MIR—the technical framework

The technical part of research design distinguishes between study design, instrument design, sampling design, and the data analysis plan. In class, we first present students with a range of study designs (cross sectional, experimental, etc.). Student groups select an appropriate study design by comparing the demands made by the research questions with criteria for internal validity. When a (specific) research question calls for a study design that is not seen as practically feasible or ethically possible, they will rephrase the research question until the demands of the research question tally with the characteristics of at least one ethical, feasible and internally valid study design.

While following plenary sessions during which different random and non-random sampling or selection strategies are taught, groups start working on their sampling design. The groups make two decisions informed by their research question: the population(s) of research units, and the requirements of the sampling strategy for each population. Like many other aspects of research design, this can be an iterative process. For example, suppose the research question mentioned “local policy makers,” which is too vague for a sampling design. Then the decision may be to limit the study to “policy makers at the municipality level in the Netherlands” and adapt the general and the specific research questions accordingly. Next, the group identifies whether the sample design needs to focus on diversity (e.g., when the objective is to make an inventory of possible local policies), representativeness (e.g., when the objective is to estimate the prevalence of types of local policies), or people with particular information (e.g., when the objective is to study people having experience with a given local policy). When a sample has to be representative, the students must produce an assessment of external validity, whereas when the aim is to map diversity, the students must discuss possible ways of source triangulation. Finally, in conjunction with the data analysis plan, students decide on the sample size and/or the saturation criteria.
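One way students can ground the sample-size decision mentioned above is an a priori power analysis. The sketch below is a hedged illustration only: it assumes a two-group comparison, a medium effect size, and conventional alpha and power, and uses statsmodels as one possible tool.

```python
# Illustrative a priori power analysis for a two-group comparison
# (assumed effect size; conventional alpha and power).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed medium effect (Cohen's d)
    alpha=0.05,       # conventional significance level
    power=0.80,       # conventional target power
)
print(f"Required sample size per group: {n_per_group:.0f}")  # about 64
```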

When the group has agreed on their population(s) and the strategy for recruiting research units, the next step is to finalize the technical aspects of operationalization, i.e., addressing the issue of exactly how information will be extracted from the research units. Depending on what is practically feasible in terms of measurement, the choice of a data collection instrument may be a standardized one (e.g., a spectrograph, a questionnaire) or a less standardized one (e.g., semi-structured interviews, visual inspection). The students have to discuss the possibilities of method triangulation and explain the possible weaknesses of their data collection plan in terms of measurement validity and reliability.

Recent developments

Presently, little attention is paid to the data analysis plan, procedures for synthesis, and reporting, because the programmes differ in the data analysis courses they offer, and because execution of the research is not part of the BSc and MSc methodology courses. Recently, we designed a course for an interdisciplinary BSc program in which the research question is put central in learning and deciding on statistics and qualitative data analysis. During the past years, the number of methodology courses for graduate students that support the MIR framework has also been expanded, e.g., a course “From Topic to Proposal”; separate training modules on questionnaire construction, interviewing, and observation; and optional courses on quantitative and qualitative data analysis. These courses are open to (and attended by) PhD students regardless of their program. In Flanders (Belgium), the Flemish Training Network for Statistics and Methodology (FLAMES) has for the last four years successfully applied the approach outlined in Fig. 1 in its courses on research design and data collection methods. The division of the research process into a conceptual design, technical design, operationalization, analysis plan, and sampling plan has proved appealing for students of disciplines ranging from linguistics to bioengineering.

Researching with MIR: noise reducing asphalt layers and quality of life

Research objective and research question

This example of the application of the MIR framework comes from a study of the effects of “noise reducing asphalt layers” on quality of life (Vuye et al. 2016), a project commissioned by the City of Antwerp in 2015 and executed by a multidisciplinary research team of Antwerp University (Belgium). The principal researcher was an engineer from the Faculty of Applied Engineering (dept. of Construction), supported by two researchers from the Faculty of Medicine and Health Sciences (dept. of Epidemiology and Social Statistics), one with a background in qualitative and one with a background in quantitative research methods. A number of meetings were held in which the research team and the commissioners discussed the research objective (the ‘what’ and ‘why’). The research objective was in part dictated by the European Noise Directive 2002/49/EC, which requires all EU member states to draft noise action plans, and the challenge in this study was to produce evidence of a link between the acoustic and mechanical properties of different types of asphalt and the quality of life of people living in the vicinity of the treated roads. While literature was available about the effects of road surface on sound, and other studies had examined the link between noise and health, no study was found that produced evidence simultaneously about the noise levels of roads and quality of life. The team therefore decided to put the hypothesis that traffic noise reduction has a beneficial effect on people’s quality of life at the center of the research. The general research question was: “to what extent does the placing of noise reducing asphalt layers increase the quality of life of the residents?”

Study design

To test the effect of asphalt type, a pretest–posttest experiment was initially designed, which was later expanded with several additional experimental groups (change of road surface) and control groups (no change of road surface). The research team gradually became aware that quality of life may not be instantly affected by lower noise levels and that a time lag is involved. A second posttest was therefore added to follow up on this delayed effect, although it could be implemented at only a selection of the experimental sites. A schematic of the resulting design is sketched below.
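
In the classic notation for quasi-experiments, with O an observation wave and X the resurfacing intervention, the design can be summarised as follows; this is our own illustrative rendering with invented group labels, not the study's documentation:

```python
# O = observation wave (noise measurement and/or survey), X = intervention (resurfacing).
# The delayed second posttest (O3) could be run at only a subset of experimental sites.
design = {
    "experimental (resurfaced)": ["O1", "X", "O2", "O3"],
    "control (unchanged)": ["O1", "O2"],
}
for group, schedule in design.items():
    print(f"{group:27} {' -> '.join(schedule)}")
```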

Instrument selection and design

Sound pressure levels were measured with an ISO-standardised procedure, the Statistical Pass-By (SPB) method; a detailed description is given in Vuye et al. (2016). No such objective procedure is available for measuring quality of life, which can only be assessed through residents' self-reports. The research team needed some time to accept that measuring a multidimensional concept like quality of life is more complicated than simply having people rate their "quality of life" on a 10-point scale. For instance, questions had to be phrased in a way that did not give away the purpose of the research (to avoid a Hawthorne effect), which led to the inclusion of questions about more nuisances than traffic noise alone. The result was a self-administered questionnaire combining questions from the Flanders Survey on Living Environment (Departement Leefmilieu, Natuur & Energie 2013) with newly designed questions. Among other things, the questionnaire probed experienced noise nuisance, quality of sleep, effort needed to concentrate, effort needed to hold a conversation inside or outside the home, physical complaints such as headaches, etc.
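
One technical detail buried in any such noise measurement is that decibel values live on a logarithmic scale, so pass-by levels cannot simply be averaged arithmetically. The sketch below shows the standard energetic mean; it is a generic illustration with invented numbers, not the full SPB protocol:

```python
import numpy as np

def energetic_mean_db(levels_db):
    """Energetic (logarithmic) mean of sound pressure levels in dB.

    dB values are log-scaled, so they are converted to (relative) acoustic
    energy, averaged, and converted back, rather than averaged directly.
    """
    levels_db = np.asarray(levels_db, dtype=float)
    return 10 * np.log10(np.mean(10 ** (levels_db / 10)))

print(energetic_mean_db([78.2, 80.1, 79.4]))  # pass-by maxima in dB(A), invented
```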

Sampling design

The selected sites needed to accommodate both types of measurement: traffic noise and residents' quality of life. This was a complicating factor that required several rounds of deliberation. While, countrywide, only certain roads were available for resurfacing, these roads also had to be mutually comparable in terms of population composition, type of residential area (e.g., reports from the top floor of a tall apartment building cannot be compared with those at ground level), average traffic volume, vicinity of hospitals, railroads and airports, etc. At the level of roads, therefore, targeted sampling was applied, whereas at the level of residents the aim was a census of all households within a given perimeter around the treated road surfaces. Considerations about the reliability of the instruments guided the sampling decisions. While the SPB measurements were sufficiently reliable to allow for relatively few measurements, the questionnaire suffered from considerable nonresponse, which hampered statistical power. It was therefore decided to increase the power of the study by adding control groups in areas where the road surface was not replaced. This way, detecting an effect of the intervention did not depend solely on the turnout at the pre- and posttest.
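
The trade-off between nonresponse and statistical power can be made concrete with a standard power calculation. The snippet below is a hedged back-of-envelope sketch (the effect size and sample sizes are invented, and the study's actual power analysis may well have differed):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Respondents needed per group to detect a small-to-medium effect (d = 0.3)
# at alpha = 0.05 with 80% power.
n_required = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"required per group: ~{n_required:.0f}")

# Power actually achieved if nonresponse leaves only 120 respondents per group;
# adding control groups is one way to claw some of this power back.
achieved = analysis.solve_power(effect_size=0.3, alpha=0.05, nobs1=120)
print(f"power with 120 per group: {achieved:.2f}")
```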

Data analysis plan

The statistical analysis had to account for the fact that data were collected at two different levels: the level of the residents filling out the questionnaires, and the level of the roads whose surface was changed. Because survey participation was confidential, results of the pre- and posttest could only be compared at the aggregate (street) level. The analysis also had to control for confounding variables (e.g., sample composition, variation in traffic volume), experimental factors (varieties in experimental conditions, and controls), and non-normal dependent variables. A statistical model appropriate for such data is the generalised linear mixed model (GLMM).
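
To make the model choice concrete, here is a minimal sketch of a binomial GLMM with a random street effect, fitted with statsmodels' Bayesian mixed GLM routines. It is not the study's actual analysis: all variable names and data are invented for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(42)
n_streets, n_resp = 20, 30

# Simulated two-level data: residents (rows) nested in streets.
df = pd.DataFrame({
    "street": np.repeat(np.arange(n_streets), n_resp),
    "treated": np.repeat(rng.integers(0, 2, n_streets), n_resp),  # road resurfaced?
    "phase": np.tile([0, 1], n_streets * n_resp // 2),            # pre/post wave
})
street_effect = rng.normal(0, 0.5, n_streets)[df["street"]]
# Assumed data-generating process: the intervention lowers annoyance at posttest.
logit = 0.3 - 0.8 * df["treated"] * df["phase"] + street_effect
df["annoyed"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fixed effects: treatment-by-phase interaction; random intercept per street.
model = BinomialBayesMixedGLM.from_formula(
    "annoyed ~ treated * phase", {"street": "0 + C(street)"}, df
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```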

Data were collected in the course of 2015, 2016 and 2017 and are awaiting final analysis in spring 2017. Intermediate analyses have resulted in several MSc theses, conference presentations, and working papers reporting on parts of the research.

Discussion

In this paper we presented the Methodology for Interdisciplinary Research (MIR) framework, which we developed over the past decade building on our experience as lecturers, consultants and researchers. The MIR framework recognizes research methodology and methods as important content in the critical factor of skills and competences. It approaches research and collaboration as a process that needs to be designed with the sole purpose of answering the general research question. For the conceptual design, the team members have to discuss and agree on the objective of their communal efforts without squeezing it into one single discipline and thus ignoring complexity. The specific research questions, once formulated, contribute to (self-)respect in the collaboration, as they represent and bear witness to the need for interdisciplinarity. In the technical design, different parts were distinguished to stimulate researchers to think and design research outside their respective disciplinary boxes and to consider, for example, an experimental design with qualitative data collection, or a case study design based on quantitative information.

In our teaching and consultancy, we first developed the MIR framework for interdisciplinarity across the social sciences, economics, and the health and environmental sciences. The framework was then challenged to include research in the design discipline of landscape architecture. What characterizes research in landscape architecture and other design disciplines is that the design product as well as the design process may be the object of study. Lenzholder et al. (2017) therefore distinguish three kinds of research in landscape architecture. The first kind, "research into design", studies the design product post hoc, and the MIR framework suits the interdisciplinary study of such a product. In contrast, "research for design" generates knowledge that feeds into both the noun and the verb 'design', which means it precedes the design(ing). The third kind, "research through design(ing)", employs designing itself as a research method. At first, like Deming and Swaffield (2011), we were somewhat skeptical about "designing" as a research method. Lenzholder et al. (2017) posit that the meaning of research through design has evolved through (neo)positivist, constructivist and transformative paradigms to include a pragmatic stance resembling the one assumed in the MIR framework. We learned that, because landscape architecture is such an interdisciplinary field, the process approach and the distinction between a conceptual and a technical research design were considered very helpful and were embraced by researchers in landscape architecture (Tobi and van den Brink 2017).

Mixed methods research (MMR) has been used to study topics as diverse as education (e.g., Powell et al. 2008), environmental management (e.g., Molina-Azorin and Lopez-Gamero 2016), health psychology (e.g., Bishop 2015) and information systems (e.g., Venkatesh et al. 2013). Nonetheless, the MIR framework is the first to put MMR in the context of integrating disciplines beyond social inquiry (Greene 2008). Splitting the research into modules stimulates the identification and recognition of the contributions of both distinct and collaborating disciplines, irrespective of whether they contribute qualitative and/or quantitative research to the interdisciplinary research design. As mentioned in Sect. 2.4, the integration of the different research modules into one interdisciplinary project design may follow one of the mixed methods designs. For example, we have witnessed on several occasions the integration of the social and health sciences in interdisciplinary teams opting for sequential modules in a sequential exploratory mixed methods fashion (e.g., Adamson 2005: 234). In sustainability science, we have seen concurrent modules designed for a concurrent nested mixed methods strategy (ibid.) in research integrating the social and natural sciences and economics.

The limitations of the MIR framework are those of any kind of collaboration: it cannot work wonders in the absence of awareness of its necessity, and it requires the willingness to work, learn, and research together. We developed the MIR framework in and alongside our own teaching, consultancy and research; it has not been formally evaluated or experimentally compared with, for example, the regulative cycle for problem solving (van Strien 1986) or the wheel of science of Babbie (2013). In fact, although we wrote "developed" in the previous sentence, we are fully aware of the need to further develop and refine the framework.

The importance of the MIR framework lies in the complex, multifaceted nature of issues like sustainability, food security and one world health. For progress in the study of these pressing issues, the understanding, construction and quality of interdisciplinary portfolio measurements (Tobi 2014) are pivotal and require further study, as do procedures facilitating integration across different disciplines.

Another important strand of further research relates to the continuum of Responsible Conduct of Research (RCR), Questionable Research Practices (QRP), and deliberate misconduct (Steneck 2006). QRPs include failing to report all of a study's conditions, stopping data collection earlier than planned because the result one was looking for has been found, etc. (e.g., John et al. 2012; Simmons et al. 2011; Kampen and Tamás 2014). A meta-analysis of self-reports obtained through surveys revealed that about 2% of researchers admitted to research misconduct at least once, whereas up to 33% admitted to QRPs (Fanelli 2009). While the frequency of QRPs may easily eclipse that of deliberate fraud (John et al. 2012), these practices have received less attention than deliberate misconduct. Claimed research findings may often be accurate measures of the prevailing biases and methodological rigor in a research field (Fanelli and Ioannidis 2013; Fanelli 2010). If research misconduct and QRPs are to be understood, the disciplinary context must be grasped as a locus of both legitimate and illegitimate activity (Fox 1990). It would be valuable to investigate how working in interdisciplinary teams, and the consequent exposure to other standards of QRP and RCR, influences research integrity as the appropriate research behavior from the perspective of different professional standards (Steneck 2006: p. 56). These differences between scientific cultures concern criteria for quality in the design and execution of research, reporting (e.g., criteria for authorship of a paper, preferred publication outlets, citation practices), the archiving and sharing of data, and so on.

Other strands of research include interdisciplinary collaboration and negotiation, where we expect contributions from the "science of team science" (Falk-Krzesinski et al. 2010), and the compatibility of the MIR framework with new research paradigms such as "inclusive research" (a mode of research involving people with intellectual disabilities as more than just objects of research; e.g., Walmsley and Johnson 2003). Because of the complexity and novelty of inclusive health research, a consensus statement was developed on how to conduct health research inclusively (Frankena et al., under review). The eight attributes of inclusive health research identified there may also be taken as guiding attributes in the design of inclusive research according to the MIR framework. For starters, there is the possibility of inclusiveness in the conceptual design, particularly in determining research objectives and in discussing possible theoretical frameworks with team members with an intellectual disability, which Frankena et al. labelled the "Designing the study" attribute. There are also opportunities for inclusiveness in the technical design and in execution. For example, the inclusiveness attribute "generating data" overlaps with operationalisation and measurement instrument design/selection, and the attribute "analyzing data" aligns with the data analysis plan in the technical design.

On a final note, we hope to have aroused the reader’s interest in, and to have demonstrated the need for, a methodology for interdisciplinary research design. We further hope that the MIR framework proposed and explained in this article helps those involved in designing an interdisciplinary research project to get a clearer view of the various processes that must be secured during the project’s design and execution. And we look forward to further collaboration with scientists from all cultures to contribute to improving the MIR framework and make interdisciplinary collaborations successful.

Acknowledgements

The MIR framework is the result of many discussions with students, researchers and colleagues, with special thanks to Peter Tamás, Jennifer Barrett, Loes Maas, Giel Dik, Ruud Zaalberg, Jurian Meijering, Vanessa Torres van Grinsven, Matthijs Brink, Gerda Casimir, and, last but not least, Jenneken Naaldenberg.

References

  • Aboelela SW, Larson E, Bakken S, Carrasquillo O, Formicola A, Glied SA, Gebbie KM. Defining interdisciplinary research: conclusions from a critical review of the literature. Health Serv. Res. 2007;42(1):329–346. doi:10.1111/j.1475-6773.2006.00621.x
  • Adamson J. Combined qualitative and quantitative designs. In: Bowling A, Ebrahim S, editors. Handbook of Health Research Methods: Investigation, Measurement and Analysis. Maidenhead: Open University Press; 2005. pp. 230–245.
  • Adler ES, Clark R. An Invitation to Social Research: How It's Done. 4th ed. London: Sage; 2011.
  • Babbie ER. The Practice of Social Research. 13th ed. Belmont, CA: Wadsworth Cengage Learning; 2013.
  • Baker GH, Little RG. Enhancing homeland security: development of a course on critical infrastructure systems. J. Homel. Secur. Emerg. Manag. 2006.
  • Bishop FL. Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective. Br. J. Health Psychol. 2015;20(1):5–20. doi:10.1111/bjhp.12122
  • Boeije HR. Analysis in Qualitative Research. London: Sage; 2009.
  • Bruns D, van den Brink A, Tobi H, Bell S. Advancing landscape architecture research. In: van den Brink A, Bruns D, Tobi H, Bell S, editors. Research in Landscape Architecture: Methods and Methodology. New York: Routledge; 2017. pp. 11–23.
  • Creswell JW, Plano Clark VL. Designing and Conducting Mixed Methods Research. 2nd ed. Los Angeles: Sage; 2011.
  • Danner D, Blasius J, Breyer B, Eifler S, Menold N, Paulhus DL, Ziegler M. Current challenges, new developments, and future directions in scale construction. Eur. J. Psychol. Assess. 2016;32(3):175–180. doi:10.1027/1015-5759/a000375
  • Deming ME, Swaffield S. Landscape Architecture Research. Hoboken: Wiley; 2011.
  • Departement Leefmilieu, Natuur en Energie. Uitvoeren van een uitgebreide schriftelijke enquête en een beperkte CAWI-enquête ter bepaling van het percentage gehinderden door geur, geluid en licht in Vlaanderen–SLO-3 [Conducting an extensive written survey and a limited CAWI survey to determine the percentage of people in Flanders bothered by odour, noise and light–SLO-3]. Leuven: Market Analysis & Synthesis; 2013. www.lne.be/sites/default/files/atoms/files/lne-slo-3-eindrapport.pdf. Accessed 8 March 2017.
  • De Vaus D. Research Design in Social Research. London: Sage; 2001.
  • DeVellis RF. Scale Development: Theory and Applications. 3rd ed. Los Angeles: Sage; 2012.
  • Dillman DA. Mail and Internet Surveys. 2nd ed. Hoboken: Wiley; 2007.
  • Falk-Krzesinski HJ, Börner K, Contractor N, Fiore SM, Hall KL, Keyton J, Uzzi B, et al. Advancing the science of team science. Clin. Transl. Sci. 2010;3(5):263–266. doi:10.1111/j.1752-8062.2010.00223.x
  • Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. 2009.
  • Fanelli D. Positive results increase down the hierarchy of the sciences. PLoS ONE. 2010.
  • Fanelli D, Ioannidis JPA. US studies may overestimate effect sizes in softer research. Proc. Natl. Acad. Sci. USA. 2013;110(37):15031–15036. doi:10.1073/pnas.1302997110
  • Fetters MD, Molina-Azorin JF. The Journal of Mixed Methods Research starts a new decade: principles for bringing in the new and divesting of the old language of the field. J. Mixed Methods Res. 2017;11(1):3–10. doi:10.1177/1558689816682092
  • Fischer ARH, Tobi H, Ronteltap A. When natural met social: a review of collaboration between the natural and social sciences. Interdiscip. Sci. Rev. 2011;36(4):341–358. doi:10.1179/030801811X13160755918688
  • Flick U. An Introduction to Qualitative Research. 3rd ed. London: Sage; 2006.
  • Fox MF. Fraud, ethics, and the disciplinary contexts of science and scholarship. Am. Sociol. 1990;21(1):67–71. doi:10.1007/BF02691783
  • Frischknecht PM. Environmental science education at the Swiss Federal Institute of Technology (ETH). Water Sci. Technol. 2000;41(2):31–36.
  • Godfray HCJ, Beddington JR, Crute IR, Haddad L, Lawrence D, Muir JF, Pretty J, Robinson S, Thomas SM, Toulmin C. Food security: the challenge of feeding 9 billion people. Science. 2010;327(5967):812–818. doi:10.1126/science.1185383
  • Greene JC. Is mixed methods social inquiry a distinctive methodology? J. Mixed Methods Res. 2008;2(1):7–22. doi:10.1177/1558689807309969
  • IPCC. Climate Change 2014: Synthesis Report. Geneva: Intergovernmental Panel on Climate Change; 2015. www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full_wcover.pdf. Accessed 8 March 2017.
  • John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 2012;23(5):524–532. doi:10.1177/0956797611430953
  • Kagan J. The Three Cultures: Natural Sciences, Social Sciences and the Humanities in the 21st Century. Cambridge: Cambridge University Press; 2009.
  • Kampen JK, Tamás P. Should I take this seriously? A simple checklist for calling bullshit on policy supporting research. Qual. Quant. 2014;48:1213–1223. doi:10.1007/s11135-013-9830-8
  • Kumar R. Research Methodology: A Step-by-Step Guide for Beginners. 1st ed. Los Angeles: Sage; 1999.
  • Kumar R. Research Methodology: A Step-by-Step Guide for Beginners. 4th ed. Los Angeles: Sage; 2014.
  • Kvale S. Doing Interviews. London: Sage; 2007.
  • Kvale S, Brinkmann S. Interviews: Learning the Craft of Qualitative Research Interviewing. 2nd ed. London: Sage; 2009.
  • Lenzholder S, Duchhart I, van den Brink A. The relationship between research and design. In: van den Brink A, Bruns D, Tobi H, Bell S, editors. Research in Landscape Architecture: Methods and Methodology. New York: Routledge; 2017. pp. 54–64.
  • Molina-Azorin JF, Lopez-Gamero MD. Mixed methods studies in environmental management research: prevalence, purposes and designs. Bus. Strateg. Environ. 2016;25(2):134–148. doi:10.1002/bse.1862
  • Onwuegbuzie AJ, Leech NL. Taking the "Q" out of research: teaching research methodology courses without the divide between quantitative and qualitative paradigms. Qual. Quant. 2005;39(3):267–296. doi:10.1007/s11135-004-1670-0
  • Powell H, Mihalas S, Onwuegbuzie AJ, Suldo S, Daley CE. Mixed methods research in school psychology: a mixed methods investigation of trends in the literature. Psychol. Sch. 2008;45(4):291–309. doi:10.1002/pits.20296
  • Shipley B. Cause and Correlation in Biology. 2nd ed. Cambridge: Cambridge University Press; 2016.
  • Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 2011;22:1359–1366. doi:10.1177/0956797611417632
  • Spelt EJH, Biemans HJA, Tobi H, Luning PA, Mulder M. Teaching and learning in interdisciplinary higher education: a systematic review. Educ. Psychol. Rev. 2009;21(4):365–378. doi:10.1007/s10648-009-9113-z
  • Spradley JP. The Ethnographic Interview. New York: Holt, Rinehart and Winston; 1979.
  • Steneck NH. Fostering integrity in research: definitions, current knowledge, and future directions. Sci. Eng. Ethics. 2006;12(1):53–74. doi:10.1007/s11948-006-0006-y
  • Tobi H. Measurement in interdisciplinary research: the contributions of widely-defined measurement and portfolio representations. Measurement. 2014;48:228–231. doi:10.1016/j.measurement.2013.11.013
  • Tobi H, Kampen JK. Survey error in an international context: an empirical assessment of cross-cultural differences regarding scale effects. Qual. Quant. 2013;47(1):553–559. doi:10.1007/s11135-011-9476-3
  • Tobi H, van den Brink A. A process approach to research in landscape architecture. In: van den Brink A, Bruns D, Tobi H, Bell S, editors. Research in Landscape Architecture: Methods and Methodology. New York: Routledge; 2017. pp. 24–34.
  • van Strien PJ. Praktijk als wetenschap: methodologie van het sociaal-wetenschappelijk handelen [Practice as science: methodology of social scientific acting]. Assen: Van Gorcum; 1986.
  • Venkatesh V, Brown SA, Bala H. Bridging the qualitative–quantitative divide: guidelines for conducting mixed methods research in information systems. MIS Q. 2013;37(1):21–54. doi:10.25300/MISQ/2013/37.1.02
  • Vuye C, Bergiers A, Vanhooreweder B. The acoustical durability of thin noise reducing asphalt layers. Coatings. 2016.
  • Walmsley J, Johnson K. Inclusive Research with People with Learning Disabilities: Past, Present and Futures. London: Jessica Kingsley; 2003.
  • Walsh WB, Smith GL, London M. Developing an interface between engineering and the social sciences: an interdisciplinary team approach to solving societal problems. Am. Psychol. 1975;30(11):1067–1071. doi:10.1037/0003-066X.30.11.1067
