
How to Write a Research Design – Guide with Examples

Published by Alaxendra Bets on August 14th, 2021; revised on October 3, 2023

A research design is a structure that combines the different components of research. It involves using different data collection and data analysis techniques in a logical order to answer the research questions.

Before starting the research process, you must make some decisions about how to address the research questions adequately; the research design is what captures these decisions.

Below are the key aspects of the decision-making process:

  • Data type required for research
  • Research resources
  • Participants required for research
  • Hypothesis based upon research question(s)
  • Data analysis  methodologies
  • Variables (Independent, dependent, and confounding)
  • The location and timescale for collecting the data
  • The time period required for research

The research design provides the strategy of investigation for your project. Furthermore, it defines the parameters and criteria for compiling the data so you can evaluate results and draw conclusions.

Your project’s validity depends on the data collection and interpretation techniques. A strong research design underpins a strong dissertation, scientific paper, or research proposal.

Steps of research design

Step 1: Establish Priorities for Research Design

Before conducting any research study, you must address an important question: “how to create a research design.”

The research design depends on the researcher’s priorities and choices because every research project has different priorities. For a complex study involving multiple methods, you may choose to have more than one research design.

Multimethodology, or multimethod research, involves using more than one data collection method or research design in a study or a set of related studies.

If one research design is weak in one area, another research design can cover that weakness. For instance, a dissertation analyzing different situations or cases will have more than one research design.

For example:

  • Experimental research involves laboratory investigation under controlled conditions, but it does not accurately reproduce the real world.
  • Quantitative research is good for the statistical part of the project, but it may not provide an in-depth understanding of the topic.
  • Likewise, correlational research will not provide experimental results because it only assesses the statistical relationship between two variables.

While scientific considerations are a fundamental aspect of the research design, it is equally important that the researcher think practically before deciding on its structure. Here are some questions you should consider:

  • Do you have enough time to gather data and complete the write-up?
  • Will you be able to collect the necessary data by interviewing a specific person or visiting a specific location?
  • Do you have in-depth knowledge of the different statistical analyses and data collection techniques needed to address the research questions or test the hypothesis?

If you think that the chosen research design cannot answer the research questions properly, you can refine your research questions to gain better insight.

Step 2: Data Type you Need for Research

Decide on the type of data you need for your research. This depends on your research questions or research hypothesis. Research data used to answer the research questions falls along two distinctions:

  • Primary vs. secondary data
  • Qualitative vs. quantitative data

Also see: Research methods, design, and analysis.


Step 3: Data Collection Techniques

Once you have selected the type of research to answer your research question, you need to decide where and how to collect the data.

It is time to determine your research method to address the research problem. Research methods involve procedures, techniques, materials, and tools used for the study.

For instance, a dissertation research design includes the different resources and data collection techniques and helps establish your dissertation’s structure.

The following table shows the characteristics of the most popularly employed research methods.

[Table: Research Methods]

Step 4: Procedure of Data Analysis

Use of the correct data and statistical analysis technique is necessary for the validity of your research. Therefore, you need to be certain about the data type that would best address the research problem. Choosing an appropriate analysis method is the final step of the research design. It can be split into two main categories:

Quantitative Data Analysis

The quantitative data analysis technique involves analyzing numerical data with the help of applications such as SPSS, Stata, Excel, OriginLab, etc.

This data analysis strategy examines measures such as ranges, frequencies, averages, and more. The research question and the hypothesis must be established first to identify the variables for testing.
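A minimal sketch of such descriptive measures, using only Python's standard library and hypothetical Likert-scale survey responses:

```python
from statistics import mean, median, stdev
from collections import Counter

# Hypothetical responses (1 = strongly disagree ... 5 = strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

print("frequencies:", Counter(responses))      # frequency distribution
print("mean:", mean(responses))                # average response
print("median:", median(responses))            # middle value
print("std dev:", round(stdev(responses), 2))  # spread around the mean
```

Dedicated packages such as SPSS or Stata provide the same summaries alongside far richer inferential tests; this sketch only shows what the basic measures compute.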

Qualitative Data Analysis

Qualitative data analysis of figures, themes, and words allows for flexibility and draws on the researcher’s subjective judgement. This means the researcher’s primary focus will be interpreting patterns, tendencies, and accounts, and understanding their implications and social framework.

You should be clear about your research objectives before starting to analyze the data. For example, ask yourself whether you need to explain respondents’ experiences and insights, or also to evaluate their responses with reference to a certain social framework.

Step 5: Write your Research Proposal

The research design is an important component of a research proposal because it plans the project’s execution. You can share it with your supervisor, who will evaluate the feasibility of the work and of the expected results and conclusion.

Read our guidelines on writing a research proposal if you have already formulated your research design. The research proposal is written in the future tense because you write it before conducting the research.

The research methodology or research design section, on the other hand, is generally written in the past tense.

How to Write a Research Design – Conclusion

A research design is the plan, structure, and strategy of investigation conceived to answer the research questions and test the hypothesis. The dissertation research design can be classified based on the type of data and the type of analysis.

The five steps above are the answer to how to write a research design. Follow them to formulate the perfect research design for your dissertation.

ResearchProspect writers have years of experience creating research designs that align with the dissertation’s aim and objectives. If you are struggling with your dissertation methodology chapter, you might want to look at our dissertation part-writing service.

Our dissertation writers can also help you with the full dissertation paper . No matter how urgent or complex your need may be, ResearchProspect can help. We also offer PhD level research paper writing services.

Frequently Asked Questions

What is research design?

Research design is a systematic plan that guides the research process, outlining the methodology and procedures for collecting and analysing data. It determines the structure of the study, ensuring the research question is answered effectively, reliably, and validly. It serves as the blueprint for the entire research project.

How to write a research design?

To write a research design, define your research question, identify the research method (qualitative, quantitative, or mixed), choose data collection techniques (e.g., surveys, interviews), determine the sample size and sampling method, outline data analysis procedures, and highlight potential limitations and ethical considerations for the study.

How to write the design section of a research paper?

In the design section of a research paper, describe the research methodology chosen and justify its selection. Outline the data collection methods, participants or samples, instruments used, and procedures followed. Detail any experimental controls, if applicable. Ensure clarity and precision to enable replication of the study by other researchers.

How to write a research design in methodology?

To write a research design in methodology, clearly outline the research strategy (e.g., experimental, survey, case study). Describe the sampling technique, participants, and data collection methods. Detail the procedures for data collection and analysis. Justify choices by linking them to research objectives, addressing reliability and validity.


Published by Nicolas on March 21st, 2024; revised on March 12, 2024

The Ultimate Guide To Research Methodology

Research methodology is a crucial aspect of any investigative process, serving as the blueprint for the entire research journey. If you are stuck on the methodology section of your research paper, this blog will guide you through what a research methodology is, its types, and how to conduct one successfully.


What Is Research Methodology?

Research methodology can be defined as the systematic framework that guides researchers in designing, conducting, and analyzing their investigations. It encompasses a structured set of processes, techniques, and tools employed to gather and interpret data, ensuring the reliability and validity of the research findings. 

Research methodology is not confined to a singular approach; rather, it encapsulates a diverse range of methods tailored to the specific requirements of the research objectives.

Here is why research methodology is important in academic and professional settings:

Facilitating Rigorous Inquiry

Research methodology forms the backbone of rigorous inquiry. It provides a structured approach that aids researchers in formulating precise thesis statements , selecting appropriate methodologies, and executing systematic investigations. This, in turn, enhances the quality and credibility of the research outcomes.

Ensuring Reproducibility And Reliability

In both academic and professional contexts, the ability to reproduce research outcomes is paramount. A well-defined research methodology establishes clear procedures, making it possible for others to replicate the study. This not only validates the findings but also contributes to the cumulative nature of knowledge.

Guiding Decision-Making Processes

In professional settings, decisions often hinge on reliable data and insights. Research methodology equips professionals with the tools to gather pertinent information, analyze it rigorously, and derive meaningful conclusions.

This informed decision-making is instrumental in achieving organizational goals and staying ahead in competitive environments.

Contributing To Academic Excellence

For academic researchers, adherence to robust research methodology is a hallmark of excellence. Institutions value research that adheres to high standards of methodology, fostering a culture of academic rigour and intellectual integrity. Furthermore, it prepares students with critical skills applicable beyond academia.

Enhancing Problem-Solving Abilities

Research methodology instills a problem-solving mindset by encouraging researchers to approach challenges systematically. It equips individuals with the skills to dissect complex issues, formulate hypotheses , and devise effective strategies for investigation.

Understanding Research Methodology

In the pursuit of knowledge and discovery, understanding the fundamentals of research methodology is paramount. 

Basics Of Research

Research, in its essence, is a systematic and organized process of inquiry aimed at expanding our understanding of a particular subject or phenomenon. It involves the exploration of existing knowledge, the formulation of hypotheses, and the collection and analysis of data to draw meaningful conclusions. 

Research is a dynamic and iterative process that contributes to the continuous evolution of knowledge in various disciplines.

Types of Research

Research takes on various forms, each tailored to the nature of the inquiry. Broadly classified, research can be categorized into two main types:

  • Quantitative Research: This type involves the collection and analysis of numerical data to identify patterns, relationships, and statistical significance. It is particularly useful for testing hypotheses and making predictions.
  • Qualitative Research: Qualitative research focuses on understanding the depth and details of a phenomenon through non-numerical data. It often involves methods such as interviews, focus groups, and content analysis, providing rich insights into complex issues.

Components Of Research Methodology

To conduct effective research, one must go through the different components of research methodology. These components form the scaffolding that supports the entire research process, ensuring its coherence and validity.

Research Design

Research design serves as the blueprint for the entire research project. It outlines the overall structure and strategy for conducting the study. The three primary types of research design are:

  • Exploratory Research: Aimed at gaining insights and familiarity with the topic, often used in the early stages of research.
  • Descriptive Research: Involves portraying an accurate profile of a situation or phenomenon, answering the ‘what,’ ‘who,’ ‘where,’ and ‘when’ questions.
  • Explanatory Research: Seeks to identify the causes and effects of a phenomenon, explaining the ‘why’ and ‘how.’

Data Collection Methods

Choosing the right data collection methods is crucial for obtaining reliable and relevant information. Common methods include:

  • Surveys and Questionnaires: Employed to gather information from a large number of respondents through standardized questions.
  • Interviews: In-depth conversations with participants, offering qualitative insights.
  • Observation: Systematic watching and recording of behaviour, events, or processes in their natural setting.

Data Analysis Techniques

Once data is collected, analysis becomes imperative to derive meaningful conclusions. Different methodologies exist for quantitative and qualitative data:

  • Quantitative Data Analysis: Involves statistical techniques such as descriptive statistics, inferential statistics, and regression analysis to interpret numerical data.
  • Qualitative Data Analysis: Methods like content analysis, thematic analysis, and grounded theory are employed to extract patterns, themes, and meanings from non-numerical data.


Choosing a Research Method

Selecting an appropriate research method is a critical decision in the research process. It determines the approach, tools, and techniques that will be used to answer the research questions. 

Quantitative Research Methods

Quantitative research involves the collection and analysis of numerical data, providing a structured and objective approach to understanding and explaining phenomena.

Experimental Research

Experimental research involves manipulating variables to observe the effect on another variable under controlled conditions. It aims to establish cause-and-effect relationships.

Key Characteristics:

  • Controlled Environment: Experiments are conducted in a controlled setting to minimize external influences.
  • Random Assignment: Participants are randomly assigned to different experimental conditions.
  • Quantitative Data: Data collected is numerical, allowing for statistical analysis.

Applications: Commonly used in scientific studies and psychology to test hypotheses and identify causal relationships.
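The random-assignment step described above can be sketched in a few lines (the participant IDs and the fixed seed are hypothetical, chosen only so the example is reproducible):

```python
import random

# Hypothetical participant IDs for a two-condition experiment.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.seed(42)               # fixed seed so the assignment can be re-run
random.shuffle(participants)  # randomize the order of participants

# Split the shuffled list evenly into treatment and control groups.
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]
print(len(treatment), len(control))  # 10 10
```

Shuffling before splitting ensures every participant has an equal chance of landing in either condition, which is the point of random assignment.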

Survey Research

Survey research gathers information from a sample of individuals through standardized questionnaires or interviews. It aims to collect data on opinions, attitudes, and behaviours.

  • Structured Instruments: Surveys use structured instruments, such as questionnaires, to collect data.
  • Large Sample Size: Surveys often target a large and diverse group of participants.
  • Quantitative Data Analysis: Responses are quantified for statistical analysis.

Applications: Widely employed in social sciences, marketing, and public opinion research to understand trends and preferences.

Descriptive Research

Descriptive research seeks to portray an accurate profile of a situation or phenomenon. It focuses on answering the ‘what,’ ‘who,’ ‘where,’ and ‘when’ questions.

  • Observation and Data Collection: This involves observing and documenting without manipulating variables.
  • Objective Description: Aim to provide an unbiased and factual account of the subject.
  • Quantitative or Qualitative Data: This can include both types of data, depending on the research focus.

Applications: Useful in situations where researchers want to understand and describe a phenomenon without altering it, common in social sciences and education.

Qualitative Research Methods

Qualitative research emphasizes exploring and understanding the depth and complexity of phenomena through non-numerical data.

Case Study

A case study is an in-depth exploration of a particular person, group, event, or situation. It involves detailed, context-rich analysis.

  • Rich Data Collection: Uses various data sources, such as interviews, observations, and documents.
  • Contextual Understanding: Aims to understand the context and unique characteristics of the case.
  • Holistic Approach: Examines the case in its entirety.

Applications: Common in social sciences, psychology, and business to investigate complex and specific instances.

Ethnography

Ethnography involves immersing the researcher in the culture or community being studied to gain a deep understanding of their behaviours, beliefs, and practices.

  • Participant Observation: Researchers actively participate in the community or setting.
  • Holistic Perspective: Focuses on the interconnectedness of cultural elements.
  • Qualitative Data: In-depth narratives and descriptions are central to ethnographic studies.

Applications: Widely used in anthropology, sociology, and cultural studies to explore and document cultural practices.

Grounded Theory

Grounded theory aims to develop theories grounded in the data itself. It involves systematic data collection and analysis to construct theories from the ground up.

  • Constant Comparison: Data is continually compared and analyzed during the research process.
  • Inductive Reasoning: Theories emerge from the data rather than being imposed on it.
  • Iterative Process: The research design evolves as the study progresses.

Applications: Commonly applied in sociology, nursing, and management studies to generate theories from empirical data.

Research design is the structural framework that outlines the systematic process and plan for conducting a study. It serves as the blueprint, guiding researchers on how to collect, analyze, and interpret data.

Exploratory, Descriptive, And Explanatory Designs

Exploratory Design

Exploratory research design is employed when a researcher aims to explore a relatively unknown subject or gain insights into a complex phenomenon.

  • Flexibility: Allows for flexibility in data collection and analysis.
  • Open-Ended Questions: Uses open-ended questions to gather a broad range of information.
  • Preliminary Nature: Often used in the initial stages of research to formulate hypotheses.

Applications: Valuable in the early stages of investigation, especially when the researcher seeks a deeper understanding of a subject before formalizing research questions.

Descriptive Design

Descriptive research design focuses on portraying an accurate profile of a situation, group, or phenomenon.

  • Structured Data Collection: Involves systematic and structured data collection methods.
  • Objective Presentation: Aims to provide an unbiased and factual account of the subject.
  • Quantitative or Qualitative Data: Can incorporate both types of data, depending on the research objectives.

Applications: Widely used in social sciences, marketing, and educational research to provide detailed and objective descriptions.

Explanatory Design

Explanatory research design aims to identify the causes and effects of a phenomenon, explaining the ‘why’ and ‘how’ behind observed relationships.

  • Causal Relationships: Seeks to establish causal relationships between variables.
  • Controlled Variables: Often involves controlling certain variables to isolate causal factors.
  • Quantitative Analysis: Primarily relies on quantitative data analysis techniques.

Applications: Commonly employed in scientific studies and social sciences to delve into the underlying reasons behind observed patterns.

Cross-Sectional Vs. Longitudinal Designs

Cross-Sectional Design

Cross-sectional designs collect data from participants at a single point in time.

  • Snapshot View: Provides a snapshot of a population at a specific moment.
  • Efficiency: More efficient in terms of time and resources.
  • Limited Temporal Insights: Offers limited insights into changes over time.

Applications: Suitable for studying characteristics or behaviours that are stable or not expected to change rapidly.

Longitudinal Design

Longitudinal designs involve the collection of data from the same participants over an extended period.

  • Temporal Sequence: Allows for the examination of changes over time.
  • Causality Assessment: Facilitates the assessment of cause-and-effect relationships.
  • Resource-Intensive: Requires more time and resources compared to cross-sectional designs.

Applications: Ideal for studying developmental processes, trends, or the impact of interventions over time.

Experimental Vs. Non-Experimental Designs

Experimental Design

Experimental designs involve manipulating variables under controlled conditions to observe the effect on another variable.

  • Causality Inference: Enables the inference of cause-and-effect relationships.
  • Quantitative Data: Primarily involves the collection and analysis of numerical data.

Applications: Commonly used in scientific studies, psychology, and medical research to establish causal relationships.

Non-Experimental Design

Non-experimental designs observe and describe phenomena without manipulating variables.

  • Natural Settings: Data is often collected in natural settings without intervention.
  • Descriptive or Correlational: Focuses on describing relationships or correlations between variables.
  • Quantitative or Qualitative Data: This can involve either type of data, depending on the research approach.

Applications: Suitable for studying complex phenomena in real-world settings where manipulation may not be ethical or feasible.

Effective data collection is fundamental to the success of any research endeavour. 

Designing Effective Surveys

Objective Design:

  • Clearly define the research objectives to guide the survey design.
  • Craft questions that align with the study’s goals and avoid ambiguity.

Structured Format:

  • Use a structured format with standardized questions for consistency.
  • Include a mix of closed-ended and open-ended questions for detailed insights.

Pilot Testing:

  • Conduct pilot tests to identify and rectify potential issues with survey design.
  • Ensure clarity, relevance, and appropriateness of questions.

Sampling Strategy:

  • Develop a robust sampling strategy to ensure a representative participant group.
  • Consider random sampling or stratified sampling based on the research goals.
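Proportionate stratified sampling, as suggested above, might look like this in outline (the sampling frame, stratum sizes, and seed are invented for illustration):

```python
import random

# Hypothetical sampling frame, grouped by a stratum variable (year of study).
frame = {
    "first_year":  [f"F{i}" for i in range(100)],
    "second_year": [f"S{i}" for i in range(60)],
    "third_year":  [f"T{i}" for i in range(40)],
}

random.seed(7)          # fixed seed so the draw is reproducible
sample_fraction = 0.10  # sample 10% from each stratum

# Proportionate stratified sampling: each stratum appears in the
# sample in the same proportion as in the population.
sample = []
for stratum, members in frame.items():
    k = round(len(members) * sample_fraction)
    sample.extend(random.sample(members, k))

print(len(sample))  # 10 + 6 + 4 = 20
```

Compared with simple random sampling over the whole frame, stratifying guarantees that small strata are not under-represented by chance.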

Conducting Interviews

Establishing Rapport:

  • Build rapport with participants to create a comfortable and open environment.
  • Clearly communicate the purpose of the interview and the value of participants’ input.

Open-Ended Questions:

  • Frame open-ended questions to encourage detailed responses.
  • Allow participants to express their thoughts and perspectives freely.

Active Listening:

  • Practice active listening to fully understand responses and gather rich data.
  • Avoid interrupting and maintain a non-judgmental stance during the interview.

Ethical Considerations:

  • Obtain informed consent and assure participants of confidentiality.
  • Be transparent about the study’s purpose and potential implications.

Observation

1. Participant Observation

Immersive Participation:

  • Actively immerse yourself in the setting or group being observed.
  • Develop a deep understanding of behaviours, interactions, and context.

Field Notes:

  • Maintain detailed and reflective field notes during observations.
  • Document observed patterns, unexpected events, and participant reactions.

Ethical Awareness:

  • Be conscious of ethical considerations, ensuring respect for participants.
  • Balance the role of observer and participant to minimize bias.

2. Non-participant Observation

Objective Observation:

  • Maintain a more detached and objective stance during non-participant observation.
  • Focus on recording behaviours, events, and patterns without direct involvement.

Data Reliability:

  • Enhance the reliability of data by reducing observer bias.
  • Develop clear observation protocols and guidelines.

Contextual Understanding:

  • Strive for a thorough understanding of the observed context.
  • Consider combining non-participant observation with other methods for triangulation.

Archival Research

1. Using Existing Data

Identifying Relevant Archives:

  • Locate and access archives relevant to the research topic.
  • Collaborate with institutions or repositories holding valuable data.

Data Verification:

  • Verify the accuracy and reliability of archived data.
  • Cross-reference with other sources to ensure data integrity.

Ethical Use:

  • Adhere to ethical guidelines when using existing data.
  • Respect copyright and intellectual property rights.

2. Challenges and Considerations

Incomplete or Inaccurate Archives:

  • Address the possibility of incomplete or inaccurate archival records.
  • Acknowledge limitations and uncertainties in the data.

Temporal Bias:

  • Recognize potential temporal biases in archived data.
  • Consider the historical context and changes that may impact interpretation.

Access Limitations:

  • Address potential limitations in accessing certain archives.
  • Seek alternative sources or collaborate with institutions to overcome barriers.

Common Challenges in Research Methodology

Conducting research is a complex and dynamic process, often accompanied by a myriad of challenges. Addressing these challenges is crucial to ensure the reliability and validity of research findings.

Sampling Issues

Sampling Bias:

  • The presence of sampling bias can lead to an unrepresentative sample, affecting the generalizability of findings.
  • Employ random sampling methods and ensure the inclusion of diverse participants to reduce bias.

Sample Size Determination:

  • Determining an appropriate sample size is a delicate balance. Too small a sample may lack statistical power, while an excessively large sample may strain resources.
  • Conduct a power analysis to determine the optimal sample size based on the research objectives and expected effect size.
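As a rough sketch of such a power analysis, the normal-approximation formula for a two-sided, two-sample comparison of means gives a required n per group (the effect size, alpha, and power values below are conventional defaults, not figures from any particular study):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size: float,
                          alpha: float = 0.05,
                          power: float = 0.80) -> int:
    """Approximate n per group for a two-sided, two-sample comparison
    of means, using the normal approximation to the power function."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)

# Medium effect (Cohen's d = 0.5), alpha = .05, power = .80
print(sample_size_per_group(0.5))  # ≈ 63 per group
```

Dedicated tools such as G*Power or statsmodels refine this with the t-distribution and other test families; the normal approximation is close enough for early planning.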

Data Quality And Validity

Measurement Error:

  • Inaccuracies in measurement tools or data collection methods can introduce measurement errors, impacting the validity of results.
  • Pilot test instruments, calibrate equipment, and use standardized measures to enhance the reliability of data.

Construct Validity:

  • Ensuring that the chosen measures accurately capture the intended constructs is a persistent challenge.
  • Use established measurement instruments and employ multiple measures to assess the same construct for triangulation.
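A standard companion check on measurement quality is internal-consistency reliability, commonly reported as Cronbach's alpha. A self-contained sketch (the three-item scale scores below are invented for illustration):

```python
def cronbach_alpha(items):
    """Internal-consistency reliability for k scale items.

    `items` is a list of k lists, each holding one item's scores
    across the same set of respondents."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(variance(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]  # per-respondent totals
    total_var = variance(totals)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 3-item scale answered by 5 respondents.
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(scores), 2))
```

Values of alpha around 0.7 or above are conventionally taken to indicate acceptable internal consistency, though the threshold depends on the field and the stakes of the measurement.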

Time And Resource Constraints

Timeline Pressures:

  • Limited timeframes can compromise the depth and thoroughness of the research process.
  • Develop a realistic timeline, prioritize tasks, and communicate expectations with stakeholders to manage time constraints effectively.

Resource Availability:

  • Inadequate resources, whether financial or human, can impede the execution of research activities.
  • Seek external funding, collaborate with other researchers, and explore alternative methods that require fewer resources.

Managing Bias in Research

Selection Bias:

  • Selecting participants in a way that systematically skews the sample can introduce selection bias.
  • Employ randomization techniques, use stratified sampling, and transparently report participant recruitment methods.

Confirmation Bias:

  • Researchers may unintentionally favour information that confirms their preconceived beliefs or hypotheses.
  • Adopt a systematic and open-minded approach, use blinded study designs, and engage in peer review to mitigate confirmation bias.

Tips On How To Write A Research Methodology

Conducting successful research relies not only on the application of sound methodologies but also on strategic planning and effective collaboration. Here are some tips to enhance the success of your research methodology:

Tip 1. Clear Research Objectives

Well-defined research objectives guide the entire research process. Clearly articulate the purpose of your study, outlining specific research questions or hypotheses.

Tip 2. Comprehensive Literature Review

A thorough literature review provides a foundation for understanding existing knowledge and identifying gaps. Invest time in reviewing relevant literature to inform your research design and methodology.

Tip 3. Detailed Research Plan

A detailed plan serves as a roadmap, ensuring all aspects of the research are systematically addressed. Develop a detailed research plan outlining timelines, milestones, and tasks.

Tip 4. Ethical Considerations

Ethical practices are fundamental to maintaining the integrity of research. Address ethical considerations early, obtain necessary approvals, and ensure participant rights are safeguarded.

Tip 5. Stay Updated On Methodologies

Research methodologies evolve, and staying updated is essential for employing the most effective techniques. Engage in continuous learning by attending workshops, conferences, and reading recent publications.

Tip 6. Adaptability In Methods

Unforeseen challenges may arise during research, necessitating adaptability in methods. Be flexible and willing to modify your approach when needed, ensuring the integrity of the study.

Tip 7. Iterative Approach

Research is often an iterative process, and refining methods based on ongoing findings enhances the study’s robustness. Regularly review and refine your research design and methods as the study progresses.

Frequently Asked Questions

What is research methodology?

Research methodology is the systematic process of planning, executing, and evaluating scientific investigation. It encompasses the techniques, tools, and procedures used to collect, analyze, and interpret data, ensuring the reliability and validity of research findings.

What are the methodologies in research?

Research methodologies include qualitative and quantitative approaches. Qualitative methods involve in-depth exploration of non-numerical data, while quantitative methods use statistical analysis to examine numerical data. Mixed methods combine both approaches for a comprehensive understanding of research questions.

How to write research methodology?

To write a research methodology, clearly outline the study’s design, data collection, and analysis procedures. Specify research tools, participants, and sampling methods. Justify choices and discuss limitations. Ensure clarity, coherence, and alignment with research objectives for a robust methodology section.

How to write the methodology section of a research paper?

In the methodology section of a research paper, describe the study’s design, data collection, and analysis methods. Detail procedures, tools, participants, and sampling. Justify choices, address ethical considerations, and explain how the methodology aligns with research objectives, ensuring clarity and rigour.

What is mixed research methodology?

Mixed research methodology combines both qualitative and quantitative research approaches within a single study. This approach aims to enhance the details and depth of research findings by providing a more comprehensive understanding of the research problem or question.


Princeton Correspondents on Undergraduate Research

How to Make a Successful Research Presentation

Turning a research paper into a visual presentation is difficult; there are pitfalls, and navigating the path to a brief, informative presentation takes time and practice. As a TA for  GEO/WRI 201: Methods in Data Analysis & Scientific Writing this past fall, I saw how this process works from an instructor’s standpoint. I’ve presented my own research before, but helping others present theirs taught me a bit more about the process. Here are some tips I learned that may help you with your next research presentation:

More is more

In general, your presentation will always benefit from more practice, more feedback, and more revision. By practicing in front of friends, you can get comfortable with presenting your work while receiving feedback. It is hard to know how to revise your presentation if you never practice. If you are presenting to a general audience, getting feedback from someone outside of your discipline is crucial. Terms and ideas that seem intuitive to you may be completely foreign to someone else, and your well-crafted presentation could fall flat.

Less is more

Limit the scope of your presentation, the number of slides, and the text on each slide. In my experience, text works well for organizing slides, orienting the audience to key terms, and annotating important figures–not for explaining complex ideas. Having fewer slides is usually better as well. In general, about one slide per minute of presentation is an appropriate budget. Too many slides is usually a sign that your topic is too broad.

Limit the scope of your presentation

Don’t present your paper. Presentations are usually around 10 min long. You will not have time to explain all of the research you did in a semester (or a year!) in such a short span of time. Instead, focus on the highlight(s). Identify a single compelling research question which your work addressed, and craft a succinct but complete narrative around it.

Craft a compelling research narrative

After identifying the focused research question, walk your audience through your research as if it were a story. Presentations with strong narrative arcs are clear, captivating, and compelling.

  • Introduction (exposition — rising action)

Orient the audience and draw them in by demonstrating the relevance and importance of your research story with strong global motive. Provide them with the necessary vocabulary and background knowledge to understand the plot of your story. Introduce the key studies (characters) relevant in your story and build tension and conflict with scholarly and data motive. By the end of your introduction, your audience should clearly understand your research question and be dying to know how you resolve the tension built through motive.

  • Methods (rising action)

The methods section should transition smoothly and logically from the introduction. Beware of presenting your methods in a boring, arc-killing, ‘this is what I did.’ Focus on the details that set your story apart from the stories other people have already told. Keep the audience interested by clearly motivating your decisions based on your original research question or the tension built in your introduction.

  • Results (climax)

Less is usually more here. Only present results which are clearly related to the focused research question you are presenting. Make sure you explain the results clearly so that your audience understands what your research found. This is the peak of tension in your narrative arc, so don’t undercut it by quickly clicking through to your discussion.

  • Discussion (falling action)

By now your audience should be dying for a satisfying resolution. Here is where you contextualize your results and begin resolving the tension between your findings and past research. Be thorough. If you have too many conflicts left unresolved, or you don’t have enough time to present all of the resolutions, you probably need to further narrow the scope of your presentation.

  • Conclusion (denouement)

Return to your initial research question and motive, resolving any final conflicts and tying up loose ends. Leave the audience with a clear resolution of your focused research question, and use unresolved tension to set up potential sequels (i.e. further research).

Use your medium to enhance the narrative

Visual presentations should be dominated by clear, intentional graphics. Subtle animation in key moments (usually during the results or discussion) can add drama to the narrative arc and make conflict resolutions more satisfying. You are narrating a story written in images, videos, cartoons, and graphs. While your paper is mostly text, with graphics to highlight crucial points, your slides should be the opposite. Adapting to the new medium may require you to create or acquire far more graphics than you included in your paper, but it is necessary to create an engaging presentation.

The most important thing you can do for your presentation is to practice and revise. Bother your friends, your roommates, TAs–anybody who will sit down and listen to your work. Beyond that, think about presentations you have found compelling and try to incorporate some of those elements into your own. Remember you want your work to be comprehensible; you aren’t creating experts in 10 minutes. Above all, try to stay passionate about what you did and why. You put the time in, so show your audience that it’s worth it.

For more insight into research presentations, check out these past PCUR posts written by Emma and Ellie.

— Alec Getraer, Natural Sciences Correspondent


Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding the different types of research design is essential, as this helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.

Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive , correlational , experimental , and quasi-experimental . 

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
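
A sketch of the statistical step in this example, using hypothetical numbers and plain Python (a real study would use a statistics package and also report a significance test):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: weekly exercise sessions vs. resting heart rate
exercise = [0, 1, 2, 3, 4, 5, 6]
heart_rate = [78, 75, 74, 70, 68, 66, 63]

r = pearson_r(exercise, heart_rate)
print(round(r, 3))  # a strong negative correlation, close to -1
```

The coefficient ranges from -1 to +1; a value near -1 here would suggest that more exercise tends to accompany a lower resting heart rate, but (as the next paragraph stresses) it says nothing about causation.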

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine whether there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) and measure its effect on another (the dependent variable), while controlling for any other variables that could influence the outcome. Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment. This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling). Doing so helps reduce the potential for bias and confounding variables. This need for random assignment can lead to ethics-related issues. For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
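
The random-assignment step described above can be sketched as follows (hypothetical participants, Python standard library only): shuffling the pool and dealing it into groups gives each participant an equal chance of landing in any condition.

```python
import random

def randomly_assign(participants, groups, seed=0):
    """Shuffle participants, then deal them round-robin into groups,
    so each participant has an equal chance of any condition."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

# Hypothetical experiment: 30 plants across three fertiliser conditions
plants = [f"plant_{i}" for i in range(30)]
assignment = randomly_assign(plants, ["fertiliser_A", "fertiliser_B", "control"])
for group, members in assignment.items():
    print(group, len(members))  # 10 plants in each condition
```

Note that this assigns an existing pool to conditions; random sampling (how the pool itself is drawn from a population) is a separate decision.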

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.

Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.

Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological , grounded theory , ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.



What Is a Research Methodology? | Steps & Tips

Published on 25 February 2019 by Shona McCombes. Revised on 10 October 2022.

Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation, or research paper, the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research.

It should include:

  • The type of research you conducted
  • How you collected and analysed your data
  • Any tools or materials you used in the research
  • Why you chose these methods

Keep in mind:

  • Your methodology section should generally be written in the past tense.
  • Academic style guides in your field may provide detailed guidelines on what to include for different types of studies.
  • Your citation style might provide guidelines for your methodology section (e.g., an APA Style methods section).


Table of contents

  • How to write a research methodology
  • Why is a methods section important?
  • Step 1: Explain your methodological approach
  • Step 2: Describe your data collection methods
  • Step 3: Describe your analysis method
  • Step 4: Evaluate and justify the methodological choices you made
  • Tips for writing a strong methodology chapter
  • Frequently asked questions about methodology


Your methods section is your opportunity to share how you conducted your research and why you chose the methods you did. It’s also the place to show that your research was rigorously conducted and can be replicated.

It gives your research legitimacy and situates it within your field, and also gives your readers a place to refer to if they have any questions or critiques in other sections.

You can start by introducing your overall approach to your research. You have two options here.

Option 1: Start with your “what”

What research problem or question did you investigate?

  • Aim to describe the characteristics of something?
  • Explore an under-researched topic?
  • Establish a causal relationship?

And what type of data did you need to achieve this aim?

  • Quantitative data , qualitative data , or a mix of both?
  • Primary data collected yourself, or secondary data collected by someone else?
  • Experimental data gathered by controlling and manipulating variables, or descriptive data gathered via observations?

Option 2: Start with your “why”

Depending on your discipline, you can also start with a discussion of the rationale and assumptions underpinning your methodology. In other words, why did you choose these methods for your study?

  • Why is this the best way to answer your research question?
  • Is this a standard methodology in your field, or does it require justification?
  • Were there any ethical considerations involved in your choices?
  • What are the criteria for validity and reliability in this type of research ?

Once you have introduced your reader to your methodological approach, you should share full details about your data collection methods .

Quantitative methods

For your findings to be considered generalisable, you should describe your quantitative research methods in enough detail for another researcher to replicate your study.

Here, explain how you operationalised your concepts and measured your variables. Discuss your sampling method or inclusion/exclusion criteria, as well as any tools, procedures, and materials you used to gather your data.

Surveys: Describe where, when, and how the survey was conducted.

  • How did you design the questionnaire?
  • What form did your questions take (e.g., multiple choice, Likert scale )?
  • Were your surveys conducted in-person or virtually?
  • What sampling method did you use to select participants?
  • What was your sample size and response rate?

Experiments: Share full details of the tools, techniques, and procedures you used to conduct your experiment.

  • How did you design the experiment ?
  • How did you recruit participants?
  • How did you manipulate and measure the variables ?
  • What tools did you use?

Existing data: Explain how you gathered and selected the material (such as datasets or archival data) that you used in your analysis.

  • Where did you source the material?
  • How was the data originally produced?
  • What criteria did you use to select material (e.g., date range)?

Example: The survey consisted of 5 multiple-choice questions and 10 questions measured on a 7-point Likert scale.

The goal was to collect survey responses from 350 customers visiting the fitness apparel company’s brick-and-mortar location in Boston on 4–8 July 2022, between 11:00 and 15:00.

Here, a customer was defined as a person who had purchased a product from the company on the day they took the survey. Participants were given 5 minutes to fill in the survey anonymously. In total, 408 customers responded, but not all surveys were fully completed. Due to this, 371 survey results were included in the analysis.
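The cleaning step described above (keeping only fully completed surveys before analysis) can be sketched in a few lines of Python; the survey records here are invented for illustration, not data from the example study:

```python
# Keep only fully completed surveys before analysis.
# The survey records below are invented for illustration.
surveys = [
    {"id": 1, "answers": [3, 4, 5, 2, 1]},
    {"id": 2, "answers": [4, None, 5, 2, 1]},  # incomplete: dropped
    {"id": 3, "answers": [5, 5, 4, 3, 2]},
]

complete = [s for s in surveys if all(a is not None for a in s["answers"])]
print(f"{len(complete)} of {len(surveys)} surveys included in the analysis")
```

In the example study, the same rule reduced 408 collected surveys to 371 usable results.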

Qualitative methods

In qualitative research , methods are often more flexible and subjective. For this reason, it’s crucial to robustly explain the methodology choices you made.

Be sure to discuss the criteria you used to select your data, the context in which your research was conducted, and the role you played in collecting your data (e.g., were you an active participant, or a passive observer?)

Interviews or focus groups: Describe where, when, and how the interviews were conducted.

  • How did you find and select participants?
  • How many participants took part?
  • What form did the interviews take ( structured , semi-structured , or unstructured )?
  • How long were the interviews?
  • How were they recorded?

Participant observation: Describe where, when, and how you conducted the observation or ethnography.

  • What group or community did you observe? How long did you spend there?
  • How did you gain access to this group? What role did you play in the community?
  • How long did you spend conducting the research? Where was it located?
  • How did you record your data (e.g., audiovisual recordings, note-taking)?

Existing data: Explain how you selected case study materials for your analysis.

  • What type of materials did you analyse?
  • How did you select them?

Example: In order to gain better insight into possibilities for future improvement of the fitness shop’s product range, semi-structured interviews were conducted with 8 returning customers.

Here, a returning customer was defined as someone who usually bought products at least twice a week from the store.

Surveys were used to select participants. Interviews were conducted in a small office next to the cash register and lasted approximately 20 minutes each. Answers were recorded by note-taking, and seven interviews were also filmed with consent. One interviewee preferred not to be filmed.

Mixed methods

Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you.

Mixed methods are less common than standalone analyses, largely because they require a great deal of effort to pull off successfully. If you choose to pursue mixed methods, it’s especially important to robustly justify your methods here.


Next, you should indicate how you processed and analysed your data. Avoid going into too much detail: you should not start introducing or discussing any of your results at this stage.

In quantitative research , your analysis will be based on numbers. In your methods section, you can include:

  • How you prepared the data before analysing it (e.g., checking for missing data , removing outliers , transforming variables)
  • Which software you used (e.g., SPSS, Stata or R)
  • Which statistical tests you used (e.g., two-tailed t test , simple linear regression )
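As a rough illustration of the tests listed above, here is a minimal sketch using Python with scipy (one alternative to SPSS, Stata, or R). The data are made up for the example, not taken from any real study:

```python
# Minimal sketch: a two-tailed independent-samples t test and a
# simple linear regression, run on invented data with scipy.
from scipy import stats

group_a = [4.1, 3.9, 4.5, 4.2, 3.8, 4.4]  # hypothetical scores, group A
group_b = [3.2, 3.6, 3.1, 3.5, 3.3, 3.4]  # hypothetical scores, group B

# Two-tailed t test: do the group means differ?
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Simple linear regression: how does y change with x?
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.0]
fit = stats.linregress(x, y)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue ** 2:.3f}")
```

Whatever tool you use, the methods section should report which test was run and why, not just the numbers it produced.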

In qualitative research, your analysis will be based on language, images, and observations (often involving some form of textual analysis ).

Specific methods might include:

  • Content analysis : Categorising and discussing the meaning of words, phrases and sentences
  • Thematic analysis : Coding and closely examining the data to identify broad themes and patterns
  • Discourse analysis : Studying communication and meaning in relation to their social context
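The coding step of content analysis is usually done by hand or in QDA software, but the counting part can be sketched programmatically. The transcripts and the keyword-to-category coding scheme below are invented purely for illustration:

```python
# A minimal sketch of one content-analysis step: counting coded
# categories across interview transcripts. Transcripts and the
# coding scheme are invented for illustration.
from collections import Counter

transcripts = [
    "I like the store but the prices feel high",
    "Great service, though prices could be lower",
    "The service was friendly and the store was clean",
]

# Hypothetical coding scheme: keyword -> analytic category
codes = {"prices": "cost", "service": "staff", "store": "place", "clean": "place"}

counts = Counter()
for text in transcripts:
    for word in text.lower().split():
        word = word.strip(",.")
        if word in codes:
            counts[codes[word]] += 1

print(counts)  # e.g. Counter({'place': 3, 'cost': 2, 'staff': 2})
```

In a real study the coding scheme itself, and how it was developed, would be the substantive part of the methodology.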

Mixed methods combine the above two research methods, integrating both qualitative and quantitative approaches into one coherent analytical process.

Above all, your methodology section should clearly make the case for why you chose the methods you did. This is especially true if you did not take the most standard approach to your topic. In this case, discuss why other methods were not suitable for your objectives, and show how this approach contributes new knowledge or understanding.

In any case, it should be overwhelmingly clear to your reader that you set yourself up for success in terms of your methodology’s design. Show how your methods should lead to results that are valid and reliable, while leaving the analysis of the meaning, importance, and relevance of your results for your discussion section .

  • Quantitative: Lab-based experiments cannot always accurately simulate real-life situations and behaviours, but they are effective for testing causal relationships between variables .
  • Qualitative: Unstructured interviews usually produce results that cannot be generalised beyond the sample group , but they provide a more in-depth understanding of participants’ perceptions, motivations, and emotions.
  • Mixed methods: Despite issues systematically comparing differing types of data, a solely quantitative study would not sufficiently incorporate the lived experience of each participant, while a solely qualitative study would be insufficiently generalisable.

Remember that your aim is not just to describe your methods, but to show how and why you applied them. Again, it’s critical to demonstrate that your research was rigorously conducted and can be replicated.

1. Focus on your objectives and research questions

The methodology section should clearly show why your methods suit your objectives  and convince the reader that you chose the best possible approach to answering your problem statement and research questions .

2. Cite relevant sources

Your methodology can be strengthened by referencing existing research in your field. This can help you to:

  • Show that you followed established practice for your type of research
  • Discuss how you decided on your approach by evaluating existing research
  • Present a novel methodological approach to address a gap in the literature

3. Write for your audience

Consider how much information you need to give, and avoid getting too lengthy. If you are using methods that are standard for your discipline, you probably don’t need to give a lot of background or justification.

Regardless, your methodology should be a clear, well-structured text that makes an argument for your approach, not just a list of technical details and procedures.

Methodology refers to the overarching strategy and rationale of your research. Developing your methodology involves studying the research methods used in your field and the theories or principles that underpin them, in order to choose the approach that best matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. interviews, experiments , surveys , statistical tests ).

In a dissertation or scientific paper, the methodology chapter or methods section comes after the introduction and before the results , discussion and conclusion .

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.
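The student-survey example above can be sketched with Python's random module; the roster size here is hypothetical:

```python
# Simple random sampling: draw 100 students from a hypothetical
# roster so that each student has an equal chance of selection.
import random

population = [f"student_{i}" for i in range(2000)]  # hypothetical roster

random.seed(42)  # fixed seed so the draw is repeatable
sample = random.sample(population, k=100)  # sampling without replacement

print(len(sample), "students sampled")
```

Because `random.sample` draws without replacement, no student appears twice, which matches the usual definition of a simple random sample.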

Cite this Scribbr article


McCombes, S. (2022, October 10). What Is a Research Methodology? | Steps & Tips. Scribbr. Retrieved 27 May 2024, from https://www.scribbr.co.uk/thesis-dissertation/methodology/


Research Methodology: Overview of Research Methodology


Research Methods Overview

If you are planning to do research - whether a student research project, IQP, MQP, GPS project, thesis, or dissertation - you need to use valid approaches and tools to set up your study, gather your data, and make sense of your findings. This research methods guide will help you choose a methodology and launch into your research project.

Data collection and data analysis are research methods that can be applied across many disciplines. Research is broadly divided into qualitative and quantitative approaches. This guide focuses on the most popular methods, including:

focus groups

case studies

We are happy to answer questions about research methods and assist with choosing a method that is right for your research, in person or online, through a research consultation.

" Research Data Management " by  Peter Neish  is marked with  CC0 1.0 .

Research Design vs Research Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal and on what subjects (and whom) you want to study. Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus. To answer these questions, you need to decide how to collect your data. The most frequently used methods include:

  • Observation / Participant Observation
  • Surveys
  • Interviews / Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will be different in quality and quantity.   For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider. An observation or interview method (a qualitative approach) helps you collect richer information, but it takes time. Using a survey helps you collect more data quickly, yet it may lack detail. So, you will need to consider the time you have for research and the balance between the strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).

Research Data Management

Research Data Management (RDM) refers to how you are going to keep and share your data over a longer time frame, such as after you graduate. It is defined as the organization, documentation, storage, and preservation of the data resulting from the research process, where data can be broadly defined as the outcome of experiments or observations that validate research findings. Data can take a variety of forms, including numerical output (quantitative data), qualitative data, documentation, images, audio, and video.

"Research Design"  by  George C Gordon Library  is licensed under  CC BY 4.0  / A derivative from the  original work

  • Last Updated: Jul 31, 2023 3:07 PM
  • URL: https://libguides.wpi.edu/researchmethod


Research Design and Methodology

Submitted: 23 January 2019 Reviewed: 08 March 2019 Published: 07 August 2019

DOI: 10.5772/intechopen.85731


From the edited volume Cyberspace, edited by Evon Abu-Taieh, Abdelkrim El Mouatasim and Issam H. Al Hadid.


A number of approaches are used in research design. The purpose of this chapter is to design the methodology of the research through mixed research techniques. The research approach also guides the researcher in arriving at the research findings. In this chapter, the general design of the research and the methods used for data collection are explained in detail. It includes three main parts: the first highlights the dissertation design, the second discusses qualitative and quantitative data collection methods, and the last illustrates the general research framework. The purpose of this section is to indicate how the research was conducted throughout the study period.

  • research design
  • methodology
  • data sources

Author Information

Kassu Jilcha Sileyew*

  • School of Mechanical and Industrial Engineering, Addis Ababa Institute of Technology, Addis Ababa University, Addis Ababa, Ethiopia

*Address all correspondence to: [email protected]

1. Introduction

Research methodology is the path through which researchers conduct their research. It shows how they formulate their problem and objective and how they present their results from the data obtained during the study period. This research design and methodology chapter also shows how the research outcome will be obtained in line with the objective of the study, and discusses the research methods used during the research process, from research strategy through to result dissemination. Specifically, the author outlines the research strategy and research design; the study area; the data sources, both primary and secondary; population considerations and sample size determination, both for the questionnaires and for the workplace site exposure measurements; the data collection methods, including workplace site observation, desk review, questionnaires, experts' opinions, and workplace site exposure measurement, together with a pretest of the data collection tools; the methods of data analysis (quantitative and qualitative) and the data analysis software; the reliability and validity analysis of the quantitative data; data quality management; inclusion criteria; ethical considerations; and the dissemination of results and their utilization. To satisfy the objectives of the study, qualitative and quantitative research methods were used in combination, because data were obtained from all aspects of the data source during the study period.
Therefore, the purpose of this methodology is to satisfy the research plan and target devised by the researcher.

2. Research design

The research design is intended to provide an appropriate framework for a study. A very significant decision in the research design process is the choice of research approach, since it determines how relevant information for the study will be obtained; however, the research design process involves many interrelated decisions [ 1 ].

This study employed a mixed type of methods. The first part of the study consisted of a series of well-structured questionnaires (for management, employee’s representatives, and technician of industries) and semi-structured interviews with key stakeholders (government bodies, ministries, and industries) in participating organizations. The other design used is an interview of employees to know how they feel about safety and health of their workplace, and field observation at the selected industrial sites was undertaken.

Hence, this study employs a descriptive research design to assess the effects of an occupational safety and health management system on employee health, safety, and property damage in selected manufacturing industries. Saunders et al. [ 2 ] and Miller [ 3 ] state that descriptive research portrays an accurate profile of persons, events, or situations. This design offers the researchers a profile of the relevant aspects of the phenomena of interest from an individual, organizational, and industry-oriented perspective. It therefore enabled the researchers to gather data from a wide range of respondents on the impact of safety and health on manufacturing industries in Ethiopia, and helped in analyzing how the responses obtained relate to workplace safety and health in the manufacturing industries. The overall research design and flow process are depicted in Figure 1 .

Figure 1. Research methods and processes (author design).

3. Research methodology

To address the key research objectives, this research used both qualitative and quantitative methods and a combination of primary and secondary sources. The qualitative data support the quantitative data analysis and results. The results obtained are triangulated, since the researcher utilized both qualitative and quantitative data types in the analysis. The study area, data sources, and sampling techniques are discussed in this section.

3.1 The study area

According to Fraenkel and Warren [ 4 ], a population refers to the complete set of individuals (subjects or events) having common characteristics in which the researcher is interested. The population of the study was determined using random sampling. Data collection was conducted from March 07, 2015 to December 10, 2016, in selected manufacturing industries in and around Addis Ababa city. The manufacturing companies were selected based on their number of employees, year of establishment, the potential accidents prevailing, and the type of manufacturing industry, even though it was difficult to satisfy all criteria.

3.2 Data sources

3.2.1 Primary data sources

Primary data were obtained from the original sources of information. They are more reliable and give greater confidence in decision-making, as the analysis has direct contact with the occurrence of the events. The primary data sources are the industries' working environments (through observation, pictures, and photographs) and industry employees, both management and shop-floor workers (through interviews, questionnaires, and discussions).

3.2.2 Secondary data

A desk review was conducted to collect data from various secondary sources. This includes reports and project documents from each manufacturing sector (mainly at the medium and large level). Secondary data were obtained from the literature on OSH, and the remaining data came from the companies' manuals, reports, and management documents included in the desk review. Reputable journals, books, articles, periodicals, proceedings, magazines, newsletters, newspapers, websites, and other sources on the manufacturing industrial sectors were considered. Data from existing working documents, manuals, procedures, reports, statistical data, policies, regulations, and standards were also taken into account in the review.

In general, for this research study, the desk review was completed to this end and was refined and modified based on the manuals and documents obtained from the selected companies.

4. Population and sample size

4.1 Population

The study population consisted of manufacturing industries' employees in and around Addis Ababa city, as the most representative manufacturing industrial clusters are found there. To select a representative population, the industries considered most prone to accidents were chosen using random and purposive sampling. The population was drawn from the textile, leather, metal, chemical, and food manufacturing industries. A total of 189 industries from the government's priority areas responded to the questionnaire survey. Random and disproportionate sampling methods were used: 80 responses came from wood, metal, and iron works; 30 from food, beverage, and tobacco products; 50 from leather, textile, and garments; 20 from chemical and chemical products; and 9 from the remaining clusters of manufacturing industries.

4.2 Questionnaire sample size determination

Simple random sampling and purposive sampling methods were used to select the representative manufacturing industries and respondents for the study. Simple random sampling ensures that each member of the population has an equal chance of selection. A sample size determination procedure was used to obtain optimal and reasonable information. In this study, both probability (simple random sampling) and nonprobability (convenience, quota, purposive, and judgmental) sampling methods were used, as the nature of the industries varied. This follows from the characteristics of the data sources, which permitted the researchers to use multiple methods; doing so helps triangulate the data obtained and increases the reliability of the research outcome and the decisions based on it. The criteria for selection included the companies' establishment time and duration of operation, the number of employees and their proportion, the ownership type (government or private), the type of manufacturing industry/production, the types of resources used at work, and the location in and around the city.

The determination of the sample size was adopted from Daniel [ 5 ] and Cochran [ 6 ] formula. The formula used was for unknown population size Eq. (1) and is given as

n = Z²P(1 − P)/d²    (1)

where n  = sample size, Z  = statistic for a level of confidence, P  = expected prevalence or proportion (in proportion of one; if 50%, P  = 0.5), and d  = precision (in proportion of one; if 6%, d  = 0.06). Z statistic ( Z ): for the level of confidence of 95%, which is conventional, Z value is 1.96. In this study, investigators present their results with 95% confidence intervals (CI).

The expected sample size was 267 at a margin of error of 6% for a 95% confidence interval for the manufacturing industries. However, only 189 responses were used for the analysis, after rejecting those with too many missing values. Hence, the actual data collection resulted in a 71% response rate, which was assumed to be satisfactory and representative for the data analysis.
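The expected sample size can be checked directly: plugging the chapter's stated values (Z = 1.96, P = 0.5, d = 0.06) into Eq. (1) reproduces the 267 figure:

```python
# Cochran's sample-size formula for an unknown population:
# n = Z^2 * P * (1 - P) / d^2, using the values given in the text.
import math

Z = 1.96  # statistic for 95% confidence
P = 0.5   # expected proportion
d = 0.06  # precision (margin of error)

n = (Z ** 2 * P * (1 - P)) / d ** 2
print(math.ceil(n))  # 267
```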

4.3 Workplace site exposure measurement sample determination

The sample size for the experimental exposure measurements of the physical work environment was determined based on the physical data prepared for the questionnaires and respondents. Positive questionnaire responses determined which exposure factors were measured; the factors considered for physical environmental health and disease causation were noise intensity, light intensity, pressure/stress, vibration, temperature (coldness or hotness), and dust particles, measured at 20 workplace sites. The sites were selected using random sampling combined with a purposive method. The exposure measurements were carried out in collaboration with the Addis Ababa City Administration and Oromia Bureau of Labour and Social Affairs (AACBOLSA), which also provided some of the measuring instruments.

5. Data collection methods

Data collection focused on the following basic techniques: secondary and primary data collection, covering both qualitative and quantitative data as defined in the previous section. The data collection mechanisms were devised and prepared with their proper procedures.

5.1 Primary data collection methods

Primary data sources are both qualitative and quantitative. The qualitative sources are field observation, interviews, and informal discussions, while the quantitative sources are survey questionnaires and interview questions. The next sections elaborate how the data were obtained from the primary sources.

5.1.1 Workplace site observation data collection

Observation is an important aspect of science and is tightly connected to data collection, for which there are different sources: documentation, archival records, interviews, direct observations, and participant observations. Observational research findings are considered strong in validity because the researcher is able to collect in-depth information about a particular behavior. In this dissertation, the researchers used observation as one tool for collecting information, both before the questionnaire design and after the research started. The researcher made more than 20 specific observations of manufacturing industries in the study areas. These observations provided a deeper understanding of the working environment, the different sections of the production system, and OSH practices.

5.1.2 Data collection through interview

Interview is a loosely structured qualitative in-depth interview with people who are considered to be particularly knowledgeable about the topic of interest. The semi-structured interview is usually conducted in a face-to-face setting which permits the researcher to seek new insights, ask questions, and assess phenomena in different perspectives. It let the researcher to know the in-depth of the present working environment influential factors and consequences. It has provided opportunities for refining data collection efforts and examining specialized systems or processes. It was used when the researcher faces written records or published document limitation or wanted to triangulate the data obtained from other primary and secondary data sources.

This dissertation therefore also takes a qualitative approach through interviews. The advantage of interviews is that they allow respondents to raise issues the interviewer may not have anticipated. All interviews with employees, management, and technicians were conducted by the corresponding researcher face to face at the workplace, and all were recorded and transcribed.

5.1.3 Data collection through questionnaires

The main tool for gaining primary information in practical research is the questionnaire, because the researcher can decide on the sample and the types of questions to be asked [ 2 ].

In this dissertation, each respondent was asked to reply to an identical list of questions, with the items shuffled to prevent bias. The questionnaire was initially coded, and items from each specific topic were mixed based on a uniform structure. The questionnaire consequently produced the valuable data required to achieve the dissertation objectives.

The questionnaires were based on a five-point Likert scale. Responses were given to each statement on that scale, where 1 = “strongly disagree” and 5 = “strongly agree.” The responses were summed to produce a score for each measure.
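The summing step can be sketched in a few lines. This is a minimal illustration only; the item values below are invented and are not taken from the study's instrument.

```python
# Minimal sketch: summing five-point Likert responses into a measure score.
# The responses below are invented for illustration.
def likert_score(responses):
    """Sum 1-5 Likert responses; reject out-of-range values."""
    if any(r not in (1, 2, 3, 4, 5) for r in responses):
        raise ValueError("responses must be integers 1-5")
    return sum(responses)

respondent = [4, 5, 3, 4]  # e.g., four items belonging to one measure
print(likert_score(respondent))  # 16
```

In practice a measure's score is often also averaged or rescaled, but a plain sum is what the text describes.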

5.1.4 Data obtained from experts’ opinion

Data were also obtained from experts’ opinions on the comparison of knowledge, management, collaboration, and technology utilization, including their sub-factors. These data were used for prioritizing factors and supporting OSH decision-making. Factors were prioritized using Saaty scales (1–9) and then converted to fuzzy set values, using the triangular fuzzy sets reported in previous research [ 7 ].

5.1.5 Workplace site exposure measurement

The researcher measured the workplace environment for dust, vibration, heat, pressure, light, and noise to determine the level of each variable. The planned and actual coverage of the primary data sources is compared in Table 1 .

Table 1. Planned versus actual coverage of the survey.

The response rate for each proposed data source was good, and the pilot test confirmed the reliability of the questionnaires. Interviews and discussions achieved an 87% response rate among the respondents, the survey questionnaire response rate was 71%, and field observation coverage was 90% across the whole data analysis process. Hence, data quality was not compromised.

This response rate is considered representative for studies of organizations. A response rate of 30% is considered acceptable [ 8 ], and Saunders et al. [ 2 ] argue that a 20% response rate is acceptable for scale-response questionnaires. A low response rate should not discourage researchers, because a great deal of published research also achieves low response rates. Hence, the response rate of this study is acceptable, and indeed very good, for meeting the study objectives.

5.1.6 Data collection tool pretest

A pretest of the questionnaires, interviews, and tools was conducted to check whether the content was valid in the sense of being understood by respondents. Content validity (the questions address the target without omitting important points), internal validity (the questions answer the outcomes the researcher targets), and external validity (the results can be generalized from the survey sample to the whole population) were all considered, and were confirmed in this pilot test prior to the start of the main data collection. Following the feedback process, a few minor changes were made to the originally designed data collection tools. The pilot test of the questionnaire used a sample of 10 respondents selected randomly from the target sectors and experts.

5.2 Secondary data collection methods

Secondary data are data collected by someone other than the user. These sources give insight into the current state of the art in the research area and help reveal the research gap the researcher needs to fill. Secondary sources can be internal or external sources of information and may cover a wide range of areas.

Literature/desk review and industry documents and reports: to achieve the dissertation’s objectives, the researcher conducted an extensive review of documents and company reports in both online and offline modes. From a methodological point of view, literature reviews can be understood as content analysis, in which quantitative and qualitative aspects are combined to assess structural (descriptive) as well as content criteria.

A literature search was conducted using database sources including MEDLINE; Emerald; Taylor & Francis; EMBASE (medical literature); PsycINFO (psychological literature); Sociological Abstracts (sociological literature); accident prevention journals; US labor statistics; the European Safety and Health database; ABI Inform; Business Source Premier (business/management literature); EconLit (economic literature); Social Service Abstracts (social work and social service literature); and other related materials. The search strategy focused on articles or reports that measure one or more of the dimensions within the research OSH model framework, and was based on a framework and measurement filter strategy developed by the Consensus-Based Standards for the Selection of Health Measurement Instruments (COSMIN) group. Articles unrelated to the research model and objectives were excluded at screening. Prior to screening, the researcher (principal investigator) reviewed a sample of more than 2000 articles, websites, reports, and guidelines to determine whether they should be included for further review or rejected. Discrepancies were identified and resolved before the review of the main group of more than 300 articles commenced. After excluding articles based on title, keywords, and abstract, the remaining articles were reviewed in detail, and information was extracted on the instrument used to assess each dimension of research interest. A complete list of items was then collated within each research target or objective and reviewed to identify any missing elements.

6. Methods of data analysis

Data analysis followed the procedures described in the sections below and answered the basic questions raised in the problem statement. The experiences of developed and developing countries with OSH in manufacturing industries were analyzed in detail, discussed, compared and contrasted, and synthesized.

6.1 Quantitative data analysis

Quantitative data were obtained from the primary and secondary sources discussed above in this chapter and were analyzed, according to data type, using Excel, SPSS 20.0, Word, and other tools. This part of the analysis focuses on numerical/quantitative data.

Before analysis, the responses were coded. To ease analysis, the questionnaire data were coded into SPSS 20.0. This task involved identifying, classifying, and assigning a numeric or character symbol to the data, done in a single, pre-coded way [ 9 , 10 ]. In this study, all responses were pre-coded: each was taken from the list of responses and assigned the number corresponding to a particular selection. This process was applied to every question that needed such treatment. Upon completion, the data were entered into the statistical analysis package SPSS version 20.0 on Windows 10 for the next steps.

During analysis, the data were explored with descriptive statistics and graphical analysis. The analysis examined the relationships between variables and compared groups to see how they affect one another, using cross tabulation/chi-square tests, correlation, factor analysis, and nonparametric statistics.
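As a sketch of the cross-tabulation step, a chi-square test of independence can be computed directly from a contingency table. The counts below are invented for illustration (the study itself ran this in SPSS), and the table layout (sectors by injury status) is an assumption.

```python
# Pure-Python chi-square test of independence on a contingency table.
# Counts are invented; rows might be two sectors, columns injury yes/no.
def chi_square(table):
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    total = sum(row_sums)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

stat, dof = chi_square([[30, 70], [55, 45]])
print(f"chi2={stat:.2f}, dof={dof}")  # chi2=12.79, dof=1
```

A statistic this large on 1 degree of freedom would indicate a significant association; the p-value would normally be read from the chi-square distribution.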

6.2 Qualitative data analysis

Qualitative data analysis was used to triangulate the quantitative analysis. Interview, observation, and report records were used to support the findings, and the qualitative analysis was incorporated with the quantitative results in the data analysis sections.

6.3 Data analysis software

The data were entered and analyzed using SPSS 20.0 on Windows 10. The SPSS-supported analysis contributed substantially to the findings, as well as to data validation and to checking the correctness of results. The software was used to analyze and compare the results for the different variables in the research questionnaires. Excel was also used to draw figures and to calculate some analytical solutions.

7. The reliability and validity analysis of the quantitative data

7.1 Reliability of data

The reliability of a measurement specifies the extent to which it is free of bias (error) and hence ensures consistent measurement across time and across the various items in the instrument [ 8 ]. In the reliability analysis, the data were checked for stability and consistency, and the accuracy and precision of the measurement procedure were examined. Reliability has numerous definitions and approaches, but in most settings the concept comes down to consistency [ 8 ]. A measurement fulfills the requirements of reliability when it produces consistent results during the data analysis procedure. Here, reliability was determined through Cronbach’s alpha, as shown in Table 2 .

Table 2. Internal consistency and reliability test of questionnaire items.

K stands for knowledge; M, management; T, technology; C, collaboration; P, policy, standards, and regulation; H, hazards and accident conditions; PPE, personal protective equipment.

7.2 Reliability analysis

Cronbach’s alpha is a measure of internal consistency, i.e., how closely related a set of items are as a group [ 11 ], and is considered a measure of scale reliability. Internal consistency is most often assessed through the Cronbach’s alpha value, and a reliability coefficient of 0.70 or above is considered acceptable in most research situations [ 12 ]. In this study, after deleting 13 items, the reliability coefficient for the remaining 76 Likert-scale items was 0.964; the coefficients for the individual groupings are shown in Table 2 . The scale was thus internally consistent by the Cronbach’s alpha test: Table 2 shows that the reliability of all seven major instruments falls within the acceptable range for this research.
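Cronbach's alpha can be computed directly from item-level responses using the standard formula alpha = k/(k−1) · (1 − Σ item variances / variance of totals). The tiny data matrix below is invented for illustration (the study's 76-item instrument is not reproduced here) and uses the sample variance (n−1 denominator), one common convention.

```python
# Cronbach's alpha from item-level Likert responses (invented data).
def variance(xs):
    """Sample variance with n-1 denominator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    sum_item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

items = [[4, 5, 3, 4, 5],   # item 1 across five respondents
         [4, 4, 3, 5, 5],   # item 2
         [5, 4, 2, 4, 5]]   # item 3
alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

For these invented responses alpha comes out around 0.86, which would pass the 0.70 threshold cited above.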

7.3 Validity

Face validity, as defined by Babbie [ 13 ], is an indicator that makes an instrument seem a reasonable measure of some variable; it is the subjective judgment that the instrument measures what it intends to measure in terms of relevance [ 14 ]. Thus, when developing the instruments for this study, the researcher eliminated ambiguities by using appropriate words and concepts to enhance clarity and general suitability [ 14 ]. Furthermore, the researcher submitted the instruments to the research supervisor and the joint supervisor, both occupational health experts, to ensure the validity of the measuring instruments and determine whether they could be considered valid on face value.

In this study, the researcher was guided by reviewed literature on compliance with occupational health and safety conditions and on data collection methods before developing the measuring instruments. In addition, the pretest conducted prior to the main study helped the researcher avoid ambiguities in the content of the data collection instruments. A thorough inspection of the instruments by the statistician, the researcher’s supervisor, and the joint experts, to ensure that all concepts pertaining to the study were included, further enriched the instruments.

8. Data quality management

Data collectors were briefed on how to approach companies, and many of the questionnaires were distributed through MSc students at the Addis Ababa Institute of Technology (AAiT) and experienced experts from the manufacturing industries; ongoing discussion with them kept the data quality reliable. The questionnaire was pretested on 10 workers to assure data quality and improve the collection tools. Data collection was supervised to monitor how the collectors handled the questionnaires, and each completed questionnaire was checked daily for completeness, accuracy, clarity, and consistency, either face to face or by phone/email. Questionnaires of poor quality were rejected during screening. Of the 267 questionnaires planned, 189 were returned. The data were then analyzed by the principal investigator.
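The figures above can be checked directly: 189 returned questionnaires out of 267 planned gives roughly the 71% survey response rate reported earlier in this chapter.

```python
# Response-rate check using the counts reported in the text.
returned, distributed = 189, 267
rate = returned / distributed
print(f"{rate:.1%}")  # 70.8%, i.e. the ~71% reported for the survey
```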

9. Inclusion criteria

Data were collected from company representatives with knowledge of OSH. Articles written in English and Amharic were included in this study. Database records were included if they related to OSH topics such as intervention methods, methods of accident identification, types of occupational injuries/diseases, and the impact of occupational accidents and disease on company productivity and costs, and if they used at least one form of feedback mechanism. No specific time period was imposed, in order to access all available published papers. Questionnaire statements that duplicated other statements were excluded from the data analysis.

10. Ethical consideration

Ethical clearance was obtained from the School of Mechanical and Industrial Engineering, Institute of Technology, Addis Ababa University. Official letters were written from the School of Mechanical and Industrial Engineering to the respective manufacturing industries. The purpose of the study was explained to the study subjects, who were told that the information they provided would be kept confidential and that their identities would not be revealed in association with it. Informed consent was secured from each participant. Where the assessment found a poor working environment, feedback will be given to all manufacturing industries involved in the study, and there is a plan to give a copy of the results to the respective manufacturing industries’ and ministries’ offices. Respondents’ privacy was protected: their responses were not individually analyzed or included in the report.

11. Dissemination and utilization of the result

The results of this study will be presented to Addis Ababa University, AAiT, School of Mechanical and Industrial Engineering. They will also be communicated to the Ethiopian manufacturing industries, the Ministry of Labor and Social Affairs, the Ministry of Industry, and the Ministry of Health, from which the data were collected. The results will also be made available through publication and online presentation on Google Scholar. To this end, about five articles have been published and disseminated worldwide.

12. Conclusion

The research methodology and design indicate the overall flow of the research for the given study, including the data sources and data collection methods used. The overall research strategy and framework run through the whole process, from problem formulation to validation, covering all the parameters. The chapter thus lays a foundation for how a research methodology can be devised and framed, and researchers may treat it as one sample and model for data collection and processing, from problem statement to research findings. In particular, this research flow should help orient new researchers to the research environment and methodology.

Conflict of interest

The author declares no conflict of interest.

  • 1. Aaker DA, Kumar V, Day GS. Marketing Research. New York: John Wiley & Sons Inc; 2000
  • 2. Saunders M, Lewis P, Thornhill A. Research Methods for Business Students. 5th ed. Edinburgh Gate: Pearson Education Limited; 2009
  • 3. Miller P. Motivation in the Workplace. Work and Organizational Psychology. Oxford: Blackwell Publishers; 1991
  • 4. Fraenkel JR, Wallen NE. How to Design and Evaluate Research in Education. 4th ed. New York: McGraw-Hill; 2002
  • 5. Daniel WW. Biostatistics: A Foundation for Analysis in the Health Sciences. 7th ed. New York: John Wiley & Sons; 1999
  • 6. Cochran WG. Sampling Techniques. 3rd ed. New York: John Wiley & Sons; 1977
  • 7. Saaty TL. The Analytic Hierarchy Process. Pittsburgh: RWS Publications; 1990
  • 8. Sekaran U, Bougie R. Research Methods for Business: A Skill Building Approach. 5th ed. New Delhi: John Wiley & Sons, Ltd; 2010. pp. 1-468
  • 9. Luck DJ, Rubin RS. Marketing Research. 7th ed. New Jersey: Prentice-Hall International; 1987
  • 10. Wong TC. Marketing Research. Oxford, UK: Butterworth-Heinemann; 1999
  • 11. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951; 16 :297-334
  • 12. Tavakol M, Dennick R. Making sense of Cronbach’s alpha. International Journal of Medical Education. 2011; 2 :53-55. DOI: 10.5116/ijme.4dfb.8dfd
  • 13. Babbie E. The Practice of Social Research. 12th ed. Belmont, CA: Wadsworth; 2010
  • 14. Polit DF, Beck CT. Nursing Research: Generating and Assessing Evidence for Nursing Practice. 8th ed. Philadelphia: Lippincott Williams & Wilkins; 2008

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Edited by Evon Abu-Taieh

Published: 17 June 2020


Research Methods Guide: Research Design & Method


Tutorial Videos: Research Design & Method

Research Methods (sociology-focused)

Qualitative vs. Quantitative Methods (intro)

Qualitative vs. Quantitative Methods (advanced)


FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal.  It depends on what subjects (and who) you want to study.  Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus.  To answer these questions, you need to make a decision about how to collect your data.  Most frequently used methods include:

  • Surveys
  • Interviews
  • Observation / Participant Observation
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will be different in quality and quantity.   For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider.  An observation or interview method (a so-called qualitative approach) helps you collect richer information, but it takes time.  Using a survey helps you collect more data quickly, yet it may lack detail.  So you will need to consider the time you have for research and the balance between the strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).

  • Last Updated: Aug 21, 2023 10:42 AM


BMC Med Res Methodol

A tutorial on methodological studies: the what, when, how and why

Lawrence Mbuagbaw

1 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada

2 Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario L8N 4A6 Canada

3 Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Daeria O. Lawson

Livia Puljak

4 Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia

David B. Allison

5 Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN 47405 USA

Lehana Thabane

6 Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON Canada

7 Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON Canada

8 Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON Canada

Associated Data

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 – 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 – 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 – 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as pre-cursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items of Systematic reviews and Meta-Analyses (PRISMA) guidelines were preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trial published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese Journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been at the cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
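The sampling approaches described above can be sketched in a few lines of code. This is an illustrative sketch only: the record identifiers and group labels are invented, not drawn from the cited studies.

```python
import random

# Hypothetical sampling frame: (article_id, group label). One group is
# deliberately small (10%) to show why stratification can be useful.
frame = [(f"PMID{i}", "cochrane" if i % 10 == 0 else "non_cochrane")
         for i in range(1, 1001)]

rng = random.Random(42)  # fixed seed for reproducibility

# Simple random sample: every record has the same selection probability,
# so the smaller group may end up underrepresented.
simple = rng.sample(frame, 100)

# Stratified sample: draw a fixed number per group to guarantee equal groups.
def stratified_sample(records, per_group, rng):
    by_group = {}
    for rec in records:
        by_group.setdefault(rec[1], []).append(rec)
    return {g: rng.sample(members, per_group) for g, members in by_group.items()}

strata = stratified_sample(frame, 50, rng)
print(len(simple))                        # 100 records, groups unbalanced
print({g: len(s) for g, s in strata.items()})  # 50 per group
```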

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time-stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
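When more than one database is searched, the exports must be merged and deduplicated to form a single sampling frame. The sketch below does this by DOI; the records and database names are invented for illustration.

```python
# Minimal sketch of building a sampling frame from two database exports.
# Real exports (e.g. from MEDLINE and Embase) would be larger and may need
# fuzzier deduplication; here a shared DOI is assumed to be available.
medline = [{"doi": "10.1000/a1", "title": "Trial A"},
           {"doi": "10.1000/a2", "title": "Trial B"}]
embase  = [{"doi": "10.1000/a2", "title": "Trial B"},
           {"doi": "10.1000/a3", "title": "Trial C"}]

def build_frame(*exports):
    """Merge exports, keeping the first record seen for each DOI."""
    frame = {}
    for export in exports:
        for rec in export:
            frame.setdefault(rec["doi"].lower(), rec)
    return list(frame.values())

frame = build_frame(medline, embase)
print(len(frame))  # 3 unique records: the duplicate of Trial B is dropped
```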

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of publishing protocols include delays associated with manuscript handling and peer review, as well as costs: few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
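A confidence-interval-based sample size estimation for a proportion can be computed from the standard formula. The inputs below (an expected proportion of 50% and a ±5% margin of error) are illustrative assumptions, not the actual values used by El Dib et al.

```python
import math

def n_for_proportion(p, margin, z=1.96):
    """Number of articles needed to estimate a proportion p with a given
    half-width (margin) of the 95% confidence interval: n = z^2 p(1-p)/d^2."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Expecting ~50% of reports to have the feature of interest, +/-5% margin.
print(n_for_proportion(0.5, 0.05))  # 385 articles
```

Using p = 0.5 is the conservative choice, since it maximizes p(1 − p) and therefore the required sample size.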

Q: What should I call my study?

A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [ 45 ].
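The consequence of ignoring clustering can be illustrated with simulated data. The sketch below uses a cluster bootstrap (resampling whole journals rather than individual articles) as one simple alternative to the regression approaches mentioned above; all numbers are simulated and illustrative, not taken from the cited studies.

```python
import random
import statistics

rng = random.Random(7)

# Simulated data: 20 journals, 10 articles each. Half the journals
# (say, guideline endorsers) have high adherence, half low, so
# adherence is strongly clustered within journals.
journals = []
for j in range(20):
    p = 0.9 if j < 10 else 0.1
    journals.append([1 if rng.random() < p else 0 for _ in range(10)])

articles = [a for j in journals for a in j]
p_hat = sum(articles) / len(articles)

# Naive SE treats all 200 articles as independent observations.
se_naive = (p_hat * (1 - p_hat) / len(articles)) ** 0.5

# Cluster bootstrap: resample journals with replacement, recompute
# the overall proportion each time, and take the SD of the estimates.
boot_means = []
for _ in range(2000):
    resampled = [rng.choice(journals) for _ in journals]
    flat = [a for j in resampled for a in j]
    boot_means.append(sum(flat) / len(flat))
se_cluster = statistics.stdev(boot_means)

# With strong within-journal clustering, the cluster-aware SE is
# substantially larger than the naive SE.
print(f"naive SE {se_naive:.3f} vs cluster SE {se_cluster:.3f}")
```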

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and should therefore be mitigated. In the absence of other approaches to minimizing extraction errors, duplicate data extraction should be considered. Much like systematic reviews, this area will likely see rapid advances in machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
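In practice, duplicate extraction means comparing the two extractors’ records field by field and flagging disagreements for consensus discussion. The sketch below does exactly that; the records and field names are invented for illustration.

```python
# Two reviewers extract the same fields from the same reports.
extractor_a = {"PMID1": {"blinded": "yes", "n": 120},
               "PMID2": {"blinded": "no",  "n": 85}}
extractor_b = {"PMID1": {"blinded": "yes", "n": 120},
               "PMID2": {"blinded": "unclear", "n": 85}}

def disagreements(a, b):
    """Return (report, field, value_a, value_b) for every mismatch."""
    flagged = []
    for pmid in a.keys() & b.keys():
        for field in a[pmid].keys() & b[pmid].keys():
            if a[pmid][field] != b[pmid][field]:
                flagged.append((pmid, field, a[pmid][field], b[pmid][field]))
    return flagged

for pmid, field, va, vb in disagreements(extractor_a, extractor_b):
    print(f"{pmid}.{field}: A={va!r} vs B={vb!r} -> resolve by consensus")
```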

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

  • Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
  • Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
  • Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others have found no difference [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]
  • Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 – 67 ].
  • Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
  • Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
  • Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
  • Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection bias and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
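Why such adjustment matters can be shown with a small stratified comparison. In the invented counts below, funded and unfunded studies report equally well within each stratum of journal guideline endorsement, yet a crude (unstratified) comparison suggests a large funding effect, i.e. confounding by journal endorsement.

```python
# Illustrative counts (invented): (complete, total) reports per cell,
# stratified by whether the journal endorses a reporting guideline.
strata = {
    "endorsing":     {"funded": (80, 100), "unfunded": (16, 20)},
    "non_endorsing": {"funded": (8, 20),   "unfunded": (40, 100)},
}

def prop(cells):
    complete = sum(c for c, _ in cells)
    total = sum(t for _, t in cells)
    return complete / total

# Crude comparison pools across strata and is confounded:
crude_funded   = prop([strata[s]["funded"] for s in strata])
crude_unfunded = prop([strata[s]["unfunded"] for s in strata])
print(f"crude difference: {crude_funded - crude_unfunded:+.2f}")  # +0.27

# Within each stratum the funded/unfunded difference vanishes:
for s, cells in strata.items():
    diff = prop([cells["funded"]]) - prop([cells["unfunded"]])
    print(f"{s}: difference {diff:+.2f}")  # +0.00 in both strata
```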

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate, justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

  • 1. What is the aim?

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies may also be used to describe or compare methods, and to examine the factors associated with the choice of methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

  • 2. What is the design?

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
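These two designs can be sketched side by side: the code below first summarizes an invented extraction dataset descriptively (counts and percentages), then applies a two-proportion z-test of the null hypothesis described above. The data and the use of a normal approximation are illustrative assumptions, not the analysis of Tricco et al.

```python
import math

# Invented extraction results: 1 = review reported positive findings.
cochrane     = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # 3/10 positive
non_cochrane = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]   # 7/10 positive

# Descriptive: counts (percent), the staple of methodological studies.
def describe(x):
    return f"{sum(x)}/{len(x)} ({100 * sum(x) / len(x):.0f}%)"

print("Cochrane:", describe(cochrane))          # 3/10 (30%)
print("non-Cochrane:", describe(non_cochrane))  # 7/10 (70%)

# Analytical: two-proportion z-test of the null hypothesis that the
# proportions of positive conclusions are equal (normal approximation;
# a sample this small would really call for an exact test).
def two_prop_z(x1, x2):
    p1, p2 = sum(x1) / len(x1), sum(x2) / len(x2)
    p = (sum(x1) + sum(x2)) / (len(x1) + len(x2))     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / len(x1) + 1 / len(x2)))
    return (p1 - p2) / se

z = two_prop_z(cochrane, non_cochrane)
print(f"z = {z:.2f}")  # z = -1.79
```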

  • 3. What is the sampling strategy?

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a particular topic. Systematic sampling can also be used when random sampling may be challenging to implement.
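Systematic sampling as mentioned here amounts to selecting every k-th record from an ordered sampling frame after a random start. The sketch below uses a hypothetical frame of 500 records.

```python
import random

frame = [f"record_{i}" for i in range(1, 501)]  # hypothetical sampling frame
n_wanted = 50
k = len(frame) // n_wanted                       # sampling interval (k = 10)

rng = random.Random(3)
start = rng.randrange(k)                         # random start in [0, k)
sample = frame[start::k]                         # every k-th record
print(len(sample), sample[:3])
```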

  • 4. What is the unit of analysis?

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Fig. 2: A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Authors’ contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Funding

This work did not receive any dedicated funding.

Availability of data and materials

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461)
  • Daeria O. Lawson
  • Livia Puljak
  • David B. Allison
  • Lehana Thabane

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
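The stratified sampling approach described above can be sketched in a few lines. The function below is a hypothetical illustration (the sampling frame and the `source` field, standing in for Cochrane vs. non-Cochrane origin, are invented for the example); a fixed random seed makes the sample reproducible, which is useful when reporting the selection process.

```python
import random

def stratified_sample(reports, strata_key, n_per_stratum, seed=2020):
    """Draw an equal-sized random sample (without replacement) from each
    stratum of research reports. `reports` is a list of dicts and
    `strata_key` names the field to stratify on."""
    rng = random.Random(seed)  # fixed seed -> reproducible, reportable sample
    strata = {}
    for report in reports:
        strata.setdefault(report[strata_key], []).append(report)
    sample = []
    for _, members in sorted(strata.items(), key=lambda kv: str(kv[0])):
        k = min(n_per_stratum, len(members))  # stratum may be small
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical sampling frame of 100 reports: every fifth one is "Cochrane"
frame = [{"id": i, "source": "Cochrane" if i % 5 == 0 else "non-Cochrane"}
         for i in range(100)]
picked = stratified_sample(frame, "source", n_per_stratum=10)  # 10 per group
```

A simple random sample over `frame` would, in expectation, yield only about 2 Cochrane reports per 10 selected; stratification guarantees equal groups for the comparison.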

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and they help avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs: few journals publish study protocols, and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
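As a sketch of the confidence-interval approach to sample size (an illustration, not El Dib et al.’s exact calculation), the normal-approximation formula n = z² · p(1 − p) / d² gives the number of articles needed to estimate a proportion (e.g. the proportion of trials reporting an item of interest) to within a margin of error d:

```python
import math
import statistics

def n_for_proportion(p_expected, margin, confidence=0.95):
    """Articles needed so the two-sided (confidence)-level CI for a
    proportion has half-width <= margin: n = z^2 * p * (1 - p) / margin^2."""
    alpha = 1 - confidence
    z = statistics.NormalDist().inv_cdf(1 - alpha / 2)  # normal quantile
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / margin ** 2)

# Most conservative assumption p = 0.5, estimated to within +/- 5 points
n = n_for_proportion(0.5, 0.05)  # 385 articles
```

If fewer eligible reports exist than the computed n, the entire target population can simply be included, in which case no sample size justification is needed.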

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
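The regression approaches above are the appropriate analytic tools. As a rough back-of-envelope illustration of why ignoring clustering understates uncertainty, the design effect DEFF = 1 + (m - 1) * ICC quantifies how much a clustered sample’s variance is inflated relative to an independent one (the cluster size and intra-cluster correlation below are hypothetical):

```python
def design_effect(avg_cluster_size, icc):
    """Variance inflation due to clustering: DEFF = 1 + (m - 1) * ICC,
    where m is the average cluster size and ICC is the intra-cluster
    (here, within-journal) correlation of the outcome."""
    return 1 + (avg_cluster_size - 1) * icc

def effective_sample_size(n_articles, avg_cluster_size, icc):
    """Number of independent articles the clustered sample is 'worth'."""
    return n_articles / design_effect(avg_cluster_size, icc)

# Hypothetical: 300 articles from journals averaging 10 articles each,
# with a within-journal ICC of 0.05 for the outcome of interest
deff = design_effect(10, 0.05)                 # 1.45
n_eff = effective_sample_size(300, 10, 0.05)   # about 207 independent articles
```

Even a modest ICC of 0.05 shrinks the effective sample by roughly a third here, which is consistent with the warning that naive analyses yield unduly narrow confidence intervals.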

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances in machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
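A minimal sketch of how duplicate extraction can be operationalized: compare the two extractors’ records field by field and route any disagreements to a third party for adjudication (the article identifiers and fields below are invented for illustration):

```python
def reconcile(extractor_a, extractor_b):
    """Compare two extractors' records (article id -> dict of fields) and
    return the items that disagree, for third-party adjudication."""
    discrepancies = []
    for article_id in sorted(set(extractor_a) | set(extractor_b)):
        a = extractor_a.get(article_id, {})
        b = extractor_b.get(article_id, {})
        for field in sorted(set(a) | set(b)):
            if a.get(field) != b.get(field):
                discrepancies.append(
                    (article_id, field, a.get(field), b.get(field)))
    return discrepancies

# Hypothetical extraction sheets from two independent extractors
a = {"trial-01": {"blinded": "yes", "n": 120},
     "trial-02": {"blinded": "no", "n": 48}}
b = {"trial-01": {"blinded": "yes", "n": 210},
     "trial-02": {"blinded": "no", "n": 48}}
to_adjudicate = reconcile(a, b)  # [('trial-01', 'n', 120, 210)]
```

Logging every discrepancy (rather than silently overwriting one sheet) also makes it possible to report inter-extractor agreement, which readers can use to judge data quality.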

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others have found no such association [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ].

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection bias and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
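To make the idea of statistical adjustment by stratification concrete, the sketch below contrasts a crude odds ratio with a Mantel-Haenszel pooled odds ratio over strata (for instance, funding as the exposure, complete reporting as the outcome, and journal endorsement of the guideline as the stratifying confounder). All counts are hypothetical, and a real analysis would also report confidence intervals:

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a = exposed/outcome+, b = exposed/outcome-,
    c = unexposed/outcome+, d = unexposed/outcome-."""
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    """Pooled OR over 2x2 strata [(a, b, c, d), ...], each with
    n = a + b + c + d:  OR_MH = sum(a*d/n) / sum(b*c/n)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical counts per stratum (endorsing vs. non-endorsing journals)
strata = [(30, 10, 15, 20), (8, 12, 10, 40)]
crude = odds_ratio(*[sum(col) for col in zip(*strata)])  # collapses strata
adjusted = mantel_haenszel_or(strata)                    # within-stratum OR
```

In this invented example the crude odds ratio is noticeably larger than the adjusted one, i.e. part of the apparent funding effect is explained by the stratifying variable, which is exactly the distortion the paragraph above describes.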

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target sample either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or on how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Biases related to the choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. reported on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].
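
The arithmetic behind such adherence assessments can be sketched as follows. The articles, checklist items and yes/no judgements below are all invented for illustration (they are not drawn from any of the cited studies or from a real checklist): each article is judged against each item, and adherence is then summarized per item and per article.

```python
# Hypothetical yes/no judgements for three invented trials against three
# illustrative checklist items (not an actual published checklist).
articles = {
    "trial-A": {"registered": True,  "harms_reported": False, "flow_diagram": True},
    "trial-B": {"registered": True,  "harms_reported": True,  "flow_diagram": True},
    "trial-C": {"registered": False, "harms_reported": False, "flow_diagram": True},
}

# Per-item adherence: how many articles satisfy each checklist item
for item in ["registered", "harms_reported", "flow_diagram"]:
    n = sum(judgements[item] for judgements in articles.values())
    print(f"{item}: {n}/{len(articles)} articles adherent")

# Per-article adherence score: proportion of items satisfied
scores = {name: sum(j.values()) / len(j) for name, j in articles.items()}
print(scores)
```

In real studies the judgements usually come from duplicate independent extraction, but the summarization step is essentially this.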

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and to examine the factors associated with the methods used. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Krnic Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the types of methodological studies mentioned above, there may exist other types that are not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
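
The typical descriptive summary can be sketched as follows, using invented extraction records (the field names and values are hypothetical): a count (percent) for a categorical reporting item and a median (interquartile range) for a skewed count variable.

```python
from statistics import median

# Hypothetical extraction results: one record per included systematic review.
records = [
    {"reports_recommendations": True,  "n_trials": 12},
    {"reports_recommendations": False, "n_trials": 5},
    {"reports_recommendations": True,  "n_trials": 30},
    {"reports_recommendations": False, "n_trials": 8},
    {"reports_recommendations": False, "n_trials": 21},
]

def count_percent(items, key):
    """Count (percent) of records where `key` is True."""
    n = sum(1 for r in items if r[key])
    return n, 100.0 * n / len(items)

def median_iqr(values):
    """Median and IQR via the median-of-halves rule (conventions vary by package)."""
    s = sorted(values)
    mid = len(s) // 2
    lower = s[:mid]
    upper = s[mid + 1:] if len(s) % 2 else s[mid:]
    return median(s), (median(lower), median(upper))

n, pct = count_percent(records, "reports_recommendations")
med, (q1, q3) = median_iqr([r["n_trials"] for r in records])
print(f"Reported research recommendations: {n} ({pct:.0f}%)")
print(f"Trials per review: median {med} (IQR {q1}-{q3})")
```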

Methodological studies that are analytical

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
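
As a sketch of the kind of hypothesis test involved, a comparison of two proportions can be carried out with a pooled two-proportion z-test. The counts below are invented for illustration; they are not Tricco et al.'s data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p under the null
    return z, p

# e.g. 60/100 non-Cochrane vs 40/100 Cochrane reviews with positive conclusions
z, p = two_proportion_z(60, 100, 40, 100)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

A chi-square test on the 2x2 table would give an equivalent result; regression models are used when adjustment for covariates is needed.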

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used to limit the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a specific topic. Systematic sampling can also be used when random sampling may be challenging to implement.
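
The random and systematic options above can be sketched as follows, with a hypothetical sampling frame of 200 eligible report IDs (purposeful sampling would instead filter this frame by year, journal or topic before selection).

```python
import random

# Hypothetical sampling frame of eligible research reports
eligible = [f"article-{i:03d}" for i in range(1, 201)]
rng = random.Random(42)  # seeded for reproducibility

# Simple random sample of 20 reports
random_sample = rng.sample(eligible, 20)

# Systematic sample: every k-th report from a random start
k = len(eligible) // 20          # sampling interval (here, 10)
start = rng.randrange(k)         # random start in [0, k)
systematic_sample = eligible[start::k]

print(len(random_sample), len(systematic_sample))
```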

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
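
A minimal sketch of this distinction, using invented review IDs: the items (subgroup analyses) are nested within reports (reviews), so counts at the two levels differ, and analyses at the item level should account for this clustering.

```python
from collections import Counter

# Hypothetical (review, planned subgroup analysis) pairs — items nested in reports
items = [
    ("rev-1", "age"), ("rev-1", "sex"), ("rev-1", "dose"),
    ("rev-2", "age"),
    ("rev-3", "age"), ("rev-3", "sex"),
]

per_review = Counter(review for review, _ in items)
print(f"{len(items)} planned analyses nested in {len(per_review)} reviews")
print(dict(per_review))
```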

This framework is outlined in Fig.  2 .

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada

Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7

Received : 27 May 2020

Accepted : 27 August 2020

Published : 07 September 2020

DOI : https://doi.org/10.1186/s12874-020-01107-7


  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288

Research Design – Types, Methods and Examples

Research Design

Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

Types of Research Design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.
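
As a minimal illustration of "strength and direction," the Pearson correlation coefficient r can be computed directly from its definition. The data below are hypothetical, purely for demonstration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient: |r| gives strength, sign gives direction."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours studied vs. exam score for six participants
hours = [1, 2, 3, 4, 5, 6]
score = [52, 55, 61, 64, 70, 75]
r = pearson_r(hours, score)  # close to +1: strong positive relationship
```

An r near +1 or -1 indicates a strong positive or negative relationship, while an r near 0 indicates little linear relationship.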

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction : This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods : This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results : This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion : This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References : This section lists the sources cited in the research design.

Example of Research Design

An example of a research design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach : The research approach will be quantitative as it involves collecting numerical data to test the hypothesis.
  • Research design : The design will be quasi-experimental, specifically a pretest-posttest control group design.
  • Sample : The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection : The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis : The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups.
  • Limitations : The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and the study only focuses on two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.
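
In a design like the one above, the planned comparison of group means is typically carried out with a two-sample t-test in statistical software. As a rough, self-contained sketch using hypothetical post-test scores (not real study data):

```python
import math

def two_sample_t(a, b):
    """Welch's t statistic for comparing two independent group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Sample variances (n - 1 denominator)
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical post-test scores (0-100) for the two groups
experimental = [70, 65, 80, 75, 68]
control = [60, 62, 58, 65, 61]
t = two_sample_t(experimental, control)  # positive: experimental mean is higher
```

In practice, the t statistic is compared against a t distribution (or the test is run directly in a package such as SPSS or R) to obtain a p-value.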

How to Write Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis : Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan : If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods : Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns : Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.

When to Write Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education : Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs such as experimental, quasi-experimental, and case study designs to understand how students learn and how to improve teaching practices.
  • Health sciences : In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business : Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering : In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach : A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that the resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability : A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.

Research Design Vs Research Methodology

Research design and research methodology are related but distinct. The research design is the overall plan or blueprint for answering the research question (e.g., experimental, correlational, or case study), whereas the research methodology refers to the specific methods, procedures, and tools used to carry out that plan, such as the sampling strategy, data collection instruments, and analysis techniques.
About the author.

Muhammad Hassan

Researcher, Academic Writer, Web developer


  • Open access
  • Published: 24 May 2024

Prevalence and associated risk factors of peste des petits ruminants in selected districts of the northern border region of Pakistan

  • Munibullah 1,2,4,
  • Yanmin Li 1,
  • Kainat Munib 3,
  • Zhixiong Zhang 4 &
  • Zhidong Zhang 1

BMC Veterinary Research, volume 20, Article number: 225 (2024)

Peste des petits ruminants (PPR) is a World Organisation for Animal Health (WOAH)-notifiable, economically important, transboundary, and highly communicable viral disease of small ruminants. The PPR virus (PPRV) belongs to the genus Morbillivirus of the family Paramyxoviridae.

The present cross-sectional epidemiological investigation was conducted to estimate the apparent prevalence of peste des petits ruminants (PPR) and to identify its associated risk factors in the previously neglected northern border regions of Pakistan.

A total of 1300 samples (serum = 328; swabs = 972) were collected from 150 flocks/herds, comprising sheep (n = 324), goats (n = 328), cattle (n = 324), and buffaloes (n = 324), during 2020-2021, and tested by ELISA for the detection of viral antibody in sera or antigen in swabs.

An overall apparent prevalence of 38.7% (504 samples) and an estimated true prevalence (calculated with the Rogan-Gladen estimator) of 41.0% (95% CI: 38.0-44.0%) were recorded in the target regions. The highest apparent prevalence of 53.4% (85 samples) and true prevalence of 57.0% (95% CI: 48.0-66.0%) were documented in the Gilgit district, and the lowest apparent prevalence of 25.1% (53 samples) and true prevalence of 26.0% (95% CI: 19.0-33.0%) were reported in the Swat district. A questionnaire was designed to collect data on associated risk factors, which were screened by univariable logistic regression (retaining variables with P < 0.25) to exclude non-essential assumed risk factors. ArcGIS 10.8.1 was used to design hotspot maps, and MedCalc's online statistical software was used to calculate odds ratios (ORs). Risk factors that were significant (P < 0.05) in the multivariable logistic regression included flock/herd size, farming method, nomadic animal movement, and outbreaks of PPR. Large flocks/herds were 1.7 times more likely to be positive than small ones (OR = 1.79; 95% CI = 0.034-91.80%). Transhumant and nomadic systems were 1.1 (OR = 1.15; 95% CI = 0.022-58.64%) and 1.0 (OR = 1.02; 95% CI = 0.020-51.97%) times more likely to be positive than sedentary and mixed farming systems, respectively. Areas with nomadic animal movement had 0.7 times the odds of positivity compared with areas where no nomadic movement was observed (OR = 0.57; 95% CI = 0.014-38.06%). In addition, areas with a PPR outbreak were 1.0 times as likely to be positive as areas without one (OR = 1.00; 95% CI = 0.018-46.73%).

Conclusions

It was concluded that many of the northern regions are endemic for PPR. Large and small ruminants are kept and reared together, creating numerous opportunities for virus transmission, so a substantial threat of disease spread exists in the region. The results of the present study should contribute to the global goal of controlling and eradicating PPR by 2030.

Introduction

Peste des petits ruminants (PPR) is a World Organisation for Animal Health (WOAH)-notifiable, economically important, transboundary, and highly communicable viral disease of small ruminants, characterized by severe morbidity and mortality [1]. The PPR virus (PPRV) is the only member of the species Morbillivirus caprinae within the genus Morbillivirus of the family Paramyxoviridae. There is only one serotype of PPRV, but phylogenetic analysis based on partial N or F gene sequences groups PPRV strains into lineages I, II, III, and IV; lineage IV is currently the most prevalent in Asian countries [2]. Clinically, PPR resembles rinderpest (RP) in cattle and is characterized by high fever, ocular and nasal discharges, necrotic stomatitis, catarrhal inflammation of the ocular and nasal mucosa, enteritis, bronchopneumonia, and diarrhea, followed by death or, in some cases, recovery [3]. The highest mortality and morbidity are observed in small ruminants: mortality typically ranges from 50 to 90% (though it can be negligible), and morbidity from 10 to 100% (occasionally below 10%), depending on circumstances such as general health status, immunity, previous exposure, nutritional condition, and the absence of secondary bacterial infection [4]. PPRV primarily affects sheep and goats, whereas cattle and buffaloes are infected asymptomatically with seroconversion; camels and certain wild ruminants, however, may show clinical signs and mortality [5]. PPR is endemic across Asia, the Middle East, and African regions. The widespread transmission of PPR damages the livelihoods, food security, and trade of herders and poses threats to biodiversity and ecological health [2]. As a result, PPR has drawn the attention of the FAO and WOAH and is listed as a major transboundary animal infection that needs to be prevented, controlled, and eradicated [6]. However, controlling PPR requires a good understanding of the epidemiological dynamics and the impact of the disease across a range of geographic regions and management structures [7, 8].

Throughout Asia, where small ruminants help secure livelihoods, PPR is a major economic risk to the growth of sustainable animal production. PPR in Asia was first described in southern India and currently remains endemic in many Asian countries. In the Pan-Pamir Plateau countries, PPR has caused significant economic damage to animal production systems and threatened wildlife. Various investigations have shown that unrestricted transboundary animal movement, as well as animal movement within a country, is a major risk factor for the transmission of PPRV [9]. A recent study based on a MaxEnt model identified five least-cost paths (LCPs) responsible for PPRV cross-border transmission among China, India, Pakistan, Kazakhstan, and Tajikistan [10]. An epidemiological study showed that the PPRV isolates that caused the 2007 and 2013 epidemics in China were closely related to lineage IV strains endemic in bordering countries [11]. Epidemiological investigation and phylogenetic analysis of two distinct PPR epidemics showed that Pakistani isolates clustered with Chinese isolates, reflecting the true geographic pattern of PPRV spread [12]. Although the routes by which PPRV was introduced to China remain to be fully elucidated, the PPR epidemics in China may have commenced through cross-border spread from neighboring enzootic states. Regular migration tracks for domestic and wild animals, and several associated risk factors, exist near the western zones of China (N 29°54′-44°32′) and might assist the transboundary spread of PPRV: grassland is contaminated by various species sharing the same grazing points under no, or irregular, vaccination. These factors pose a major obstacle to eradicating PPR at the root level [10].

Pakistan occupies a location of great geostrategic significance, bordered by China to the northeast, Afghanistan to the northwest, Iran to the west, India to the east, and the Arabian Sea to the south. PPR is enzootic throughout Pakistan, where small and large ruminants are mostly reared together in close contact, regularly sharing enclosed housing as well as grassland and watering points. These husbandry structures provide ample opportunities for the virus to spread among sheep and goat populations as well as between small and large ruminants [13]. The current PPR vaccination strategies in Pakistan and neighboring underdeveloped countries are insufficient: there are no proper vaccination policies, vaccine production facilities, or supply chains delivering vaccine to animal production systems. Furthermore, large gaps exist between local farmers, migratory nomadic/transhumant communities, and the relevant veterinary authorities and regional policymakers. These factors are of great significance for future PPRV eradication programs, and without closing these gaps it will be impossible to achieve the long-term goal of PPR eradication. Pakistan's northern border is adjacent to the border regions of Afghanistan, China, and Tajikistan, yet data on the epidemiological dynamics and associated risk factors of PPR in both small and large ruminants of local and migratory flocks/herds are very scarce. The current study was conducted to estimate the apparent prevalence, identify the associated risk factors, and map hotspot trends of PPR in the northern border regions of Pakistan, which are previously neglected, conflict-hit territories of significant geostrategic importance. The study will provide the regional epidemiology, associated risk factors, and a GIS-based investigation of PPR, and will identify when and where intensive surveillance and immunization, along with biosecurity procedures, need to be employed for the control and eradication of the infection in the study zones and adjacent neighboring countries, in line with the PPR global control and eradication strategy.

Materials and methods

The study was conducted along Pakistan's northern border, adjacent to the border regions of Afghanistan, China, and Tajikistan, covering Swat (35°12′N 72°29′E), Shangla (34°52′N 72°39′E), Chitral (35°50′N 71°47′E), Bajaur Agency (34°41′N 71°30′E), Khyber Agency (32°40′N 69°51′E), Mohmand Agency (34°30′N 71°20′E), and the Gilgit region (35°55′N 74°18′E), as shown in Fig. 1. The first three districts belong to the provincially administered tribal areas (PATA) of Pakistan, a former administrative subdivision designated in Article 246(b) of the Constitution of Pakistan. The three agencies (Bajaur, Khyber, and Mohmand) belong to the formerly federally administered tribal areas (FATA), a semi-autonomous tribal region in northwestern Pakistan that existed from 1947 until its merger with the neighboring province of Khyber Pakhtunkhwa in 2018. FATA bordered Pakistan's provinces of Khyber Pakhtunkhwa, Balochistan, and Punjab to the east, south, and southeast, respectively, and Afghanistan's provinces of Kunar, Nangarhar, and Paktia to the west and north. Gilgit is the capital town of Gilgit-Baltistan (GB), previously known as the Northern Areas of Pakistan; it is bounded by the Wakhan Corridor of Afghanistan to the north, the People's Republic of China to the north and northeast, and Skardu district to the south and southeast. Chitral is the northernmost district, sharing a border with GB to the east, with the Nuristan, Badakhshan, and Kunar provinces of Afghanistan to the west and north, and with the Dir and Swat districts of Khyber Pakhtunkhwa to the south; a narrow band of the Wakhan Corridor separates Tajikistan from Chitral in the north. Bajaur Agency lies at high altitude to the east of the Kunar region of Afghanistan, from which it is divided by a continuous track of harsh boundary mountains forming a barrier that is easily crossed at only one or two points. Mohmand Agency is adjoined by Bajaur Agency to the north, Khyber Agency to the south, Malakand Agency and Charsadda district to the east, and Peshawar to the southeast. Shangla is a district of Khyber Pakhtunkhwa, situated among hillocks and surrounded by high mountains full of forests and green pastures. Khyber Agency is bordered by the Nangarhar region of Afghanistan to the west, Kurram Agency to the southwest, Orakzai Agency to the south, Peshawar district to the east, and Mohmand Agency to the north. Swat is a natural geographic region of the former provincially administered tribal areas of Khyber Pakhtunkhwa surrounding the Swat River, with green hills, forests, and grazing pastures.

Fig. 1 Map of Pakistan's northern border regions showing the study sites and neighboring countries. The map was created by the author (Munibullah) using ArcGIS 10.8.1 by plotting the GPS coordinates onto the base map

Study animals

The target animals in Pakistan's northwestern and northeastern border regions were cattle, buffaloes, sheep, and goats; the study units were unvaccinated animals older than six months. Sheep and goats were classified as indigenous or crossbreed; cattle were of the Achai, Sahiwal, and crossbreed types; and buffaloes were of the Azakheli, Kundi, Nili-Ravi, and crossbreed types.

Determination of prevalence of PPR

Study design and sampling strategy.

A cross-sectional study with 1300 samples was designed to estimate the apparent prevalence and identify the associated risk factors of peste des petits ruminants (PPR) in the previously neglected northern border regions of Pakistan during 2020-2021.

Collection of blood and swab samples

Blood samples (about 5 ml) were collected from the jugular vein of each animal by venipuncture using sterile syringes and needles, and then transferred to labeled gel-barrier tubes bearing an identification code. The blood samples were kept in a slanted position overnight at room temperature. The clear serum was then poured into 2-ml Eppendorf tubes, labeled accordingly, and kept frozen until arrival at the laboratory, where it was stored at -20 °C. Similarly, swab samples from the nasal, ocular, oral, and rectal regions were collected and kept refrigerated until arrival at the laboratory. All serum and swab samples were shipped to the Veterinary Research Institute, Peshawar, Pakistan under cold-chain conditions and stored at -20 °C. Swab and serum samples were collected from different, randomly selected animals.

Competitive ELISA for detection of the antibody to PPRV

The serum samples were tested for PPRV antibody using a PPR competitive ELISA (cELISA) kit according to the instructions of Anderson et al. [14] and the manufacturer (Lanzhou Veterinary Research Institute, CAAS, Xujiaping No. 1, Lanzhou, Gansu 730046; patent no. ZL201210278970.9). The percentage inhibition (PI) of each sample was calculated as follows: PI = [1 − (sample OD450 / MAb OD450)] × 100%. The assay was considered valid only when the negative control serum had PI < 40%, the positive control serum had PI > 60%, and the blank control had PI ≥ 90%. For evaluating antibody titres in sera, a serum was positive when PI ≥ 45%, negative when PI < 40%, and suspect when 40% < PI < 45%. For diagnosis or epidemiological investigation, a serum was positive when PI ≥ 50% and negative when PI < 50%.
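
The PI calculation and antibody-titre cut-offs above are simple enough to express in a few lines. A sketch with hypothetical OD450 readings (the kit's software or a spreadsheet would normally perform this step):

```python
def percent_inhibition(sample_od, mab_od):
    """PI = [1 - (sample OD450 / MAb OD450)] x 100, per the cELISA protocol."""
    return (1 - sample_od / mab_od) * 100

def classify_serum(pi):
    """Antibody-titre interpretation: >= 45% positive, < 40% negative, else suspect."""
    if pi >= 45:
        return "positive"
    if pi < 40:
        return "negative"
    return "suspect"

# Hypothetical OD450 readings (not real assay data)
pi = percent_inhibition(0.42, 1.05)  # -> PI of 60.0, classified positive
```

The stricter 50% cut-off used for diagnosis or epidemiological investigation could be applied analogously.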

Sandwich ELISA for detection of PPRV antigen

The swab samples were tested for PPRV antigen using a PPR antigen-capture sandwich ELISA kit (ID.vet, France) according to the manufacturer's instructions. To interpret the results, the S/P% was calculated for each sample: S/P% = (OD sample − OD NC) / (OD PC − OD NC) × 100, where PC is the positive control and NC the negative control. A sample was considered negative if the S/P% was less than 20% and positive if the S/P% was greater than or equal to 20%.
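
The S/P% calculation and its 20% cut-off can likewise be sketched; the OD values below are hypothetical:

```python
def sp_percent(od_sample, od_nc, od_pc):
    """S/P% = (OD sample - OD NC) / (OD PC - OD NC) x 100, per the sandwich ELISA kit."""
    return (od_sample - od_nc) / (od_pc - od_nc) * 100

def classify_swab(sp):
    """Kit cut-off: S/P% >= 20 is positive, otherwise negative."""
    return "positive" if sp >= 20 else "negative"

# Hypothetical OD readings: sample, negative control, positive control
sp = sp_percent(0.55, 0.10, 1.60)  # -> S/P% of 30.0, classified positive
```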

Data collection

Data collection followed participatory epidemiological (PE) appraisal techniques for gathering disease epidemiological data, as described by Catley et al. [15]. Participatory epidemiological approaches are based on open communication and knowledge transfer, using a toolkit of methods guided by key concepts and attitudes regarding the disease under investigation and the regional geostrategic scenario. The methods included: semi-structured interviews and a structured questionnaire covering animal species, sex, age, flock size, grazing dynamics, farming method, vaccination status, entry of new animals into a herd/flock, information about PPR outbreaks, nomadism and cross-boundary movements, return of unsold animals to the flock/herd, appearance of clinical signs of PPR, and availability of nearby veterinary services; focus-group discussions; ranking and scoring of disease observations; and a variety of visualization (e.g., mapping and modelling) and diagramming techniques (e.g., seasonal calendars and historical timelines of PPR in the region). All information was validated by cross-checking. Global positioning system (GPS) coordinates were recorded as latitudes and longitudes. Flock/herd size was categorized as fewer than 50, 51-100, 101-200, 201-300, 301-500, and more than 500 animals. The age of the animals was determined through physical observation, dentition, and inquiry with the farmers, and was classified as less than 1 year, 1 year, 2 years, 3 years, and above 3 years.

Data management and analysis

Data were entered into a Microsoft Excel 2020 spreadsheet, coded, and transferred to the Statistical Package for the Social Sciences (SPSS) version 20. The overall apparent prevalence was estimated as described by Thrusfield [16]. The formula is given by:

Apparent prevalence = (number of positive samples / total number of animals sampled) × 100.

The apparent prevalence estimates were used to estimate the true prevalence using the Rogan-Gladen estimator [17]. The formula is given by:

True prevalence = (AP + Sp − 1) / (Se + Sp − 1)

where AP is the apparent prevalence, and Sp and Se are the test specificity (99.2%) and sensitivity (100%), respectively, for the ID.vet (France) sandwich ELISA. The competitive ELISA (cELISA; Chinese patent no. ZL201210278970.9) supplied by the Lanzhou Veterinary Research Institute has a diagnostic specificity (Sp) of 97.7% and a diagnostic sensitivity (Se) of 84%, according to LVRI-CAAS (176 sera tested), in comparison with the commercial ID Screen® PPR Competition ELISA (ID-Vet, France) [17, 18]. The Epi-Info online software (version 3.5.1) was used to calculate confidence intervals for proportions. Univariable logistic regression analysis of the proportions was carried out with a retention cut-off of P = 0.25, and the risk factors were checked for multi-collinearity. These results were verified further by multivariable logistic regression analysis with a probability predictive limit of less than 5%. MedCalc's online statistical software was used to calculate odds ratios (ORs) assessing the statistical association between PPR positivity and the various possible risk factors. The interaction effects of the significant risk factors in the multivariable logistic regression analysis were also assessed. Model fitness was evaluated with the Hosmer-Lemeshow goodness-of-fit test (P value > 0.05).
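
The apparent-prevalence calculation and the Rogan-Gladen estimator described above can be combined in a few lines. The counts below are hypothetical (45 positives out of 150 sera), and the Se/Sp values are the cELISA figures quoted in the text; the study's reported prevalences pool results from two assays, so these numbers are illustrative only:

```python
def apparent_prevalence(positives, total):
    """Apparent prevalence (%) = positives / total x 100 (Thrusfield)."""
    return positives / total * 100

def true_prevalence(ap, se, sp):
    """Rogan-Gladen estimator: TP = (AP + Sp - 1) / (Se + Sp - 1).
    ap, se, and sp are proportions in [0, 1]."""
    return (ap + sp - 1) / (se + sp - 1)

# Hypothetical counts (not the study's data): 45 positives of 150 sera
ap = apparent_prevalence(45, 150)                  # -> 30.0 (%)
tp = true_prevalence(ap / 100, se=0.84, sp=0.977)  # ~0.34, i.e. ~34% true prevalence
```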

ArcGIS 10.8.1 was used to generate hotspot maps from the GPS coordinates, highlighting the location of each case within its locality.

Prevalence of PPR

A total of 1300 samples were collected from 150 conveniently selected flocks/herds, comprising 328 serum samples and 972 swab samples from different animals. Serum samples were tested by cELISA for antibody detection and swab samples by sandwich ELISA for viral antigen detection. Of the 328 serum samples analyzed by cELISA, 167 (50.9%) were positive for PPRV antibody. Of the 972 swabs tested by sandwich ELISA, 337 (34.6%) were positive for PPRV antigen. Based on the detection rates of PPRV antibody and antigen, an overall apparent prevalence of 38.7% and a true prevalence of 41.0%, with 95% confidence intervals (CI), were recorded in the target region (Table  1 ).
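The headline detection rates follow directly from the reported counts; a quick cross-check (note the source reports the pooled figures truncated to 34.6% and 38.7%):

```python
# Cross-check of the reported detection rates from the raw counts.
sero_pos, sero_n = 167, 328        # cELISA antibody-positive sera
antigen_pos, antigen_n = 337, 972  # sandwich ELISA antigen-positive swabs

sero_rate = 100 * sero_pos / sero_n           # 50.91% (reported as 50.9%)
antigen_rate = 100 * antigen_pos / antigen_n  # 34.67% (reported as 34.6%)
overall_ap = 100 * (sero_pos + antigen_pos) / (sero_n + antigen_n)
# overall_ap = 38.77% (reported as 38.7%)
print(round(sero_rate, 1), round(antigen_rate, 1), round(overall_ap, 1))
```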

The district-wise prevalence of PPR

The district-wise prevalence of PPR was analyzed based on the positive rates of PPRV antibody and antigen. As shown in Table  2 , of the seven regions studied, the highest apparent prevalence (53.4%) and true prevalence (57.0%; 95% CI = 48.0–66.0%) were documented in the Gilgit region, followed by Chitral, Bajaur Agency, Mohmand Agency, Shangla, Khyber Agency, and Swat district.

The species-wise prevalence of PPR in the target region

The species-wise apparent prevalence of PPR was analyzed based on data from cELISA and sandwich ELISA. A total of 324 sheep, 328 goats, 324 buffaloes, and 324 cattle were sampled, with apparent prevalences of 52.1%, 51.8%, 27.4%, and 23.4%, respectively. The corresponding true prevalences were 56.0% (95% CI = 50.0–62.0), 55.0% (95% CI = 49.0–62.0), 28.0% (95% CI = 23.0–34.0), and 24.0% (95% CI = 19.0–29.0) for sheep, goats, buffaloes, and cattle, respectively. The highest prevalence was observed in the sheep and goat populations and the lowest in the cattle population (Table  3 ).

Analysis and assessment of risk factors

Univariable logistic regression analysis of risk factors for PPR positivity in animals

Univariable logistic regression was used to analyze risk factors associated with PPR positivity in sheep, goats, cattle, and buffaloes. Factors omitted from the model after univariable screening at a p-value threshold of 0.25 were age, sex, introduction of new animals, type of flock/herd, and season. In the univariable and multivariable logistic regression analyses, species ( P  < 0.001), flock/herd size ( P  = 0.004), outbreaks of PPR or PPR-affected animals in the area in the last 15 days ( P  < 0.001), nomadic animal movement ( P  < 0.001), farming method ( P  = 0.021), return of unsold animals from the market ( P  = 0.057), and outbreak location ( P  = 0.046) were significant risk factors for the occurrence and distribution of PPR in the target region (Table  4 ).
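The two-stage screening described above (a liberal univariable filter at P ≤ 0.25, then multivariable modelling) can be illustrated with a minimal sketch. The factor names and 2×2 counts below are hypothetical, and a Pearson chi-square test on a 2×2 table stands in for the univariable logistic regression used in the study:

```python
import math

def chi2_p_1df(a, b, c, d):
    """Pearson chi-square p-value (1 df) for the 2x2 table
    [[a, b], [c, d]] of positives/negatives by exposure level."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    return math.erfc(math.sqrt(chi2 / 2))

# Hypothetical univariable screen: keep factors with P <= 0.25
factors = {
    "nomadic_movement": (90, 60, 50, 100),  # clearly associated counts
    "season":           (70, 80, 72, 78),   # near-independent counts
}
retained = [f for f, tbl in factors.items() if chi2_p_1df(*tbl) <= 0.25]
print(retained)  # only the associated factor survives the screen
```

Only retained factors would then be passed to the multivariable model, where the stricter P < 0.05 threshold applies.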

Risk assessment

Risk factors associated with PPR positivity in the region were assessed as a function of: the probability of the hazard (PPR), i.e. the calculated positivity (Table  2 ); the exposure of susceptible usual hosts (sheep, goats) and unusual hosts (cattle and buffaloes) (Tables  3 and 4 ); and the consequences of PPR spread, using the current true prevalence and relevant odds ratios of infection (Tables  2 and 4 ), evidence of unvaccinated nomadism and transboundary animal movements (Table  4 ), cELISA and sandwich ELISA screening records of infected animals across different sample types (Table  1 ), outbreaks of PPR or PPR-affected animals in the area in the last 15 days, and the virus's potential for infection across the entire region (Fig.  2 ; Tables  2 and 4 , P  < 0.05). Together, these findings provide strong sero-epidemiological footprints of PPR endemic dynamics and transboundary threats in the region. Across many northern regions considered endemic for PPR, large and small ruminants are kept and reared together, creating numerous opportunities for virus transmission. This is the first assessment of PPRV positivity in small and large ruminant populations in the northern border region of Pakistan adjacent to the Afghanistan, Tajikistan, and China border regions based on the epidemiological footprints of the animals sampled.

Association of clinical signs/symptoms with PPR in sheep and goats

The association between clinical signs and symptoms and PPR, specifically in sheep and goats (excluding cattle and buffaloes), was determined using coefficient values interpreted as follows: 0.0–0.199, very weak/no association; 0.2–0.399, weak; 0.4–0.599, moderate; 0.6–0.799, strong; and 0.8–0.999, very strong [ 19 ]. During the field epidemiological investigation and interaction with farming communities, it was observed that the disease primarily affects sheep and goats; large ruminants, however, were infected asymptomatically, with seroconversion. The analyzed data (Table  5 ) indicate that some signs and symptoms of the disease are weakly associated with PPR, while others show a moderate association.
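The interpretation bands above can be encoded directly; a small illustrative helper (the band labels and cut-points follow the scheme of [19] as quoted in the text):

```python
def association_strength(coef: float) -> str:
    """Map an association coefficient (0-1) to the interpretation
    bands used in the text: very weak/none, weak, moderate, strong,
    very strong."""
    bands = [(0.2, "very weak/none"), (0.4, "weak"),
             (0.6, "moderate"), (0.8, "strong"), (1.0, "very strong")]
    for upper, label in bands:
        if coef < upper:
            return label
    return "very strong"

print(association_strength(0.45))  # moderate
```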

Hotspots of the spatial distribution of PPR

Spatial epidemiological investigation of outbreak records showed that PPR risk hotspots varied widely across the regions of northwestern and northeastern Pakistan at different periods. Most of the study regions have been neglected in disease investigation, including for PPR; special attention was therefore given to monitoring nomadic flocks/herds and conducting study activities under harsh field conditions. Fig.  2 (map created using ArcGIS 10.8.1) shows the spatial distribution of PPR in selected neglected areas of Pakistan's northern border regions based on disease coordinates (latitudes and longitudes). Seven PPR disease hotspot trend categories were identified across different sub-regions of Pakistan's northern border region based on the detection rates of both PPRV antibody and antigen. The greenish zones represent study areas, while the yellow spots show the burden of PPR cases in the different study districts. These disease hotspots were identified using participatory epidemiological (PE) assessment tools as discussed by Jost et al. [ 20 ] and Catley et al. [ 15 ]; among the visualization techniques, seasonal calendars, mapping, and diagramming exercises were the most common. Participatory mapping was one of the most useful tools in the PE toolkit and was often a good technique to start with, as it involves several people and can stimulate informal interviews, focus-group discussion, and enthusiasm.
It was used to gain an overview of the spatial distribution of community resources, herding patterns, livestock population contact structure, and the spatial distribution of risk factors. Questionnaire-based surveys likewise provided evidence of traditional routes of transboundary animal movement into these areas, seasonal migration within the country and across the border, and livestock practices without vaccination; the current results show that the disease is prevalent in these regions, which act as continuing local and regional spreading points. In participatory epidemiological approaches, participatory mapping was used to map disease outbreaks, both spatially and temporally, within target communities. Respondents indicated the locations and dates of clinical disease events and described the sequence of events, reflecting how diseases spread through communities and populations. This can highlight key risk factors and important epidemiological information, as well as contribute data to aid in estimating transmission parameters for disease models. It is a proven technique that overcomes many of the limitations of conventional epidemiological methods and has been used to solve several animal health-related investigation and research problems. The approach can be developed in small-scale community animal health programs and can also be applied to major regional and international disease control efforts [ 19 ].

Figure 2

Hotspot map of the spatial distribution of PPR. The map identifies seven PPR disease hotspot trend categories across different sub-regions of Pakistan's northern border region based on participatory epidemiological tools and PPR positivity. The greenish zones represent study areas, while the yellow spots show the burden of PPR cases in the different study districts (Mohmand, Khyber, Bajaur, Swat, Chitral, Gilgit, and Shangla). The map was created in ArcGIS 10.8.1 by dragging the GPS coordinates onto the map drop zone, by the author (Munibullah).

Discussion

Although a variety of studies have been undertaken on PPR globally, the current study contributes to the FAO/WOAH goal of global PPR eradication by targeting disease control in previously neglected or conflict-hit territories where small ruminant–large ruminant and livestock–wildlife interfaces exist. The presence of PPRV among unvaccinated animals in the study area was demonstrated by the clinical picture [ 21 ] and by sandwich ELISA, while PPRV-specific antibodies were detected using cELISA during 2020–2021. There was no recognized record of prior immunization in the study area; the presence of PPR antibodies was therefore attributed to natural PPR infection.

It is evident (Table  2 ) that PPR is prevalent throughout the study region. The overall apparent prevalence based on the detection rates of both PPRV antibody and antigen across animal species was 38.7% ( n  = 1300), of which 52.1% ( n  = 324) was detected in sheep, 51.8% ( n  = 328) in goats, 27.4% ( n  = 324) in buffaloes, and 23.4% ( n  = 324) in cattle. Similarly, the estimated true prevalence of PPR in the target region was 41.0% (95% CI = 38.0–44.0), of which 56.0% (95% CI = 50.0–62.0) was in sheep, 55.0% (95% CI = 49.0–62.0) in goats, 28.0% (95% CI = 23.0–34.0) in buffaloes, and 24.0% (95% CI = 19.0–29.0) in cattle (Tables  3 and 4 ). Reports of PPR virus detection in large ruminants [ 13 ], and of 23 of 250 (9.2%; 95% CI = 5.9–13.5%) sampled yaks in Pakistan testing positive [ 22 ], are in line with the findings of the current investigation. Furthermore, retrospective studies in Pakistan outside the current study area showed prevalences of 43.33% in small ruminants and 59.09% in large ruminants [ 23 ], and 74.9% in sheep and goats [ 24 ]. Similarly, a study supported by the Food and Agriculture Organization (FAO) project (GCP/PAK/127/USA), Progressive Control of Peste des Petits Ruminants in Pakistan, investigated serum antibodies using cELISA and tissue antigen using IcELISA from 62 outbreaks, with positive percentages of 61.27% and 64.99%, respectively [ 25 ]. These studies are in line with the findings of the current investigation. Compared with the current outcome, higher overall apparent prevalences were documented in various Asian settings: 74.9% in Pakistan (Zahur et al. [ 24 ]), 67.9% in India (Saritha et al. [ 26 ]), and 48% in Afghanistan (Azizi et al. [ 27 ]); lower prevalences were recorded in 9.2% of yaks (Abubakar et al. [ 22 ]) and in 10.0% of cattle and 14.16% of buffaloes in Pakistan (Abubakar et al. [ 13 ]). The differences in PPRV prevalence between bordering nations and the present study region might be due to variations in livestock management practices, seasonal variations, host populations, sampling procedures, disease control strategies, levels of natural protection, and variable natural PPRV infection rates across geographic regions.

The geographical regions recording the highest prevalence of PPR were Gilgit (53.4%) and Chitral (50.3%), while Swat (25.1%) had the lowest. The higher PPR positivity in Gilgit and Chitral can be explained by intensive, unrestrained transboundary animal movements between these territories and the Kunar, Badakhshan, and Nuristan provinces of Afghanistan, the Wakhan corridor (Tajikistan–Afghanistan) in the north, and the People's Republic of China in the north and northeast, where PPR epidemics have been observed in the past. Furthermore, poor livestock management practices, the use of communal grazing structures, and extensive nomadism could also contribute to this higher positivity. Climatic factors and the livestock, domesticated yak, and wildlife interface in pastoralist systems, particularly in regions with large wild animal populations such as Gilgit and Chitral, might also contribute to the highest PPR positivity. These outcomes are consistent with the studies of Noman et al. [ 28 ], suggesting that high rainfall and a cold climate might favour PPR spread. Limited data exist on PPR transmission from wild to domestic animals and from small to large ruminants in the study regions. Furthermore, these outcomes are consistent with the findings of Gao et al. [ 10 ], who investigated unknown regions of PPR transmission, found the internal threat in China to be lower than that in the Pan-Pamir Plateau states, examined five representative corridors (Table  1 ), and verified for the first time the probability of transboundary spread of PPR by small ruminants, large ruminants, and wild animals. In the FATA region of Khyber Pakhtunkhwa province, the highest PPR positivity was observed in Bajaur (42.9%), Mohmand (38.6%), and Khyber (30.5%), while a prevalence of 36.2% was reported in Shangla district. The maximum PPR positivity in these regions might be explained by intensive unrestrained nomadism, climatic factors, negligence in vaccination, and war conflicts among different tribes and nations.

Several production systems are used for farming animals in the region, namely nomadic, transhumant, sedentary, and household/mixed. Small ruminants are mostly raised in the nomadic and transhumant production systems [ 29 ]. Animals reared in communal production structures such as nomadic, transhumant, and/or free-grazing husbandry were likely to have a higher prevalence, with corresponding prevalence levels of 53.6% and 50.5%, whereas a lower prevalence of 24.6% was found in animals of the mixed farming system. The odds ratios for the transhumant and nomadic farming systems were OR = 0.595 (95% CI = 0.306–1.158) and OR = 0.519 (95% CI = 0.272–0.900), respectively, relative to sedentary and mixed farming systems ( P  = 0.001). The odds ratio for nomadic animal movement in the area in the last 15 days was OR = 0.552 (95% CI = 0.389–0.784) relative to areas where no nomadic movement was observed. The outcomes of the present study are inconsistent with those of Zahur et al. [ 25 ]. Furthermore, substantial transboundary animal movement from Afghanistan via the Khyber, Mohmand, and Bajaur agencies, and from Tajikistan via the Wakhan corridor and adjacent border regions, was identified through participatory epidemiological discussion with the local communities. These observations are in line with findings that these nomads visit different areas, especially riverbanks and irrigated areas of the Khyber Pakhtunkhwa, Punjab, and Sindh provinces of Pakistan in the winter season, and the northern border regions of Pakistan in the summer season. In the wheat harvesting season, the nomads return to Afghanistan along the same paths [ 25 , 26 ]. These are the main epidemiological footprints behind the endemic circulation of PPR virus in the study region.

The logistic regression model indicated that the odds of large-sized flocks/herds (101–200 animals) being positive were 1.7 times those of small-sized flocks/herds, in agreement with Selvaraju [ 30 ]. Table  4 shows that large-sized flocks/herds were significantly more at risk of PPR infection ( P  = 0.004; OR = 1.79; 95% CI = 0.034–91.80). The odds ratio for medium-sized herds/flocks relative to small herds/flocks was OR = 0.42 (95% CI = 0.008–21.72) (Table  4 ). Overcrowding might be a contributing factor in the spread of the contagious PPR virus among susceptible animals; this conclusion is consistent with Al-Majali et al. [ 31 ] and with earlier reports that overcrowding increases the spread of the virus among susceptible animals [ 32 ].
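The odds-ratio comparisons above can be reproduced from a 2×2 table. The sketch below uses hypothetical cell counts (the paper does not report the raw cross-tabulations) and the standard Woolf log-OR confidence interval, which is a common approach though not necessarily the exact method of the MedCalc tool used by the authors:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-scale) 95% CI for a 2x2 table:
    rows = exposed/unexposed, columns = PPR-positive/negative."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical counts: large flocks (60 pos / 40 neg) vs small (30 pos / 70 neg)
or_, lo, hi = odds_ratio_ci(60, 40, 30, 70)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI that spans 1 (as for the wide flock-size intervals reported in Table 4) indicates the association is not statistically distinguishable from no effect at the 5% level.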

The outcomes of the present investigation indicated that the introduction of new animals into a flock/herd in the last 15 days was associated with a greater positivity of 44.8% (95% CI = 40.2–49.4%) (Table  4 ). This result is in line with the studies of Gebre et al. [ 33 ]. After buying animals, owners do not follow isolation practices; animals are taken to market and brought home on foot over long distances, and during this stressful time they become susceptible to different infections. Further, when animals from various stocks gather in one marketplace, interactions between them may occur. This phenomenon consequently plays an important role in PPRV transmission dynamics [ 32 ].

Associated risk factors that were statistically non-significant for PPR positivity in the region were age ( p  > 0.05), sex ( p  > 0.05), introduction of new animals into the flock or herd in the last 15 days ( p  > 0.05), type of flock/herd ( p  > 0.05), return of unsold animals from the market ( p  > 0.05), and season ( p  > 0.05). Nevertheless, a slightly higher apparent prevalence was observed in sheep (52.1%; true prevalence 56.0%, 95% CI = 50.0–62.0) than in goats (51.8%; true prevalence 55.0%, 95% CI = 49.0–62.0) in the study region. Similarly, PPR prevalence was higher in buffaloes (27.4%) than in cattle (23.4%), with most infected asymptomatically. The current findings agree with those of Khan et al. [ 32 ], who documented a significantly higher prevalence of 67.42% in buffaloes and 41.86% in cattle ( P  = 0.005); Abubakar et al. [ 13 ], who documented a significantly higher prevalence of 14.16% in buffaloes and 10.0% in cattle in Pakistan; and Balamurugan et al. [ 1 ], who detected a slightly higher seroprevalence in buffaloes (16.20%) than in cattle (11.07%) across 1498 serum samples analyzed in neighbouring India. On the other hand, this finding differs from the outcomes of Saritha et al. [ 26 ] and Kgotlele et al. [ 34 ], who reported a higher prevalence in goats than in sheep. Similarly, in contrast to the current study, a lower prevalence of 5.88% PPRV antibodies in cattle was reported by Prajapati et al. [ 35 ] in Nepal.

The study identified seven PPR disease hotspot trend categories across different sub-regions of Pakistan's northern border region. It was concluded that lack of immunization, the practice of introducing newly purchased animals, overcrowding, the presence of PPR-affected animals in the area, nomadism, and transboundary movements were the main risk factors associated with disease occurrence in the region, and the hotspot map showed that substantial threats of disease spread exist toward neighbouring states and vice versa. Across many northern regions considered endemic for PPR, large and small ruminants are kept and reared together, creating numerous opportunities for virus transmission. This study provides a spark for policymakers toward the regional and global goal of PPR eradication by 2030.

Data availability

The datasets generated and/or analyzed during the current study are not publicly available due to confidentiality agreements made with all authors, but are available from the corresponding author on reasonable request.

Balamurugan V, Krishnamoorthy P, Raju DS, Rajak KK, Bhanuprakash V, Pandey AB, Gajendragad MR, Prabhudas K, Rahman H. Prevalence of Peste-Des-petits-ruminant virus antibodies in cattle, buffaloes, sheep and goats in India. Virusdisease. 2014;25:85–90.


Md A. (2020). Peste des petits ruminants (PPR) in Africa and Asia: A systematic review and meta-analysis of the prevalence in sheep and goats between 1969 and 2018. Veterinary Medicine and Science, 6 (4):813–33. https://doi.org/10.1002/vms3.300 PMID: 32529792.

Gargadennec L, Lalanne A. La Peste Des petits ruminants. Bull Serve Zootech Epizoot Afr Occid Fr. 1942;5:16–21.


Abu-Elzein EME, Hassanien MM, Al-Afaleq AI, Abd-Elhadi MA, Housawi FMI. Isolation of peste des petits ruminants from goats in Saudi Arabia. Vet Rec. 1990;127:309–10.


Albina E, Kwiatek O, Minet C, Lancelot R, Servan de Almeida R, Libeau G. Peste Des petits ruminants, the next eradicated animal disease? Vet Microbiol. 2013;165(1–2):38–44.


Mousumi B, Raja WY, Pronab D, Rabindra PS. (2018). An overview of process intensification and thermo stabilization for upscaling of Peste des petits ruminants vaccines in view of global control and eradication. Virusdisease, 29(3):285–96. https://doi.org/10.1007/s13337-018-0455-3 PMID: 30159362.

Munibullah, Li Y, Munib K, Zhang Z. Regional epidemiology and associated risk factors of PPR in Asia-A Review. Slovenian Veterinary Res. 2022;59(2):75–87. https://doi.org/10.26873/SVR-1464-2022 .


Akwongo CJ, Quan M, Byaruhanga C. Prevalence, risk factors for exposure, and SocioEconomic Impact of Peste Des Petits Ruminants in Karenga District, Karamoja Region, Uganda. Pathogens. 2022;11:54. https://doi.org/10.3390/pathogens1101005 .


Munir M. (2013). Role of Wild Small Ruminants in the Epidemiology of Peste Des Petits Ruminants. Transboundary and Emerging Diseases, 61(5):411–24. https://doi.org/10.1111/tbed.12052 PMID: 23305511.

Gao S, Xu G, Zeng Z, Lv J, Huang L, Wang H, Wang X. Transboundary spread of peste des petits ruminants virus in western China: a prediction model. PLoS ONE. 2021;16(9):e0257898. https://doi.org/10.1371/journal.pone.0257898 .


Xia J, Zheng XG, Adili GZ, Wei YR, Ma WG, Xue XM, Mi XY, Yi Z, Chen SJ, Du W, Muhan M. Sequence analysis of peste des petits ruminants virus from ibexes in Xinjiang, China. Genetic Mol Res. 2016;15(2):1–7.

Munir M, Zohari S, Saeed A, Khan Q, Abubakar M, LeBlanc N, Berg M. Detection and phylogenetic analysis of peste des petits ruminants virus isolated from outbreaks in Punjab, Pakistan. Transbound Emerg Dis. 2012;59:85–93. https://doi.org/10.1111/j.1865-1682.2011.01245.x .

Abubakar M, Mahapatra M, Muniraju M, Arshed MJ, Khan EUH, Banyard AC, … Parida S. Serological detection of antibodies to peste des petits ruminants virus in large ruminants. Transbound Emerg Dis. 2017;64(2):513–9.

Anderson J, McKay JA, Butcher RN. 1991. The use of monoclonal antibodies in competitive ELISA for the detection of antibodies against Rinderpest and Peste des Petits Ruminants virus. The Proceedings of the Final Research Coordination, in the seromonitering of Rinderpest throughout Africa Phase I, International Atomic Energy Agency, Vienna-Austria, 43–53.

Catley A, Alders RG, Wood JLN. Participatory epidemiology: approaches, methods, experiences. Vet J. 2012;191:151–60.

Thrusfield M. Veterinary epidemiology. 3 ed. London: Black well science Ltd; 2007. pp. 178–236.

Rogan WJ, Gladen B. Estimation of prevalence from the results of a screening test. Am J Epidemiol. 1978;107:71–6.

Niyokwishimira A, de D Baziki J, Dundon WG, Nwankpa N, Njoroge C, Boussini H, Bodjo SC. Detection and molecular characterization of Peste Des Petits ruminants virus from outbreaks in Burundi, December 2017–January 2018. Transbound Emerg Dis. 2019;66(5):2067–73.

Lawrence KE, Forsyth SF, Vaatstra BL, McFadden AMJ, Pulford DJ, Govindaraju K, Pomroy WE. Cluster analysis of the clinical histories of cattle affected with bovine anaemia associated with Theileria Orientalis Ikeda type infection. N Z Vet J. 2017;65(6):305–12.

Jost C, Mariner JC, Roeder PL, Sawitri E, Macgregor-Skinner GJ. (2007). Participatory epidemiology in disease surveillance and research. Sci Tech Rev.

IPAPEL, Congo RD. Rapport annuel de l’inspection provinciale de l’agriculture pêche et élevage. Bukavu; 2012. p. 86.

Abubakar M, Sattorov N, Manzoor S, Khan EUH, Hussain M, Zahur AB, … Wensman JJ. Detection of antibodies to peste-des-petits-ruminants virus in the semi-domesticated yak. Eur J Wildl Res. 2019;65(6):88.

Khan HA, Siddique M, Abubakar M, Ashraf M. The detection of antibody against peste des petits ruminants virus in sheep, goats, cattle and buffaloes. Trop Anim Health Prod. 2008;40(7):521–7. Epub 2008 Mar 18. PMID: 18716909.

Zahur AB, Irshad H, Hussain M, Ullah A, Jahangir M, Khan MQ, Farooq MS. The epidemiology of peste des petits ruminants in Pakistan. Rev Sci Tech. 2008;27(3):877.

Zahur AB, Shahzad C, et al. Epidemiological analysis of Peste des Petits Ruminants (PPR) outbreaks in Pakistan. Journal of Biosciences and Medicines. 2014;2(06):18.

Saritha G, Shobhamani B, Sreedevi B. Seroprevalence of peste des petits ruminants in pastoral small ruminants with special reference on sensitivity to age and agro-climatic zones (India). Anim Sci Report. 2014;8:3.

Nikmal Azizi AF. (2010). Peste des petits ruminants in Afghanistan. Ministry of Agriculture, Irrigation, and Livestock, Kabul, Afghanistan. https://oiebulletin.com/?panorama ppr control and eradication programme in afghanistan 2. 2010.

Noman MA, Shaikat A, Nath B, Shil S, Hossain M. Incidence and modulating effects of environmental factors on infectious diseases of Black Bengal goat in Cox’s Bazar district of Bangladesh. YYU Veterier Fakultesi Dergisi. 2011;22(3):163–7.

Ishaque SM. Sheep management systems. Sheep Production in Pakistan. Islamabad, Pakistan: Pakistan Agricultural Research Council; 1993.

Selvaraju G. Epidemiological measures of causal association between Peste Des Petits Ruminants (PPR) and its determinants in small ruminants. Int J Dev Res. 2014;4(7):1411–3.

Al-Majali AM, Hussain NO, Amarin NM, Majok AA. Seroprevalence of and risk factors for peste des petits ruminants in sheep and goats in Northern Jordan. Prev Vet Med. 2008;85:1–8.

Radostits OM, Gay CC, Hinclcliff KW, Constable PO. (2007). Veterinary Medicine: A text book of the disease of cattle, sheep, pigs, goat and horses. 10 ed. London, Saunders, pp: 1094–1110.

Gebre T, Deneke Y, Begna F. Seroprevalence and Associated Risk Factors of Peste Des Petits Ruminants (PPR) in Sheep and goats in four districts of Bench Maji and Kafa Zones, South West Ethiopia. Global Vet. 2018;20(6):260–70.

Kgotlele T, Torsson E, Kasanga CJ, Wensman JJ, Misinzo G. Seroprevalence of Peste Des Petits ruminants Virus from SamplesCollected in different regions of Tanzania in 2013 and 2015. J Veterinary Sci Technol. 2016;7(6):1–5.

Prajapati M, Shrestha SP, Kathayat D, Dou Y, Li Y, Zhang Z. Serological investigations of Peste Des Petits ruminants in cattle of Nepal. Veterinary Med Sci. 2021;7(1):122–6.



Acknowledgements

We would like to thank Dr. Dou Yongxi and Dr. Xueliang Meng of Lanzhou Veterinary Research Institute, Chinese Academy of Agricultural Sciences, Lanzhou, China for providing C-ELISA kit, Dr. Hanif Ur Rehman Research Officer of the Veterinary Research Institute, Peshawar, Khyber Pakhtunkhwa for their technical support during laboratory experiments (cELISA and sandwich ELISA), Dr. Sajjad Ali Shah, Research Officer of the Veterinary Research Institute Peshawar, Khyber Pakhtunkhwa for their support in statistical analysis and calculations and Mr. Aftab Ahmad Khan, Scientific Officer, Global Change Impact Studies Center of the Ministry of Climate Change Islamabad, Pakistan for their technical support in ArcGIS software and hotspots maps designing. We are sincerely grateful to Prof. Dr. Arfan Yousaf, Dean of the Faculty of Veterinary and Animal Sciences, and Dr. Muhammad Arif Zafar, Chairman of the Department of Clinical Studies, Faculty of Veterinary and Animal Sciences, Pir Mehr Ali Shah Arid Agriculture University Rawalpindi for their official support.

This work was funded and supported by Southwest Mizu University Double World-Class Project (XM2023012), the Southwest Mizu University Research Startup Funds (16011211013), the Natural Science Foundation of Sichuan Province (2022NSFSC0073) and Chengdu Research Base of Giant Panda Breeding Project (2024CPB-B11).

Author information

Authors and Affiliations

College of Animal Husbandry & Veterinary Medicine, Southwest Minzu University, Chengdu, 610041, China

Munibullah, Yanmin Li & Zhidong Zhang

Department of Clinical Studies, Faculty of Veterinary and Animal Sciences, Pir Mehr Ali Shah Arid Agriculture University, Rawalpindi 46000, Pakistan

Department of Sociology, Allama Iqbal Open University, Islamabad, Pakistan

Kainat Munib

Lanzhou Veterinary Research Institute, Lanzhou, 730046, China

Munibullah & Zhixiong Zhang


Contributions

Zhidong Zhang (ZZ): Contributed to conception of the research idea, supervision, and editing of the manuscript. Munibullah (M): Contributed to conception of the research idea, data collection, methodology, writing, and review of the manuscript. Zhidong Zhang (ZZ) and Yanmin Li (YL): Contributed to conception of the research idea, data analysis, and supervision. Kainat Munib (KM) and Zhixiong Zhang (ZZ): Contributed to conception of the research idea, data analysis, and review of the manuscript.

Corresponding author

Correspondence to Zhidong Zhang .

Ethics declarations

Ethics approval and consent to participate.

The authors confirm that the ethical policies of the journal, as noted on the journal’s author guidelines page, have been adhered to, and written informed consent for voluntary participation was obtained from all legal private animal owners, with the statement: “We voluntarily agree to take part in this study and the authors will maintain our confidentiality”. Further, the authors confirm that the study protocol was approved by the “Institutional Review Committee” of the Virology Section, Center of Microbiology and Biotechnology (CMB), Veterinary Research Institute Peshawar, Pakistan (Reference Code No/RO/Virology/2022/276).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Munibullah, Li, Y., Munib, K. et al. Prevalence and associated risk factors of peste des petits ruminants in selected districts of the northern border region of Pakistan. BMC Vet Res 20 , 225 (2024). https://doi.org/10.1186/s12917-024-04033-8

Download citation

Received : 01 January 2024

Accepted : 24 April 2024

Published : 24 May 2024

DOI : https://doi.org/10.1186/s12917-024-04033-8

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Epidemiology
  • Risk factors
  • Northern Pakistan

BMC Veterinary Research

ISSN: 1746-6148

