How to Write a Results Section | Tips & Examples

Published on August 30, 2022 by Tegan George . Revised on July 18, 2023.

A results section is where you report the main findings of the data collection and analysis you conducted for your thesis or dissertation . You should report all relevant results concisely and objectively, in a logical order. Don’t include subjective interpretations of why you found these results or what they mean—any evaluation should be saved for the discussion section .


Table of contents

  • How to write a results section
  • Reporting quantitative research results
  • Reporting qualitative research results
  • Results vs. discussion vs. conclusion
  • Checklist: research results
  • Other interesting articles
  • Frequently asked questions about results sections

When conducting research, it’s important to report the results of your study prior to discussing your interpretations of it. This gives your reader a clear idea of exactly what you found and keeps the data itself separate from your subjective analysis.

Here are a few best practices:

  • Your results should always be written in the past tense.
  • While the length of this section depends on how much data you collected and analyzed, it should be written as concisely as possible.
  • Only include results that are directly relevant to answering your research questions . Avoid speculative or interpretative words like “appears” or “implies.”
  • If you have other results you’d like to include, consider adding them to an appendix or footnotes.
  • Always start out with your broadest results first, and then flow into your more granular (but still relevant) ones. Think of it like a shoe store: first discuss the shoes as a whole, then the sneakers, boots, sandals, etc.


If you conducted quantitative research , you’ll likely be working with the results of some sort of statistical analysis .

Your results section should report the results of any statistical tests you used to compare groups or assess relationships between variables . It should also state whether or not each hypothesis was supported.

The most logical way to structure quantitative results is to frame them around your research questions or hypotheses. For each question or hypothesis, share:

  • A reminder of the type of analysis you used (e.g., a two-sample t test or simple linear regression). A more detailed description of your analysis should go in your methodology section.
  • A concise summary of each relevant result, both positive and negative. This can include any relevant descriptive statistics (e.g., means and standard deviations) as well as inferential statistics (e.g., t scores, degrees of freedom, and p values). Remember, these numbers are often placed in parentheses.
  • A brief statement of how each result relates to the question, or whether the hypothesis was supported. You can briefly mention any results that didn’t fit with your expectations and assumptions, but save any speculation on their meaning or consequences for your discussion and conclusion.
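As a rough illustration of how such numbers are produced before being written up, here is a minimal Python sketch of a pooled two-sample t test. The group scores are invented for illustration; a real analysis would typically use a statistics library such as SciPy, which also reports the p value.

```python
import math
import statistics

def two_sample_t(a, b):
    """Pooled two-sample t statistic and degrees of freedom for two
    independent groups, assuming roughly equal variances."""
    na, nb = len(a), len(b)
    # Pooled variance combines the spread of both groups
    pooled = ((na - 1) * statistics.variance(a) +
              (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical donation-intention scores (1-10) for two groups
high_distance = [6.5, 6.3, 6.8, 6.4, 6.6]
low_distance = [6.1, 5.8, 6.4, 5.9, 6.2]

t, df = two_sample_t(high_distance, low_distance)
print(f"t({df}) = {t:.2f}")  # the statistic and df go in parentheses in the text
```

The resulting t statistic and degrees of freedom are what appear in parentheses in an APA-style report such as t(98) = 12.19, p < .001.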

A note on tables and figures

In quantitative research, it’s often helpful to include visual elements such as graphs, charts, and tables , but only if they are directly relevant to your results. Give these elements clear, descriptive titles and labels so that your reader can easily understand what is being shown. If you want to include any other visual elements that are more tangential in nature, consider adding a figure and table list .

As a rule of thumb:

  • Tables are used to communicate exact values, giving a concise overview of various results
  • Graphs and charts are used to visualize trends and relationships, giving an at-a-glance illustration of key findings

Don’t forget to also mention any tables and figures you used within the text of your results section. Summarize or elaborate on specific aspects you think your reader should know about rather than merely restating the same numbers already shown.

A two-sample t test was used to test the hypothesis that higher social distance from environmental problems would reduce the intent to donate to environmental organizations, with donation intention (recorded as a score from 1 to 10) as the outcome variable and social distance (categorized as either a low or high level of social distance) as the predictor variable. Social distance was found to be positively correlated with donation intention, t (98) = 12.19, p < .001, with the donation intention of the high social distance group 0.28 points higher, on average, than the low social distance group (see Figure 1). This contradicts the initial hypothesis that social distance would decrease donation intention, and in fact suggests a small effect in the opposite direction.

Example of using figures in the results section

Figure 1: Intention to donate to environmental organizations based on social distance from impact of environmental damage.

In qualitative research , your results might not all be directly related to specific hypotheses. In this case, you can structure your results section around key themes or topics that emerged from your analysis of the data.

For each theme, start with general observations about what the data showed. You can mention:

  • Recurring points of agreement or disagreement
  • Patterns and trends
  • Particularly significant snippets from individual responses

Next, clarify and support these points with direct quotations. Be sure to report any relevant demographic information about participants. Further information (such as full transcripts , if appropriate) can be included in an appendix .

When asked about video games as a form of art, the respondents tended to believe that video games themselves are not an art form, but agreed that creativity is involved in their production. The criteria used to identify artistic video games included design, story, music, and creative teams. One respondent (male, 24) noted a difference in creativity between popular video game genres:

“I think that in role-playing games, there’s more attention to character design, to world design, because the whole story is important and more attention is paid to certain game elements […] so that perhaps you do need bigger teams of creative experts than in an average shooter or something.”

Responses suggest that video game consumers consider some types of games to have more artistic potential than others.

Your results section should objectively report your findings, presenting only brief observations in relation to each question, hypothesis, or theme.

It should not  speculate about the meaning of the results or attempt to answer your main research question . Detailed interpretation of your results is more suitable for your discussion section , while synthesis of your results into an overall answer to your main research question is best left for your conclusion .


I have completed my data collection and analyzed the results.

I have included all results that are relevant to my research questions.

I have concisely and objectively reported each result, including relevant descriptive statistics and inferential statistics .

I have stated whether each hypothesis was supported or refuted.

I have used tables and figures to illustrate my results where appropriate.

All tables and figures are correctly labelled and referred to in the text.

There is no subjective interpretation or speculation on the meaning of the results.

You've finished writing up your results! Use the other checklists to further improve your thesis.


The results chapter of a thesis or dissertation presents your research results concisely and objectively.

In quantitative research , for each question or hypothesis , state:

  • The type of analysis used
  • Relevant results in the form of descriptive and inferential statistics
  • Whether or not the alternative hypothesis was supported

In qualitative research , for each question or theme, describe:

  • Recurring patterns
  • Significant or representative individual responses
  • Relevant quotations from the data

Don’t interpret or speculate in the results chapter.

Results are usually written in the past tense , because they are describing the outcome of completed actions.

The results chapter or section simply and objectively reports what you found, without speculating on why you found these results. The discussion interprets the meaning of the results, puts them in context, and explains why they matter.

In qualitative research , results and discussion are sometimes combined. But in quantitative research , it’s considered important to separate the objective results from your interpretation of them.


Academic Paper: Discussion and Analysis

5 min read • March 10, 2023

Dylan Black

Introduction

After presenting your data and results to readers, you have one final step before you can finally wrap up your paper and write a conclusion: analyzing your data! This is the big part of your paper that finally takes all the stuff you've been talking about - your method, the data you collected, the information presented in your literature review - and uses it to make a point!

The major question to be answered in your analysis section is simply "we have all this data, but what does it mean?" What questions does this data answer? How does it relate to your research question ? Can this data be explained by, and is it consistent with, other papers? If not, why? These are the types of questions you'll be discussing in this section.


Writing a Discussion and Analysis

Explain what your data means.

The primary point of a discussion section is to explain to your readers, through both statistical means and thorough explanation, what your results mean for your project. In doing so, you want to be succinct, clear, and specific about how your data backs up the claims you are making. These claims should be directly tied back to the overall focus of your paper.

What is this overall focus, you may ask? Your research question ! This discussion along with your conclusion forms the final analysis of your research - what answers did we find? Was our research successful? How do the results we found tie into and relate to the current consensus by the research community? Were our results expected or unexpected? Why or why not? These are all questions you may consider in writing your discussion section.


Why Did Your Results Happen?

After presenting your results in your results section, you may also want to explain why your results actually occurred. This is integral to gaining a full understanding of your results and the conclusions you can draw from them. For example, if data you found contradicts certain data points found in other studies, one of the most important aspects of your discussion of said data is going to be theorizing as to why this disparity took place.

Note that making broad, sweeping claims based on your data is not enough! Everything, and I mean just about everything you say in your discussion section must be backed up either by your own findings that you showed in your results section or past research that has been performed in your field.

For many situations, finding these answers is not easy, and a lot of thinking must be done as to why your results actually occurred the way they did. For some fields, specifically STEM-related fields, a discussion might dive into the theoretical foundations of your research, explaining interactions between parts of your study that led to your results. For others, like social sciences and humanities, results may be open to more interpretation.

However, "open to more interpretation" does not mean you can make claims willy nilly and claim "author's interpretation". In fact, such interpretation may be harder than STEM explanations! You will have to synthesize existing analysis on your topic and incorporate that in your analysis.


Discussion vs. Summary & Repetition

Quite possibly the biggest mistake made within a discussion section is simply restating your data in a different format. The role of the discussion section is to explain your data and what it means for your project. Many students, thinking they're writing discussion and analysis, simply regurgitate their numbers back in full sentences with a surface-level explanation.

Phrases like "this shows" and similar ones, while good building blocks and great planning tools, often lead to a relatively weak discussion that isn't very nuanced and doesn't lead to much new understanding.

Instead, your goal will be to, through this section and your conclusion, establish a new understanding and in the end, close your gap! To do this effectively, you not only will have to present the numbers and results of your study, but you'll also have to describe how such data forms a new idea that has not been found in prior research.

This, in essence, is the heart of research - finding something new that hasn't been studied before! I don't know if it's just us, but that's pretty darn cool and something that you as the researcher should be incredibly proud of yourself for accomplishing.

Rubric Points

Before we close out this guide, let's take a quick peek at our best friend: the AP Research Rubric for the Discussion and Conclusion sections.


Source: CollegeBoard

Scores of One and Two: Nothing New, Your Standard Essay

Responses that earn a score of one or two on this section of the AP Research Academic Paper typically don't find much new and by this point may not have a fully developed method or well-thought-out results. For the most part, these are more similar to essays you may have written in a prior English class or AP Seminar than a true research paper. Instead of finding new ideas, they summarize already existing information about a topic.


Score of Three: New Understanding, Not Enough Support

A score of three is the first row that establishes a new understanding! This is a great step forward from a one or a two. However, what differentiates a three from a four or a five is the explanation and support of that new understanding. A paper that earns a three falls short in building a line of reasoning and does not present enough evidence, both from its results section and from already published research.

Scores of Four and Five: New Understanding With A Line of Reasoning

We've made it to the best of the best! With scores of four and five, successful papers describe a new understanding with an effective line of reasoning, sufficient evidence, and an all-around great presentation of how their results signify filling a gap and answering a research question .

As far as the discussion section goes, the difference between a four and a five is more on the side of complexity and nuance. Where a four hits all the marks and does it well, a five exceeds this and presents a truly exceptional analysis. Another area where these two scores differ is in the limitations described, which we discuss in the Conclusion section guide.


You did it!!!! You have, for the most part, finished the brunt of your research paper and are over the hump! All that's left is to tackle the conclusion, which for most tends to be the easiest section to write, because all you do is summarize how your research question was answered and make some final points about how your research impacts your field.



How to Write an Effective Results Section

Affiliation: Rothman Orthopaedics Institute, Philadelphia, PA.

PMID: 31145152 | DOI: 10.1097/BSD.0000000000000845

Developing a well-written research paper is an important step in completing a scientific study. This paper is where the principal investigator and co-authors report the purpose, methods, findings, and conclusions of the study. A key element of writing a research paper is to clearly and objectively report the study's findings in the Results section. The Results section is where the authors inform the readers about the findings from the statistical analysis of the data collected to operationalize the study hypothesis, optimally adding novel information to the collective knowledge on the subject matter. By using clear, concise, and well-organized writing techniques and visual aids to report the data, the author can build a case for the research question at hand even without interpreting the data.


Data Analysis in Research: Types & Methods


Content Index

  • What is data analysis in research?
  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis

Definition of research in data analysis: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization . The second is summarization and categorization, which together contribute to data reduction; they help find patterns and themes in the data for easy identification and linking. The third and last is data analysis itself, which researchers conduct in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that “data analysis and data interpretation are a process representing the application of deductive and inductive logic to research.”

Researchers rely heavily on data as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But, what if there is no question to ask? Well! It is possible to explore data even without a problem – we call it ‘Data Mining’, which often reveals some interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience’s vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, sometimes data analysis tells the most unforeseen yet exciting stories that were not expected when the analysis began. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Every kind of data describes things once a specific value is assigned to it. For analysis, you need to organize these values and process and present them in a given context to make them useful. Data can be in different forms; here are the primary data types.

  • Qualitative data: When the data presented has words and descriptions, we call it qualitative data . Although you can observe this data, it is subjective and harder to analyze in research, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews , qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data . This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: age, rank, cost, length, weight, scores, etc. all come under this type of data. You can present such data in graphical formats or charts, or apply statistical analysis methods to it. The OMS (Outcomes Measurement Systems) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups. However, an item included in the categorical data cannot belong to more than one group. Example: a person describing their living style, marital status, smoking habit, or drinking habit in a survey response provides categorical data. A chi-square test is a standard method used to analyze this data.
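The chi-square test mentioned for categorical data compares observed counts against the counts expected if the two groupings were independent. A minimal sketch for a 2x2 contingency table, with hypothetical counts:

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table given as
    two rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count if rows and columns were independent
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical survey counts: smoking habit (rows) by marital status (columns)
observed = [[30, 20],
            [20, 30]]
print(f"chi2 = {chi_square_2x2(observed):.2f}")  # → chi2 = 4.00
```

The statistic is then compared against a chi-square critical value (here with 1 degree of freedom) to decide whether the two categorical variables are related.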


Data analysis in qualitative research

Data analysis in qualitative research works a little differently than with numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complicated information is a complex process; hence it is typically used for exploratory research and data analysis .

Although there are several ways to find patterns in textual information, a word-based method is the most relied-upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers usually read the available data and find repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find  “food”  and  “hunger” are the most commonly used words and will highlight them for further analysis.
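This kind of word counting is easy to sketch with Python's standard library. The responses below are invented stand-ins for interview text:

```python
import re
from collections import Counter

responses = [
    "Access to food is the biggest problem in my village",
    "Hunger affects the children most",
    "Without food aid, hunger will get worse",
]

# Lowercase, tokenize, and drop very short words (a crude stopword filter)
words = [w for text in responses
         for w in re.findall(r"[a-z']+", text.lower())
         if len(w) > 3]

# Repetitive words like "food" and "hunger" surface for further analysis
print(Counter(words).most_common(3))
```

In practice, researchers would use a proper stopword list and stemming, but the principle is the same: frequency counts highlight candidate themes for closer reading.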


The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.  

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is also one of the highly recommended  text analysis  methods used to identify patterns in qualitative data. Compare and contrast is the widely used method under this technique, examining how specific pieces of text are similar to or different from each other.

For example: to find out the “importance of a resident doctor in a company,” the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types .

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable Partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations from the enormous data.


There are several techniques to analyze the data in qualitative research, but here are some commonly used methods,

  • Content Analysis:  It is widely accepted and the most frequently employed technique for data analysis in research methodology. It can be used to analyze the documented information from text, images, and sometimes from the physical items. It depends on the research questions to predict when and where to use this method.
  • Narrative Analysis: This method is used to analyze content gathered from various sources such as personal interviews, field observation, and  surveys . Most of the time, the stories or opinions people share are examined to find answers to the research questions.
  • Discourse Analysis:  Similar to narrative analysis, discourse analysis is used to analyze the interactions with people. Nevertheless, this particular method considers the social context under which or within which the communication between the researcher and respondent takes place. In addition to that, discourse analysis also focuses on the lifestyle and day-to-day environment while deriving any conclusion.
  • Grounded Theory:  When you want to explain why a particular phenomenon happened, then using grounded theory for analyzing quality data is the best resort. Grounded theory is applied to study data about the host of similar cases occurring in different settings. When researchers are using this method, they might alter explanations or produce new ones until they arrive at some conclusion.


Data analysis in quantitative research

The first stage in quantitative research and data analysis is to prepare the data for analysis so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to check whether the collected data sample meets the pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey (or, for interviews, that the interviewer asked all the questions devised in the questionnaire)
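The completeness check, for instance, reduces to flagging responses with unanswered required questions. A minimal sketch, where the field names are hypothetical:

```python
def incomplete_fields(response, required):
    """Return the required fields a survey response left unanswered;
    an empty list means the response passes the completeness check."""
    return [field for field in required
            if response.get(field) in (None, "", [])]

required = ["age", "city", "satisfaction_score"]
response = {"age": 29, "city": "", "satisfaction_score": 8}

print(incomplete_fields(response, required))  # the blank "city" field is flagged
```

Responses that fail the check can then be routed back for follow-up or excluded from the analysis sample.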

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in some fields incorrectly or skip them accidentally. Data editing is a process wherein the researchers confirm that the provided data is free of such errors. They conduct the necessary checks, including outlier checks, to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to the survey responses . If a survey is completed with a sample size of 1,000, the researcher might create age brackets to group respondents by age. It is easier to analyze small data buckets than to deal with the massive data pile.
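Age-bracket coding like this amounts to a simple lookup. A minimal sketch, with illustrative bracket boundaries:

```python
def code_age(age):
    """Assign a respondent's age to a coded bracket (data coding)."""
    brackets = [(17, "under 18"), (24, "18-24"), (34, "25-34"),
                (44, "35-44"), (64, "45-64")]
    for upper, label in brackets:
        if age <= upper:
            return label
    return "65+"

ages = [21, 37, 70, 16, 45]
print([code_age(a) for a in ages])
# → ['18-24', '35-44', '65+', 'under 18', '45-64']
```

The same idea applies to coding income bands, Likert responses, or any other grouping scheme defined before analysis.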


After the data is prepared for analysis, researchers are open to using different research and data analysis methods to derive meaningful insights. Statistical analysis is the most favored way to analyze numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. The method is again classified into two groups: first, descriptive statistics, used to describe data; second, inferential statistics, which help in comparing data.

Descriptive statistics

This method is used to describe the basic features of various types of data in research. It presents the data in such a meaningful way that patterns in the data start making sense. Nevertheless, descriptive analysis does not support conclusions beyond the data at hand; any conclusions are still based on the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to describe the central point of a distribution.
  • Researchers use this method when they want to showcase the most commonly or averagely indicated response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • The range is the difference between the highest and lowest scores.
  • Variance and standard deviation quantify how far observed scores fall from the mean.
  • These measures identify the spread of scores, often stated as intervals.
  • Researchers use this method to show how spread out the data is, and how strongly that spread affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores, helping researchers identify the relationship between different scores.
  • It is often used when researchers want to compare scores with the average count.

In quantitative research, descriptive analysis often gives absolute numbers, but those numbers alone are rarely sufficient to demonstrate the rationale behind them. Nevertheless, it is necessary to think about which research and data analysis method best suits your survey questionnaire and the story you want to tell. For example, the mean is the best way to demonstrate students’ average scores in a school. It is better to rely on descriptive statistics when researchers intend to keep the research or outcome limited to the provided sample without generalizing it. For example, when you want to compare the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.
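As a rough illustration, the descriptive measures above can be computed with Python's standard `statistics` module; the survey scores here are made up for the example:

```python
# Descriptive statistics for a small, invented sample of survey scores
# (e.g., 1-5 satisfaction ratings), using only the standard library.
import statistics

scores = [4, 5, 3, 4, 5, 2, 4, 4, 3, 5]

# Measures of central tendency
mean = statistics.mean(scores)        # average response
median = statistics.median(scores)    # middle value
mode = statistics.mode(scores)        # most frequent response

# Measures of dispersion or variation
spread = max(scores) - min(scores)    # range (high minus low)
stdev = statistics.stdev(scores)      # sample standard deviation

# Measures of position
quartiles = statistics.quantiles(scores, n=4)  # Q1, Q2, Q3 cut points

print(mean, median, mode)        # 3.9 4.0 4
print(spread, round(stdev, 2))   # 3 0.99
print(quartiles)                 # [3.0, 4.0, 5.0]
```

In a real survey, the same calls would run over the full response column rather than ten hand-typed values.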

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample that represents it. For example, you can ask a hundred or so audience members at a movie theater whether they like the movie they are watching. Researchers then use inferential statistics on the collected sample to reason that about 80-90% of people like the movie.

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and uses them to say something about a population parameter.
  • Hypothesis testing: It’s about sampling research data to answer the survey research questions. For example, researchers might want to know whether a newly launched shade of lipstick is well received, or whether multivitamin capsules help children perform better at games.
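As a minimal sketch of the hypothesis-testing idea (not taken from the article), here is a two-sample permutation test in pure Python; the satisfaction scores are hypothetical:

```python
# Permutation test: could the observed difference in mean satisfaction
# between two hypothetical groups plausibly arise by chance?
import random
import statistics

random.seed(42)  # make the simulation reproducible

before = [3, 2, 4, 3, 3, 2, 4, 3]   # scores before an improvement
after = [4, 5, 4, 3, 5, 4, 4, 5]    # scores after the improvement

observed = statistics.mean(after) - statistics.mean(before)

# Repeatedly shuffle the pooled scores and re-split them; count how
# often a random split produces a difference at least as large.
pooled = before + after
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[8:]) - statistics.mean(pooled[:8])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}, p = {p_value:.4f}")
```

A small p-value suggests the improvement in scores is unlikely to be a chance fluctuation. In practice, statistical packages offer t-tests and other standard inferential procedures that serve the same purpose.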

These are sophisticated analysis methods used to show the relationships between different variables rather than to describe a single variable. They are used when researchers want to go beyond absolute numbers and understand how variables relate to one another.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables, cross-tabulation is used to analyze the relationship between multiple variables. Suppose the data has age and gender categories presented in rows and columns. A two-dimensional cross-tabulation enables seamless data analysis and research by showing the number of males and females in each age category.
  • Regression analysis: To understand the strength of the relationship between two variables, researchers rely on the primary and commonly used regression analysis method, which is also a type of predictive analysis. In this method, you have an essential factor called the dependent variable and one or more independent variables; you undertake efforts to find out the impact of the independent variables on the dependent variable. The values of both independent and dependent variables are assumed to be ascertained in an error-free, random manner.
  • Frequency tables: A frequency table records how often each value of a variable occurs, making it easy to spot the most and least common responses at a glance.
  • Analysis of variance: This statistical procedure is used to test the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation means the research findings were significant. In many contexts, ANOVA testing and variance analysis are treated as synonymous.
  • Researchers must have the necessary research skills to analyze and manipulate the data, and should be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Usually, research and data analytics projects differ by scientific discipline; therefore, getting statistical advice at the beginning of the analysis helps in designing a survey questionnaire, selecting data collection methods, and choosing samples.

LEARN ABOUT: Best Data Collection Tools

  • The primary aim of research and data analysis is to derive insights that are unbiased. Any mistake in collecting data, selecting an analysis method, or choosing an audience sample, or any bias while doing so, will lead to a biased inference.
  • No degree of sophistication in research data analysis can rectify poorly defined objectives or outcome measurements. Whether the design is at fault or the intentions are unclear, a lack of clarity can mislead readers, so avoid the practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and have a plan for dealing with everyday challenges like outliers, missing data, data alteration, data mining, and graphical representation.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage: in 2018, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in a hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.


QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them with a medium to collect data by creating appealing surveys.


How to Write the Results/Findings Section in Research


What is the research paper Results section and what does it do?

The Results section of a scientific research paper represents the core findings of a study derived from the methods applied to gather and analyze information. It presents these findings in a logical sequence without bias or interpretation from the author, setting up the reader for later interpretation and evaluation in the Discussion section. A major purpose of the Results section is to break down the data into sentences that show its significance to the research question(s).

The Results section appears third in the section sequence in most scientific papers. It follows the presentation of the Methods and Materials and is presented before the Discussion section, although the Results and Discussion are presented together in many journals. This section answers the basic question “What did you find in your research?”

What is included in the Results section?

The Results section should include the findings of your study and ONLY the findings of your study. The findings include:

  • Data presented in tables, charts, graphs, and other figures (may be placed into the text or on separate pages at the end of the manuscript)
  • A contextual analysis of this data explaining its meaning in sentence form
  • All data that corresponds to the central research question(s)
  • All secondary findings (secondary outcomes, subgroup analyses, etc.)

If the scope of the study is broad, or if you studied a variety of variables, or if the methodology used yields a wide range of different results, the author should present only those results that are most relevant to the research question stated in the Introduction section .

As a general rule, any information that does not present the direct findings or outcome of the study should be left out of this section. Unless the journal requests that authors combine the Results and Discussion sections, explanations and interpretations should be omitted from the Results.

How are the results organized?

The best way to organize your Results section is “logically.” One logical and clear method of organizing research results is to provide them alongside the research questions—within each research question, present the type of data that addresses that research question.

Let’s look at an example. Your research question is based on a survey among patients who were treated at a hospital and received postoperative care. Let’s say your first research question is:


“What do hospital patients over age 55 think about postoperative care?”

This can actually be represented as a heading within your Results section, though it might be presented as a statement rather than a question:

Attitudes towards postoperative care in patients over the age of 55

Now present the results that address this specific research question first: in this case, perhaps a table illustrating data from a survey that included Likert items. Tables can also present standard deviations, probabilities, correlation matrices, etc.

Following this, present a content analysis, in words, of one end of the spectrum of the survey or data table. In our example case, start with the POSITIVE survey responses regarding postoperative care, using descriptive phrases. For example:

“Sixty-five percent of patients over 55 responded positively to the question ‘Are you satisfied with your hospital’s postoperative care?’ (Fig. 2).”

Include other results such as subcategory analyses. The amount of textual description used will depend on how much interpretation of tables and figures is necessary and how many examples the reader needs in order to understand the significance of your research findings.

Next, present a content analysis of another part of the spectrum of the same research question, perhaps the NEGATIVE or NEUTRAL responses to the survey. For instance:

  “As Figure 1 shows, 15 out of 60 patients in Group A responded negatively to Question 2.”

After you have assessed the data in one figure and explained it sufficiently, move on to your next research question. For example:

  “How does patient satisfaction correspond to in-hospital improvements made to postoperative care?”


This kind of data may be presented through a figure or set of figures (for instance, a paired T-test table).

Explain the data you present, here in a table, with a concise content analysis:

“The p-value for the comparison between the before and after groups of patients was .03% (Fig. 2), indicating that the greater the dissatisfaction among patients, the more frequent the improvements that were made to postoperative care.”

Let’s examine another example of a Results section from a study on plant tolerance to heavy metal stress . In the Introduction section, the aims of the study are presented as “determining the physiological and morphological responses of Allium cepa L. towards increased cadmium toxicity” and “evaluating its potential to accumulate the metal and its associated environmental consequences.” The Results section presents data showing how these aims are achieved in tables alongside a content analysis, beginning with an overview of the findings:

“Cadmium caused inhibition of root and leaf elongation, with increasing effects at higher exposure doses (Fig. 1a-c).”

The figure containing this data is cited in parentheses. Note that this author has combined three graphs into one single figure. Separating the data into separate graphs focusing on specific aspects makes it easier for the reader to assess the findings, and consolidating this information into one figure saves space and makes it easy to locate the most relevant results.


Following this overall summary, the relevant data in the tables is broken down into greater detail in text form in the Results section.

  • “Results on the bio-accumulation of cadmium were found to be the highest (17.5 mg kg⁻¹) in the bulb, when the concentration of cadmium in the solution was 1×10⁻² M, and lowest (0.11 mg kg⁻¹) in the leaves when the concentration was 1×10⁻³ M.”

Captioning and Referencing Tables and Figures

Tables and figures are central components of your Results section and you need to carefully think about the most effective way to use graphs and tables to present your findings . Therefore, it is crucial to know how to write strong figure captions and to refer to them within the text of the Results section.

The most important advice one can give here as well as throughout the paper is to check the requirements and standards of the journal to which you are submitting your work. Every journal has its own design and layout standards, which you can find in the author instructions on the target journal’s website. Perusing a journal’s published articles will also give you an idea of the proper number, size, and complexity of your figures.

Regardless of which format you use, the figures should be placed in the order they are referenced in the Results section and be as clear and easy to understand as possible. If there are multiple variables being considered (within one or more research questions), it can be a good idea to split these up into separate figures. Subsequently, these can be referenced and analyzed under separate headings and paragraphs in the text.

To create a caption, consider the research question being asked and change it into a phrase. For instance, if one question is “Which color did participants choose?”, the caption might be “Color choice by participant group.” Or in our last research paper example, where the question was “What is the concentration of cadmium in different parts of the onion after 14 days?” the caption reads:

 “Fig. 1(a-c): Mean concentration of Cd determined in (a) bulbs, (b) leaves, and (c) roots of onions after a 14-day period.”

Steps for Composing the Results Section

Because each study is unique, there is no one-size-fits-all approach when it comes to designing a strategy for structuring and writing the section of a research paper where findings are presented. The content and layout of this section will be determined by the specific area of research, the design of the study and its particular methodologies, and the guidelines of the target journal and its editors. However, the following steps can be used to compose the results of most scientific research studies and are essential for researchers who are new to preparing a manuscript for publication or who need a reminder of how to construct the Results section.

Step 1 : Consult the guidelines or instructions that the target journal or publisher provides authors and read research papers it has published, especially those with similar topics, methods, or results to your study.

  • The guidelines will generally outline specific requirements for the results or findings section, and the published articles will provide sound examples of successful approaches.
  • Note length limitations and restrictions on content. For instance, while many journals require the Results and Discussion sections to be separate, others do not—qualitative research papers often include results and interpretations in the same section (“Results and Discussion”).
  • Reading the aims and scope in the journal’s “ guide for authors ” section and understanding the interests of its readers will be invaluable in preparing to write the Results section.

Step 2 : Consider your research results in relation to the journal’s requirements and catalogue your results.

  • Focus on experimental results and other findings that are especially relevant to your research questions and objectives and include them even if they are unexpected or do not support your ideas and hypotheses.
  • Catalogue your findings—use subheadings to streamline and clarify your report. This will help you avoid excessive and peripheral details as you write and also help your reader understand and remember your findings. Create appendices that might interest specialists but prove too long or distracting for other readers.
  • Decide how you will structure your results. You might match the order of the research questions and hypotheses to your results, or you could arrange them according to the order presented in the Methods section. A chronological order or even a hierarchy of importance or meaningful grouping of main themes or categories might prove effective. Consider your audience, evidence, and most importantly, the objectives of your research when choosing a structure for presenting your findings.

Step 3 : Design figures and tables to present and illustrate your data.

  • Tables and figures should be numbered according to the order in which they are mentioned in the main text of the paper.
  • Information in figures should be relatively self-explanatory (with the aid of captions), and their design should include all definitions and other information necessary for readers to understand the findings without reading all of the text.
  • Use tables and figures as a focal point to tell a clear and informative story about your research and avoid repeating information. But remember that while figures clarify and enhance the text, they cannot replace it.

Step 4 : Draft your Results section using the findings and figures you have organized.

  • The goal is to communicate this complex information as clearly and precisely as possible; precise and compact phrases and sentences are most effective.
  • In the opening paragraph of this section, restate your research questions or aims to focus the reader’s attention on what the results are trying to show. It is also a good idea to summarize key findings at the end of this section to create a logical transition to the interpretation and discussion that follows.
  • Try to write in the past tense and the active voice to relay the findings since the research has already been done and the agent is usually clear. This will ensure that your explanations are also clear and logical.
  • Make sure that any specialized terminology or abbreviation you have used here has been defined and clarified in the  Introduction section .

Step 5 : Review your draft; edit and revise until it reports results exactly as you would like to have them reported to your readers.

  • Double-check the accuracy and consistency of all the data, as well as all of the visual elements included.
  • Read your draft aloud to catch language errors (grammar, spelling, and mechanics), awkward phrases, and missing transitions.
  • Ensure that your results are presented in the best order to focus on objectives and prepare readers for interpretations, valuations, and recommendations in the Discussion section . Look back over the paper’s Introduction and background while anticipating the Discussion and Conclusion sections to ensure that the presentation of your results is consistent and effective.
  • Consider seeking additional guidance on your paper. Find additional readers to look over your Results section and see if it can be improved in any way. Peers, professors, or qualified experts can provide valuable insights.

One excellent option is to use a professional English proofreading and editing service  such as Wordvice, including our paper editing service . With hundreds of qualified editors from dozens of scientific fields, Wordvice has helped thousands of authors revise their manuscripts and get accepted into their target journals. Read more about the  proofreading and editing process  before proceeding with getting academic editing services and manuscript editing services for your manuscript.

As the representation of your study’s data output, the Results section presents the core information in your research paper. By writing with clarity and conciseness and by highlighting and explaining the crucial findings of their study, authors increase the impact and effectiveness of their research manuscripts.

For more articles and videos on writing your research manuscript, visit Wordvice’s Resources page.


Helpful Tips on Composing a Research Paper Data Analysis Section

If you are given a research paper assignment, you should create a list of tasks to be done and try to stick to your working schedule. It is recommended that you complete your research before you start writing. One of the important steps is preparing your data analysis section. This step is vital, as it explains how the data will be described in the results section. Use the following tips to complete that section without a hitch.


How to Compose a Data Analysis Section for Your Research Paper

Usually, a data analysis section is provided right after the methods and approaches used. There, you should explain how you organized your data, what statistical tests were applied, and how you evaluated the obtained results. Follow these simple tips to compose a strong piece of writing:

  • Avoid analyzing your results in the data analysis section.
  • Indicate whether your research is quantitative or qualitative.
  • Provide your main research questions and the analysis methods that were applied to answer them.
  • Report what software you used to gather and analyze your data.
  • List the data sources, including electronic archives and online reports of different institutions.
  • Explain how the data were summarized and what measures of variability you have used.
  • Remember to mention the data transformations if any, including data normalizing.
  • Make sure that you included the full name of statistical tests used.
  • Describe graphical techniques used to analyze the raw data and the results.

Where to Find the Necessary Assistance If You Get Stuck

Research paper writing is hard, so if you get stuck, do not wait for enlightenment; start searching for assistance. It is a good idea to consult a statistics expert if you have a large amount of data and no idea how to summarize it. Your academic advisor may suggest where to find a statistician to answer your questions.

Another great help option is getting a sample of a data analysis section. At the school’s library, you can find sample research papers written by your fellow students, get a few works, and study how the students analyzed data. Pay special attention to the word choices and the structure of the writing.

If you decide to follow a section template, you should be careful and keep your professor’s instructions in mind. For example, you may be asked to place all the page-long data tables in the appendices or build graphs instead of providing tables.



How to write the results section of a research paper


At its core, a research paper aims to fill a gap in the research on a given topic. As a result, the results section of the paper, which describes the key findings of the study, is often considered the core of the paper. This is the section that gets the most attention from reviewers, peers, students, and any news organization reporting on your findings. Writing a clear, concise, and logical results section is, therefore, one of the most important parts of preparing your manuscript.

Difference between results and discussion

Before delving into how to write the results section, it is important to understand the difference between the results and discussion sections. The results section needs to detail the findings of the study. The aim of this section is not to draw connections between the different findings or to compare them to previous findings in the literature—that is the purview of the discussion section. Unlike the discussion section, which can touch upon the hypothetical, the results section needs to focus on the purely factual. In some cases, it may even be preferable to combine these two sections into one. For example, when writing a review article, it can be worthwhile to combine them, as the main results in this case are the conclusions that can be drawn from the literature.

Structure of the results section

Although the main purpose of the results section in a research paper is to report the findings, it is necessary to present an introduction and repeat the research question. This establishes a connection to the previous section of the paper and creates a smooth flow of information.

Next, the results section needs to communicate the findings of your research in a systematic manner. The section needs to be organized such that the primary research question is addressed first, then the secondary research questions. If the research addresses multiple questions, the results section must individually connect with each of the questions. This ensures clarity and minimizes confusion while reading.

Consider representing your results visually. For example, graphs, tables, and other figures can help illustrate the findings of your paper, especially if there is a large amount of data in the results.

Remember, an appealing results section can help peer reviewers better understand the merits of your research, thereby increasing your chances of publication.

Practical guidance for writing an effective results section for a research paper

  • Always use simple and clear language. Avoid vague or ambiguous expressions.
  • The findings of the study must be expressed in an objective and unbiased manner. While it is acceptable to correlate certain findings in the discussion section, it is best to avoid overinterpreting the results.
  • If the research addresses more than one hypothesis, use sub-sections to describe the results. This prevents confusion and promotes understanding.
  • Ensure that negative results are included in this section, even if they do not support the research hypothesis.
  • Wherever possible, use illustrations like tables, figures, charts, or other visual representations to showcase the results of your research paper. Mention these illustrations in the text, but do not repeat the information that they convey.
  • For statistical data, it is adequate to highlight the tests and explain their results. The initial or raw data should not be mentioned in the results section of a research paper.

The results section of a research paper is usually the most impactful section because it draws the greatest attention. Regardless of the subject of your research paper, a well-written results section is capable of generating interest in your research.

For detailed information and assistance on writing the results of a research paper, refer to Elsevier Author Services.


AI Index Report

The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI. The report aims to be the world’s most credible and authoritative source for data and insights about AI.

Subscribe to receive the 2024 report in your inbox!

AI Index coming soon

Coming Soon: 2024 AI Index Report!

The 2024 AI Index Report will be out April 15! Sign up for our mailing list to receive it in your inbox.

Steering Committee Co-Directors

Jack Clark

Ray Perrault

Steering Committee Members

Erik Brynjolfsson

John Etchemendy

Katrina Ligett

Terah Lyons

James Manyika

Juan Carlos Niebles

Vanessa Parli

Yoav Shoham

Russell Wald

Staff Members

Loredana Fattorini

Nestor Maslej

Letter from the Co-Directors

AI has moved into its era of deployment; throughout 2022 and the beginning of 2023, new large-scale AI models have been released every month. These models, such as ChatGPT, Stable Diffusion, Whisper, and DALL-E 2, are capable of an increasingly broad range of tasks, from text manipulation and analysis, to image generation, to unprecedentedly good speech recognition. These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new. However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment.

Although 2022 was the first year in a decade where private AI investment decreased, AI is still a topic of great interest to policymakers, industry leaders, researchers, and the public. Policymakers are talking about AI more than ever before. Industry leaders that have integrated AI into their businesses are seeing tangible cost and revenue benefits. The number of AI publications and collaborations continues to increase. And the public is forming sharper opinions about AI and which elements they like or dislike.

AI will continue to improve and, as such, become a greater part of all our lives. Given the increased presence of this technology and its potential for massive disruption, we should all begin thinking more critically about how exactly we want AI to be developed and deployed. We should also ask questions about who is deploying it—as our analysis shows, AI is increasingly defined by the actions of a small set of private sector actors, rather than a broader range of societal actors. This year’s AI Index paints a picture of where we are so far with AI, in order to highlight what might await us in the future.

- Jack Clark and Ray Perrault


Can J Hosp Pharm. v.68(3); May–Jun 2015

Qualitative Research: Data Collection, Analysis, and Management

INTRODUCTION

In an earlier paper, 1 we presented an introduction to using qualitative research methods in pharmacy practice. In this article, we review some principles of the collection, analysis, and management of qualitative data to help pharmacists interested in doing research in their practice to continue their learning in this area. Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. Whereas quantitative research methods can be used to determine how many people undertake particular behaviours, qualitative methods can help researchers to understand how and why such behaviours take place. Within the context of pharmacy practice research, qualitative approaches have been used to examine a diverse array of topics, including the perceptions of key stakeholders regarding prescribing by pharmacists and the postgraduation employment experiences of young pharmacists (see “Further Reading” section at the end of this article).

In the previous paper, 1 we outlined 3 commonly used methodologies: ethnography 2 , grounded theory 3 , and phenomenology. 4 Briefly, ethnography involves researchers using direct observation to study participants in their “real life” environment, sometimes over extended periods. Grounded theory and its later modified versions (e.g., Strauss and Corbin 5 ) use face-to-face interviews and interactions such as focus groups to explore a particular research phenomenon and may help in clarifying a less-well-understood problem, situation, or context. Phenomenology shares some features with grounded theory (such as an exploration of participants’ behaviour) and uses similar techniques to collect data, but it focuses on understanding how human beings experience their world. It gives researchers the opportunity to put themselves in another person’s shoes and to understand the subjective experiences of participants. 6 Some researchers use qualitative methodologies but adopt a different standpoint, and an example of this appears in the work of Thurston and others, 7 discussed later in this paper.

Qualitative work requires reflection on the part of researchers, both before and during the research process, as a way of providing context and understanding for readers. When being reflexive, researchers should not try to simply ignore or avoid their own biases (as this would likely be impossible); instead, reflexivity requires researchers to reflect upon and clearly articulate their position and subjectivities (world view, perspectives, biases), so that readers can better understand the filters through which questions were asked, data were gathered and analyzed, and findings were reported. From this perspective, bias and subjectivity are not inherently negative but they are unavoidable; as a result, it is best that they be articulated up-front in a manner that is clear and coherent for readers.

THE PARTICIPANT’S VIEWPOINT

What qualitative study seeks to convey is why people have thoughts and feelings that might affect the way they behave. Such study may occur in any number of contexts, but here, we focus on pharmacy practice and the way people behave with regard to medicines use (e.g., to understand patients’ reasons for nonadherence with medication therapy or to explore physicians’ resistance to pharmacists’ clinical suggestions). As we suggested in our earlier article, 1 an important point about qualitative research is that there is no attempt to generalize the findings to a wider population. Qualitative research is used to gain insights into people’s feelings and thoughts, which may provide the basis for a future stand-alone qualitative study or may help researchers to map out survey instruments for use in a quantitative study. It is also possible to use different types of research in the same study, an approach known as “mixed methods” research, and further reading on this topic may be found at the end of this paper.

The role of the researcher in qualitative research is to attempt to access the thoughts and feelings of study participants. This is not an easy task, as it involves asking people to talk about things that may be very personal to them. Sometimes the experiences being explored are fresh in the participant’s mind, whereas on other occasions reliving past experiences may be difficult. However the data are being collected, a primary responsibility of the researcher is to safeguard participants and their data. Mechanisms for such safeguarding must be clearly articulated to participants and must be approved by a relevant research ethics review board before the research begins. Researchers and practitioners new to qualitative research should seek advice from an experienced qualitative researcher before embarking on their project.

DATA COLLECTION

Whatever philosophical standpoint the researcher is taking and whatever the data collection method (e.g., focus group, one-to-one interviews), the process will involve the generation of large amounts of data. In addition to the variety of study methodologies available, there are also different ways of making a record of what is said and done during an interview or focus group, such as taking handwritten notes or video-recording. If the researcher is audio- or video-recording data collection, then the recordings must be transcribed verbatim before data analysis can begin. As a rough guide, it can take an experienced researcher/transcriber 8 hours to transcribe one 45-minute audio-recorded interview, a process that will generate 20–30 pages of written dialogue.
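The rule of thumb above (roughly 8 hours of transcription per 45-minute interview, yielding 20–30 pages) can be turned into a back-of-envelope planning estimate. The function below is an illustrative sketch, not part of the original paper, and the rates are approximations:

```python
# A rough planning estimate from the rule of thumb above: ~8 hours of
# transcription per 45-minute interview, producing 20-30 pages each.
# These rates are approximations, not fixed figures.
def transcription_estimate(n_interviews, avg_minutes,
                           hours_per_45min=8.0, pages_per_45min=25):
    factor = n_interviews * avg_minutes / 45.0
    return round(factor * hours_per_45min, 1), round(factor * pages_per_45min)

hours, pages = transcription_estimate(10, 60)  # ten one-hour interviews
print(hours, pages)
```

For ten one-hour interviews this suggests on the order of a hundred hours of transcription, which is worth knowing before budgeting a project or hiring a professional transcriber.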

Many researchers will also maintain a folder of “field notes” to complement audio-taped interviews. Field notes allow the researcher to maintain and comment upon impressions, environmental contexts, behaviours, and nonverbal cues that may not be adequately captured through the audio-recording; they are typically handwritten in a small notebook at the same time the interview takes place. Field notes can provide important context to the interpretation of audio-taped data and can help remind the researcher of situational factors that may be important during data analysis. Such notes need not be formal, but they should be maintained and secured in a similar manner to audio tapes and transcripts, as they contain sensitive information and are relevant to the research. For more information about collecting qualitative data, please see the “Further Reading” section at the end of this paper.

DATA ANALYSIS AND MANAGEMENT

If, as suggested earlier, doing qualitative research is about putting oneself in another person’s shoes and seeing the world from that person’s perspective, the most important part of data analysis and management is to be true to the participants. It is their voices that the researcher is trying to hear, so that they can be interpreted and reported on for others to read and learn from. To illustrate this point, consider the anonymized transcript excerpt presented in Appendix 1 , which is taken from a research interview conducted by one of the authors (J.S.). We refer to this excerpt throughout the remainder of this paper to illustrate how data can be managed, analyzed, and presented.

Interpretation of Data

Interpretation of the data will depend on the theoretical standpoint taken by researchers. For example, the title of the research report by Thurston and others, 7 “Discordant indigenous and provider frames explain challenges in improving access to arthritis care: a qualitative study using constructivist grounded theory,” indicates at least 2 theoretical standpoints. The first is the culture of the indigenous population of Canada and the place of this population in society, and the second is the social constructivist theory used in the constructivist grounded theory method. With regard to the first standpoint, it can be surmised that, to have decided to conduct the research, the researchers must have felt that there was anecdotal evidence of differences in access to arthritis care for patients from indigenous and non-indigenous backgrounds. With regard to the second standpoint, it can be surmised that the researchers used social constructivist theory because it assumes that behaviour is socially constructed; in other words, people do things because of the expectations of those in their personal world or in the wider society in which they live. (Please see the “Further Reading” section for resources providing more information about social constructivist theory and reflexivity.) Thus, these 2 standpoints (and there may have been others relevant to the research of Thurston and others 7 ) will have affected the way in which these researchers interpreted the experiences of the indigenous population participants and those providing their care. Another standpoint is feminist standpoint theory which, among other things, focuses on marginalized groups in society. Such theories are helpful to researchers, as they enable us to think about things from a different perspective. Being aware of the standpoints you are taking in your own research is one of the foundations of qualitative work. 
Without such awareness, it is easy to slip into interpreting other people’s narratives from your own viewpoint, rather than that of the participants.

To analyze the example in Appendix 1 , we will adopt a phenomenological approach because we want to understand how the participant experienced the illness and we want to try to see the experience from that person’s perspective. It is important for the researcher to reflect upon and articulate his or her starting point for such analysis; in this case, the coder could reflect upon her own experience as a female of a majority ethnocultural group who has lived within middle class and upper middle class settings. This personal history therefore forms the filter through which the data will be examined. This filter does not diminish the quality or significance of the analysis, since every researcher has his or her own filters; however, by explicitly stating and acknowledging what these filters are, the researcher makes it easier for readers to contextualize the work.

Transcribing and Checking

For the purposes of this paper it is assumed that interviews or focus groups have been audio-recorded. As mentioned above, transcribing is an arduous process, even for the most experienced transcribers, but it must be done to convert the spoken word to the written word to facilitate analysis. For anyone new to conducting qualitative research, it is beneficial to transcribe at least one interview and one focus group. It is only by doing this that researchers realize how difficult the task is, and this realization affects their expectations when asking others to transcribe. If the research project has sufficient funding, then a professional transcriber can be hired to do the work. If this is the case, then it is a good idea to sit down with the transcriber, if possible, and talk through the research and what the participants were talking about. This background knowledge for the transcriber is especially important in research in which people are using jargon or medical terms (as in pharmacy practice). Involving your transcriber in this way makes the work both easier and more rewarding, as he or she will feel part of the team. Transcription editing software is also available, but it is expensive. For example, ELAN (more formally known as EUDICO Linguistic Annotator, developed at the Max Planck Institute for Psycholinguistics in the Netherlands) 8 is a tool that can help keep data organized by linking media and data files (particularly valuable if, for example, video-taping of interviews is complemented by transcriptions). It can also be helpful in searching complex data sets. Products such as ELAN do not actually automatically transcribe interviews or complete analyses, and they do require some time and effort to learn; nonetheless, for some research applications, it may be valuable to consider such software tools.

All audio recordings should be transcribed verbatim, regardless of how intelligible the transcript may be when it is read back. Lines of text should be numbered. Once the transcription is complete, the researcher should read it while listening to the recording and do the following: correct any spelling or other errors; anonymize the transcript so that the participant cannot be identified from anything that is said (e.g., names, places, significant events); insert notations for pauses, laughter, looks of discomfort; insert any punctuation, such as commas and full stops (periods) (see Appendix 1 for examples of inserted punctuation), and include any other contextual information that might have affected the participant (e.g., temperature or comfort of the room).
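Two of the checking steps above (anonymizing the transcript and numbering its lines) lend themselves to light scripting. The sketch below is a hypothetical illustration, assuming a simple mapping from real names and places to neutral labels; all names here are invented, and a real project would still need manual review to catch identifying details that simple substitution misses:

```python
import re

def prepare_transcript(text, replacements):
    """Anonymize identifying details, then number each line of a transcript.

    `replacements` maps real names/places to neutral labels,
    e.g. {"Dr Jones": "Dr XXX"}; all values here are invented.
    """
    for real, label in replacements.items():
        # Whole-word match so short names don't hit inside longer words.
        text = re.sub(r"\b" + re.escape(real) + r"\b", label, text)
    lines = text.splitlines()
    width = len(str(len(lines)))  # pad numbers so they line up
    return "\n".join(f"{i:>{width}}  {line}"
                     for i, line in enumerate(lines, start=1))

raw = ("I saw Dr Jones at St Mary's.\n"
       "Dr Jones never asked about my life.")
print(prepare_transcript(raw, {"Dr Jones": "Dr XXX",
                               "St Mary's": "the hospital"}))
```

Numbered lines make it easy to refer back to specific passages during coding, as is done with the excerpt in Appendix 1.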

Dealing with the transcription of a focus group is slightly more difficult, as multiple voices are involved. One way of transcribing such data is to “tag” each voice (e.g., Voice A, Voice B). In addition, the focus group will usually have 2 facilitators, whose respective roles will help in making sense of the data. While one facilitator guides participants through the topic, the other can make notes about context and group dynamics. More information about group dynamics and focus groups can be found in resources listed in the “Further Reading” section.

Reading between the Lines

During the process outlined above, the researcher can begin to get a feel for the participant’s experience of the phenomenon in question and can start to think about things that could be pursued in subsequent interviews or focus groups (if appropriate). In this way, one participant’s narrative informs the next, and the researcher can continue to interview until nothing new is being heard or, as the textbooks say, “saturation is reached”. While continuing with the processes of coding and theming (described in the next 2 sections), it is important to consider not just what the person is saying but also what they are not saying. For example, is a lengthy pause an indication that the participant is finding the subject difficult, or is the person simply deciding what to say? The aim of the whole process from data collection to presentation is to tell the participants’ stories using exemplars from their own narratives, thus grounding the research findings in the participants’ lived experiences.

Smith 9 suggested a qualitative research method known as interpretative phenomenological analysis, which has 2 basic tenets: first, that it is rooted in phenomenology, attempting to understand the meaning that individuals ascribe to their lived experiences, and second, that the researcher must attempt to interpret this meaning in the context of the research. That the researcher has some knowledge and expertise in the subject of the research means that he or she can have considerable scope in interpreting the participant’s experiences. Larkin and others 10 discussed the importance of not just providing a description of what participants say. Rather, interpretative phenomenological analysis is about getting underneath what a person is saying to try to truly understand the world from his or her perspective.

Once all of the research interviews have been transcribed and checked, it is time to begin coding. Field notes compiled during an interview can be a useful complementary source of information to facilitate this process, as the gap in time between an interview, transcribing, and coding can result in memory bias regarding nonverbal or environmental context issues that may affect interpretation of data.

Coding refers to the identification of topics, issues, similarities, and differences that are revealed through the participants’ narratives and interpreted by the researcher. This process enables the researcher to begin to understand the world from each participant’s perspective. Coding can be done by hand on a hard copy of the transcript, by making notes in the margin or by highlighting and naming sections of text. More commonly, researchers use qualitative research software (e.g., NVivo, QSR International Pty Ltd; www.qsrinternational.com/products_nvivo.aspx ) to help manage their transcriptions. It is advised that researchers undertake a formal course in the use of such software or seek supervision from a researcher experienced in these tools.

Returning to Appendix 1 and reading from lines 8–11, a code for this section might be “diagnosis of mental health condition”, but this would just be a description of what the participant is talking about at that point. If we read a little more deeply, we can ask ourselves how the participant might have come to feel that the doctor assumed he or she was aware of the diagnosis or indeed that they had only just been told the diagnosis. There are a number of pauses in the narrative that might suggest the participant is finding it difficult to recall that experience. Later in the text, the participant says “nobody asked me any questions about my life” (line 19). This could be coded simply as “health care professionals’ consultation skills”, but that would not reflect how the participant must have felt never to be asked anything about his or her personal life, about the participant as a human being. At the end of this excerpt, the participant just trails off, recalling that no-one showed any interest, which makes for very moving reading. For practitioners in pharmacy, it might also be pertinent to explore the participant’s experience of akathisia and why this was left untreated for 20 years.
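For researchers coding by hand rather than in software such as NVivo, a simple data structure can keep track of which transcript lines support each code. The sketch below is purely illustrative: the code names echo the Appendix 1 discussion above, but the structure itself is a suggestion of ours, not a feature of any particular tool:

```python
from collections import defaultdict

# Map each code to the transcript passages that support it.
# Code names follow the Appendix 1 discussion; line numbers and
# excerpts are abbreviated for illustration.
codes = defaultdict(list)

def apply_code(codes, name, line_range, excerpt):
    """Attach a coded excerpt (with its transcript line range) to a code."""
    codes[name].append({"lines": line_range, "excerpt": excerpt})

apply_code(codes, "diagnosis of mental health condition", (8, 11),
           "well nobody's told me that")
apply_code(codes, "health care professionals' consultation skills", (19, 19),
           "nobody asked me any questions about my life")

for name, instances in codes.items():
    print(f"{name}: {len(instances)} coded excerpt(s)")
```

Keeping the line range alongside each excerpt preserves the link back to the verbatim transcript, so interpretations can always be checked against the participant’s own words.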

One of the questions that arises about qualitative research relates to the reliability of the interpretation and representation of the participants’ narratives. There are no statistical tests that can be used to check reliability and validity as there are in quantitative research. However, work by Lincoln and Guba 11 suggests that there are other ways to “establish confidence in the ‘truth’ of the findings” (p. 218). They call this confidence “trustworthiness” and suggest that there are 4 criteria of trustworthiness: credibility (confidence in the “truth” of the findings), transferability (showing that the findings have applicability in other contexts), dependability (showing that the findings are consistent and could be repeated), and confirmability (the extent to which the findings of a study are shaped by the respondents and not researcher bias, motivation, or interest).

One way of establishing the “credibility” of the coding is to ask another researcher to code the same transcript and then to discuss any similarities and differences in the 2 resulting sets of codes. This simple act can result in revisions to the codes and can help to clarify and confirm the research findings.
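The comparison of two coders’ work described above is a discussion, not a statistical test, but a small script can lay out the raw agreement and disagreement to structure that discussion. The sketch below computes a simple Jaccard-style overlap between two hypothetical code lists; formal reliability statistics (e.g., Cohen’s kappa) exist but go beyond what is described here:

```python
def code_overlap(coder_a, coder_b):
    """Compare two coders' code lists for the same transcript."""
    a, b = set(coder_a), set(coder_b)
    union = a | b
    return {
        "shared": sorted(a & b),
        "only_a": sorted(a - b),
        "only_b": sorted(b - a),
        # Jaccard index: shared codes as a fraction of all distinct codes
        "agreement": len(a & b) / len(union) if union else 1.0,
    }

result = code_overlap(
    ["diagnosis", "consultation skills", "side effects"],
    ["diagnosis", "side effects", "being labelled"],
)
print(result["agreement"])  # 2 shared of 4 distinct codes -> 0.5
```

The "only_a" and "only_b" lists are the useful output in practice: they identify exactly which codes the two researchers need to discuss and reconcile.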

Theming refers to the drawing together of codes from one or more transcripts to present the findings of qualitative research in a coherent and meaningful way. For example, there may be examples across participants’ narratives of the way in which they were treated in hospital, such as “not being listened to” or “lack of interest in personal experiences” (see Appendix 1 ). These may be drawn together as a theme running through the narratives that could be named “the patient’s experience of hospital care”. The importance of going through this process is that at its conclusion, it will be possible to present the data from the interviews using quotations from the individual transcripts to illustrate the source of the researchers’ interpretations. Thus, when the findings are organized for presentation, each theme can become the heading of a section in the report or presentation. Underneath each theme will be the codes, examples from the transcripts, and the researcher’s own interpretation of what the themes mean. Implications for real life (e.g., the treatment of people with chronic mental health problems) should also be given.
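Because each theme becomes a section heading with its codes and quotations underneath, the theme-to-code-to-quotation hierarchy maps naturally onto a nested structure. The sketch below is a hypothetical illustration using the example theme and codes mentioned above, with quotations drawn from Appendix 1:

```python
# Themes group related codes; each theme becomes a section heading in
# the report, with codes and supporting quotations nested underneath.
themes = {
    "The patient's experience of hospital care": {
        "not being listened to": [
            "nobody actually sat down and had a talk",
        ],
        "lack of interest in personal experiences": [
            "nobody asked me any questions about my life",
        ],
    },
}

def outline(themes):
    """Render the theme/code/quotation hierarchy as a report outline."""
    lines = []
    for theme, codes in themes.items():
        lines.append(theme.upper())           # theme -> section heading
        for code, quotes in codes.items():
            lines.append(f"  - {code}")       # code under its theme
            lines.extend(f'      "{q}"' for q in quotes)
    return "\n".join(lines)

print(outline(themes))
```

An outline generated this way is only scaffolding; the researcher’s interpretation, written around the quotations, is what turns it into findings.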

DATA SYNTHESIS

In this final section of this paper, we describe some ways of drawing together or “synthesizing” research findings to represent, as faithfully as possible, the meaning that participants ascribe to their life experiences. This synthesis is the aim of the final stage of qualitative research. For most readers, the synthesis of data presented by the researcher is of crucial significance—this is usually where “the story” of the participants can be distilled, summarized, and told in a manner that is both respectful to those participants and meaningful to readers. There are a number of ways in which researchers can synthesize and present their findings, but any conclusions drawn by the researchers must be supported by direct quotations from the participants. In this way, it is made clear to the reader that the themes under discussion have emerged from the participants’ interviews and not the mind of the researcher. The work of Latif and others 12 gives an example of how qualitative research findings might be presented.

Planning and Writing the Report

As has been suggested above, if researchers code and theme their material appropriately, they will naturally find the headings for sections of their report. Qualitative researchers tend to report “findings” rather than “results”, as the latter term typically implies that the data have come from a quantitative source. The final presentation of the research will usually be in the form of a report or a paper and so should follow accepted academic guidelines. In particular, the article should begin with an introduction, including a literature review and rationale for the research. There should be a section on the chosen methodology and a brief discussion about why qualitative methodology was most appropriate for the study question and why one particular methodology (e.g., interpretative phenomenological analysis rather than grounded theory) was selected to guide the research. The method itself should then be described, including ethics approval, choice of participants, mode of recruitment, and method of data collection (e.g., semistructured interviews or focus groups), followed by the research findings, which will be the main body of the report or paper. The findings should be written as if a story is being told; as such, it is not necessary to have a lengthy discussion section at the end. This is because much of the discussion will take place around the participants’ quotes, such that all that is needed to close the report or paper is a summary, limitations of the research, and the implications that the research has for practice. As stated earlier, it is not the intention of qualitative research to allow the findings to be generalized, and therefore this is not, in itself, a limitation.

Planning out the way that findings are to be presented is helpful. It is useful to insert the headings of the sections (the themes) and then make a note of the codes that exemplify the thoughts and feelings of your participants. It is generally advisable to put in the quotations that you want to use for each theme, using each quotation only once. After all this is done, the telling of the story can begin as you give your voice to the experiences of the participants, writing around their quotations. Do not be afraid to draw assumptions from the participants’ narratives, as this is necessary to give an in-depth account of the phenomena in question. Discuss these assumptions, drawing on your participants’ words to support you as you move from one code to another and from one theme to the next. Finally, as appropriate, it is possible to include examples from literature or policy documents that add support for your findings. As an exercise, you may wish to code and theme the sample excerpt in Appendix 1 and tell the participant’s story in your own way. Further reading about “doing” qualitative research can be found at the end of this paper.

CONCLUSIONS

Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. It can be used in pharmacy practice research to explore how patients feel about their health and their treatment. Qualitative research has been used by pharmacists to explore a variety of questions and problems (see the “Further Reading” section for examples). An understanding of these issues can help pharmacists and other health care professionals to tailor health care to match the individual needs of patients and to develop a concordant relationship. Doing qualitative research is not easy and may require a complete rethink of how research is conducted, particularly for researchers who are more familiar with quantitative approaches. There are many ways of conducting qualitative research, and this paper has covered some of the practical issues regarding data collection, analysis, and management. Further reading around the subject will be essential to truly understand this method of accessing people’s thoughts and feelings to enable researchers to tell participants’ stories.

Appendix 1. Excerpt from a sample transcript

The participant (age late 50s) had suffered from a chronic mental health illness for 30 years. The participant had become a “revolving door patient,” someone who is frequently in and out of hospital. As the participant talked about past experiences, the researcher asked:

  • What was treatment like 30 years ago?
  • Umm—well it was pretty much they could do what they wanted with you because I was put into the er, the er kind of system er, I was just on
  • endless section threes.
  • Really…
  • But what I didn’t realize until later was that if you haven’t actually posed a threat to someone or yourself they can’t really do that but I didn’t know
  • that. So wh-when I first went into hospital they put me on the forensic ward ’cause they said, “We don’t think you’ll stay here we think you’ll just
  • run-run away.” So they put me then onto the acute admissions ward and – er – I can remember one of the first things I recall when I got onto that
  • ward was sitting down with a er a Dr XXX. He had a book this thick [gestures] and on each page it was like three questions and he went through
  • all these questions and I answered all these questions. So we’re there for I don’t maybe two hours doing all that and he asked me he said “well
  • when did somebody tell you then that you have schizophrenia” I said “well nobody’s told me that” so he seemed very surprised but nobody had
  • actually [pause] whe-when I first went up there under police escort erm the senior kind of consultants people I’d been to where I was staying and
  • ermm so er [pause] I . . . the, I can remember the very first night that I was there and given this injection in this muscle here [gestures] and just
  • having dreadful side effects the next day I woke up [pause]
  • . . . and I suffered that akathesia I swear to you, every minute of every day for about 20 years.
  • Oh how awful.
  • And that side of it just makes life impossible so the care on the wards [pause] umm I don’t know it’s kind of, it’s kind of hard to put into words
  • [pause]. Because I’m not saying they were sort of like not friendly or interested but then nobody ever seemed to want to talk about your life [pause]
  • nobody asked me any questions about my life. The only questions that came into was they asked me if I’d be a volunteer for these student exams
  • and things and I said “yeah” so all the questions were like “oh what jobs have you done,” er about your relationships and things and er but
  • nobody actually sat down and had a talk and showed some interest in you as a person you were just there basically [pause] um labelled and you
  • know there was there was [pause] but umm [pause] yeah . . .

This article is the 10th in the CJHP Research Primer Series, an initiative of the CJHP Editorial Board and the CSHP Research Committee. The planned 2-year series is intended to appeal to relatively inexperienced researchers, with the goal of building research capacity among practising pharmacists. The articles, presenting simple but rigorous guidance to encourage and support novice researchers, are being solicited from authors with appropriate expertise.

Previous articles in this series:

Bond CM. The research jigsaw: how to get started. Can J Hosp Pharm. 2014;67(1):28–30.

Tully MP. Research: articulating questions, generating hypotheses, and choosing study designs. Can J Hosp Pharm. 2014;67(1):31–4.

Loewen P. Ethical issues in pharmacy practice research: an introductory guide. Can J Hosp Pharm. 2014;67(2):133–7.

Tsuyuki RT. Designing pharmacy practice research trials. Can J Hosp Pharm. 2014;67(3):226–9.

Bresee LC. An introduction to developing surveys for pharmacy practice research. Can J Hosp Pharm. 2014;67(4):286–91.

Gamble JM. An introduction to the fundamentals of cohort and case–control studies. Can J Hosp Pharm. 2014;67(5):366–72.

Austin Z, Sutton J. Qualitative research: getting started. Can J Hosp Pharm. 2014;67(6):436–40.

Houle S. An introduction to the fundamentals of randomized controlled trials in pharmacy research. Can J Hosp Pharm. 2015;68(1):28–32.

Charrois TL. Systematic reviews: What do you need to know to get started? Can J Hosp Pharm. 2015;68(2):144–8.

Competing interests: None declared.

Further Reading

Examples of qualitative research in pharmacy practice.

  • Farrell B, Pottie K, Woodend K, Yao V, Dolovich L, Kennie N, et al. Shifts in expectations: evaluating physicians’ perceptions as pharmacists integrated into family practice. J Interprof Care. 2010;24(1):80–9.
  • Gregory P, Austin Z. Postgraduation employment experiences of new pharmacists in Ontario in 2012–2013. Can Pharm J. 2014;147(5):290–9.
  • Marks PZ, Jennings B, Farrell B, Kennie-Kaulbach N, Jorgenson D, Pearson-Sharpe J, et al. “I gained a skill and a change in attitude”: a case study describing how an online continuing professional education course for pharmacists supported achievement of its transfer to practice outcomes. Can J Univ Contin Educ. 2014;40(2):1–18.
  • Nair KM, Dolovich L, Brazil K, Raina P. It’s all about relationships: a qualitative study of health researchers’ perspectives on interdisciplinary research. BMC Health Serv Res. 2008;8:110.
  • Pojskic N, MacKeigan L, Boon H, Austin Z. Initial perceptions of key stakeholders in Ontario regarding independent prescriptive authority for pharmacists. Res Soc Adm Pharm. 2014;10(2):341–54.

Qualitative Research in General

  • Breakwell GM, Hammond S, Fife-Schaw C. Research methods in psychology. Thousand Oaks (CA): Sage Publications; 1995.
  • Given LM. 100 questions (and answers) about qualitative research. Thousand Oaks (CA): Sage Publications; 2015.
  • Miles MB, Huberman AM. Qualitative data analysis. Thousand Oaks (CA): Sage Publications; 2009.
  • Patton M. Qualitative research and evaluation methods. Thousand Oaks (CA): Sage Publications; 2002.
  • Willig C. Introducing qualitative research in psychology. Buckingham (UK): Open University Press; 2001.

Group Dynamics in Focus Groups

  • Farnsworth J, Boon B. Analysing group dynamics within the focus group. Qual Res. 2010;10(5):605–24.

Social Constructivism

  • Social constructivism. Berkeley (CA): University of California, Berkeley, Berkeley Graduate Division, Graduate Student Instruction Teaching & Resource Center; [cited 2015 June 4]. Available from: http://gsi.berkeley.edu/gsi-guide-contents/learning-theory-research/social-constructivism/

Mixed Methods

  • Creswell J. Research design: qualitative, quantitative, and mixed methods approaches. Thousand Oaks (CA): Sage Publications; 2009.

Collecting Qualitative Data

  • Arksey H, Knight P. Interviewing for social scientists: an introductory resource with examples. Thousand Oaks (CA): Sage Publications; 1999.
  • Guest G, Namey EE, Mitchell ML. Collecting qualitative data: a field manual for applied research. Thousand Oaks (CA): Sage Publications; 2013.

Constructivist Grounded Theory

  • Charmaz K. Grounded theory: objectivist and constructivist methods. In: Denzin N, Lincoln Y, editors. Handbook of qualitative research. 2nd ed. Thousand Oaks (CA): Sage Publications; 2000. pp. 509–35.

National Center for Science and Engineering Statistics


International Collaboration in Selected Critical and Emerging Fields: COVID-19 and Artificial Intelligence

April 11, 2024

Research collaboration is a critical strategy for pooling resources, sharing expertise, and accelerating innovation, and institutions may use collaboration to synthesize novel ideas and bridge knowledge or material gaps (Katz and Hicks 1997; Lee, Walsh, and Wang 2015; Wagner et al. 2001). Ongoing research on the transformative potential of artificial intelligence (AI) and on the mitigation and treatment of COVID-19 in 2020 are two cases in which scientific progress has been important. Both fields have been recognized as national priorities ( https://www.whitehouse.gov/priorities/ ) and pose complex challenges that both domestic and international institutions are motivated to overcome.

A country’s collaboration patterns, both domestic and international, can indicate the presence of expertise or the necessity of knowledge and resource sharing, as countries tend to collaborate internationally less in fields where they have sufficient resources within their own borders (Chinchilla-Rodríguez, Sugimoto, and Larivière 2019). International research collaboration can provide a rapid response to societal challenges, including public health crises (Carvalho et al. 2023) or technological paradigm shifts, and strong international collaborators play a large role in shaping the direction and priorities of research fields worldwide (Leydesdorff and Wagner 2008). A concentration on domestic research can indicate the presence of sufficient domestic knowledge and resources or an interest in preserving in-house expertise. This InfoBrief examines the extent to which top producers of science and engineering (S&E) articles engaged in domestic and international collaborations in AI and COVID-19 research.

Growth in Artificial Intelligence Articles

Between 2003 and 2022, the number of published articles in AI grew faster than the number of articles in computer science,1 due in part to the newness of the AI field compared with the more established field of computer science. AI articles worldwide grew by 1,100% during this period, reaching 123,402 articles in 2022,2 or 4% of all S&E publications globally,3 compared with 290% growth in computer science articles.4 From 2017 to 2022, the six countries with the highest overall publication outputs5 were also the countries with the highest AI research output (China, India, the United States, Japan, the United Kingdom, and Germany) ( figure 1 ). In 2022, the top two producers of AI research articles were China (42,524 articles, or 35% of total AI publication output) and India (22,557, or 18%), followed by the United States (12,642, or 10%). Germany, Japan, and the United Kingdom published similar numbers of publications, ranging between 3,700 and 4,700 articles (3%–4%).
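As a quick sanity check on these growth figures, the reported 1,100% growth and the 123,402-article count for 2022 together imply a 2003 baseline of roughly 10,300 AI articles worldwide. A minimal sketch of the arithmetic (the baseline is inferred from the reported figures, not reported directly):

```python
def percent_growth(initial: float, final: float) -> float:
    """Percentage growth from an initial count to a final count."""
    return (final / initial - 1) * 100

ai_2022 = 123_402  # worldwide AI articles in 2022, as reported

# A 1,100% increase means the final count is 12x the baseline,
# so the implied 2003 baseline is final / (1 + growth/100).
implied_2003 = ai_2022 / (1 + 1100 / 100)

print(round(implied_2003))                            # 10284
print(round(percent_growth(implied_2003, ai_2022)))   # 1100
```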


AI articles, by selected country: 2003–22

AI = artificial intelligence.

AI article counts refer to publications from a selection of conference proceedings and peer-reviewed journals in science and engineering fields from Scopus. The subset of AI articles was determined by All Science Journal Classification subject matter classification, supplemented by an algorithm that used a series of article characteristics to determine the field of papers published in multidisciplinary journals. Articles are classified by their year of publication and are assigned to a region, country, or economy on the basis of the institutional addresses of the authors listed in the article. Articles are credited on a whole count basis (i.e., for articles produced by authors from different countries, each country is credited for one article). Data for all regions, countries, and economies are available in supplemental table SPBS-99 in Publications Output: U.S. Trends and International Comparisons ( https://ncses.nsf.gov/pubs/nsb202333/table/SPBS-99 ).
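The whole-count crediting described in the note above can be expressed as a short routine: every country appearing among an article's author affiliations receives one full article credit, so internationally coauthored articles are counted once per participating country. The affiliation lists below are hypothetical examples, not Scopus data:

```python
from collections import Counter

def whole_count(articles):
    """Credit each country appearing among an article's author
    affiliations with one whole article (whole counting)."""
    totals = Counter()
    for author_countries in articles:
        # Deduplicate within an article so a country with several
        # authors on the same paper is still credited only once.
        for country in set(author_countries):
            totals[country] += 1
    return totals

# Hypothetical affiliation lists, one per article.
articles = [
    ["US", "US", "CN"],  # US-China collaboration: both credited once
    ["US"],              # single-country article
    ["DE", "UK"],
]
counts = whole_count(articles)
print(counts["US"])  # 2
print(counts["CN"])  # 1
```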

National Center for Science and Engineering Statistics; Science-Metrix; Elsevier, Scopus abstract and citation database, accessed April 2023.

Collaboration Trends in Artificial Intelligence Articles

Coauthorship trends on S&E articles shed light on overall collaboration practices. The affiliations of authors with their home institutions and countries are used to infer whether collaboration has occurred across institutions, both domestically and internationally. Three types of collaboration are detailed in this InfoBrief, and an article is the unit of analysis. An article with at least one author from an institution of a given country is classified into one of three categories: an international collaboration, if an author from any other country is present; a domestic collaboration, if all authors are from the same country but are affiliated with more than one institution; or a single institution article, if all authors share the same institutional affiliation or the article is solo authored.
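The three-way classification just described can be captured in a small function. The (country, institution) pairs below are hypothetical examples, not Scopus records:

```python
def classify_collaboration(affiliations):
    """Classify an article from its authors' (country, institution)
    pairs, following the three categories defined in the text."""
    countries = {country for country, _ in affiliations}
    institutions = {inst for _, inst in affiliations}
    if len(countries) > 1:
        return "international collaboration"
    if len(institutions) > 1:
        return "domestic collaboration"
    # Covers both a single shared institution and solo-authored articles.
    return "single institution"

# Hypothetical author affiliations.
print(classify_collaboration([("US", "MIT"), ("UK", "Oxford")]))
print(classify_collaboration([("US", "MIT"), ("US", "Stanford")]))
print(classify_collaboration([("US", "MIT")]))
```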

Collaboration Trends

From 2017 to 2022, 37% of U.S. research papers on AI were the result of international collaboration, placing the United States in the middle of the six top producers of AI research papers: the United Kingdom (61%) and Germany (40%) produced higher rates of internationally collaborative research, while Japan (25%), China (17%), and India (10%) produced lower rates ( figure 2 ). Rates of international collaboration for the United States were slightly lower for AI research papers than for all S&E research papers (37% versus 39%). Likewise, across the other five top producers of AI research papers, rates of international collaboration were lower for AI research papers than for all S&E research papers. Compared with the other countries, China had the greatest proportion of AI papers that were domestic collaborations (41%). Across the six top-producing countries, single institution articles were more common in AI research than in all S&E research (42% versus 26%).

International collaboration, domestic collaboration, and single institution publications on AI research and overall international collaboration on all S&E research, by selected country: 2017–22

AI = artificial intelligence; S&E = science and engineering.

AI articles are assigned to a country or economy on the basis of the institutional addresses of the authors listed in the article. The subset of AI articles was determined by All Science Journal Classification subject matter classification, supplemented by an algorithm that used a series of article characteristics to determine the field of papers published in multidisciplinary journals. Articles are credited on a whole count basis (i.e., for articles produced by authors from different countries, each country is credited for one article). The percentages refer to the proportion of AI articles to feature collaboration or to the proportion of general articles across all fields to feature collaboration. Articles were excluded when one or more coauthored publications had incomplete address information in the Scopus database; therefore, they cannot be reliably identified as international or domestic collaborations. Data for all regions, countries, and economies are available in supplemental table SPBS-99 and supplemental table SPBS-33 in Publications Output: U.S. Trends and International Comparisons ( https://ncses.nsf.gov/pubs/nsb202333/table/SPBS-99 and https://ncses.nsf.gov/pubs/nsb202333/table/SPBS-33 ).

International Collaboration

Overall, scientific research has become increasingly collaborative over time (Gazni, Sugimoto, and Didegah 2012; Wuchty, Jones, and Uzzi 2007). Although the rate of international collaboration in AI publications has been smaller than the rate of international collaboration across all S&E fields over the past 5 years, international collaboration in AI articles has gradually increased overall between 2003 and 2022. By country, international collaborations in AI increased in Japan (from 15% to 28%), the United States (from 24% to 39%), Germany (from 37% to 42%), and the United Kingdom (from 36% to 66%) ( figure 3 ). Over this same time period, India and China did not show an increasing trend, despite some fluctuation. For example, after China exhibited a period of increased international collaboration in AI research, from 7% in 2009 to 23% in 2015, the rate has since decreased to 16% in 2022.

International collaboration on AI articles, by selected country: 2003–22

AI article counts refer to publications from a selection of conference proceedings and peer-reviewed journals in science and engineering fields from Scopus. The subset of AI articles was determined by All Science Journal Classification subject matter classification, supplemented by an algorithm that used a series of article characteristics to determine the field of papers published in multidisciplinary journals. Articles are assigned to a country or economy on the basis of the institutional addresses of the authors listed in the article. Articles are credited on a whole count basis (i.e., for articles produced by authors from different countries, each country is credited for one article). The percentages refer to the proportion of AI articles to feature collaboration. Data for all regions, countries, and economies are available in supplemental table SPBS-99 in Publications Output: U.S. Trends and International Comparisons ( https://ncses.nsf.gov/pubs/nsb202333/table/SPBS-99 ).

Domestic Collaborations and Single Institution Publications

The proportion of single institution publications in AI decreased over time in the United States, from 48% in 2003 to 31% in 2022 ( figure 4 ). Despite this decrease, the proportion of U.S. single institution publications remained higher in AI research than in all S&E research, which decreased from 36% to 20% over the same time period. The rate of domestic collaboration in AI between U.S. institutions remained relatively stable from 2003 to 2022, ranging between 25% and 30%. In China, the proportion of single institution publications in AI decreased from 59% to 38% between 2003 and 2022, albeit with more fluctuation. China’s proportions of single institution publications in AI papers and in all S&E fields were similar until 2007, after which the proportion of single institution papers in AI research became higher, while the overall proportion of single institution papers in all S&E research continued to decrease.

Collaborative and single institution articles on AI and single institution articles on all S&E research in the United States and China: 2003–22

Article counts refer to publications from a selection of conference proceedings and peer-reviewed journals in S&E fields from Scopus. The subset of AI articles was determined by All Science Journal Classification subject matter classification, supplemented by an algorithm that used a series of article characteristics to determine the field of papers published in multidisciplinary journals. Articles are assigned to a country or economy on the basis of the institutional addresses of the authors listed in the article. Articles are credited on a whole count basis (i.e., for articles produced by authors from different countries, each country is credited for one article). The percentages refer to the proportion of AI articles to feature collaboration or to the proportion of general articles across all fields to feature collaboration. Articles were excluded when one or more coauthored publications had incomplete address information in the Scopus database; therefore, they cannot be reliably identified as international or domestic collaborations. Data for all regions, countries, and economies are available in supplemental table SPBS-99 and supplemental table SPBS-33 in Publications Output: U.S. Trends and International Comparisons ( https://ncses.nsf.gov/pubs/nsb202333/table/SPBS-99 and https://ncses.nsf.gov/pubs/nsb202333/table/SPBS-33 ).

COVID-19 Research Collaboration

In 2020, COVID-19 was identified as a national priority ( https://www.whitehouse.gov/priorities/ ), and this shifting priority in research may have impacted collaboration patterns for this research area in 2020. In the same year, 35% of the United States’ published research on COVID-19 involved international collaborations, which was lower than the rates in the United Kingdom (55%), Germany (52%), and Japan (45%) but was higher than the rates in China (27%) and India (28%) ( figure 5 ). The overall rates of international collaboration in the United Kingdom and Germany were higher for all S&E research than for COVID-19 research (65% and 55%, respectively).

International collaboration, domestic collaboration, and single institution publications on COVID-19 research and overall international collaboration on all S&E research, by selected country: 2020

S&E = science and engineering.

Article counts refer to publications from a selection of conference proceedings and peer-reviewed journals in S&E fields from Scopus. Articles are assigned to a country or economy on the basis of the institutional addresses of the authors listed in the article. Articles are credited on a whole count basis (i.e., for articles produced by authors from different countries, each country is credited for one article). The percentages refer to the proportion of COVID-19 articles to feature collaboration or to the proportion of general articles across all fields to feature collaboration. Articles were excluded when one or more coauthored publications had incomplete address information in the Scopus database; therefore, they cannot be reliably identified as international or domestic collaborations. Data for all regions, countries, and economies are available in supplemental table SPBS-91 and supplemental table SPBS-35 in Publications Output: U.S. Trends and International Comparisons ( https://ncses.nsf.gov/pubs/nsb202333/table/SPBS-91 and https://ncses.nsf.gov/pubs/nsb202333/table/SPBS-35 ).

National Center for Science and Engineering Statistics; Science-Metrix; Elsevier, Scopus abstract and citation database, accessed April 2021.

Although each of the top producing countries had a lower rate of international collaboration in AI research than in all S&E research, the results were mixed for COVID-19. As the number of AI articles increased, the rate of international collaboration in AI also increased. For COVID-19 in 2020, only some of the top producing countries had lower rates of international collaboration in COVID-19 research than in all S&E research.

Data Sources, Limitations, and Availability

Publication data are derived from a large database of publication records that were developed for Science and Engineering Indicators 2024, Publications Output: U.S. Trends and International Comparisons (NSB-2023-33), from the Scopus database by Elsevier. The publication counts and coauthorship information presented are derived from information about research articles and conference papers (hereafter referred to collectively as articles) published in conference proceedings and peer-reviewed scientific and technical journals. Elsevier selects journals and conference proceedings for the Scopus database based on evaluation by an international group of subject-matter experts (see NSB-2023-33, Technical Appendix ), and the National Center for Science and Engineering Statistics (NCSES) undertakes additional filtering of the Scopus data to ensure that the statistics presented in Science and Engineering Indicators measure original and high-quality research publications (Science-Metrix 2023). Although the listed affiliation is generally reflective of the locations where research was conducted, authors may have honorary affiliations, have moved, or have experienced other circumstances preventing their affiliations from being an exact corollary to the research environment.

The subset of AI articles was determined by All Science Journal Classification subject matter classification. Global coronavirus publication output data for 2020 were extracted from two different sources. The COVID-19 Open Research Dataset (CORD-19) was created through a partnership between the Office of Science and Technology Policy, the Allen Institute for Artificial Intelligence, the Chan Zuckerberg Initiative, Microsoft Research, Kaggle, and the National Library of Medicine at the National Institutes of Health, coordinated by Georgetown University’s Center for Security and Emerging Technology. CORD-19 is a highly inclusive, noncurated database. The other coronavirus publication output data source was the Scopus database, which permits more refined analysis because it includes more fields (e.g., the institutional country of each author). (See NSB-2021-4, Technical Appendix .)

1 See table SPBS-22 in National Science Board, National Science Foundation. 2023. Publications Output: U.S. Trends and International Comparisons. Science and Engineering Indicators 2024. NSB-2023-33. Available at https://ncses.nsf.gov/pubs/nsb202333 .

2 See NSB-2023-33, table SPBS-99 .

3 See NSB-2023-33, figure PBS-3 .

4 See NSB-2023-33, table SPBS-22 .

5 See NSB-2023-33, figure PBS-3 .

Carvalho DS, Felipe LL, Albuquerque PC, Zicker F, Fonseca BDP. 2023. Leadership and International Collaboration on COVID-19 Research: Reducing the North–South Divide? Scientometrics 128:4689–705. Available at https://doi.org/10.1007/s11192-023-04754-x .

Chinchilla-Rodríguez Z, Sugimoto CR, Larivière V. 2019. Follow the Leader: On the Relationship between Leadership and Scholarly Impact in International Collaborations. PLOS ONE 14:e0218309. Available at https://doi.org/10.1371/journal.pone.0218309 .

Gazni A, Sugimoto CR, Didegah F. 2012. Mapping World Scientific Collaboration: Authors, Institutions, and Countries. Journal of the American Society for Information Science and Technology 63:323–35. Available at https://doi.org/10.1002/asi.21688 .

Katz JS, Hicks D. 1997. How Much Is a Collaboration Worth? A Calibrated Bibliometric Model. Scientometrics 40:541–54. Available at https://doi.org/10.1007/BF02459299 .

Lee Y-N, Walsh JP, Wang J. 2015. Creativity in Scientific Teams: Unpacking Novelty and Impact. Research Policy 44:684–97. Available at https://doi.org/10.1016/j.respol.2014.10.007 .

Leydesdorff L, Wagner CS. 2008. International Collaboration in Science and the Formation of a Core Group. Journal of Informetrics 2:317–25. Available at https://doi.org/10.1016/j.joi.2008.07.003 .

Science-Metrix. 2023. Bibliometric Indicators for the Science and Engineering Indicators 2024. Technical Documentation . Available at https://science-metrix.com/bibliometrics-indicators-for-the-science-and-engineering-indicators-2024-technical-documentation/ . Accessed 26 August 2023.

Wagner CS, Brahmakulam IT, Jackson BA, Wong A, Yoda T. 2001. Science and Technology Collaboration: Building Capacity in Developing Countries? Santa Monica, CA: RAND Corporation. Available at https://www.rand.org/pubs/monograph_reports/MR1357z0.html .

Wuchty S, Jones BF, Uzzi B. 2007. The Increasing Dominance of Teams in Production of Knowledge. Science 316:1036. Available at https://doi.org/10.1126/science.1136099 .

Suggested Citation

Boothby C, Schneider B; National Center for Science and Engineering Statistics (NCSES). 2024. International Collaboration in Selected Critical and Emerging Fields: COVID-19 and Artificial Intelligence. NSF 24-323. Alexandria, VA: National Science Foundation. Available at https://ncses.nsf.gov/pubs/nsf24323 .

Report Authors

Clara Boothby ORISE Fellow NCSES E-mail: [email protected]

Benjamin Schneider Interdisciplinary Science Analyst NCSES Tel: 703.292.8828 E-mail: [email protected]

National Center for Science and Engineering Statistics Directorate for Social, Behavioral and Economic Sciences National Science Foundation 2415 Eisenhower Avenue, Suite W14200 Alexandria, VA 22314 Tel: (703) 292-8780 FIRS: (800) 877-8339 TDD: (800) 281-8749 E-mail: [email protected]


  • Open access
  • Published: 23 September 2023

Educational interventions targeting pregnant women to optimise the use of caesarean section: What are the essential elements? A qualitative comparative analysis

  • Rana Islamiah Zahroh   ORCID: orcid.org/0000-0001-7831-2336 1 ,
  • Katy Sutcliffe   ORCID: orcid.org/0000-0002-5469-8649 2 ,
  • Dylan Kneale   ORCID: orcid.org/0000-0002-7016-978X 2 ,
  • Martha Vazquez Corona   ORCID: orcid.org/0000-0003-2061-9540 1 ,
  • Ana Pilar Betrán   ORCID: orcid.org/0000-0002-5631-5883 3 ,
  • Newton Opiyo   ORCID: orcid.org/0000-0003-2709-3609 3 ,
  • Caroline S. E. Homer   ORCID: orcid.org/0000-0002-7454-3011 4 &
  • Meghan A. Bohren   ORCID: orcid.org/0000-0002-4179-4682 1  

BMC Public Health volume  23 , Article number:  1851 ( 2023 ) Cite this article


Caesarean section (CS) rates are increasing globally, posing risks to women and babies. To reduce CS use, educational interventions targeting pregnant women have been implemented globally; however, their effectiveness varies. To optimise the benefits of these interventions, it is important to understand which intervention components influence success. In this study, we aimed to identify essential intervention components that lead to successful implementation of interventions focusing on pregnant women to optimise CS use.

We re-analysed existing systematic reviews that were used to develop and update WHO guidelines on non-clinical interventions to optimise CS. To identify whether certain combinations of intervention components (e.g., how the intervention was delivered, and contextual characteristics) are associated with successful implementation, we conducted a Qualitative Comparative Analysis (QCA). We defined successful interventions as interventions that were able to reduce CS rates. We included 36 papers, comprising 17 CS intervention studies and an additional 19 sibling studies (e.g., secondary analyses, process evaluations) reporting on these interventions to identify intervention components. We conducted the QCA in six stages: 1) identifying conditions and calibrating the data; 2) constructing truth tables; 3) checking the quality of the truth tables; 4) identifying parsimonious configurations through Boolean minimization; 5) checking the quality of the solution; and 6) interpreting the solutions. We used existing published qualitative evidence syntheses to develop potential theories driving intervention success.
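The truth-table stages of a crisp-set QCA can be illustrated with a minimal sketch: each case's conditions are calibrated to 0/1, cases sharing a configuration are grouped into one truth-table row, and each row's consistency (the share of its cases showing the outcome) is computed. The condition names and calibrated values below are invented for illustration, not taken from the included studies:

```python
from collections import defaultdict

def truth_table(cases):
    """Build a crisp-set truth table: one row per unique configuration
    of binary conditions, with case count and outcome consistency."""
    rows = defaultdict(lambda: {"n": 0, "positive": 0})
    for conditions, outcome in cases:
        row = rows[tuple(conditions)]
        row["n"] += 1
        row["positive"] += outcome
    return {
        config: {"n": r["n"], "consistency": r["positive"] / r["n"]}
        for config, r in rows.items()
    }

# Hypothetical calibrated data:
# (group_delivery, iec_materials) -> reduced_CS_rate (1 = yes)
cases = [
    ((1, 1), 1),
    ((1, 1), 1),
    ((1, 0), 0),
    ((0, 0), 0),
]
table = truth_table(cases)
print(table[(1, 1)])  # {'n': 2, 'consistency': 1.0}
```

Boolean minimization (stage 4) would then reduce the consistent rows to the simplest configurations; dedicated tools handle that step in practice.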

We found that successful interventions were those that leveraged social or peer support through group-based intervention delivery, provided communication materials to women, encouraged emotional support through partner or family participation, and gave women opportunities to interact with health providers. Unsuccessful interventions were characterised by the absence of at least two of these components.

We identified four key intervention components that can lead to successful interventions targeting women to reduce CS: 1) group-based delivery, 2) provision of information, education, and communication (IEC) materials, 3) partner or family member involvement, and 4) opportunities for women to interact with health providers. Maternal health services and hospitals aiming to better prepare women for vaginal birth and reduce CS can consider including these components to optimise health and well-being benefits for the woman and baby.
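The decision pattern reported above (unsuccessful interventions lacked at least two of the four components) can be expressed as a simple predicate. The component labels are shorthand for this sketch, not the paper's formal condition names:

```python
# Shorthand labels for the four components identified in the study.
COMPONENTS = [
    "group_based_delivery",
    "iec_materials",
    "partner_or_family_involvement",
    "interaction_with_providers",
]

def missing_components(intervention: set) -> int:
    """Count how many of the four components an intervention lacks."""
    return sum(1 for c in COMPONENTS if c not in intervention)

def matches_unsuccessful_pattern(intervention: set) -> bool:
    """True when the intervention lacks two or more components,
    the pattern the QCA associated with unsuccessful interventions."""
    return missing_components(intervention) >= 2

# Hypothetical intervention with only two of the four components.
print(matches_unsuccessful_pattern({"group_based_delivery", "iec_materials"}))  # True
```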


Introduction

In recent years, caesarean section (CS) rates have increased globally [ 1 , 2 , 3 , 4 ]. CS can be a life-saving procedure when vaginal birth is not possible; however, it comes with higher risks both in the short- and long-term for women and babies [ 1 , 5 ]. Women with CS have increased risks of surgical complications, complications in future pregnancies, subfertility, bowel obstruction, and chronic pain [ 5 , 6 , 7 , 8 ]. Similarly, babies born through CS have increased risks of hypoglycaemia, respiratory problems, allergies and altered immunity [ 9 , 10 , 11 ]. At a population level, CS rates exceeding 15% are unlikely to reduce mortality rates [ 1 , 12 ]. Despite these risks, an analysis across 154 countries reported a global average CS rate of 21.1% in 2018, projected to increase to 28.5% by 2030 [ 3 ].

There are many reasons for the increasing CS rates, and these vary between and within countries. Increasingly, non-clinical factors across different societal dimensions and stakeholders (e.g. women and communities, health providers, and health systems) are contributing to this increase [ 13 , 14 , 15 , 16 , 17 ]. Women may prefer CS over vaginal birth due to fear of labour or vaginal birth, previous negative experience of childbirth, perceived increased risks of vaginal birth, beliefs about an auspicious or convenient day of birth, or beliefs that caesarean section is safer, quick, and painless compared to vaginal birth [ 13 , 14 , 15 ].

Interventions targeting pregnant women to reduce CS have been implemented globally. A Cochrane intervention review synthesized evidence from non-clinical interventions targeting pregnant women and family, providers, and health systems to reduce unnecessary CS, and identified 15 interventions targeting women [ 18 ]. Interventions targeting women primarily focused on improving women’s knowledge around birth, improving women’s ability to cope during labour, and decreasing women’s stress related to labour through childbirth education, and decision aids for women with previous CS [ 18 ]. These types of interventions aim to reduce the concerns of pregnant women and their partners around childbirth, and prepare them for vaginal birth.

Evidence on the effectiveness of interventions targeting women in reducing CS is mixed [ 18 , 19 ]. Plausible explanations for this limited success include the multifactorial nature of the drivers of rising CS rates, as well as the contextual characteristics of the interventions, such as the study environment, participant characteristics, intensity of exposure to the intervention, and method of implementation. Understanding which intervention components are essential to the success of these interventions can help optimise their benefits. This study used a Qualitative Comparative Analysis (QCA) approach to re-analyse evidence from existing systematic reviews and identify the essential components of successful non-clinical interventions focusing on pregnant women to optimise the use of CS. Updating and re-analysing existing systematic reviews using new analytical frameworks may help to explore heterogeneity in effects and ascertain why some studies appear to be effective while others are not.

Data sources, case selection, and defining outcomes

Developing a logic model.

We developed a logic model to guide our understanding of different pathways and intervention components potentially leading to successful implementation (Additional file 1 ). The logic model was developed based on published qualitative evidence syntheses and systematic reviews [ 18 , 20 , 21 , 22 , 23 , 24 ]. The logic model depicts the desired outcome of reduced CS rates in low-risk women (at the time of admission for birth, these women are typically represented by Robson groups 1–4 [ 25 ] and are women with term, cephalic, singleton pregnancies without a previous CS) and works backwards to understand what inputs and processes are needed to achieve the desired outcome. Our logic model shows multiple pathways to success and highlights the interactions between different levels of factors (women, providers, societal, health system) (Additional file 1 ). Based on the logic model, we have separated our QCA into two clusters of interventions: 1) interventions targeting women, and 2) interventions targeting health providers. The results of analysis on interventions targeting health providers have been published elsewhere [ 26 ]. The logic model was also used to inform the potential important components that influence success.

Identifying data sources and selecting cases

We re-analysed the systematic reviews which were used to inform the development and update of World Health Organization (WHO) guidelines. In 2018, WHO issued global guidance on non-clinical interventions to reduce unnecessary CS, with interventions designed to target three different levels or stakeholders: women, health providers, and health systems [ 27 ]. As part of the guideline recommendations, a series of systematic reviews about CS interventions were conducted: 1) a Cochrane intervention review of effectiveness by Chen et al. (2018) [ 18 ] and 2) three qualitative evidence syntheses exploring key stakeholder perspectives and experiences of interventions focusing on women and communities, health professionals, and health organisations, facilities and systems by Kingdon et al. (2018) [ 20 , 21 , 22 ]. Later on, Opiyo and colleagues (2020) published a scoping review of financial and regulatory interventions to optimise the use of CS [ 23 ].

Therefore, the primary data sources for this QCA are the intervention studies included in Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ]. We used these two systematic reviews not only because they are comprehensive, but also because they informed the development of the WHO guidelines. A single intervention study is referred to as a “case”. Eligible cases were intervention studies that focused on pregnant women and aimed to reduce or optimise the use of CS. No restrictions on study design were imposed in the QCA. We therefore also assessed the eligibility of intervention studies excluded from Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ] because of ineligible study designs (such as cohort studies, uncontrolled before-and-after studies, or interrupted time series with fewer than three data points), as these studies could reveal other pathways to successful implementation. We complemented these with intervention studies published since the last review updates in 2018 and 2020, so as to include studies likely to meet the inclusion criteria of future review updates. No further search was conducted, as QCA is suited to a medium number of cases (approximately 10–50), and including more studies may threaten study rigour [ 28 ].

Once eligible studies were selected, we searched for their ‘sibling studies’. Sibling studies are studies linked to the included intervention studies, such as formative research or process evaluations, which may have been published separately. Sibling studies can provide valuable additional information about study context, intervention components, and implementation outcomes (e.g. acceptability, fidelity, adherence, dosage), which may not be well described in a single article about intervention effectiveness. We searched for sibling studies using the following steps: 1) reference list search of the intervention studies included in Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ], 2) reference list search of the qualitative studies included in the Kingdon et al. (2018) reviews [ 20 , 21 , 22 ]; and 3) forward reference search of the intervention studies (through the “Cited by” function) in Scopus and Web of Science. Sibling studies were included if they reported any information on intervention components or implementation outcomes, regardless of the methodology used. One author (RIZ) conducted the study screening independently, and 10% of the screening was double-checked by a second author (MAB). Disagreements during screening were discussed until consensus was reached, involving the rest of the author team if needed.

Defining outcomes

We assessed all outcomes related to mode of birth in the studies included in the Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ] reviews. We selected “overall CS rate” as the primary outcome of interest because it was the most consistently reported outcome across studies. We had planned to rank the rate ratios across studies to select the 10 most successful and 10 least successful intervention studies. However, because of heterogeneity in how CS outcomes were reported (e.g. odds ratios, rate ratios, percentages across different intervention stages), interventions were ultimately categorised as successful if the CS rate decreased (judged by the precision of the confidence interval or the p-value; coded 1) and unsuccessful if the CS rate increased or did not change (coded 0).
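This dichotomisation rule can be sketched in code. The following is an illustrative Python fragment (the study's analysis was done in R), and the function name and its inputs are hypothetical simplifications of the rule described above:

```python
def code_outcome(effect_ratio, ci_upper):
    """Code an intervention as successful (1) or unsuccessful (0).

    effect_ratio: rate or odds ratio for CS (intervention vs control).
    ci_upper: upper bound of its confidence interval.
    A ratio below 1 whose confidence interval excludes 1 indicates a
    decreased CS rate -> successful (1); otherwise unsuccessful (0).
    """
    return 1 if (effect_ratio < 1 and ci_upper < 1) else 0

print(code_outcome(0.80, 0.95))  # decrease, CI excludes 1 -> 1
print(code_outcome(0.90, 1.10))  # CI crosses 1 -> 0
print(code_outcome(1.20, 1.40))  # increase -> 0
```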

Assessing risk of bias in intervention studies

All intervention studies eligible for inclusion were assessed for risk of bias. Studies included in Chen et al. (2018) and Opiyo et al. (2020) already had risk of bias assessed and reported [ 18 , 23 ], and we used these assessments. The additional intervention studies not included in these reviews (two randomized controlled trials and one uncontrolled before-and-after study) were assessed using the same tools according to the type of evidence; details of the risk of bias assessments can be found in Additional file 2. We excluded studies with a high risk of bias, both to base the analysis on high-quality studies and to limit the overall number of studies, enhancing the researchers’ ability to develop deep case knowledge.

Qualitative comparative analysis (QCA)

QCA was first developed and used in the political sciences and has since been extended to systematic reviews of complex health interventions [ 24 , 29 , 30 , 31 ]. Despite the term “qualitative”, QCA is not a typical qualitative analysis; it is often conceptualised as a methodology bridging qualitative and quantitative approaches in its process, data, and theoretical standpoint [ 24 ]. Here, QCA is used to identify whether certain configurations, or combinations, of intervention components (e.g. participants, types of interventions, contextual characteristics, and intervention delivery) are associated with the desired outcome [ 31 ]. These intervention components are referred to as “conditions” in QCA methodology. Although statistical synthesis methods such as meta-regression can be used to examine intervention heterogeneity in systematic reviews, QCA is particularly suitable for understanding complex interventions like those aiming to optimise CS, as it allows for multiple overlapping pathways to causality [ 31 ]. Moreover, QCA explores different combinations of conditions rather than relying on a single condition to explain intervention effectiveness [ 31 ]. Meta-regression also allows the assessment of multiple conditions, but a sufficient number of studies may not be available to conduct such an analysis. For complex interventions, such as those aiming to optimise the use of CS, single-condition analyses or standard meta-analysis are less likely to yield usable and nuanced information about which intervention components make success more or less likely [ 31 ].

QCA uses ‘set theory’ to systematically compare characteristics of the cases (e.g. interventions, in the case of systematic reviews) in relation to the outcomes [ 31 , 32 ]. This means QCA compares the characteristics of successful ‘cases’ (e.g. interventions that are effective) with those of unsuccessful ‘cases’ (e.g. interventions that are not effective). The comparison is conducted using a scoring system based on ‘set membership’ [ 31 , 32 ]. In this scoring, conditions and outcomes are coded based on the extent to which a certain feature is present or absent, forming set membership scores [ 31 , 32 ]. There are two scoring systems in QCA: 1) crisp set QCA (csQCA) and 2) fuzzy set QCA (fsQCA). csQCA assigns binary scores of 0 (“fully out” of set membership for cases with certain conditions) and 1 (“fully in”), while fsQCA assigns ordinal scores, permitting partial membership between 0 and 1 [ 31 , 32 ]. For example, using fsQCA we may assign a five-level scoring system (0, 0.33, 0.5, 0.67, 1), where 0.33 indicates “more out” than “in” the set, 0.67 indicates “more in” than “out”, and 0.5 indicates ambiguity (i.e. a lack of information about whether a case was “in” or “out”) [ 31 , 32 ]. In our analysis, we used a combination of csQCA and fsQCA to calibrate the data: some conditions were better suited to binary scoring using csQCA, while others were more complex and, depending on the distribution of cases, required fsQCA to capture the necessary information. In the final analysis, however, all conditions were scored using csQCA.
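As an illustration of the two scoring systems, the following Python sketch (hypothetical, not the authors' R code) encodes crisp scoring and the five-level fuzzy scheme described above:

```python
# csQCA: a condition is simply present (1) or absent (0).
def crisp(present: bool) -> int:
    return 1 if present else 0

# fsQCA: a five-level scheme mapping a qualitative judgement
# to partial set membership, as described in the text.
FUZZY_LEVELS = {
    "fully out": 0.0,
    "more out than in": 0.33,
    "ambiguous": 0.5,       # lack of information either way
    "more in than out": 0.67,
    "fully in": 1.0,
}

def fuzzy(judgement: str) -> float:
    return FUZZY_LEVELS[judgement]

print(crisp(True), fuzzy("more in than out"))  # 1 0.67
```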

Two relationships can be investigated using QCA [ 24 , 31 ]. First, if all instances of successful interventions share the same condition(s), this suggests those conditions are ‘necessary’ to trigger successful outcomes [ 24 , 31 ]. Second, if all instances of a particular condition are associated with successful interventions, this suggests that condition is ‘sufficient’ to trigger successful outcomes [ 24 , 31 ]. In this QCA, we explored the relationship of sufficiency: that is, we assessed the various combinations of intervention components that can trigger successful outcomes. We focused on sufficiency because our logic model highlighted the multiple pathways that can lead to a CS and the different interventions that may optimise the use of CS along those pathways, suggesting it would be unlikely for all successful interventions to share the same conditions. We calculated the degree of sufficiency using consistency measures, which evaluate the frequency with which conditions are present when the desired outcome is achieved [ 31 , 32 ]. Conditions with a consistency score of at least 0.8 were considered sufficient to trigger successful interventions [ 31 , 32 ]. At present, there is no reporting guideline for re-analyses of systematic reviews using QCA; CARU-QCA is currently being developed for this purpose [ 33 ]. QCA was conducted in R, using the package developed by Thiem & Duşa (2013) and the accompanying ‘QCA with R’ guidebook [ 32 ], in six stages based on Thomas et al. (2014) [ 31 ], as explained below.
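The consistency measure for sufficiency can be sketched as follows. This Python fragment is illustrative (the analysis itself used the R QCA package); with crisp scores, the standard formula sum(min(x_i, y_i)) / sum(x_i) reduces to the share of cases featuring the condition that also achieved the outcome:

```python
def consistency(condition, outcome):
    """Sufficiency consistency: sum(min(x_i, y_i)) / sum(x_i).

    condition, outcome: parallel lists of set-membership scores.
    With crisp (0/1) scores this reduces to the share of cases
    featuring the condition that also achieved the outcome.
    """
    num = sum(min(x, y) for x, y in zip(condition, outcome))
    return num / sum(condition)

# Hypothetical data: 5 cases have the condition; 4 were successful.
cond = [1, 1, 1, 1, 1, 0, 0]
outc = [1, 1, 1, 1, 0, 1, 0]
score = consistency(cond, outc)
print(score)         # 0.8
print(score >= 0.8)  # meets the sufficiency threshold used here
```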

QCA stage 1: Identifying conditions, building data tables and calibration

We used a deductive and inductive process to determine the potential conditions (intervention components) that may trigger successful implementation. Conditions were first derived deductively from the logic model (Additional file 1 ). We then added conditions inductively, using Intervention Component Analysis of the intervention studies [ 34 ] and qualitative evidence (“view”) synthesis [ 22 ] following Melendez-Torres’s (2018) approach [ 35 ]. Intervention Component Analysis is a methodological approach that examines factors affecting implementation through the trialists’ reflections, typically presented in the discussion section of a published trial [ 34 ]. Examples of conditions identified through Intervention Component Analysis include use of an individualised approach, interaction with health providers, policies that encourage CS, and acknowledgement of women’s previous birth experiences. After consolidating or merging similar conditions, a total of 52 conditions were selected, extracted from each included intervention, and analysed in this QCA (details of the conditions and the definitions generated for this study can be found in Additional files 3 and 4). We adapted the coding framework from Harris et al. (2019) [ 24 ], using its coding rules and six domains to organise the 52 conditions and make sense of the data. These six domains are broadly classified as: 1) context and participants, 2) intervention design, 3) program content, 4) method of engagement, 5) health system factors, and 6) process outcomes.

One author (RIZ) extracted data relevant to the conditions for each included study into a data table, which was then double-checked by two other authors (MVC, MAB). The data table is a matrix in which each case is represented in a row and each condition in a column. Following data extraction, calibration rules using either csQCA or fsQCA (e.g. for the group-based intervention delivery condition: yes = 1 (present), no = 0 (absent)) were developed in consultation with all authors. We developed a table listing the conditions and the rules for coding them, by either direct or transformational assignment of quantitative and qualitative data [ 24 , 32 ] (Additional file 3 depicts the calibration rules). The data tables were then calibrated by applying scores to explore the extent to which interventions have ‘set membership’ with the outcome or conditions of interest. During this iterative process, the calibration criteria were explicitly defined, emerging from the literature and from the cases themselves. Maximum ambiguity is typically scored as 0.5 in QCA; however, we judged it more appropriate to assume that a condition that was not reported was unlikely to be a feature of the intervention, so we treated “not reported” as absent and coded it 0.

QCA stage 2: Constructing truth tables

Truth tables are an analytical tool used in QCA to analyse associations between configurations of conditions and outcomes. Whereas the data table represents individual cases (rows) and individual conditions (columns), the truth table synthesises these data to examine configurations, with each row representing a different configuration of the conditions. The columns indicate a) which conditions feature in the configuration in that row, b) how many cases are represented by that configuration, and c) their association with the outcome.
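The collapse from data table to truth table can be sketched with hypothetical data. This Python fragment (illustrative only; the analysis used R) groups cases by identical configurations of crisp conditions and reports, per truth-table row, the number of cases and the share achieving the outcome:

```python
from collections import defaultdict

# Hypothetical data table: each case is (configuration of crisp
# conditions, outcome). Conditions here: (IEC materials, group delivery).
cases = [
    ((1, 1), 1),
    ((1, 1), 1),
    ((1, 0), 0),
    ((0, 1), 0),
    ((0, 1), 0),
]

# Collapse cases into truth-table rows: one row per configuration,
# with the number of cases and the share achieving the outcome.
rows = defaultdict(lambda: [0, 0])   # config -> [n_cases, n_successes]
for config, outcome in cases:
    rows[config][0] += 1
    rows[config][1] += outcome

truth_table = {c: (n, succ / n) for c, (n, succ) in rows.items()}
for config, (n, incl) in sorted(truth_table.items(), reverse=True):
    print(config, n, incl)
```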

We first constructed the truth tables based on context and participants, intervention designs, program content, and method of engagement; however, no configurations triggering successful interventions were observed. Instead, we observed limited diversity, meaning there were many configurations unsupported by cases, likely because the truth tables contained too many conditions. We used the learning from these truth tables to return to the literature and explore potential explanatory theories about which conditions participants and trialists consider important in triggering successful interventions (adhering to the ‘utilisation of view’ perspective [ 35 ]). Through this process, we found that women and communities liked to learn new information about childbirth and desired emotional support from partners and health providers while learning [ 22 ]. They also appreciated educational interventions that provide opportunities for discussion and dialogue with health providers and that align with current clinical practice and advice from health providers [ 22 ]. Therefore, three models of truth tables were iteratively constructed and developed based on three hypothesised theories about how the interventions should be delivered: 1) how birth information was provided to women, 2) how emotional support was provided to women (including interactions between women and providers), and 3) a consolidated model examining the interactions of the important conditions identified in models 1 and 2. We also conducted a sub-analysis of interventions targeting both women and health providers or systems (‘multi-target interventions’). This sub-analysis explored whether, among the components targeting women, similar conditions triggered success in multi-target interventions. Table 1 presents the list of truth tables that were iteratively constructed and refined.

QCA stage 3: Checking quality of truth tables

We iteratively developed and improved the quality of the truth tables by checking the configurations of successful and unsuccessful interventions, as recommended by Thomas et al. (2014) [ 31 ]. This included assessing the number of studies clustering in each configuration and exploring any contradictory results between successful and unsuccessful interventions. We found contradictory configurations across the five truth tables, which were resolved by considering the theoretical perspectives and iteratively refining the truth tables.

QCA stage 4: Identifying parsimonious configurations through Boolean minimization

Once we determined that the truth tables were suitable for further analysis, we used Boolean minimisation to explore pathways to successful interventions through the configurations of different conditions [ 31 ]. We simplified the “complex solution” of the pathways to a “parsimonious solution” and an “intermediate solution” by incorporating logical remainders (configurations for which no cases were observed) [ 36 ].
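The core reduction step of Boolean minimisation can be illustrated as follows: two configurations that differ in exactly one condition and share the same outcome can be merged, eliminating that condition. This hypothetical Python sketch shows a single such step (the full Quine–McCluskey-style procedure applies it repeatedly):

```python
def merge(row_a, row_b):
    """Merge two configurations that differ in exactly one condition.

    Rows are tuples of 0/1/'-' ('-' = condition already eliminated).
    Returns the reduced row, or None if the rows differ in more than
    one position.
    """
    diffs = [i for i, (a, b) in enumerate(zip(row_a, row_b)) if a != b]
    if len(diffs) != 1:
        return None
    i = diffs[0]
    return row_a[:i] + ("-",) + row_a[i + 1:]

# A*B*C and A*B*~C both successful -> reduce to A*B.
print(merge((1, 1, 1), (1, 1, 0)))  # (1, 1, '-')
print(merge((1, 1, 1), (0, 0, 1)))  # None: differ in two conditions
```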

QCA stage 5: Checking the quality of the solution

We presented the intermediate solution as the final solution instead of the most parsimonious solution, as it is most closely aligned with the underlying theory. We checked consistency and coverage scores to assess if the pathways identified were sufficient to trigger success. We also checked the intermediate solution by negating the outcome to see if it predicts the observed solutions.

QCA stage 6: Interpretation of solutions

We iteratively interpreted the results of the findings through discussions among the QCA team. This reflexive approach ensured that the results of the analysis considered the perspectives from the literature discourse, methodological approach, and that the results were coherent with the current understanding of the phenomenon.

Overview of included studies

Out of 79 intervention studies assessed by Chen et al. (2018) [ 18 ] and Opiyo et al. (2020) [ 23 ], 17 intervention studies targeted women and are included, comprising 11 interventions targeting only women [ 37 , 38 , 39 , 40 , 41 , 42 , 43 ] and six interventions targeting both women and health providers or systems [ 44 , 45 , 46 , 47 , 48 , 49 ]. From 17 included studies, 19 sibling studies were identified [ 43 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 ]. Thus, a total of 36 papers from 17 intervention studies are included in this QCA (See Fig.  1 : PRISMA Flowchart).

figure 1

PRISMA flowchart. *Sibling studies: studies that were conducted in the same settings, participants, and timeframe; **Intervention components: information on intervention input, activities, and outputs, including intervention context and other characteristics

The 11 interventions targeting women comprised five successful interventions [ 37 , 68 , 69 , 70 , 71 ] and six unsuccessful interventions [ 37 , 38 , 39 , 40 , 41 , 42 , 43 ] in reducing CS. Sixteen sibling studies were identified, from five of the 11 included interventions [ 37 , 41 , 43 , 70 , 71 ]. Included studies were conducted in six countries across North America (2 from Canada [ 38 ] and 1 from the United States of America [ 71 ]), Asia–Pacific (1 from Australia [ 41 ], 5 from Iran [ 39 , 40 , 68 , 69 , 70 ]), and Europe (2 from Finland [ 37 , 42 ], 1 from the United Kingdom [ 43 ]). Six studies were conducted in high-income countries and five in upper-middle-income countries (all from Iran). All 11 studies targeted women, with three also explicitly targeting women’s partners [ 68 , 69 , 71 ]. One study delivering psychoeducation allowed women to bring any family member to accompany them during the intervention but did not specifically target partners [ 37 ]. All 11 studies delivered childbirth education: four delivered general antenatal education [ 38 , 40 , 68 , 69 ], six delivered psychoeducation [ 37 , 39 , 41 , 42 , 70 , 71 ], and one implemented decision aids [ 43 ]. All studies were included in Chen et al. (2018), and some risks of bias were identified [ 18 ] (Additional file 2).

The multi-target interventions comprised five successful interventions [ 44 , 45 , 46 , 47 , 48 ] and one unsuccessful intervention [ 49 ]. Sibling studies were identified for only one study [ 48 ]. The interventions were delivered in five countries across South America (1 from Brazil [ 46 ]), Asia–Pacific (4 from China [ 44 , 45 , 47 , 49 ]), and Europe (1 from Italy [ 48 ], 1 from Ireland [ 48 ], and 1 from Germany [ 48 ]). Three studies were conducted in high-income countries and five in upper-middle-income countries. The multi-target interventions targeted women, health providers, and health organisations; for this analysis, however, we only considered the components targeting women, which typically consisted of childbirth education. One study came from Chen et al. (2018) [ 18 ] and was graded as having some concerns [ 47 ]; two studies from Opiyo et al. (2020) [ 23 ] were graded as having no serious concerns [ 45 , 46 ]; and three newly published studies were assessed as low risk [ 44 ] or as having some concerns about risk of bias [ 48 , 49 ]. Tables 2 and 3 show the characteristics of the included studies.

The childbirth education interventions included information about mode of birth, the birth process, mental health and coping strategies, pain relief methods, and partners’ roles in birth. Most interventions were delivered in group settings; only three studies delivered them on a one-to-one basis [ 38 , 41 , 42 ]. Only one study explicitly stated that the intervention was individualised to a woman’s unique needs and experiences [ 38 ].

Overall, there was limited use of theory in designing the interventions: fewer than half (7/17) explicitly used theory in their design. Among the seven interventions that did, the theories included the health promotion-disease prevention framework [ 38 ], a midwifery counselling framework [ 41 ], cognitive behavioural therapy [ 42 ], Ost’s applied relaxation [ 70 ], a conceptual model of parenting [ 71 ], attachment and social cognitive theories [ 37 ], and a healthcare improvement scale-up framework [ 46 ]. The remaining 10 studies relied only on previously published studies to design their interventions. We identified very limited process evaluation or implementation outcome evidence for the included interventions, which is a limitation of the field of CS and of clinical interventions more broadly.

Qualitative comparative analysis

Model 1 – How birth information was provided to women

Model 1 is constructed based on the finding from Kingdon et al. (2018) [ 22 ] that women and communities enjoy learning new birth information, as it opens up new ways of thinking about vaginal birth and CS. Learning new information allows them to understand better the benefits and risks of CS and vaginal births, as well as increase their knowledge about CS [ 22 ].

We used four conditions in constructing the model 1 truth table: 1) provision of information, education, and communication (IEC) materials on what to expect during labour and birth, 2) delivery of antenatal education, 3) delivery of psychoeducation, and 4) group-based intervention delivery. We explored this model considering other conditions, such as the type of information provided (e.g. information about mode of birth, including the birth process, mental health and coping strategies, and pain relief), delivery technique (e.g. didactic, practical), and frequency and duration of intervention delivery; however, these additional conditions did not result in configurations.

Of 16 possible configurations, we identified seven (Table 4). The first two rows show perfect consistency of configurations (inclusion = 1) in five studies [ 37 , 68 , 69 , 70 , 71 ], in which all conditions are present except one of antenatal education or psychoeducation. The remaining configurations are unsuccessful interventions. Interestingly, when either IEC materials or group-based intervention delivery is present (but not both), implementation is likely to be unsuccessful (rows 3–7).

Boolean minimisation identified two intermediate pathways to successful interventions (Fig.  2 ). The two pathways are similar except for one condition: the type of education, whose content is tailored to the women it targets. From the two pathways, we can see that the combination of distributing IEC materials on birth information and group-based delivery of either antenatal education to the general population of women (i.e. not groups of women with specific risks or conditions) or psychoeducation to women with fear of birth triggers successful interventions. From this solution, we can see that successful interventions are consistently characterised by the presence of both IEC materials and group-based intervention delivery.

figure 2

Intermediate pathways from model 1 that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an ‘AND’ relationship; the inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficiency relation between the configuration and the outcome; Proportional Reduction in Inconsistency (PRI) refers to the extent to which a configuration is sufficient in triggering the successful outcome as well as the negation of the outcome; the coverage score (CovS) refers to the percentage of cases in which the configuration is valid
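The three metrics in the figure legends can be computed directly from set-membership scores. This Python sketch applies the standard set-theoretic formulas to hypothetical fuzzy scores (the study's own values were produced by the R QCA package):

```python
def incls(x, y):
    """Inclusion/consistency: sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(map(min, x, y)) / sum(x)

def covs(x, y):
    """Coverage: sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(map(min, x, y)) / sum(y)

def pri(x, y):
    """Proportional Reduction in Inconsistency."""
    xy = sum(map(min, x, y))
    xyny = sum(min(a, b, 1 - b) for a, b in zip(x, y))
    return (xy - xyny) / (sum(x) - xyny)

# Hypothetical fuzzy membership scores for one configuration (x)
# and the outcome (y) across three cases.
x = [0.8, 0.6, 0.4]
y = [0.9, 0.3, 0.7]
print(round(incls(x, y), 2), round(pri(x, y), 2), round(covs(x, y), 2))
# -> 0.83 0.73 0.79
```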

Model 2 – Emotional support was provided to women

Model 2 was constructed based on the theory that women desire emotional support alongside the communication of information about childbirth [ 22 ]. This includes emotional support from husbands or partners, health professionals, or doulas [ 22 ]. Furthermore, Kingdon et al. (2018) describe the importance of two-way conversation and dialogue between women and providers during pregnancy care, particularly to ensure the opportunity for discussion [ 22 ]. Interventions may generate more questions than they answer, creating a need and desire among women for more dialogue with health professionals [ 22 ]. Women considered intervention content most useful when it complements clinical care, is consistent with advice from health professionals, and provides a basis for more informed, meaningful dialogue between women and care providers [ 22 ].

Based on this underlying theory, we constructed the model 2 truth table by considering three conditions representative of providing emotional support to women: partner or family member involvement, group-based intervention delivery (which provides social or peer support to women), and the opportunity for women to interact with health providers. Of 8 possible configurations, we identified six (Table 5). The first three rows represent successful interventions with perfect consistency (inclusion = 1). The first row shows successful interventions with all conditions present. The second and third rows show successful interventions with all conditions present except, respectively, partner or family member involvement or interaction with health providers. The remaining rows represent unsuccessful interventions, in which at least two conditions are absent.

Boolean minimisation identified two intermediate pathways to successful interventions (Fig.  3 ). In the first pathway, partner or family member involvement combined with group-based intervention delivery enables successful interventions. In the second pathway, when partners or family members are not involved, interventions succeed only when interaction with health providers is included alongside group-based delivery. From these two pathways, group-based intervention delivery, involvement of partners or family members, and the opportunity for women to interact with providers appear important in driving intervention success.

figure 3

Intermediate pathways from model 2 that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an ‘AND’ relationship; the inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficiency relation between the configuration and the outcome; Proportional Reduction in Inconsistency (PRI) refers to the extent to which a configuration is sufficient in triggering the successful outcome as well as the negation of the outcome; the coverage score (CovS) refers to the percentage of cases in which the configuration is valid

Consolidated model – Essential conditions to prompt successful interventions focusing on women

Using the important conditions observed in models 1 and 2, we constructed a consolidated model to examine the final essential conditions that could prompt successful educational interventions targeting women. We merged and tested four conditions: provision of IEC materials on what to expect during labour and birth, group-based intervention delivery, partner or family member involvement, and the opportunity for interaction between women and health providers.

Of the 16 possible configurations, we identified six (Table 6). The first three rows show configurations resulting in successful interventions with perfect consistency (inclusion = 1). The first row shows successful interventions with all conditions present; the second and third rows show successful interventions with all conditions present except, respectively, interaction with health providers or partner or family member involvement. The remaining three rows are configurations of unsuccessful interventions, each missing at least two conditions, including the consistent absence of partner or family member involvement.

Boolean minimisation identified two intermediate pathways to successful interventions (Fig.  4 ). The first pathway shows that the opportunity for women to interact with health providers, provision of IEC materials, and group-based intervention delivery together prompt successful interventions. The second pathway shows that when there is no opportunity for women to interact with health providers, partner or family member involvement alongside group-based intervention delivery and provision of IEC materials becomes important. These two pathways suggest that delivering educational interventions accompanied by IEC materials and by emotional support for women is important for triggering successful interventions, and that this emotional support can come from a partner, a family member, or a health provider. For the consolidated model, we did not simplify the solution further, as the intermediate solution is more theoretically sound than the most parsimonious solution.

Fig. 4

Intermediate pathways from the consolidated model that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an 'AND' relationship. Inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficient relation between the configuration and the outcome. Proportional Reduction in Inconsistency (PRI) refers to the extent to which a configuration is sufficient for the successful outcome rather than also for the negation of the outcome. Coverage score (CovS) refers to the percentage of cases in which the configuration is valid.

Sub-analysis – Interventions targeting both women and health providers or systems

In this sub-analysis, we ran the important conditions identified from the consolidated model, added the condition of multi-target intervention, and applied the model to 17 interventions: 11 targeting women only, and six targeting both women and health providers or systems (multi-target interventions).

Of 32 possible configurations, we identified eight (Table 7 ). The first four rows show configurations of successful interventions with perfect consistency (inclusion = 1). The first row, in which all conditions are present, clusters all the multi-target interventions except the unsuccessful intervention by Zhang (2020) [ 49 ]. In the second to fourth rows, all conditions are present except multi-target intervention (all three rows), interaction with health providers (third row), and partner and family member involvement (fourth row). The remaining rows are configurations of unsuccessful interventions, each missing at least three conditions, except row 8, which contains a single case. This case is the only unsuccessful multi-target intervention and the only one in which partners or family members were not involved.

Boolean minimisation identified two intermediate pathways (Fig.  5 ). The first pathway shows that partner or family member involvement, provision of IEC materials, and group-based intervention delivery prompt successful interventions. This pathway comprises all five successful multi-target interventions [ 44 , 45 , 46 , 47 , 48 ] and four of the 11 interventions targeting only women [ 37 , 68 , 69 , 71 ]. The second pathway shows that in the absence of a multi-target design, interaction with health providers, alongside provision of IEC materials and group-based intervention delivery, prompts successful interventions (3/11 interventions targeting women only [ 37 , 69 , 70 ]). The first pathway contains successful configurations both with and without multi-target designs. Therefore, as with interventions targeting women only, when implementing multi-target interventions the components targeting women are more likely to be successful when partners or family members are involved, interventions are delivered in groups, IEC materials are provided, and women have an opportunity to interact with health providers.

Fig. 5

Intermediate pathways from the multi-target interventions sub-analysis that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an 'AND' relationship. Inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficient relation between the configuration and the outcome. Proportional Reduction in Inconsistency (PRI) refers to the extent to which a configuration is sufficient for the successful outcome rather than also for the negation of the outcome. Coverage score (CovS) refers to the percentage of cases in which the configuration is valid.

To summarise, four essential intervention components trigger successful educational interventions focusing on pregnant women to reduce CS: 1) group-based intervention delivery, 2) provision of IEC materials on what to expect during labour and birth, 3) partner or family member involvement in the intervention, and 4) opportunity for women to interact with health providers. These conditions do not work in silos or independently, but jointly, as parts of configurations that enable successful interventions.

Our extensive QCA identified configurations of essential intervention components which are sufficient to trigger successful interventions to optimise CS. Educational interventions focusing on women were successful by: 1) leveraging social or peer support through group-based intervention delivery, 2) improving women's knowledge and awareness of what to expect during labour and birth, 3) ensuring women have emotional support through partner or family participation in the intervention, and 4) providing opportunities for women to interact with health providers. We found that the absence of two or more of the above characteristics results in unsuccessful interventions. Unlike our logic model, which predicted that engagement strategies (i.e. intensity, frequency, technique, recruitment, incentives) would be essential to intervention success, we found that "support" seems to be central to maximising the benefits of interventions targeting women.

Group-based intervention delivery is present across all four truth tables and all eight pathways leading to successful intervention implementation, suggesting that it is an essential component of interventions targeting women. Nevertheless, we cannot conclude that group-based delivery is a necessary condition, as there may be other pathways not captured in this QCA. Its importance may stem from the group setting providing women with a sense of confidence through peer support and engagement: women may feel more confident when learning with others, and peer support may motivate them. Furthermore, all group-based interventions in the included studies were conducted at health facilities, which may give women more confidence that the information is aligned with clinical recommendations. The benefits of group-based interventions for pregnant women have been demonstrated previously [ 72 , 73 ]. Women reported that group-based interventions reduce feelings of isolation, provide access to group support, and allow opportunities to share their experiences [ 72 , 74 , 75 , 76 ]. This is aligned with social support theory, in which support from a group or social environment may provide women with reassurance and compassion, reduce feelings of uncertainty, increase their sense of control, offer access to new contacts for solving problems, and supply instrumental support, all of which eventually influence positive health behaviours [ 72 , 77 ]. Women may resolve their uncertainties around mode of birth by sharing their concerns with others while learning how others cope. These findings are consistent with the benefits associated with group-based antenatal care, which is recommended by WHO [ 78 , 79 ].

Kingdon et al. (2018) reported that women and communities liked learning new birth information, as it opens new ways of thinking about vaginal birth and CS and educates them about the benefits of different modes of birth, including the risks of CS. Our QCA aligns with this finding: provision of information about birth through educational delivery leads to successful interventions, but with caveats. That is, provision of birth information should be accompanied by IEC materials and delivered through group-based sessions. There is not enough information to distinguish which types of IEC materials lead to successful interventions; however, it is important to note that the format of the IEC materials (such as paper-based or mobile application) may affect success. More work is needed to understand how women and families respond to the format of IEC materials; for example, will paper-based IEC materials be superseded by digital applications as a way of reaching women with information? The QUALI-DEC (Quality decision-making (QUALI-DEC) by women and healthcare providers for appropriate use of caesarean section) study, which is currently implementing a decision-analysis tool to help women make an informed decision on their preferred mode of birth in both paper-based and mobile-application formats, may shed some light on this [ 80 ].

Previous research has shown that women who participated in interventions aiming to reduce CS desired emotional support (from partners, doulas, or health providers) alongside communication about childbirth [ 22 ]. Our QCA aligns with this finding: emotional support from partners or family members is highly influential in leading to successful interventions. Partner involvement in maternity care has been extensively studied and has been shown to improve maternal health care utilisation and outcomes [ 81 ]. Both women and their partners perceive partner involvement as crucial, as it allows men to learn directly from providers, promoting shared decision-making between women and partners and enabling partners to reinforce adherence to beneficial suggestions [ 82 , 83 , 84 , 85 , 86 ]. Partners provide psychosocial support to women, for example by being present during pregnancy and childbirth, as well as instrumental support, such as supporting women financially [ 82 , 83 , 84 ]. Despite these benefits, partner participation in maternity care remains low [ 82 ], as reflected in this study, where only four of the 11 included interventions involved partners or family members. The reasons for this low participation, which include unequal gender norms and limited health system capability [ 82 , 84 , 85 , 86 ], should be explored and addressed to ensure the benefits of the interventions.

Furthermore, our QCA demonstrates the importance of interaction with health providers in triggering successful interventions. The interaction of women with providers in CS decision-making, however, sits on a "nexus of power, trust, and risk", where it may be beneficial but can also reinforce the structural oppression of women [ 13 ]. A recent study of patient-provider interaction in CS decision-making concluded that interaction between risk-averse providers and women who are cautious about their pregnancies results in discouragement of vaginal birth [ 87 ]. This outcome can be averted by meaningful communication between women and providers in which CS risks and benefits are discussed in an environment where vaginal birth is encouraged [ 87 ]. Furthermore, the reasons women desire interaction with providers can come from opposite directions. Some women see providers as the most trusted and knowledgeable source, whose judgement they can rely on to ensure that the information learned is reliable and evidence-based [ 22 ]. Others are sceptical of providers, understanding that providers' preferences may negatively influence their preferred mode of birth [ 22 ]. Therefore, adequate two-way interaction is important for women to build good rapport with providers.

It is also important to note that we have limited evidence (3/17 intervention studies) involving women with a previous CS. Vaginal birth after previous CS (VBAC) can be a safe and positive experience for some women, but there are also potential risks depending on obstetric history [ 88 , 89 , 90 ]. Davis (2020) found that women were motivated towards VBAC by negative experiences of CS, such as difficult recovery, and that health providers served as pivotal drivers in motivating women towards VBAC [ 91 ]. VBAC also requires giving birth in a suitably staffed and equipped maternity unit, with staff trained in VBAC, equipment for labour monitoring, and resources for emergency CS if needed [ 89 , 90 ]. Comparatively less research has been conducted on VBAC and trial of labour after CS [ 88 ]. More work is therefore needed to explore whether different pathways lead to successful intervention implementation for women with a previous CS. Interventions targeting various stakeholders may be especially crucial in this group; for example, both education for women and their partners or families and training to upskill health providers might be needed to support VBAC.

Strengths and limitations

We found that many included studies reported the interventions poorly, including general intervention components (e.g. the presence of policies that may support interventions) and process evaluation components, reflecting the historical approach to reporting trial data. This poor reporting means we could not engage further with the interventions and may have missed important conditions that were not reported. However, we attempted to compensate for limited process evaluation components by identifying all relevant sibling studies that could contribute to a better understanding of context. Furthermore, no included studies were conducted in low-income countries, despite rapidly increasing CS rates in these settings. Lastly, we were not able to conduct more nuanced analyses of CS, such as exploring how interventions affected emergency versus elective CS, VBAC, or instrumental birth, due to an insufficient number of studies and heterogeneity in outcome measurements. It is therefore important to note that we are not necessarily measuring the optimal outcome of interest: reducing unnecessary CS. However, it is unlikely that these non-clinical interventions will interfere with a decision for CS based on clinical indications.

Despite these limitations, this is the first study aiming to understand how interventions targeting women can successfully optimise CS use. We used the QCA approach and new analytical frameworks to re-analyse existing systematic review evidence and generate new knowledge. We ensured robustness by using a logic model and by working backwards to understand which aspects of the interventions differed across outcomes. The use of QCA and qualitative evidence synthesis ensured that the results are theory-driven, incorporate participants' perspectives, and were explored iteratively to find the appropriate configurations, reducing the risk of data fishing. Lastly, this QCA extends the effectiveness review conducted by Chen et al. (2018) [ 18 ] by explaining the intervention components which may underlie its heterogeneity.

Implications for practice and research

To aid researchers and health providers in reducing CS in their contexts and in designing educational interventions targeting women during pregnancy, we have developed a checklist of key questions to consider when designing interventions, which may help lead to successful implementation:

Is the intervention delivered in a group setting?

Are IEC materials on what to expect during labour and birth disseminated to women?

Are women’s partners or families involved in the intervention?

Do women have opportunities to interact with health providers?

We have used this checklist to explore the extent to which the included interventions in our QCA include these components using a matrix model (Fig.  6 ).

Fig. 6

Matrix model assessing the extent to which the included intervention studies include the essential intervention components identified in the QCA

Additionally, future research on interventions to optimise the use of CS should report the intervention components implemented, including process outcomes such as fidelity and attrition, contextual factors (e.g. policies, details of how the intervention is delivered), and stakeholder factors (e.g. women's perceptions and satisfaction). These factors are important not just for evaluating whether an intervention is successful, but also for exploring why similar interventions work in one context but not another. There is also a need for more intervention studies implementing VBAC to reduce CS, to understand how involving women with a previous CS may result in successful interventions. Furthermore, more studies are needed to understand the impact of interventions targeting women in LMICs.

This QCA illustrates crucial intervention components and potential pathways that can trigger successful educational interventions to optimise CS, focusing on pregnant women. The following intervention components were found to be sufficient to trigger successful outcomes: 1) group-based delivery, 2) provision of IEC materials, 3) partner or family member involvement, and 4) opportunity for women to interact with health providers. These components do not work in silos or independently, but jointly, as parts of configurations that enable successful interventions. Researchers, trialists, hospitals, and other institutions and stakeholders planning interventions focusing on pregnant women can consider including these components to ensure benefits. More studies are needed from LMICs to understand the impact of interventions targeting women to optimise CS. Researchers should clearly describe and report intervention components in trials, and consider how process evaluations can help explain why trials were or were not successful. More robust trial reporting and process evaluations can help to better understand mechanisms of action and why interventions may work in one context yet not another.

Availability of data and materials

Additional information files have been provided and more data may be provided upon request to [email protected].

Abbreviations

CovS: Coverage score

CS: Caesarean section

csQCA: Crisp set qualitative comparative analysis

fsQCA: Fuzzy set qualitative comparative analysis

IEC: Information, education, and communication

InclS: Inclusion score

LMICs: Low- and middle-income countries

PRI: Proportional reduction in inconsistency

QUALI-DEC: Quality decision-making by women and healthcare providers for appropriate use of caesarean section

VBAC: Vaginal birth after previous caesarean section

WHO: World Health Organization

References

World Health Organization. WHO statement on caesarean section rates. Available from: https://www.who.int/publications/i/item/WHO-RHR-15.02 . Cited 20 Sept 2023.

Zahroh RI, Disney G, Betrán AP, Bohren MA. Trends and sociodemographic inequalities in the use of caesarean section in Indonesia, 1987–2017. BMJ Global Health. 2020;5:e003844. https://doi.org/10.1136/bmjgh-2020-003844 .


Betran AP, Ye J, Moller A-B, Souza JP, Zhang J. Trends and projections of caesarean section rates: global and regional estimates. BMJ Global Health. 2021;6:e005671. https://doi.org/10.1136/bmjgh-2021-005671 .

Boerma T, Ronsmans C, Melesse DY, Barros AJD, Barros FC, Juan L, et al. Global epidemiology of use of and disparities in caesarean sections. Lancet. 2018;392:1341–8. https://doi.org/10.1016/S0140-6736(18)31928-7 .


Sandall J, Tribe RM, Avery L, Mola G, Visser GH, Homer CS, et al. Short-term and long-term effects of caesarean section on the health of women and children. Lancet. 2018;392:1349–57. https://doi.org/10.1016/S0140-6736(18)31930-5 .


Abenhaim HA, Tulandi T, Wilchesky M, Platt R, Spence AR, Czuzoj-Shulman N, et al. Effect of Cesarean Delivery on Long-term Risk of Small Bowel Obstruction. Obstet Gynecol. 2018;131:354–9. https://doi.org/10.1097/AOG.0000000000002440 .

Gurol-Urganci I, Bou-Antoun S, Lim CP, Cromwell DA, Mahmood TA, Templeton A, et al. Impact of Caesarean section on subsequent fertility: a systematic review and meta-analysis. Hum Reprod. 2013;28:1943–52. https://doi.org/10.1093/humrep/det130 .


Hesselman S, Högberg U, Råssjö E-B, Schytt E, Löfgren M, Jonsson M. Abdominal adhesions in gynaecologic surgery after caesarean section: a longitudinal population-based register study. BJOG. 2018;125:597–603. https://doi.org/10.1111/1471-0528.14708 .


Tita ATN, Landon MB, Spong CY, Lai Y, Leveno KJ, Varner MW, et al. Timing of elective repeat cesarean delivery at term and neonatal outcomes. N Engl J Med. 2009;360:111–20. https://doi.org/10.1056/NEJMoa0803267 .


Wilmink FA, Hukkelhoven CWPM, Lunshof S, Mol BWJ, van der Post JAM, Papatsonis DNM. Neonatal outcome following elective cesarean section beyond 37 weeks of gestation: a 7-year retrospective analysis of a national registry. Am J Obstet Gynecol. 2010;202(250):e1-8. https://doi.org/10.1016/j.ajog.2010.01.052 .

Keag OE, Norman JE, Stock SJ. Long-term risks and benefits associated with cesarean delivery for mother, baby, and subsequent pregnancies: Systematic review and meta-analysis. PLoS Med. 2018;15:e1002494. https://doi.org/10.1371/journal.pmed.1002494 .

Ye J, Betrán AP, Guerrero Vela M, Souza JP, Zhang J. Searching for the optimal rate of medically necessary cesarean delivery. Birth. 2014;41:237–44. https://doi.org/10.1111/birt.12104 .

Eide KT, Morken N-H, Bærøe K. Maternal reasons for requesting planned cesarean section in Norway: a qualitative study. BMC Pregnancy Childbirth. 2019;19:102. https://doi.org/10.1186/s12884-019-2250-6 .

Long Q, Kingdon C, Yang F, Renecle MD, Jahanfar S, Bohren MA, et al. Prevalence of and reasons for women’s, family members’, and health professionals’ preferences for cesarean section in China: A mixed-methods systematic review. PLoS Med. 2018;15. https://doi.org/10.1371/journal.pmed.1002672 .

McAra-Couper J, Jones M, Smythe L. Caesarean-section, my body, my choice: The construction of ‘informed choice’ in relation to intervention in childbirth. Fem Psychol. 2012;22:81–97. https://doi.org/10.1177/0959353511424369 .

Panda S, Begley C, Daly D. Clinicians’ views of factors influencing decision-making for caesarean section: A systematic review and metasynthesis of qualitative, quantitative and mixed methods studies. PLoS One 2018;13. https://doi.org/10.1371/journal.pone.0200941 .

Takegata M, Smith C, Nguyen HAT, Thi HH, Thi Minh TN, Day LT, et al. Reasons for increased Caesarean section rate in Vietnam: a qualitative study among Vietnamese mothers and health care professionals. Healthcare. 2020;8:41. https://doi.org/10.3390/healthcare8010041 .

Chen I, Opiyo N, Tavender E, Mortazhejri S, Rader T, Petkovic J, et al. Non-clinical interventions for reducing unnecessary caesarean section. Cochrane Database Syst Rev. 2018. https://doi.org/10.1002/14651858.CD005528.pub3 .

Catling-Paull C, Johnston R, Ryan C, Foureur MJ, Homer CSE. Non-clinical interventions that increase the uptake and success of vaginal birth after caesarean section: a systematic review. J Adv Nurs. 2011;67:1662–76. https://doi.org/10.1111/j.1365-2648.2011.05662.x .

Kingdon C, Downe S, Betran AP. Non-clinical interventions to reduce unnecessary caesarean section targeted at organisations, facilities and systems: Systematic review of qualitative studies. PLOS ONE. 2018;13:e0203274. https://doi.org/10.1371/journal.pone.0203274 .

Kingdon C, Downe S, Betran AP. Interventions targeted at health professionals to reduce unnecessary caesarean sections: a qualitative evidence synthesis. BMJ Open. 2018;8:e025073. https://doi.org/10.1136/bmjopen-2018-025073 .

Kingdon C, Downe S, Betran AP. Women’s and communities’ views of targeted educational interventions to reduce unnecessary caesarean section: a qualitative evidence synthesis. Reprod Health. 2018;15:130. https://doi.org/10.1186/s12978-018-0570-z .

Opiyo N, Young C, Requejo JH, Erdman J, Bales S, Betrán AP. Reducing unnecessary caesarean sections: scoping review of financial and regulatory interventions. Reprod Health. 2020;17:133. https://doi.org/10.1186/s12978-020-00983-y .

Harris K, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self-management interventions for asthma in children and adolescents: a mixed methods systematic review. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD011651.pub2 .

World Health Organization. Robson Classifcation: Implementation Manual. 2017. Available from: https://www.who.int/publications/i/item/9789241513197 . Cited 20 Sept 2023.

Zahroh RI, Kneale D, Sutcliffe K, Vazquez Corona M, Opiyo N, Homer CSE, et al. Interventions targeting healthcare providers to optimise use of caesarean section: a qualitative comparative analysis to identify important intervention features. BMC Health Serv Res. 2022;22:1526. https://doi.org/10.1186/s12913-022-08783-9 .

World Health Organization. WHO recommendations: non-clinical interventions to reduce unnecessary caesarean sections. 2018. Available from: https://www.who.int/publications/i/item/9789241550338 . Cited 20 Sept 2023.

Hanckel B, Petticrew M, Thomas J, Green J. The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions. BMC Public Health. 2021;21:877. https://doi.org/10.1186/s12889-021-10926-2 .

Melendez-Torres GJ, Sutcliffe K, Burchett HED, Rees R, Richardson M, Thomas J. Weight management programmes: Re-analysis of a systematic review to identify pathways to effectiveness. Health Expect. 2018;21:574–84. https://doi.org/10.1111/hex.12667 .

Chatterley C, Javernick-Will A, Linden KG, Alam K, Bottinelli L, Venkatesh M. A qualitative comparative analysis of well-managed school sanitation in Bangladesh. BMC Public Health. 2014;14:6. https://doi.org/10.1186/1471-2458-14-6 .

Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3:67. https://doi.org/10.1186/2046-4053-3-67 .

Dușa A. QCA with R: A Comprehensive Resource. 2021. Available from: https://bookdown.org/dusadrian/QCAbook/ . Cited 20 Sept 2023.

Kneale D, Sutcliffe K, Thomas J. Critical Appraisal of Reviews Using Qualitative Comparative Analyses (CARU-QCA): a tool to critically appraise systematic reviews that use qualitative comparative analysis. In: Abstracts of the 26th Cochrane Colloquium, Santiago, Chile. Cochrane Database of Systematic Reviews 2020;(1 Suppl 1). https://doi.org/10.1002/14651858.CD201901 .

Sutcliffe K, Thomas J, Stokes G, Hinds K, Bangpan M. Intervention Component Analysis (ICA): a pragmatic approach for identifying the critical features of complex interventions. Syst Rev. 2015;4:140. https://doi.org/10.1186/s13643-015-0126-z .

Melendez-Torres GJ, Sutcliffe K, Burchett HED, Rees R, Thomas J. Developing and testing intervention theory by incorporating a views synthesis into a qualitative comparative analysis of intervention effectiveness. Res Synth Methods. 2019;10:389–97. https://doi.org/10.1002/jrsm.1341 .

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45. https://doi.org/10.1186/1471-2288-8-45 .

Rouhe H, Salmela-Aro K, Toivanen R, Tokola M, Halmesmäki E, Saisto T. Obstetric outcome after intervention for severe fear of childbirth in nulliparous women – randomised trial. BJOG. 2013;120:75–84. https://doi.org/10.1111/1471-0528.12011 .

Fraser W, Maunsell E, Hodnett E, Moutquin JM. Randomized controlled trial of a prenatal vaginal birth after cesarean section education and support program Childbirth alternatives Post-Cesarean study group. Am J Obstet Gynecol. 1997;176:419–25. https://doi.org/10.1016/s0002-9378(97)70509-x .

Masoumi SZ, Kazemi F, Oshvandi K, Jalali M, Esmaeili-Vardanjani A, Rafiei H. Effect of training preparation for childbirth on fear of normal vaginal delivery and choosing the type of delivery among pregnant women in Hamadan, Iran: a randomized controlled trial. J Family Reprod Health. 2016;10:115–21.


Navaee M, Abedian Z. Effect of role play education on primiparous women’s fear of natural delivery and their decision on the mode of delivery. Iran J Nurs Midwifery Res. 2015;20:40–6.

Fenwick J, Toohill J, Gamble J, Creedy DK, Buist A, Turkstra E, et al. Effects of a midwife psycho-education intervention to reduce childbirth fear on women’s birth outcomes and postpartum psychological wellbeing. BMC Pregnancy Childbirth. 2015;15:284. https://doi.org/10.1186/s12884-015-0721-y .

Saisto T, Salmela-Aro K, Nurmi J-E, Könönen T, Halmesmäki E. A randomized controlled trial of intervention in fear of childbirth. Obstet Gynecol. 2001;98:820–6. https://doi.org/10.1016/S0029-7844(01)01552-6 .

Montgomery AA, Emmett CL, Fahey T, Jones C, Ricketts I, Patel RR, et al. Two decision aids for mode of delivery among women with previous Caesarean section: randomised controlled trial. BMJ. 2007;334:1305–9.

Xia X, Zhou Z, Shen S, Lu J, Zhang L, Huang P, et al. Effect of a two-stage intervention package on the cesarean section rate in Guangzhou, China: A before-and-after study. PLOS Medicine. 2019;16:e1002846. https://doi.org/10.1371/journal.pmed.1002846 .

Yu Y, Zhang X, Sun C, Zhou H, Zhang Q, Chen C. Reducing the rate of cesarean delivery on maternal request through institutional and policy interventions in Wenzhou, China. PLoS ONE. 2017;12:1–12. https://doi.org/10.1371/journal.pone.0186304 .

Borem P, de Cássia SR, Torres J, Delgado P, Petenate AJ, Peres D, et al. A quality improvement initiative to increase the frequency of Vaginal delivery in Brazilian hospitals. Obstet Gynecol. 2020;135:415–25. https://doi.org/10.1097/AOG.0000000000003619 .

Ma R, Lao Terence T, Sun Y, Xiao H, Tian Y, Li B, et al. Practice audits to reduce caesareans in a tertiary referral hospital in south-western China. Bull World Health Organ. 2012;90:488–94. https://doi.org/10.2471/BLT.11.093369 .

Clarke M, Devane D, Gross MM, Morano S, Lundgren I, Sinclair M, et al. OptiBIRTH: a cluster randomised trial of a complex intervention to increase vaginal birth after caesarean section. BMC Pregnancy Childbirth. 2020;20:143. https://doi.org/10.1186/s12884-020-2829-y .

Zhang L, Zhang L, Li M, Xi J, Zhang X, Meng Z, et al. A cluster-randomized field trial to reduce cesarean section rates with a multifaceted intervention in Shanghai, China. BMC Med. 2020;18:27. https://doi.org/10.1186/s12916-020-1491-6 .

Fenwick J, Gamble J, Creedy DK, Buist A, Turkstra E, Sneddon A, et al. Study protocol for reducing childbirth fear: a midwife-led psycho-education intervention. BMC Pregnancy Childbirth. 2013;13:190. https://doi.org/10.1186/1471-2393-13-190 .

Toohill J, Fenwick J, Gamble J, Creedy DK, Buist A, Turkstra E, et al. A randomized controlled trial of a psycho-education intervention by midwives in reducing childbirth fear in pregnant women. Birth. 2014;41:384–94. https://doi.org/10.1111/birt.12136 .

Toohill J, Callander E, Gamble J, Creedy D, Fenwick J. A cost effectiveness analysis of midwife psycho-education for fearful pregnant women – a health system perspective for the antenatal period. BMC Pregnancy Childbirth. 2017;17:217. https://doi.org/10.1186/s12884-017-1404-7 .

Turkstra E, Mihala G, Scuffham PA, Creedy DK, Gamble J, Toohill J, et al. An economic evaluation alongside a randomised controlled trial on psycho-education counselling intervention offered by midwives to address women’s fear of childbirth in Australia. Sex Reprod Healthc. 2017;11:1–6. https://doi.org/10.1016/j.srhc.2016.08.003 .

Emmett CL, Shaw ARG, Montgomery AA, Murphy DJ, DiAMOND study group. Women’s experience of decision making about mode of delivery after a previous caesarean section: the role of health professionals and information about health risks. BJOG 2006;113:1438–45. https://doi.org/10.1111/j.1471-0528.2006.01112.x .

Emmett CL, Murphy DJ, Patel RR, Fahey T, Jones C, Ricketts IW, et al. Decision-making about mode of delivery after previous caesarean section: development and piloting of two computer-based decision aids. Health Expect. 2007;10:161–72. https://doi.org/10.1111/j.1369-7625.2006.00429.x .

Hollinghurst S, Emmett C, Peters TJ, Watson H, Fahey T, Murphy DJ, et al. Economic evaluation of the DiAMOND randomized trial: cost and outcomes of 2 decision aids for mode of delivery among women with a previous cesarean section. Med Decis Making. 2010;30:453–63. https://doi.org/10.1177/0272989X09353195 .

Frost J, Shaw A, Montgomery A, Murphy D. Women’s views on the use of decision aids for decision making about the method of delivery following a previous caesarean section: Qualitative interview study. BJOG : An Int J Obstetrics Gynaecology. 2009;116:896–905. https://doi.org/10.1111/j.1471-0528.2009.02120.x .

Rees KM, Shaw ARG, Bennert K, Emmett CL, Montgomery AA. Healthcare professionals’ views on two computer-based decision aids for women choosing mode of delivery after previous caesarean section: a qualitative study. BJOG. 2009;116:906–14. https://doi.org/10.1111/j.1471-0528.2009.02121.x .

Emmett CL, Montgomery AA, Murphy DJ. Preferences for mode of delivery after previous caesarean section: what do women want, what do they get and how do they value outcomes? Health Expect. 2011;14:397–404. https://doi.org/10.1111/j.1369-7625.2010.00635.x .

Bastani F, Hidarnia A, Montgomery KS, Aguilar-Vafaei ME, Kazemnejad A. Does relaxation education in anxious primigravid Iranian women influence adverse pregnancy outcomes?: a randomized controlled trial. J Perinat Neonatal Nurs. 2006;20:138–46. https://doi.org/10.1097/00005237-200604000-00007 .

Feinberg ME, Kan ML. Establishing Family Foundations: Intervention Effects on Coparenting, Parent/Infant Well-Being, and Parent-Child Relations. J Fam Psychol. 2008;22:253–63. https://doi.org/10.1037/0893-3200.22.2.253 .

Me F, Ml K, Mc G. Enhancing coparenting, parenting, and child self-regulation: effects of family foundations 1 year after birth. Prevention Science: Official J Soc Prevention Res. 2009;10. https://doi.org/10.1007/s11121-009-0130-4 .

Rouhe H, Salmela-Aro K, Toivanen R, Tokola M, Halmesmäki E, Saisto T. Life satisfaction, general well-being and costs of treatment for severe fear of childbirth in nulliparous women by psychoeducative group or conventional care attendance. Acta Obstet Gynecol Scand. 2015;94:527–33. https://doi.org/10.1111/aogs.12594 .

Rouhe H, Salmela-Aro K, Toivanen R, Tokola M, Halmesmäki E, Ryding E-L, et al. Group psychoeducation with relaxation for severe fear of childbirth improves maternal adjustment and childbirth experience–a randomised controlled trial. J Psychosom Obstet Gynaecol. 2015;36:1–9. https://doi.org/10.3109/0167482X.2014.980722 .

Healy P, Smith V, Savage G, Clarke M, Devane D, Gross MM, et al. Process evaluation for OptiBIRTH, a randomised controlled trial of a complex intervention designed to increase rates of vaginal birth after caesarean section. Trials. 2018;19:9. https://doi.org/10.1186/s13063-017-2401-x .

Clarke M, Savage G, Smith V, Daly D, Devane D, Gross MM, et al. Improving the organisation of maternal health service delivery and optimising childbirth by increasing vaginal birth after caesarean section through enhanced women-centred care (OptiBIRTH trial): study protocol for a randomised controlled trial (ISRCTN10612254). Trials. 2015;16:542. https://doi.org/10.1186/s13063-015-1061-y .

Lundgren I, Healy P, Carroll M, Begley C, Matterne A, Gross MM, et al. Clinicians’ views of factors of importance for improving the rate of VBAC (vaginal birth after caesarean section): a study from countries with low VBAC rates. BMC Pregnancy Childbirth. 2016;16:350. https://doi.org/10.1186/s12884-016-1144-0 .

Sharifirad G, Rezaeian M, Soltani R, Javaheri S, Mazaheri MA. A survey on the effects of husbands’ education of pregnant women on knowledge, attitude, and reducing elective cesarean section. J Educ Health Promotion. 2013;2:50. https://doi.org/10.4103/2277-9531.119036 .

Valiani M, Haghighatdana Z, Ehsanpour S. Comparison of childbirth training workshop effects on knowledge, attitude, and delivery method between mothers and couples groups referring to Isfahan health centers in Iran. Iran J Nurs Midwifery Res. 2014;19:653–8.

Bastani F, Hidarnia A, Kazemnejad A, Vafaei M, Kashanian M. A randomized controlled trial of the effects of applied relaxation training on reducing anxiety and perceived stress in pregnant women. J Midwifery Womens Health. 2005;50:e36-40. https://doi.org/10.1016/j.jmwh.2004.11.008 .

Feinberg ME, Roettger ME, Jones DE, Paul IM, Kan ML. Effects of a psychosocial couple-based prevention program on adverse birth outcomes. Matern Child Health J. 2015;19:102–11. https://doi.org/10.1007/s10995-014-1500-5 .

Evans K, Spiby H, Morrell CJ. Developing a complex intervention to support pregnant women with mild to moderate anxiety: application of the medical research council framework. BMC Pregnancy Childbirth. 2020;20:777. https://doi.org/10.1186/s12884-020-03469-8 .

Rising SS. Centering pregnancy. An interdisciplinary model of empowerment. J Nurse Midwifery. 1998;43:46–54. https://doi.org/10.1016/s0091-2182(97)00117-1 .

Breustedt S, Puckering C. A qualitative evaluation of women’s experiences of the Mellow Bumps antenatal intervention. British J Midwife. 2013;21:187–94. https://doi.org/10.12968/bjom.2013.21.3.187 .

Evans K, Spiby H, Morrell JC. Non-pharmacological interventions to reduce the symptoms of mild to moderate anxiety in pregnant women a systematic review and narrative synthesis of women’s views on the acceptability of and satisfaction with interventions. Arch Womens Ment Health. 2020;23:11–28. https://doi.org/10.1007/s00737-018-0936-9 .

Hoddinott P, Chalmers M, Pill R. One-to-one or group-based peer support for breastfeeding? Women’s perceptions of a breastfeeding peer coaching intervention. Birth. 2006;33:139–46. https://doi.org/10.1111/j.0730-7659.2006.00092.x .

Heaney CA, Israel BA. Social networks and social support. In Glanz K, Rimer BK, Viswanath K (Eds.), Health behavior and health education: Theory, research, and practice. Jossey-Bass; 2008. pp. 189–210. https://psycnet.apa.org/record/2008-17146-009 .

World Health Organization. WHO recommendations on antenatal care for a positive pregnancy experience. 2016. Available from: https://www.who.int/publications/i/item/9789241549912 . Cited 20 Sept 2023.

World Health Organization. WHO recommendation on group antenatal care. WHO - RHL. 2021. Available from: https://srhr.org/rhl/article/who-recommendation-on-group-antenatal-care . Cited 20 Sept 2023.

Dumont A, Betrán AP, Kabore C, de Loenzien M, Lumbiganon P, Bohren MA, et al. Implementation and evaluation of nonclinical interventions for appropriate use of cesarean section in low- and middle-income countries: protocol for a multisite hybrid effectiveness-implementation type III trial. Implementation Science 2020. https://doi.org/10.21203/rs.3.rs-35564/v2 .

Tokhi M, Comrie-Thomson L, Davis J, Portela A, Chersich M, Luchters S. Involving men to improve maternal and newborn health: A systematic review of the effectiveness of interventions. PLOS ONE. 2018;13:e0191620. https://doi.org/10.1371/journal.pone.0191620 .

Gibore NS, Bali TAL. Community perspectives: An exploration of potential barriers to men’s involvement in maternity care in a central Tanzanian community. PLOS ONE. 2020;15:e0232939. https://doi.org/10.1371/journal.pone.0232939 .

Galle A, Plaieser G, Steenstraeten TV, Griffin S, Osman NB, Roelens K, et al. Systematic review of the concept ‘male involvement in maternal health’ by natural language processing and descriptive analysis. BMJ Global Health. 2021;6:e004909. https://doi.org/10.1136/bmjgh-2020-004909 .

Ladur AN, van Teijlingen E, Hundley V. Male involvement in promotion of safe motherhood in low- and middle-income countries: a scoping review. Midwifery. 2021;103:103089. https://doi.org/10.1016/j.midw.2021.103089 .

Comrie-Thomson L, Tokhi M, Ampt F, Portela A, Chersich M, Khanna R, et al. Challenging gender inequity through male involvement in maternal and newborn health: critical assessment of an emerging evidence base. Cult Health Sex. 2015;17:177–89. https://doi.org/10.1080/13691058.2015.1053412 .

Article   PubMed Central   Google Scholar  

Comrie-Thomson L, Gopal P, Eddy K, Baguiya A, Gerlach N, Sauvé C, et al. How do women, men, and health providers perceive interventions to influence men’s engagement in maternal and newborn health? A qualitative evidence synthesis. Soc Scie Medic. 2021;291:114475. https://doi.org/10.1016/j.socscimed.2021.114475 .

Doraiswamy S, Billah SM, Karim F, Siraj MS, Buckingham A, Kingdon C. Physician–patient communication in decision-making about Caesarean sections in eight district hospitals in Bangladesh: a mixed-method study. Reprod Health. 2021;18:34. https://doi.org/10.1186/s12978-021-01098-8 .

Dodd JM, Crowther CA, Huertas E, Guise J-M, Horey D. Planned elective repeat caesarean section versus planned vaginal birth for women with a previous caesarean birth. Cochrane Database Syst Rev. 2013. https://doi.org/10.1002/14651858.CD004224.pub3 .

Royal College of Obstetricians and Gynaecologists. Birth After Previous Caesarean Birth:Green-top Guideline No. 45. 2015. Available from: https://www.rcog.org.uk/globalassets/documents/guidelines/gtg_45.pdf . Cited 20 Sept 2023.

Royal Australian and New Zealand College of Obstetricians and Gynaecologists. Birth after previous caesarean section. 2019. Available from: https://ranzcog.edu.au/RANZCOG_SITE/media/RANZCOG-MEDIA/Women%27s%20Health/Statement%20and%20guidelines/Clinical-Obstetrics/Birth-after-previous-Caesarean-Section-(C-Obs-38)Review-March-2019.pdf?ext=.pdf . Cited 20 Sept 2023.

Davis D, Homer CS, Clack D, Turkmani S, Foureur M. Choosing vaginal birth after caesarean section: Motivating factors. Midwifery. 2020;88:102766. https://doi.org/10.1016/j.midw.2020.102766 .


Acknowledgements

We extend our thanks to Jim Berryman (Brownless Medical Library, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne) for his help in refining the search strategy for sibling studies.

This research was made possible with the support of UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), a co-sponsored programme executed by the World Health Organization (WHO). RIZ is supported by Melbourne Research Scholarship and Human Rights Scholarship from The University of Melbourne. CSEH is supported by a National Health and Medical Research Council (NHMRC) Principal Research Fellowship. MAB’s time is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200100264) and a Dame Kate Campbell Fellowship (University of Melbourne Faculty of Medicine, Dentistry, and Health Sciences). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The contents of this publication are the responsibility of the authors and do not reflect the views of the UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization.

Author information

Authors and Affiliations

Gender and Women’s Health Unit, Nossal Institute for Global Health, School of Population and Global Health, University of Melbourne, Melbourne, VIC, Australia

Rana Islamiah Zahroh, Martha Vazquez Corona & Meghan A. Bohren

EPPI Centre, UCL Social Research Institute, University College London, London, UK

Katy Sutcliffe & Dylan Kneale

Department of Sexual and Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland

Ana Pilar Betrán & Newton Opiyo

Maternal, Child, and Adolescent Health Programme, Burnet Institute, Melbourne, VIC, Australia

Caroline S. E. Homer


Contributions

- Conceptualisation and study design: MAB, APB, RIZ

- Funding acquisition: MAB, APB

- Data curation: RIZ, MAB, MVC

- Investigation, methodology and formal analysis: all authors

- Visualisation: RIZ, MAB

- Writing – original draft preparation: RIZ, MAB

- Writing – review and editing: all authors

Corresponding author

Correspondence to Rana Islamiah Zahroh.

Ethics declarations

Ethics approval and consent to participate

This study utilised published and openly available data, and thus ethics approval is not required.

Consent for publication

No direct individual contact was involved in this study; therefore, consent for publication is not needed.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Logic model in optimizing CS use.

Additional file 2.

Risk of bias assessments.

Additional file 3.

Coding framework and calibration rules.

Additional file 4.

Coding framework as applied to each intervention (data table).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Zahroh, R.I., Sutcliffe, K., Kneale, D. et al. Educational interventions targeting pregnant women to optimise the use of caesarean section: What are the essential elements? A qualitative comparative analysis. BMC Public Health 23, 1851 (2023). https://doi.org/10.1186/s12889-023-16718-0


Received : 07 March 2022

Accepted : 07 September 2023

Published : 23 September 2023

DOI : https://doi.org/10.1186/s12889-023-16718-0


  • Maternal health
  • Complex intervention
  • Intervention implementation

BMC Public Health

ISSN: 1471-2458


Published on 12.4.2024 in Vol 26 (2024)

Application of AI in Multilevel Pain Assessment Using Facial Images: Systematic Review and Meta-Analysis

Authors of this article:


  • Jian Huo 1*, MSc;
  • Yan Yu 2*, MMS;
  • Wei Lin 3, MMS;
  • Anmin Hu 2,3,4, MMS;
  • Chaoran Wu 2, MD, PhD

1 Boston Intelligent Medical Research Center, Shenzhen United Scheme Technology Company Limited, Boston, MA, United States

2 Department of Anesthesia, Shenzhen People's Hospital, The First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Key Medical Discipline, Shenzhen, China

3 Shenzhen United Scheme Technology Company Limited, Shenzhen, China

4 The Second Clinical Medical College, Jinan University, Shenzhen, China

*these authors contributed equally

Corresponding Author:

Chaoran Wu, MD, PhD

Department of Anesthesia

Shenzhen People's Hospital, The First Affiliated Hospital of Southern University of Science and Technology

Shenzhen Key Medical Discipline

No 1017, Dongmen North Road

Shenzhen, 518020

Phone: 86 18100282848

Email: [email protected]

Background: The continuous monitoring and recording of patients’ pain status is a major problem in current research on postoperative pain management. Among the large number of original and review articles focusing on different approaches to pain assessment, many researchers have investigated how computer vision (CV) can help by capturing facial expressions. However, there is a lack of proper comparison of results between studies to identify current research gaps.

Objective: The purpose of this systematic review and meta-analysis was to investigate the diagnostic performance of artificial intelligence models for multilevel pain assessment from facial images.

Methods: The PubMed, Embase, IEEE, Web of Science, and Cochrane Library databases were searched for related publications before September 30, 2023. Studies that used facial images alone to estimate multiple pain values were included in the systematic review. A study quality assessment was conducted using the Quality Assessment of Diagnostic Accuracy Studies, 2nd edition tool. The performance of these studies was assessed by metrics including sensitivity, specificity, log diagnostic odds ratio (LDOR), and area under the curve (AUC). The intermodal variability was assessed and presented by forest plots.

Results: A total of 45 reports were included in the systematic review. The reported test accuracies ranged from 0.27-0.99, and the other metrics, including the mean standard error (MSE), mean absolute error (MAE), intraclass correlation coefficient (ICC), and Pearson correlation coefficient (PCC), ranged from 0.31-4.61, 0.24-2.8, 0.19-0.83, and 0.48-0.92, respectively. In total, 6 studies were included in the meta-analysis. Their combined sensitivity was 98% (95% CI 96%-99%), specificity was 98% (95% CI 97%-99%), LDOR was 7.99 (95% CI 6.73-9.31), and AUC was 0.99 (95% CI 0.99-1). The subgroup analysis showed that the diagnostic performance was acceptable, although imbalanced data were still emphasized as a major problem. All studies had at least one domain with a high risk of bias, and for 20% (9/45) of studies, there were no applicability concerns.

Conclusions: This review summarizes recent evidence on automatic multilevel pain estimation from facial expressions and compares test accuracies across studies in a meta-analysis. Promising performance for pain estimation from facial images was established by current CV algorithms. Weaknesses in current studies were also identified, suggesting that larger databases and metrics evaluating multiclass classification performance could improve future studies.

Trial Registration: PROSPERO CRD42023418181; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=418181

Introduction

The definition of pain was revised to “an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage” in 2020 [1]. Acute postoperative pain management is important, as pain intensity and duration are critical influencing factors for the transition of acute pain to chronic postsurgical pain [2]. To avoid the development of chronic pain, guidelines were promoted and discussed to ensure safe and adequate pain relief for patients, and clinicians were recommended to use a validated pain assessment tool to track patients’ responses [3]. However, these tools, to some extent, depend on communication between physicians and patients, and continuous data cannot be provided [4]. The continuous assessment and recording of patient pain intensity will not only reduce caregiver burden but also provide data for chronic pain research. Therefore, automatic and accurate pain measurements are necessary.

Researchers have proposed different approaches to measuring pain intensity. Physiological signals, for example, electroencephalography and electromyography, have been used to estimate pain [5-7]. However, it was reported that current pain assessment from physiological signals has difficulties isolating stress and pain with machine learning techniques, as they share conceptual and physiological similarities [8]. Recent studies have also investigated pain assessment tools for certain patient subgroups. For example, people with deafness or an intellectual disability may not be able to communicate well with nurses, and an objective pain evaluation would be a better option [9,10]. Measuring pain intensity from patient behaviors, such as facial expressions, is also promising for most patients [4]. As the most comfortable and convenient method, computer vision techniques require no attachments to patients and can monitor multiple participants using 1 device [4]. However, pain intensity, which is important for pain research, is often not reported.

With the growing trend of assessing pain intensity using artificial intelligence (AI), it is necessary to summarize current publications to determine the strengths and gaps of current studies. Existing research has reviewed machine learning applications for acute postoperative pain prediction, continuous pain detection, and pain intensity estimation [10-14]. Input modalities, including facial recordings and physiological signals such as electroencephalography and electromyography, were also reviewed [5,8]. There have also been studies focusing on deep learning approaches [11]. AI was applied in children and infant pain evaluation as well [15,16]. However, no study has focused on pain intensity measurement, and no comparison of test accuracy results has been made.

Current AI applications in pain research can be categorized into 3 types: pain assessment, pain prediction and decision support, and pain self-management [14]. We consider accurate and automatic pain assessment to be the most important area and the foundation of future pain research. In this study, we performed a systematic review and meta-analysis to assess the diagnostic performance of current publications for multilevel pain evaluation.

Methods

This study was registered with PROSPERO (International Prospective Register of Systematic Reviews; CRD42023418181) and carried out strictly following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [17].

Study Eligibility

Studies that reported AI techniques for multiclass pain intensity classification were eligible. Records including nonhuman or infant participants or 2-class pain detection were excluded. Only studies using facial images of the test participants were accepted. Clinically used pain assessment tools, such as the visual analog scale (VAS) and numerical rating scale (NRS), and other pain intensity indicators were excluded as reference standards in the meta-analysis. Textbox 1 presents the eligibility criteria.

Study characteristics and inclusion criteria

  • Participants: children and adults aged 12 months or older
  • Setting: no restrictions
  • Index test: artificial intelligence models that measure pain intensity from facial images
  • Reference standard: no restrictions for systematic review; Prkachin and Solomon pain intensity score for meta-analysis
  • Study design: no need to specify

Study characteristics and exclusion criteria

  • Participants: infants aged 12 months or younger and animal subjects
  • Setting: no need to specify
  • Index test: studies that use other information such as physiological signals
  • Reference standard: other pain evaluation tools (e.g., NRS, VAS) were excluded from the meta-analysis
  • Study design: reviews

Report characteristics and inclusion criteria

  • Year: published between January 1, 2012, and September 30, 2023
  • Language: English only
  • Publication status: published
  • Test accuracy metrics: no restrictions for systematic reviews; studies that reported contingency tables were included for meta-analysis

Report characteristics and exclusion criteria

  • Year: no need to specify
  • Language: no need to specify
  • Publication status: preprints not accepted
  • Test accuracy metrics: studies that reported insufficient metrics were excluded from meta-analysis

Search Strategy

In this systematic review, databases including PubMed, Embase, IEEE, Web of Science, and the Cochrane Library were searched until December 2022, and no restrictions were applied. Keywords were “artificial intelligence” AND “pain recognition.” Multimedia Appendix 1 shows the detailed search strategy.

Data Extraction

A total of 2 reviewers independently screened titles and abstracts and selected eligible records, and disagreements were resolved through discussion with a third collaborator. A consensus data extraction sheet was prespecified and used to summarize study characteristics independently. Table S5 in Multimedia Appendix 1 shows the detailed items and explanations for data extraction. Diagnostic accuracy data were extracted into contingency tables comprising true positives, false positives, false negatives, and true negatives. These data were used to calculate the pooled diagnostic performance of the different models. Some studies included multiple models, and these models were considered independent of one another.
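From each extracted 2×2 table, the diagnostic metrics reported later in the review follow directly. A minimal sketch of that computation; the counts below are hypothetical and not taken from any included study:

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Compute sensitivity, specificity, and the log diagnostic
    odds ratio (LDOR) from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    ldor = math.log((tp * tn) / (fp * fn))  # log of the diagnostic odds ratio
    return sensitivity, specificity, ldor

# Hypothetical counts for one model at one classification threshold
sens, spec, ldor = diagnostic_metrics(tp=95, fp=5, fn=5, tn=95)
```

Each model-threshold pair yields one such table, which is why a single study can contribute several tables to the pooled analysis.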

Study Quality Assessment

All included studies were independently assessed by 2 reviewers using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool [18]. QUADAS-2 assesses bias risk across 4 domains: patient selection, index test, reference standard, and flow and timing. The first 3 domains are also assessed for applicability concerns. In the systematic review, a specific extension of QUADAS-2, namely QUADAS-AI, was used to specify the signaling questions [19].

Meta-Analysis

Meta-analyses were conducted between different AI models. Models with different algorithms or training data were considered different. To evaluate the performance differences between models, the contingency tables during model validation were extracted. Studies that did not report enough diagnostic accuracy data were excluded from meta-analysis.

Hierarchical summary receiver operating characteristic (SROC) curves were fitted to evaluate the diagnostic performance of AI models. These curves were plotted with 95% CIs and prediction regions around averaged sensitivity, specificity, and area under the curve estimates. Heterogeneity was assessed visually by forest plots. A funnel plot was constructed to evaluate the risk of bias.

Subgroup meta-analyses were conducted to evaluate the performance differences at both the model level and task level, and subgroups were created based on different tasks and the proportion of positive and negative samples.

All statistical analyses and plots were produced using R (version 4.2.2; R Core Team) and the R package meta4diag (version 2.1.1; Guo J and Riebler A) [20].
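For intuition only: the review pooled studies with the hierarchical bivariate model implemented in meta4diag, but the basic idea of weighting studies by the precision of their estimates can be sketched with a much simpler fixed-effect pooling of logit-transformed per-study sensitivities. This is not the model used in the review, and the sensitivities and sample sizes below are hypothetical:

```python
import math

def pool_logit(proportions, ns):
    """Inverse-variance fixed-effect pooling of logit-transformed
    proportions (e.g., per-study sensitivities). Illustrative only;
    the review itself fitted a hierarchical bivariate model."""
    weights, logits = [], []
    for p, n in zip(proportions, ns):
        k = p * n                      # number of correctly classified cases
        logit = math.log(k / (n - k))  # logit of the proportion
        var = 1 / k + 1 / (n - k)      # approximate variance of the logit
        logits.append(logit)
        weights.append(1 / var)
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled_logit))  # back-transform to a proportion

# Hypothetical per-study sensitivities and sample sizes
pooled = pool_logit([0.96, 0.98, 0.99], [100, 200, 100])
```

Larger, more precise studies pull the pooled estimate toward their own values; the hierarchical model additionally accounts for between-study variability and the correlation between sensitivity and specificity.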

Results

Study Selection and Included Study Characteristics

A flow diagram representing the study selection process is shown in Figure 1. After removing 1039 duplicates, the titles and abstracts of a total of 5653 papers were screened, and the percentage agreement of title or abstract screening was 97%. After screening, 51 full-text reports were assessed for eligibility, among which 45 reports were included in the systematic review [21-65]. The percentage agreement of the full-text review was 87%. In 40 of the included studies, contingency tables could not be made. Meta-analyses were conducted based on 8 AI models extracted from 6 studies. Individual study characteristics included in the systematic review are provided in Tables 1 and 2. The facial feature extraction methods can be categorized into 2 classes: geometrical features (GFs) and deep features (DFs). One typical method of extracting GFs is to calculate distances between facial landmarks. DFs are usually extracted by convolution operations. A total of 20 studies included temporal information, but most of them (18) extracted temporal information through the 3D convolution of video sequences. Feature transformation was also commonly applied to reduce training time or to fuse features extracted by different methods before inputting them into the classifier. For classifiers, support vector machines (SVMs) and convolutional neural networks (CNNs) were the most used. Table 1 presents the model designs of the included studies.
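To make the GF idea concrete, a toy sketch: pairwise Euclidean distances between a few 2D landmark coordinates form a simple geometric feature vector. The coordinates here are hypothetical; real systems compute distances over detected landmark sets of several dozen points.

```python
import math
from itertools import combinations

def geometric_features(landmarks):
    """Return the pairwise Euclidean distances between 2D facial
    landmarks, a simple form of geometric feature (GF) vector."""
    return [math.dist(a, b) for a, b in combinations(landmarks, 2)]

# Hypothetical (x, y) positions: two eye corners and a mouth corner
features = geometric_features([(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)])
```

Such distance vectors can then be fed to a classifier such as an SVM, whereas DFs skip hand-crafted geometry and learn representations directly from pixels.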


a No temporal features are shown by – symbol, time information extracted from 2 images at different time by +, and deep temporal features extracted through the convolution of video sequences by ++.

b SVM: support vector machine.

c GF: geometric feature.

d GMM: Gaussian mixture model.

e TPS: thin plate spline.

f DML: distance metric learning.

g MDML: multiview distance metric learning.

h AAM: active appearance model.

i RVR: relevance vector regressor.

j PSPI: Prkachin and Solomon pain intensity.

k I-FES: individual facial expressiveness score.

l LSTM: long short-term memory.

m HCRF: hidden conditional random field.

n GLMM: generalized linear mixed model.

o VLAD: vector of locally aggregated descriptor.

p SVR: support vector regression.

q MDS: multidimensional scaling.

r ELM: extreme learning machine.

s Labeled to distinguish different architectures of ensembled deep learning models.

t DCNN: deep convolutional neural network.

u GSM: Gaussian scale mixture.

v DOML: distance ordering metric learning.

w LIAN: locality and identity aware network.

x BiLSTM: bidirectional long short-term memory.

a UNBC: University of Northern British Columbia-McMaster shoulder pain expression archive database.

b LOSO: leave one subject out cross-validation.

c ICC: intraclass correlation coefficient.

d CT: contingency table.

e AUC: area under the curve.

f MSE: mean standard error.

g PCC: Pearson correlation coefficient.

h RMSE: root mean standard error.

i MAE: mean absolute error.

j ICC: intraclass coefficient.

k CCC: concordance correlation coefficient.

l Reported both external and internal validation results and summarized as intervals.

Table 2 summarizes the characteristics of model training and validation. Most studies used publicly available databases, for example, the University of Northern British Columbia-McMaster shoulder pain expression archive database [57]. Table S4 in Multimedia Appendix 1 summarizes the public databases. A total of 7 studies used self-prepared databases. Frames from video sequences were the most used test objects, as 37 studies output frame-level pain intensity, while few measured pain intensity from whole video sequences or photos. It was common for a study to redefine pain levels to have fewer classes than the ground-truth labels. For model validation, cross-validation and leave-one-subject-out validation were commonly used. Only 3 studies performed external validation. For reporting test accuracies, different evaluation metrics were used, including sensitivity, specificity, mean absolute error (MAE), mean standard error (MSE), Pearson correlation coefficient (PCC), and intraclass correlation coefficient (ICC).
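As an illustration of how two of these frame-level metrics compare predicted scores against ground-truth labels, a minimal sketch; the label and prediction sequences below are hypothetical:

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error between true and predicted pain scores."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def pcc(y_true, y_pred):
    """Pearson correlation coefficient between two score sequences."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)

# Hypothetical frame-level ground-truth labels vs. model predictions
truth = [0, 1, 2, 4, 6]
pred = [0, 1, 3, 4, 5]
```

MAE penalizes the size of each per-frame error, whereas PCC measures how well the predictions track the trend of the labels, which is why both are often reported together.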

Methodological Quality of Included Studies

Table S2 in Multimedia Appendix 1 presents the study quality summary, as assessed by QUADAS-2. All studies carried a risk of bias in patient selection, caused by 2 issues. First, the training data are highly imbalanced, and any method that adjusts the data distribution may introduce bias. Second, the QUADAS-AI correspondence letter [ 19 ] specifies that preprocessing that changes the image size or resolution may introduce bias. However, the applicability concern is low, as the images properly represent the feeling of pain. Studies that used k-fold or leave-one-out cross-validation were considered to have a low risk of bias. Although the Prkachin and Solomon pain intensity (PSPI) score was used by most of the studies, its ability to represent individual pain levels has not been clinically validated; as such, the risk of bias and applicability concerns were considered high when the PSPI score was used as the index test. As an advantage of computer vision techniques, the time interval between the index tests was short and was assessed as having a low risk of bias. Risk proportions are shown in Figure 2 . Of all 315 entries, 39% (124/315) were assessed as high risk. In total, 5 studies had the lowest risk of bias, with 6 domains assessed as low risk [ 26 , 27 , 31 , 32 , 59 ].


Pooled Performance of Included Models

In the 6 studies included in the meta-analysis, there were 8 different models. The characteristics of these models are summarized in Table S1 in Multimedia Appendix 2 [ 23 , 24 , 26 , 32 , 41 , 57 ]. Classification of PSPI scores greater than 0, 2, 3, 6, and 9 was selected and treated as separate tasks to create contingency tables; 27 contingency tables were extracted from the 8 models. The test performance is shown in Figure 3 as hierarchical summary receiver operating characteristic (SROC) curves. The pooled sensitivity was 98% (95% CI 96%-99%), the specificity was 98% (95% CI 97%-99%), the log diagnostic odds ratio (LDOR) was 7.99 (95% CI 6.73-9.31), and the AUC was 0.99 (95% CI 0.99-1).
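Sensitivity, specificity, and the LDOR are each derived from a 2×2 contingency table. A minimal sketch, using a hypothetical table (the cell counts below are illustrative, not extracted from any included study):

```python
import math

def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and log diagnostic odds ratio (LDOR)
    from one 2x2 contingency table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # 0.5 continuity correction guards against zero cells,
    # a common convention in diagnostic meta-analysis
    ldor = math.log(((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5)))
    return sensitivity, specificity, ldor

# Hypothetical task: frames with PSPI > 0 as the positive class
sens, spec, ldor = diagnostic_stats(tp=90, fp=5, fn=10, tn=95)
print(sens, spec, round(ldor, 2))  # → 0.9 0.95 5.01
```

The pooled estimates reported above combine many such per-task tables in a bivariate hierarchical model rather than averaging them directly.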


Subgroup Analysis

In this study, subgroup analysis was conducted to investigate the performance differences within models. A total of 8 models were separated and summarized as a forest plot in Multimedia Appendix 3 [ 23 , 24 , 26 , 32 , 41 , 57 ]. The pooled sensitivity, specificity, and LDOR of each model were as follows:

  • Model 1: sensitivity 95% (95% CI 86%-99%), specificity 99% (95% CI 98%-100%), LDOR 8.38 (95% CI 6.09-11.19)

  • Model 2: sensitivity 94% (95% CI 84%-99%), specificity 95% (95% CI 88%-99%), LDOR 6.23 (95% CI 3.52-9.04)

  • Model 3: sensitivity 100% (95% CI 99%-100%), specificity 100% (95% CI 99%-100%), LDOR 11.55 (95% CI 8.82-14.43)

  • Model 4: sensitivity 83% (95% CI 43%-99%), specificity 94% (95% CI 79%-99%), LDOR 5.14 (95% CI 0.93-9.31)

  • Model 5: sensitivity 92% (95% CI 68%-99%), specificity 94% (95% CI 78%-99%), LDOR 6.12 (95% CI 1.82-10.16)

  • Model 6: sensitivity 94% (95% CI 74%-100%), specificity 94% (95% CI 78%-99%), LDOR 6.59 (95% CI 2.21-11.13)

  • Model 7: sensitivity 98% (95% CI 90%-100%), specificity 97% (95% CI 87%-100%), LDOR 8.31 (95% CI 4.3-12.29)

  • Model 8: sensitivity 98% (95% CI 93%-100%), specificity 97% (95% CI 88%-100%), LDOR 8.65 (95% CI 4.84-12.67)

Heterogeneity Analysis

The meta-analysis results indicated that AI models are applicable for estimating pain intensity from facial images. However, extreme heterogeneity existed within the models, except for models 3 and 5, which were proposed by Rathee and Ganotra [ 24 ] and Semwal and Londhe [ 32 ]. A funnel plot is presented in Figure 4 ; its asymmetry suggests a high risk of small-study bias.


Pain management has long been a critical problem in clinical practice, and the use of AI may be a solution. For acute pain management, automatic measurement of pain can reduce the burden on caregivers and provide timely warnings. For chronic pain management, as specified by Glare et al [ 2 ], further research is needed, and measurements of pain presence, intensity, and quality remain among the issues to be solved in chronic pain studies. Computer vision could improve pain monitoring through real-time detection for clinical use and data recording for prospective pain studies. To our knowledge, this is the first meta-analysis dedicated to AI performance in multilevel pain classification.

In this study, one model’s performance at specific pain levels was described by stacking multiple classes into one to make each task a binary classification problem. After careful selection in both the medical and engineering databases, we observed promising results of AI in evaluating multilevel pain intensity through facial images, with high sensitivity (98%), specificity (98%), LDOR (7.99), and AUC (0.99). It is reasonable to believe that AI can accurately evaluate pain intensity from facial images. Moreover, the study quality and risk of bias were evaluated using an adapted QUADAS-2 assessment tool, which is a strength of this study.

To investigate the source of heterogeneity, we assumed that a well-designed model should show similar effect sizes across different pain levels, and a subgroup meta-analysis was conducted. The funnel and forest plots exhibited extreme heterogeneity. Each model's performance at specific pain levels was described and summarized in a forest plot. Within-model heterogeneity was observed in Multimedia Appendix 3 [ 23 , 24 , 26 , 32 , 41 , 57 ] for all but 2 models. Models 3 and 5 differed in many aspects, including their algorithms and validation methods, but both were trained on a relatively small data set in which the ratio of positive to negative classes was close to 1. Training with imbalanced data is a critical problem in computer vision studies [ 66 ]; for example, in the University of Northern British Columbia-McMaster pain data set, fewer than 10 of the 48,398 frames have a PSPI score greater than 13. We therefore emphasize that imbalanced data sets are one major cause of heterogeneity, resulting in the poorer performance of AI algorithms.
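Random oversampling is one common way to rebalance such data before training. A minimal sketch, assuming hypothetical frame labels (8 "no pain" frames vs 2 high-pain frames):

```python
import random
from collections import Counter

def oversample(frames, labels, seed=0):
    """Random oversampling: duplicate minority-class samples until
    every class matches the majority-class count."""
    rng = random.Random(seed)
    by_class = {}
    for f, y in zip(frames, labels):
        by_class.setdefault(y, []).append(f)
    target = max(len(v) for v in by_class.values())
    out_frames, out_labels = [], []
    for y, items in by_class.items():
        resampled = items + [rng.choice(items) for _ in range(target - len(items))]
        out_frames.extend(resampled)
        out_labels.extend([y] * target)
    return out_frames, out_labels

# Hypothetical imbalanced frame labels: many "no pain" (0), few high-pain (3)
labels = [0] * 8 + [3] * 2
frames = list(range(10))
_, balanced = oversample(frames, labels)
print(Counter(balanced))  # both classes now have 8 samples
```

Note that duplicating minority frames, like any redistribution of the data, is exactly the kind of adjustment flagged earlier as a potential source of selection bias.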

We tentatively propose minimizing the effect of training with imbalanced data by stacking multiple classes into one, a method already present in studies included in the systematic review [ 26 , 32 , 42 , 57 ]. Other common remedies include resampling and data augmentation [ 66 ]. The stacking method is also used in this meta-analysis to compare the test results of different studies, and it is applicable only when classes differ solely in intensity. A disadvantage of combining classes is that a model with few classes would be insufficient for clinical practice: commonly used pain evaluation tools, such as the visual analog scale (VAS), have 10 discrete levels. We recommend that future studies use at least 10 pain levels for model training.
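The stacking method amounts to thresholding the multilevel label into binary tasks. A minimal sketch, using the same thresholds as the meta-analysis (PSPI > 0, 2, 3, 6, and 9) on hypothetical scores:

```python
def binarize(pspi_scores, threshold):
    """Stack all levels above `threshold` into the positive class and
    the rest into the negative class (PSPI > t vs PSPI <= t)."""
    return [1 if s > threshold else 0 for s in pspi_scores]

# Hypothetical 16-level PSPI labels collapsed into binary tasks
scores = [0, 1, 2, 3, 5, 7, 10, 14]
for t in (0, 2, 3, 6, 9):
    print(t, binarize(scores, t))
# e.g. t=0 → [0, 1, 1, 1, 1, 1, 1, 1]; t=9 → [0, 0, 0, 0, 0, 0, 1, 1]
```

Each threshold yields one binary classification problem, from which a 2×2 contingency table, and hence sensitivity, specificity, and LDOR, can be computed and pooled.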

This study is limited for several reasons. First, insufficient data were included because most studies reported performance metrics (eg, mean squared error and mean absolute error) that cannot be summarized into a contingency table. To create a contingency table that can be included in a meta-analysis, a study should report the number of objects in each pain class used for model validation, together with the accuracy, sensitivity, specificity, and F1-score for each pain class. Such a table cannot be created if a study reports only the MAE, PCC, and other metrics commonly used in AI development. Second, a small-study effect was observed in the funnel plot, and the heterogeneity could not be minimized. Another limitation is that the PSPI score is not clinically validated and is not the only tool that assesses pain from facial expressions. There are other clinically validated pain intensity assessment methods, such as the Faces Pain Scale-revised, the Wong-Baker Faces Pain Rating Scale, and the Oucher Scale [ 3 ]. More databases could be created based on these tools. Finally, AI-assisted pain assessment is expected to cover larger populations, including patients who cannot communicate, for example, patients with dementia or patients with masked faces. However, only 1 study considered patients with dementia, which again reflects the limited databases available [ 50 ].

AI is a promising tool for future pain research. In this systematic review and meta-analysis, we investigated computer vision (CV) approaches for measuring pain intensity from facial images. Despite some risk of bias and applicability concerns, CV models can achieve excellent test accuracy. More CV studies in pain estimation, reporting accuracy in contingency tables, and more pain databases are encouraged. Specifically, the creation of a balanced public database that includes not only healthy but also nonhealthy participants should be prioritized, ideally recorded in a clinical environment. Researchers are then recommended to report validation results as accuracy, sensitivity, specificity, or contingency tables, together with the number of objects in each pain class, to enable inclusion in a meta-analysis.

Acknowledgments

WL, AH, and CW contributed to the literature search and data extraction. JH and YY wrote the first draft of the manuscript. All authors contributed to the conception and design of the study, the risk of bias evaluation, data analysis and interpretation, and contributed to and approved the final version of the manuscript.

Data Availability

The data sets generated during and analyzed during this study are available in the Figshare repository [ 67 ].

Conflicts of Interest

None declared.

PRISMA checklist, risk of bias summary, search strategy, database summary and reported items and explanations.

Study performance summary.

Forest plot presenting pooled performance of subgroups in meta-analysis.

  • Raja SN, Carr DB, Cohen M, Finnerup NB, Flor H, Gibson S, et al. The revised International Association for the Study of Pain definition of pain: concepts, challenges, and compromises. Pain. 2020;161(9):1976-1982. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Glare P, Aubrey KR, Myles PS. Transition from acute to chronic pain after surgery. Lancet. 2019;393(10180):1537-1546. [ CrossRef ] [ Medline ]
  • Chou R, Gordon DB, de Leon-Casasola OA, Rosenberg JM, Bickler S, Brennan T, et al. Management of postoperative pain: a clinical practice guideline from the American Pain Society, the American Society of Regional Anesthesia and Pain Medicine, and the American Society of Anesthesiologists' Committee on Regional Anesthesia, Executive Committee, and Administrative Council. J Pain. 2016;17(2):131-157. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hassan T, Seus D, Wollenberg J, Weitz K, Kunz M, Lautenbacher S, et al. Automatic detection of pain from facial expressions: a survey. IEEE Trans Pattern Anal Mach Intell. 2021;43(6):1815-1831. [ CrossRef ] [ Medline ]
  • Mussigmann T, Bardel B, Lefaucheur JP. Resting-State Electroencephalography (EEG) biomarkers of chronic neuropathic pain. A systematic review. Neuroimage. 2022;258:119351. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Moscato S, Cortelli P, Chiari L. Physiological responses to pain in cancer patients: a systematic review. Comput Methods Programs Biomed. 2022;217:106682. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Thiam P, Hihn H, Braun DA, Kestler HA, Schwenker F. Multi-modal pain intensity assessment based on physiological signals: a deep learning perspective. Front Physiol. 2021;12:720464. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rojas RF, Brown N, Waddington G, Goecke R. A systematic review of neurophysiological sensing for the assessment of acute pain. NPJ Digit Med. 2023;6(1):76. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mansutti I, Tomé-Pires C, Chiappinotto S, Palese A. Facilitating pain assessment and communication in people with deafness: a systematic review. BMC Public Health. 2023;23(1):1594. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • El-Tallawy SN, Ahmed RS, Nagiub MS. Pain management in the most vulnerable intellectual disability: a review. Pain Ther. 2023;12(4):939-961. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gkikas S, Tsiknakis M. Automatic assessment of pain based on deep learning methods: a systematic review. Comput Methods Programs Biomed. 2023;231:107365. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Borna S, Haider CR, Maita KC, Torres RA, Avila FR, Garcia JP, et al. A review of voice-based pain detection in adults using artificial intelligence. Bioengineering (Basel). 2023;10(4):500. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • De Sario GD, Haider CR, Maita KC, Torres-Guzman RA, Emam OS, Avila FR, et al. Using AI to detect pain through facial expressions: a review. Bioengineering (Basel). 2023;10(5):548. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zhang M, Zhu L, Lin SY, Herr K, Chi CL, Demir I, et al. Using artificial intelligence to improve pain assessment and pain management: a scoping review. J Am Med Inform Assoc. 2023;30(3):570-587. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hughes JD, Chivers P, Hoti K. The clinical suitability of an artificial intelligence-enabled pain assessment tool for use in infants: feasibility and usability evaluation study. J Med Internet Res. 2023;25:e41992. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fang J, Wu W, Liu J, Zhang S. Deep learning-guided postoperative pain assessment in children. Pain. 2023;164(9):2029-2035. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Whiting PF, Rutjes AWS, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529-536. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sounderajah V, Ashrafian H, Rose S, Shah NH, Ghassemi M, Golub R, et al. A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nat Med. 2021;27(10):1663-1665. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guo J, Riebler A. meta4diag: Bayesian bivariate meta-analysis of diagnostic test studies for routine practice. J Stat Soft. 2018;83(1):1-31. [ CrossRef ]
  • Hammal Z, Cohn JF. Automatic detection of pain intensity. Proc ACM Int Conf Multimodal Interact. 2012;2012:47-52. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Adibuzzaman M, Ostberg C, Ahamed S, Povinelli R, Sindhu B, Love R, et al. Assessment of pain using facial pictures taken with a smartphone. 2015. Presented at: 2015 IEEE 39th Annual Computer Software and Applications Conference; July 01-05, 2015;726-731; Taichung, Taiwan. [ CrossRef ]
  • Majumder A, Dutta S, Behera L, Subramanian VK. Shoulder pain intensity recognition using Gaussian mixture models. 2015. Presented at: 2015 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE); December 19-20, 2015;130-134; Dhaka, Bangladesh. [ CrossRef ]
  • Rathee N, Ganotra D. A novel approach for pain intensity detection based on facial feature deformations. J Vis Commun Image Represent. 2015;33:247-254. [ CrossRef ]
  • Sikka K, Ahmed AA, Diaz D, Goodwin MS, Craig KD, Bartlett MS, et al. Automated assessment of children's postoperative pain using computer vision. Pediatrics. 2015;136(1):e124-e131. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rathee N, Ganotra D. Multiview distance metric learning on facial feature descriptors for automatic pain intensity detection. Comput Vis Image Und. 2016;147:77-86. [ CrossRef ]
  • Zhou J, Hong X, Su F, Zhao G. Recurrent convolutional neural network regression for continuous pain intensity estimation in video. 2016. Presented at: 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); June 26-July 01, 2016; Las Vegas, NV. [ CrossRef ]
  • Egede J, Valstar M, Martinez B. Fusing deep learned and hand-crafted features of appearance, shape, and dynamics for automatic pain estimation. 2017. Presented at: 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017); May 30-June 03, 2017;689-696; Washington, DC. [ CrossRef ]
  • Martinez DL, Rudovic O, Picard R. Personalized automatic estimation of self-reported pain intensity from facial expressions. 2017. Presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); July 21-26, 2017;2318-2327; Honolulu, HI. [ CrossRef ]
  • Bourou D, Pampouchidou A, Tsiknakis M, Marias K, Simos P. Video-based pain level assessment: feature selection and inter-subject variability modeling. 2018. Presented at: 2018 41st International Conference on Telecommunications and Signal Processing (TSP); July 04-06, 2018;1-6; Athens, Greece. [ CrossRef ]
  • Haque MA, Bautista RB, Noroozi F, Kulkarni K, Laursen C, Irani R. Deep multimodal pain recognition: a database and comparison of spatio-temporal visual modalities. 2018. Presented at: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018); May 15-19, 2018;250-257; Xi'an, China. [ CrossRef ]
  • Semwal A, Londhe ND. Automated pain severity detection using convolutional neural network. 2018. Presented at: 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS); December 21-22, 2018;66-70; Belgaum, India. [ CrossRef ]
  • Tavakolian M, Hadid A. Deep binary representation of facial expressions: a novel framework for automatic pain intensity recognition. 2018. Presented at: 2018 25th IEEE International Conference on Image Processing (ICIP); October 07-10, 2018;1952-1956; Athens, Greece. [ CrossRef ]
  • Tavakolian M, Hadid A. Deep spatiotemporal representation of the face for automatic pain intensity estimation. 2018. Presented at: 2018 24th International Conference on Pattern Recognition (ICPR); August 20-24, 2018;350-354; Beijing, China. [ CrossRef ]
  • Wang J, Sun H. Pain intensity estimation using deep spatiotemporal and handcrafted features. IEICE Trans Inf & Syst. 2018;E101.D(6):1572-1580. [ CrossRef ]
  • Bargshady G, Soar J, Zhou X, Deo RC, Whittaker F, Wang H. A joint deep neural network model for pain recognition from face. 2019. Presented at: 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS); February 23-25, 2019;52-56; Singapore. [ CrossRef ]
  • Casti P, Mencattini A, Comes MC, Callari G, Di Giuseppe D, Natoli S, et al. Calibration of vision-based measurement of pain intensity with multiple expert observers. IEEE Trans Instrum Meas. 2019;68(7):2442-2450. [ CrossRef ]
  • Lee JS, Wang CW. Facial pain intensity estimation for ICU patient with partial occlusion coming from treatment. 2019. Presented at: BIBE 2019; The Third International Conference on Biological Information and Biomedical Engineering; June 20-22, 2019;1-4; Hangzhou, China.
  • Saha AK, Ahsan GMT, Gani MO, Ahamed SI. Personalized pain study platform using evidence-based continuous learning tool. 2019. Presented at: 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC); July 15-19, 2019;490-495; Milwaukee, WI. [ CrossRef ]
  • Tavakolian M, Hadid A. A spatiotemporal convolutional neural network for automatic pain intensity estimation from facial dynamics. Int J Comput Vis. 2019;127(10):1413-1425. [ FREE Full text ] [ CrossRef ]
  • Bargshady G, Zhou X, Deo RC, Soar J, Whittaker F, Wang H. Ensemble neural network approach detecting pain intensity from facial expressions. Artif Intell Med. 2020;109:101954. [ CrossRef ] [ Medline ]
  • Bargshady G, Zhou X, Deo RC, Soar J, Whittaker F, Wang H. Enhanced deep learning algorithm development to detect pain intensity from facial expression images. Expert Syst Appl. 2020;149:113305. [ CrossRef ]
  • Dragomir MC, Florea C, Pupezescu V. Automatic subject independent pain intensity estimation using a deep learning approach. 2020. Presented at: 2020 International Conference on e-Health and Bioengineering (EHB); October 29-30, 2020;1-4; Iasi, Romania. [ CrossRef ]
  • Huang D, Xia Z, Mwesigye J, Feng X. Pain-attentive network: a deep spatio-temporal attention model for pain estimation. Multimed Tools Appl. 2020;79(37-38):28329-28354. [ CrossRef ]
  • Mallol-Ragolta A, Liu S, Cummins N, Schuller B. A curriculum learning approach for pain intensity recognition from facial expressions. 2020. Presented at: 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020); November 16-20, 2020;829-833; Buenos Aires, Argentina. [ CrossRef ]
  • Peng X, Huang D, Zhang H. Pain intensity recognition via multi‐scale deep network. IET Image Process. 2020;14(8):1645-1652. [ FREE Full text ] [ CrossRef ]
  • Tavakolian M, Lopez MB, Liu L. Self-supervised pain intensity estimation from facial videos via statistical spatiotemporal distillation. Pattern Recognit Lett. 2020;140:26-33. [ CrossRef ]
  • Xu X, de Sa VR. Exploring multidimensional measurements for pain evaluation using facial action units. 2020. Presented at: 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020); November 16-20, 2020;786-792; Buenos Aires, Argentina. [ CrossRef ]
  • Pikulkaew K, Boonchieng W, Boonchieng E, Chouvatut V. 2D facial expression and movement of motion for pain identification with deep learning methods. IEEE Access. 2021;9:109903-109914. [ CrossRef ]
  • Rezaei S, Moturu A, Zhao S, Prkachin KM, Hadjistavropoulos T, Taati B. Unobtrusive pain monitoring in older adults with dementia using pairwise and contrastive training. IEEE J Biomed Health Inform. 2021;25(5):1450-1462. [ CrossRef ] [ Medline ]
  • Semwal A, Londhe ND. S-PANET: a shallow convolutional neural network for pain severity assessment in uncontrolled environment. 2021. Presented at: 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC); January 27-30, 2021;0800-0806; Las Vegas, NV. [ CrossRef ]
  • Semwal A, Londhe ND. ECCNet: an ensemble of compact convolution neural network for pain severity assessment from face images. 2021. Presented at: 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence); January 28-29, 2021;761-766; Noida, India. [ CrossRef ]
  • Szczapa B, Daoudi M, Berretti S, Pala P, Del Bimbo A, Hammal Z. Automatic estimation of self-reported pain by interpretable representations of motion dynamics. 2021. Presented at: 2020 25th International Conference on Pattern Recognition (ICPR); January 10-15, 2021;2544-2550; Milan, Italy. [ CrossRef ]
  • Ting J, Yang YC, Fu LC, Tsai CL, Huang CH. Distance ordering: a deep supervised metric learning for pain intensity estimation. 2021. Presented at: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA); December 13-16, 2021;1083-1088; Pasadena, CA. [ CrossRef ]
  • Xin X, Li X, Yang S, Lin X, Zheng X. Pain expression assessment based on a locality and identity aware network. IET Image Process. 2021;15(12):2948-2958. [ FREE Full text ] [ CrossRef ]
  • Alghamdi T, Alaghband G. Facial expressions based automatic pain assessment system. Appl Sci. 2022;12(13):6423. [ FREE Full text ] [ CrossRef ]
  • Barua PD, Baygin N, Dogan S, Baygin M, Arunkumar N, Fujita H, et al. Automated detection of pain levels using deep feature extraction from shutter blinds-based dynamic-sized horizontal patches with facial images. Sci Rep. 2022;12(1):17297. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fontaine D, Vielzeuf V, Genestier P, Limeux P, Santucci-Sivilotto S, Mory E, et al. Artificial intelligence to evaluate postoperative pain based on facial expression recognition. Eur J Pain. 2022;26(6):1282-1291. [ CrossRef ] [ Medline ]
  • Hosseini E, Fang R, Zhang R, Chuah CN, Orooji M, Rafatirad S, et al. Convolution neural network for pain intensity assessment from facial expression. 2022. Presented at: 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); July 11-15, 2022;2697-2702; Glasgow, Scotland. [ CrossRef ]
  • Huang Y, Qing L, Xu S, Wang L, Peng Y. HybNet: a hybrid network structure for pain intensity estimation. Vis Comput. 2021;38(3):871-882. [ CrossRef ]
  • Islamadina R, Saddami K, Oktiana M, Abidin TF, Muharar R, Arnia F. Performance of deep learning benchmark models on thermal imagery of pain through facial expressions. 2022. Presented at: 2022 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT); November 03-05, 2022;374-379; Solo, Indonesia. [ CrossRef ]
  • Swetha L, Praiscia A, Juliet S. Pain assessment model using facial recognition. 2022. Presented at: 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS); May 25-27, 2022;1-5; Madurai, India. [ CrossRef ]
  • Wu CL, Liu SF, Yu TL, Shih SJ, Chang CH, Mao SFY, et al. Deep learning-based pain classifier based on the facial expression in critically ill patients. Front Med (Lausanne). 2022;9:851690. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ismail L, Waseem MD. Towards a deep learning pain-level detection deployment at UAE for patient-centric-pain management and diagnosis support: framework and performance evaluation. Procedia Comput Sci. 2023;220:339-347. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vu MT, Beurton-Aimar M. Learning to focus on region-of-interests for pain intensity estimation. 2023. Presented at: 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG); January 05-08, 2023;1-6; Waikoloa Beach, HI. [ CrossRef ]
  • Kaur H, Pannu HS, Malhi AK. A systematic review on imbalanced data challenges in machine learning: applications and solutions. ACM Comput Surv. 2019;52(4):1-36. [ CrossRef ]
  • Data for meta-analysis of pain assessment from facial images. Figshare. 2023. URL: https://figshare.com/articles/dataset/Data_for_Meta-Analysis_of_Pain_Assessment_from_Facial_Images/24531466/1 [accessed 2024-03-22]

Abbreviations

Edited by A Mavragani; submitted 26.07.23; peer-reviewed by M Arab-Zozani, M Zhang; comments to author 18.09.23; revised version received 08.10.23; accepted 28.02.24; published 12.04.24.

©Jian Huo, Yan Yu, Wei Lin, Anmin Hu, Chaoran Wu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 12.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

IMAGES

  1. (PDF) Qualitative Research Strategies and Data Analysis Methods in Real

    data analysis research paper section

  2. (PDF) Principles of survey research part 6: Data analysis

    data analysis research paper section

  3. FREE 40+ Research Paper Samples in PDF

    data analysis research paper section

  4. FREE 42+ Research Paper Examples in PDF

    data analysis research paper section

  5. Tables in Research Paper

    data analysis research paper section

  6. DATA Analysis

    data analysis research paper section

VIDEO

  1. Data Analysis

  2. Data Analysis Using #SPSS (Part 1)

  3. What is Data Analysis in research

  4. How to Assess the Quantitative Data Collected from Questionnaire

  5. How to interpret Reliability analysis results

  6. Data Analysis and Report Writing Part 1

COMMENTS

  1. PDF Structure of a Data Analysis Report

    - Data - Methods - Analysis - Results This format is very familiar to those who have written psych research papers. It often works well for a data analysis paper as well, though one problem with it is that the Methods section often sounds like a bit of a stretch: In a psych research paper the Methods section describes what you did to ...

  2. How to Write an APA Methods Section

    The methods section of an APA style paper is where you report in detail how you performed your study. Research papers in the social and natural sciences often follow APA style. ... Specify the data collection methods, the research design and data analysis strategy, including any steps taken to transform the data and statistical analyses. ...

  3. Reporting Research Results in APA Style

    Reporting Research Results in APA Style | Tips & Examples. Published on December 21, 2020 by Pritha Bhandari.Revised on January 17, 2024. The results section of a quantitative research paper is where you summarize your data and report the findings of any relevant statistical analyses.. The APA manual provides rigorous guidelines for what to report in quantitative research papers in the fields ...

  4. PDF How to Write the Methods Section of a Research Paper

    Data Analysis Summary The methods section of a research paper provides the information by which a study's validity is judged. Therefore, it requires a clear and precise description of how an experiment was done, and the rationale for why specific experimental procedures were chosen. The methods section should describe what was

  5. A practical guide to data analysis in general literature reviews

    This article is a practical guide to conducting data analysis in general literature reviews. The general literature review is a synthesis and analysis of published research on a relevant clinical issue, and is a common format for academic theses at the bachelor's and master's levels in nursing, physiotherapy, occupational therapy, public health and other related fields.

  6. Research Paper

    The methods section of a research paper describes the research design, the sample selection, the data collection and analysis procedures, and the statistical methods used to analyze the data. ... Research papers are often used as teaching tools in universities and colleges to educate students about research methods, data analysis, and academic ...

  7. How to Write a Results Section

    The most logical way to structure quantitative results is to frame them around your research questions or hypotheses. For each question or hypothesis, share: A reminder of the type of analysis you used (e.g., a two-sample t test or simple linear regression). A more detailed description of your analysis should go in your methodology section.

  8. PDF Methodology Section for Research Papers

    The methodology section of your paper describes how your research was conducted. This information allows readers to check whether your approach is accurate and dependable. A good methodology can help increase the reader's trust in your findings. First, we will define and differentiate quantitative and qualitative research.

  9. AP Research 2024

    Key Terms to Review ( 1) : A research question is a clear and concise statement that identifies the main focus of a research study. It outlines what the researcher wants to investigate and provides direction for the entire research process. Cram for AP Research - The Academic Paper with Fiveable Study Guides.

  10. Creating a Data Analysis Plan: What to Consider When Choosing

    For those interested in conducting qualitative research, previous articles in this Research Primer series have provided information on the design and analysis of such studies. 2, 3 Information in the current article is divided into 3 main sections: an overview of terms and concepts used in data analysis, a review of common methods used to ...

  11. Learning to Do Qualitative Data Analysis: A Starting Point

    On the basis of Rocco (2010), Storberg-Walker's (2012) amended list on qualitative data analysis in research papers included the following: (a) the article should provide enough details so that reviewers could follow the same analytical steps; (b) the analysis process selected should be logically connected to the purpose of the study; and (c ...

  12. How to write statistical analysis section in medical research

    Results. Although biostatistical inputs are critical for the entire research study (online supplemental table 2), biostatistical consultations were mostly used for statistical analyses only [15], even though a mismatch between the statistical analysis and the study objective and DGP was identified as the major problem in articles submitted to high-impact medical journals [16]. In addition ...

  13. PDF Results/Findings Sections for Empirical Research Papers

    The Results section should also describe other pertinent discoveries, trends, or insights revealed by analysis of the raw data. Typical structure of the Results section in an empirical research paper: Data Analysis. In some disciplines, the Results section begins with a description of how the researchers analyzed

  14. How to Write an Effective Results Section

    A key element of writing a research paper is to clearly and objectively report the study's findings in the Results section. The Results section is where the authors inform the readers about the findings from the statistical analysis of the data collected to operationalize the study hypothesis, optimally adding novel information to the ...

  15. How to clearly articulate results and construct tables and figures in a

    As an example elucidating the abovementioned issues, the graphics and flow diagram in the 'Results' section of a research paper written by the authors of this review article and published in the World Journal of Urology in 2010 (World J Urol 2010;28:17-22) are shown in Figures 1 and 2.

  16. Data Analysis in Research: Types & Methods

    Definition of research in data analysis: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense. Three essential things occur during the data ...

  17. How to Write the Results/Findings Section in Research

    Step 1: Consult the guidelines or instructions that the target journal or publisher provides authors and read research papers it has published, especially those with similar topics, methods, or results to your study. The guidelines will generally outline specific requirements for the results or findings section, and the published articles will ...

  18. How to Write the Methods Section of a Research Paper

    A simple rule of thumb for sectioning the method section is to begin by explaining the methodological approach (what was done), describing the data collection methods (how it was done), providing the analysis method (how the data was analyzed), and explaining the rationale for choosing the methodological strategy.

  19. PDF CHAPTER 4: ANALYSIS AND INTERPRETATION OF RESULTS

    CHAPTER 4: ANALYSIS AND INTERPRETATION OF RESULTS. 4.1 INTRODUCTION. To complete this study properly, it is necessary to analyse the data collected in order to test the hypothesis and answer the research questions. As already indicated in the preceding chapter, data is interpreted in a descriptive form.

  20. How To Write The Research Paper Data Analysis Section

    Follow these simple tips to compose a strong piece of writing: Avoid analyzing your results in the data analysis section. Indicate whether your research is quantitative or qualitative. Provide your main research questions and the analysis methods that were applied to answer them. Report what software you used to gather and analyze your data.

  21. PDF Results Section for Research Papers

    The results section of a research paper tells the reader what you found, while the discussion section tells the reader what your findings mean. The results section should present the facts in an academic and unbiased manner, avoiding any attempt at analyzing or interpreting the data. Think of the results section as setting the stage for the ...

  22. How to write the results section of a research paper

    Practical guidance for writing an effective results section for a research paper. Always use simple and clear language. Avoid the use of uncertain or out-of-focus expressions. The findings of the study must be expressed in an objective and unbiased manner. While it is acceptable to correlate certain findings in the discussion section, it is ...

  23. Full article: Management accounting and data analytics: technology

    The next section of this paper provides an overview of data analytics, both in the recent literature and in its connection to accounting, followed by a brief overview of the ETAM used in the study. A further section describes the research methods employed, and Section 4 presents the results.

  25. Qualitative Research: Data Collection, Analysis, and Management

    INTRODUCTION. In an earlier paper [1], we presented an introduction to using qualitative research methods in pharmacy practice. In this article, we review some principles of the collection, analysis, and management of qualitative data to help pharmacists interested in doing research in their practice to continue their learning in this area.
