• Methodology
  • Open access
  • Published: 11 October 2016

Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research

  • Stephen J. Gentles 1 , 4 ,
  • Cathy Charles 1 ,
  • David B. Nicholas 2 ,
  • Jenny Ploeg 3 &
  • K. Ann McKibbon 1  

Systematic Reviews volume 5, Article number: 172 (2016)


Abstract

Background

Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.

Results

The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process and a rigorous qualitative approach to analysis are necessary features of this review type.

Conclusions

We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.

Background

While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of the few examples to describe it is a chapter containing reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [ 1 ]. Notable is their observation of how the differences between the methods reviews and conventional quantitative systematic reviews, specifically attributable to their varying content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of “systematicity” (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For the purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.

The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.

Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [ 12 ] or evaluating qualitative research reports [ 13 ]. A second objective is to survey the methods-relevant sections of empirical research reports to establish current practices regarding methods use and reporting, which Moher and colleagues [ 14 ] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [ 15 , 16 ]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyze randomized controlled trial data) that have been offered in the methods literature and propose a solution for improving conceptual clarity [ 17 ]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.

While systematic methods overviews , as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.

The example systematic methods overview on sampling in qualitative research

The principles and strategies we propose in this paper are derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity to, and deepen understanding of, the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.

The specific methods and procedures for the overview on sampling [ 18 ], from which our proposals are derived, were developed both by soliciting initial input from local experts in qualitative research and an expert health librarian (KAM) and through careful deliberation throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to search the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.

For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.

Organization of the guidance into principles and strategies

For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional and flexible approaches for implementing the previous principle outlined. Thus, generic challenges give rise to principles, which in turn give rise to strategies.

We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection; data abstraction from the publications selected for inclusion; and analysis, including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.

Results and discussion

Literature identification and selection

The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.

Delimiting a manageable set of publications

One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature, where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports, where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.

Principle #1:

Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.

Strategy #1:

To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.

We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.

In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).

It is worth recognizing that other authors have advocated positions broader than ours regarding the scope of literature to be considered in a review. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers use an initial approach to conduct a broad overview of the field—for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second, more focused stage in which practical examples are purposefully selected—for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is seductive in its capacity to generate more in-depth and interpretive analytic findings, some reviewers may consider the second step too resource-intensive to include, no matter how selective the purposeful sampling. In the overview on sampling [ 18 ], where we stopped after the first stage, we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, that theoretical sampling was a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.

Ultimately, reviewers must make pragmatic decisions that balance resource considerations, combined with informed predictions about the depth and complexity of literature available on their topic, with the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.

Searching beyond standard bibliographic databases

An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood for relevant publications to be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters were not available electronically, their full text had to be retrieved in hardcopy, and 11 publications were obtainable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Because a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval become more complicated processes.

Principle #2:

Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.

Strategy #2:

To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.

In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over other standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics; and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines including the social sciences where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful to identify more recent editions of methods books identified by experts.
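To make this retrieval workflow concrete, the sketch below (ours, not a procedure reported in the overview) shows one way a reviewer might merge candidate records exported from a Google Scholar search (e.g., via Publish or Perish) with an expert-nominated list, then deduplicate by normalized title before screening. The file names and column labels are hypothetical assumptions.

```python
import csv

def normalize_title(title):
    """Crude normalization so near-identical titles collapse to one key."""
    return "".join(ch.lower() for ch in title if ch.isalnum())

def load_candidates(path, source):
    """Read a CSV of candidate records and tag each row with its source."""
    with open(path, newline="", encoding="utf-8") as f:
        return [{**row, "source": source} for row in csv.DictReader(f)]

# Hypothetical input files, each with at least 'Title', 'Authors', and 'Year' columns.
records = (load_candidates("pop_gscholar_export.csv", "google_scholar")
           + load_candidates("expert_nominations.csv", "expert"))

screening_list = {}
for rec in records:
    key = normalize_title(rec.get("Title", ""))
    if not key:
        continue
    if key not in screening_list:
        screening_list[key] = rec                             # first occurrence kept
    else:
        screening_list[key]["source"] += ";" + rec["source"]  # note duplicate provenance

print(f"{len(screening_list)} unique candidate publications to screen")
```

The deduplicated list would then be screened for full-text retrieval and hand-searching as described above.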

Searching without relevant metadata

Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.

Principle #3:

Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.

Strategy #3:

One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.

In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in the general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion, their full text obtained, and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.

Purposefully selecting literature on conceptual grounds

A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.

Principle #4:

Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.

Strategy #4:

One strategy for choosing the purposeful approach to use in selecting the literature according to the review objectives is to consider whether those objectives imply exploring concepts either at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate; or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.

In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.

At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [ 18 ]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence, which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad-scope review to a manageable amount.

To operationalize this strategy of sampling for influence, we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (h-index for author influence [ 22 ]; citation count for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; thus, purely secondary sources were excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish ) was used to generate bibliometric data via the Google Scholar database. Figure 1 illustrates how identification and selection in the methods overview on sampling was a multi-faceted and iterative process. The authors selected as influential and the publications selected for inclusion or exclusion are listed in Additional file 1 (Matrices 1, 2a, 2b).
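As a minimal sketch of how such a multi-indicator selection rule could be expressed (illustrative only; the actual selection combined judgment with the indicators described above, and the thresholds and names below are invented placeholders):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str                  # author or publication identifier
    h_index: int = 0           # author-level bibliometric indicator [ 22 ]
    citations: int = 0         # publication-level bibliometric indicator
    expert_nominated: bool = False
    snowball_refs: int = 0     # cross-references found in already-included literature

def is_influential(c, h_cut=20, cite_cut=500):
    """Flag a candidate if ANY indicator is met, so that no single indicator
    limits coverage (the dilemma noted in the text). Thresholds are placeholders."""
    return (c.h_index >= h_cut or c.citations >= cite_cut
            or c.expert_nominated or c.snowball_refs >= 2)

candidates = [
    Candidate("Author A", h_index=35),
    Candidate("Book B", citations=1200),
    Candidate("Chapter C", expert_nominated=True),
    Candidate("Article D", citations=40),
]
print([c.name for c in candidates if is_influential(c)])
# -> ['Author A', 'Book B', 'Chapter C']
```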

Fig. 1 Literature identification and selection process used in the methods overview on sampling [ 18 ]

In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.

Data abstraction

The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct conceptually defined fields to which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, to the processes of developing the abstraction form and abstracting the data itself when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.

Iteratively defining conceptual information to abstract

In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.

Principle #5:

Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.

Strategy #5:

Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.

In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and reviewing a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions, whether before (a priori) or after (ongoing) starting data collection, or whether this was unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.
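A hedged illustration of strategy #5 follows. The overview itself maintained the abstraction form in Microsoft Word, so the data structures, field names, and publications here are hypothetical; the point is simply that adding a field logs the revision and queues previously abstracted publications for re-abstraction.

```python
from datetime import date

# Initial fields derived (as in the overview) from prior knowledge, expert input,
# and a pilot sample of publications; names and definitions here are illustrative.
abstraction_fields = {
    "purposeful_sampling_strategies": "Named strategies for purposefully selecting units",
    "saturation": "Criteria for deciding when sampling or data collection can stop",
    "sample_size": "Guidance on how many units to include",
}
revision_log = []
abstracted_so_far = ["Hypothetical book A", "Hypothetical chapter B"]
reabstraction_queue = []

def add_field(name, definition):
    """Add a newly encountered concept, document the revision, and queue
    previously abstracted publications for re-abstraction against the new field."""
    abstraction_fields[name] = definition
    revision_log.append({"field": name, "added": date.today().isoformat()})
    reabstraction_queue.extend((pub, name) for pub in abstracted_so_far)

# e.g., the unanticipated concept described in the text:
add_field("timing_of_sampling_decisions",
          "Whether sampling decisions are made a priori, ongoing, or unclear")
print(reabstraction_queue)
```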

The logic of this strategy is analogous to the logic used in a form of research synthesis called best fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed and the BFFS approach to research synthesis are notable for their rigorous and transparent means to adapt a final set of concepts to the content under review.

Accounting for inconsistent terminology

An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.

Principle #6:

Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.

Strategy #6:

An effective means to systematically identify relevant information is to develop and iteratively adjust written definitions for key concepts (corresponding to abstraction fields) that are consistent with, and as inclusive as possible of, the literature reviewed. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.

In the abstraction process for the sampling overview [ 18 ], we noted several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling, purposeful sampling, sampling strategy, and saturation (for examples, see Additional file 1, Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived, recording this text in fields dedicated to the concept’s definition. Through constant comparison, we used text from the definition fields to inform and modify a centrally maintained definition of the corresponding concept, optimizing its fit and inclusiveness with the literature reviewed. Table 1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling.
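The following sketch shows, in simplified form, the bookkeeping implied by strategy #6: a centrally maintained working definition per concept, alongside the source excerpts that informed it, so each revision stays traceable to the publications reviewed. It is not software we used; the publication, excerpt, and definitions are placeholders (the saturation excerpt loosely paraphrases the example cited in the following paragraph).

```python
from collections import defaultdict

working_definitions = {}                 # concept -> current centrally maintained definition
definition_sources = defaultdict(list)   # concept -> [(publication, excerpt), ...]

def compare_and_update(concept, publication, excerpt, revised_definition=None):
    """Record the excerpt that informed the comparison; replace the working
    definition only when it no longer accommodates the new text."""
    definition_sources[concept].append((publication, excerpt))
    if revised_definition is not None:
        working_definitions[concept] = revised_definition

compare_and_update(
    "saturation",
    "Hypothetical publication",
    "continue to collect data until nothing new is being observed or recorded",
    revised_definition="The point at which further sampling yields no new information",
)
print(working_definitions["saturation"])
print(definition_sources["saturation"])
```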

We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept, saturation , where the relevant text available for abstraction in one publication [ 26 ]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.

This comparative analytic strategy (and our approach to analysis more broadly as described in strategy #7, below) is analogous to the process of reciprocal translation —a technique first introduced for meta-ethnography by Noblit and Hare [ 27 ] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [ 28 ]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [ 28 ]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.

Generating credible and verifiable analytic interpretations

The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretative analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [ 18 ], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote credibility of the review, however, not only should one seek a qualitative analytic approach that facilitates reaching varying levels of abstraction but that approach must also ensure that abstract interpretations are supported and justified by the source data and not solely the product of the analyst’s speculative thinking.

Principle #7:

Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.

Strategy #7:

We suggest employing the constant comparative method of analysis [ 29 ] because it supports developing and verifying analytic links to the source data throughout progressively interpretive or abstract levels. In applying this approach, we advise rigorously documenting how supportive quotes or references to the original texts are carried forward through the successive steps of analysis to allow for easy verification.
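One way to picture the documentation we advise is a simple claim-to-quote ledger, sketched below with invented identifiers, quotes, and claims; any analytic statement can then be traced back to the quotes, and hence publications, that support it.

```python
# Invented identifiers throughout; the point is only the traceability structure.
quotes = {
    "Q1": ("Hypothetical publication X", "...text abstracted from the source..."),
    "Q2": ("Hypothetical publication Y", "...another abstracted passage..."),
}
analytic_claims = [
    {"claim": "Guidance on saturation is inconsistent across traditions",
     "support": ["Q1", "Q2"]},
]

def trace(claim):
    """Return the publications and quotes supporting a claim, for verification."""
    return [quotes[q] for q in claim["support"]]

for c in analytic_claims:
    print(c["claim"], "->", trace(c))
```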

The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction—data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig.  2 ). While we have positioned data abstraction as the second stage of the generic review process (prior to Analysis), above, we also considered it as an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparisons and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding in which textual data from publications was compared to conceptual fields (equivalent to codes) or to other instances of data previously abstracted when constructing definitions to optimize their fit with the overall literature as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.

Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [ 18 ]

In the second step of the analysis, we constructed topic-specific matrices, or tables, by copying relevant quotes from abstraction forms into the appropriate cells of matrices (for the complete set of analytic matrices developed in the sampling review, see Additional file 1, Matrices 3 to 10). Each matrix ranged from one to five pages; row headings, nested three-deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions, and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries, in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions, based on the narrative summaries, about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically from the lower levels, enabling us to verify analytic conclusions by tracing the support for claims back to the original text of the publications reviewed.
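For readers who prefer a concrete picture of such a matrix, the sketch below uses a pandas MultiIndex to mimic the row nesting (tradition, author, publication) and the concept columns; the actual matrices were constructed manually (see Additional file 1), and the traditions shown are real while the authors, publications, and quotes are placeholders.

```python
import pandas as pd

rows = [
    ("Grounded theory", "Author A", "Hypothetical book (2006)"),
    ("Phenomenology",   "Author B", "Hypothetical book (2000)"),
    ("Case study",      "Author C", "Hypothetical book (2009)"),
]
index = pd.MultiIndex.from_tuples(rows, names=["Tradition", "Author", "Publication"])

# Columns correspond to abstraction fields; cells hold the abstracted quotes.
matrix = pd.DataFrame(
    {"Purposeful sampling": ["<quote>", "<quote>", "<quote>"],
     "Saturation":          ["<quote>", "",        "<quote>"]},
    index=index,
)

# Comparisons across traditions, or between authors within one tradition:
print(matrix.xs("Grounded theory", level="Tradition"))
```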

Integrative versus interpretive methods overviews

The analytic product of systematic methods overviews is comparable to qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; they do not seek as their primary focus to develop or specify new concepts, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].

The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.

Conclusions

In this paper, we have outlined tentative guidance in the form of seven principles and strategies on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in future.

As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of both qualitative and quantitative methods topics alike. However, it is expected that additional challenges and insights for conducting such reviews have yet to be defined. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit, and improve the clarity and precision of our understandings of problematic qualitative or quantitative methods issues.

A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal. The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have. Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.

To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.

Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.

Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.

Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination; 2009.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.

Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12(1):1–1.

Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. In: Integrate-HTA. 2016.

Booth A, Sutton A, Papaioannou D. Systematic approaches to successful literature review. 2nd ed. London: Sage; 2016.

Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.

Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.

Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.

Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.

Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.

Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.

Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.

Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.

Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.

Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi‐discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.

Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Quality Safety. 2015;24(11):700–8.

Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.

Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.

Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.

Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synthesis Methods. 2015;6(4):357–71.

Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.

Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. In: UK National Health Service. 2004. p. 1–44.

Acknowledgements

Not applicable.

Funding

There was no funding for this work.

Availability of data and materials

The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. The Qual Rep 2015, 20(11):1772-1789) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5 .

Authors’ contributions

SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAM, and JP were involved in developing methods for the systematic methods overview on sampling.

Competing interests

The authors declare that they have no competing interests.

Authors and affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada

Stephen J. Gentles, Cathy Charles & K. Ann McKibbon

Faculty of Social Work, University of Calgary, Alberta, Canada

David B. Nicholas

School of Nursing, McMaster University, Hamilton, Ontario, Canada

Jenny Ploeg

CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada

Stephen J. Gentles

Corresponding author

Correspondence to Stephen J. Gentles.

Additional information

Cathy Charles is deceased

Additional file

Additional file 1: Analysis matrices. (DOC 330 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article.

Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5 , 172 (2016). https://doi.org/10.1186/s13643-016-0343-0

Received : 06 June 2016

Accepted : 14 September 2016

Published : 11 October 2016

DOI : https://doi.org/10.1186/s13643-016-0343-0

Keywords

  • Systematic review
  • Literature selection
  • Research methods
  • Research methodology
  • Overview of methods
  • Systematic methods overview
  • Review methods

Writing a Critical Review of Literature: A Practical Guide for English Graduate Students

Fasih Ahmed

Global Language Review

As an integral part of dissertations and theses, research scholars in different disciplines are required to write a comprehensive "literature review" chapter that establishes the conceptual and theoretical foundations of an empirical research study. This, however, poses the intellectual challenge of producing a critical review of the published research on a given topic. This paper therefore addresses students' problems in writing the literature review of a thesis or dissertation at the graduate and postgraduate levels. It explains the process and steps of reviewing literature for a thesis chapter. These steps include: a) critical reading and note-taking, b) writing a summary of the reviewed literature, c) organization of the literature review, and d) the use of a synthesis matrix. The last part of the paper offers suggestions on how to write critically and make the researcher's voice explicit in the chapter.

Related Papers

Scientific Research Publishing: Creative Education

Dr. Qais Faryadi

Literature writing is a skill that every PhD candidate must procure to communicate his or her research findings clearly. The main objective of this paper is to facilitate the literature writing process so that PhD candidates understand what PhD literature is and are able to write their PhD literature correctly and scientifically. The methodology used in this research is a descriptive method, as it deliberates and defines the various parts of the literature writing process and elucidates how to do it in very unpretentious and understandable language. As such, this paper summarizes the various steps of literature writing to pilot PhD students so that the task of the PhD literature writing process becomes adaptable and less discouraging. This research is a useful roadmap, especially for students of the social sciences. Additionally, in this paper, literature writing techniques, procedures and important strategies are explained in a simple manner. The paper adopts a how-to approach when discussing a variety of relevant topics, such as the literature review introduction, types of literature review, advantages of literature reviews, objectives of a literature review, a literature review template, and important checklists for the literature review. The paper has five parts: Introduction, Literature Review, Methodology, Results and Conclusion. The literature review chapter is discussed in this paper; I will discuss the rest as a series in the future. Keywords: Thesis Writing Process, Literature Review, PhD, Social Science, Research Methodology


Mohammed I S

Literature review and writing form the basis of every academic research and writing endeavour, and they are most significant and indispensable to every academic research work. The systematic process of writing has, however, been mysterious, complex, messy and boring, especially to inexperienced researchers and postgraduate students. This study explored the mysteries of, and the ease of working with, academic literature, writing and review. The study used secondary sources to gather data for the analysis, and found that academic literature writing and review comprise different patterns and systems, dependent upon the nature and character of the research, the writing context and its specific objectives; there are different types of literature and writing in academics, and while no one way is universally accepted by all at the same time, different approaches are required for different types of review and writing. The difficulty in understanding, reviewing and writing literature mainly emanates from failure, right from the inception, to clearly identify what precisely the reviewer wants and how to go about looking for it in a systematic and comprehensive manner. Reviewing and writing academic literature is a herculean task, and for it to be successful there must be focus, specific objectives, and adequate and timely provision of and access to relevant materials. With proper understanding, it can be mastered and made easy. The study is essential for academics and postgraduate students who must undergo literature review and writing at varying stages, especially at critical, stipulated and limited times.

Auxiliadora Padilha

Humanus Discourse

The importance of the literature review in academic writing of different categories, levels, and purposes cannot be overemphasized. The literature review establishes the relevance of, and justifies, new research. It is through a literature review that a gap is established which the new research would fill. Once the literature review sits properly in the research work, the objectives/research questions naturally fall into their proper perspective. Invariably, other chapters of the research work are impacted as well. In most instances, scanning through the literature also provides the need and justification for your research and may well leave hints for further research. The literature review in most instances exposes a researcher to the right methodology to use. The literature review is the nucleus of a research work: done right, it spotlights the work; done wrongly, it can derail it. This paper seeks to unveil practical guides to writing a literature review, from purpose and components to tips. It proceeds through an exposition of secondary literature. It exposes the challenges in writing a literature review and at the same time recommends tips which, when followed, will improve the writing of the literature review.

John Schostak

The literature is a multiplicity of voices. With each voice agendas emerge. Each text is itself a framing of voices and their agendas, shaped to present a debate slanted towards a conclusion. Within that debate can often be detected the friends, the strangers, the guests, the hosts and the enemies that are entertained by the writer. So, there is a problem. It is that whilst acts of framing bring and impose order, those very processes of ordering and categorisation select and edit so that some things are chosen to be foregrounded, others to be background and yet others to be excluded. In the writing task, agenda setting and framing pin possibilities and options down to what is regarded as 'realistic', 'plausible', 'do-able', 'true'. However, there has to be a moment when the literature appears like the vertigo experienced over a sheer and endless drop. Engagement with the literature is the essential step in widening out, indeed seeing the limitless possibilities for open debate with a public extending over centuries, even millennia. Making a voice map of the public space of debate is a way of trying to locate what is at stake in adopting a given way of framing the world and its agendas. Getting a sense of the historical development of major debates, discovering the tributaries, the dead-ends, the forgotten, the overlooked is all a part of the gradual sense of knowing where you are, where you stand, in relation to others. In particular, who claims to know what and why? What kinds of arguments are being made, and why? What are the assumptions at the back of explanations and theories? What happens if the assumptions are challenged or changed? From a review of the literature it is possible to sketch and fill out the details of the problematic, that is, the knot of problems, issues, concerns, interests that each of the voices in the literature have historically addressed. In determining how they address their chosen problems, the outlines of their methodologies can be formulated. Then it is a question of what is at stake expressed by each voice in the choices they make in exploring, examining and forming their conclusions using their chosen methodologies in relation to the problems they address. Which voices have they included in their own reviews of the debates, which have they excluded and why? By asking such questions as these a literature review then can be designed specifically to increase the power of a given argument, set of findings, recommendations and conclusions that have implications for action.

Abdullah Ramdhani , Tatam Chiway , Muhammad Ali Ramdhani

Mahendra Budhathoki

The literature review, or research synthesis, is an essential component of the research field. Novice and student researchers usually take it as a required burden and present it haphazardly under sub-topics in their research. There is the problem of applying and correlating the literature review with their studies. The main purpose of this paper is to introduce the literature review/research synthesis and its functions and methods in research. The literature review/research synthesis consists of searching the relevant literature, discussing the findings and evidence, correlating the individual studies, interpreting critically, and synthesizing them to build an argument for future research. It is a review article based on qualitative research, not on primary data. This paper contributes to answering questions about writing a literature review or synthesis paper, and serves as useful reference material for novice and student researchers in higher education.

Nicolao Buenaventura

QUEST JOURNALS

Literature review and writing form the basis of research, to which they are indispensable. The systematic process, however, remains mysterious, complex and problematic, especially to postgraduate students, most of whom undertake research for the first time at graduate level. This paper explored the challenges, strengths and mysteries with which literature review and writing were undertaken by graduate students at both master's and doctoral levels. The paper used primary sources to gather data from graduate students' theses and proposals. Data from those sources revealed how literature review and writing showed different patterns depending on the nature of the research and the specific objectives of the study. Similarly, different approaches were found to be suitable to different research contexts and methods. The challenges in writing and reviewing literature mainly spring from the failure to clearly define the research problem, which propels clarity in the presentation of the literature.

Publications

Cherley C Du Plessis

The ability to conduct an explicit and robust literature review by students, scholars or scientists is critical in producing excellent journal articles, academic theses, academic dissertations or working papers. A literature review is an evaluation of existing research works on a specific academic topic, theme or subject to identify gaps and propose future research agenda. Many postgraduate students in higher education institutions lack the necessary skills and understanding to conduct in-depth literature reviews. This may lead to the presentation of incorrect, false or biased inferences in their theses or dissertations. This study offers scientific knowledge on how literature reviews in different fields of study could be conducted to mitigate against biased inferences such as unscientific analogies and baseless recommendations. The literature review is presented as a process that involves several activities including searching, identifying, reading, summarising, compiling, analysing, interpreting and referencing. We hope this article serves as reference material to improve the academic rigour in the literature review chapters of postgraduate students' theses or dissertations. This article prompts established scholars to explore more innovative ways through which scientific literature reviews can be conducted to identify gaps (empirical, knowledge, theoretical, methodological, application and population gap) and propose a future research agenda.


SJSU | School of Information

Fostering the cultivation of practices in multimodal and culturally responsive literature review research methods.

Published: April 17, 2024, by Dr. Kristen Radsliff Rebmann

When I talk to students about their program of study, scholarship in LIS, and their identity as researchers, they often tell me that they have no interest in doing research and that they just want to be librarians. Furthermore, I’ve been asked why the iSchool has developed a required course in research methods: INFO 285.  In response to these queries, I try to emphasize that research absolutely is in our wheelhouse as information professionals (a librarian superpower) and that students should take the opportunity in INFO 285 to deepen their skill set and competencies relating to research and writing for scholarly communication.

That said, student resistance made me consider how I (as a faculty member) can do a better job of designing a course that meets students “where they are”, while supporting their development as information professionals who excel in research and other forms of scholarly production. With these experiences in mind, I set out, back in 2021, to develop a new section of INFO 285 that might support students in a domain of research methods that would deepen their knowledge of research practices but also connect them with competencies that they could immediately use in the workplace.

As a faculty member who has been teaching INFO 275 for nearly a decade (a course relating to the design of programs and services for diverse populations), I contemplated designing a new section of INFO 285 that is both equity-forward and primed to provide a framework for research in diversity, equity, and inclusion.  I was inspired by SJSU’s language around acknowledging and celebrating this type of scholarship of engagement.  Yet, I realized that there are so many inductive and critical theories out there that to focus on one related methodology, connected to equity-forward research, would be very limiting or (at least) so niche that the course may not fill.  So, I embarked on a new journey to work towards identifying research methods that are new or articulated in new, exciting ways.  I read many new (to me) textbooks and articles on novel methodologies and new takes on established approaches.

I asked myself: What would be exciting AND useful for our graduate students?


In my travels, I came across Anthony J. Onwuegbuzie and Rebecca Frels’ culturally relevant approach to writing literature reviews for programs of study and publication: Seven Steps to a Comprehensive Literature Review – A Multimodal and Cultural Approach .  I connected strongly with the textbook’s emphasis on the use of multimodal texts in charting the landscape of a topic and the authors’ core argument that researchers must take a reflexive stance in their work - reckoning with diverse voices as they make intellectual claims and form arguments in support of moving the field forward.  I found this the perfect text to support a new course in literature review research methods – a methodology that would support both scholarship and professional writing.

So, what IS a comprehensive literature review anyway?

The authors’ own definition of the comprehensive literature review appears on page 4 of the text.

“The comprehensive Literature Review is a methodology, conducted either to stand alone or to inform primary research at multiple stages of their research process which optimally involves the use of mixed research techniques inclusive of culture, ethics, and multimodal texts and settings in a systematic, holistic, synergistic and cyclical process of exploring, interpreting, synthesizing, and communicating published and or unpublished information.”

If you’ve worked as a researcher for many years like I have, you’ll find and read many, many literature reviews, but notice along the way how rarely authors deploy a methodological framework that “makes visible” their epistemological and ontological standpoints. Literature review authors also traditionally “stay in their lane” when writing about their chosen topics. This textbook is a reaction against these dispositions.

What makes Onwuegbuzie and Frels’ comprehensive literature review methodology multimodal, and cultural in its approach?

On page 39 of the textbook, Onwuegbuzie and Frels argue that the multimodal characteristics of our face-to-face and online experiences of the world require not only culturally progressive and ethical research approaches but also an approach that includes information harvested from resources in multiple modalities. The acronym they introduce, MODES, refers to information harvested in the forms of media, observations, documents, experts, and secondary data.

Further, their CLR framework operationalizes the process of locating literature review methods within the author(s)’ own belief system and stances, while also acknowledging the assets and wealth of knowledge across the many research paradigms and cultural communities that exist in the many fields producing knowledge. These efforts represent an important movement within the field: creating a literature review methodology that makes visible the belief systems that shape knowledge production and the value of incorporating diverse intellectual traditions and communities of practice into the information that is collected and synthesized.

I was very proud when students in my course were able to publish their literature review that charts the landscape of refugee services in library and information science.  You can find their open access article here: Refugees’ Digital Equity, Inclusion, and Access in Public Libraries: A Narrative Review .

Onwuegbuzie, A. J. and Frels, R. (2016). Seven steps to a comprehensive literature review: A multimodal and cultural approach. (1st ed.) Sage. https://uk.sagepub.com/en-gb/eur/seven-steps-to-a-comprehensive-literature-review/book238001

Stoner, J., Sagran, N., Cervantes, D., Baseley, S., & Borgolini, S. (2022). Refugees’ Digital Equity, Inclusion, and Access in Public Libraries: A Narrative Review. Library Philosophy & Practice. (p. 7219). Available: https://digitalcommons.unl.edu/libphilprac/7219


Hybrid intelligence failure analysis for industry 4.0: a literature review and future prospective

  • Open access
  • Published: 22 April 2024


  • Mahdi Mokhtarzadeh   ORCID: orcid.org/0000-0002-0348-6718 1 , 2 ,
  • Jorge Rodríguez-Echeverría 1 , 2 , 3 ,
  • Ivana Semanjski 1 , 2 &
  • Sidharta Gautama 1 , 2  

Industry 4.0 and advanced technology, such as sensors and human–machine cooperation, provide new possibilities for infusing intelligence into failure analysis. Failure analysis is the process of identifying (potential) failures and determining their causes and effects to enhance reliability and manufacturing quality. Proactive methodologies, such as failure mode and effects analysis (FMEA), and reactive methodologies, such as root cause analysis (RCA) and fault tree analysis (FTA), are used to analyze failures before and after their occurrence. This paper focuses on the literature on making failure analysis methodologies intelligent, as applied to FMEA, RCA, and FTA, to provide insights into expert-driven, data-driven, and hybrid intelligence failure analysis advancements. Types of data for establishing an intelligent failure analysis, tools for finding a failure’s causes and effects, e.g., Bayesian networks, and managerial insights are discussed. This literature review, along with the analyses within it, assists failure and quality analysts in developing effective hybrid intelligence failure analysis methodologies that leverage the strengths of both proactive and reactive methods.


Introduction

Failure analysis entails activities to identify, categorize, and prioritize (potential) failures and to determine the causes and effects of each failure as well as failure propagation and interdependencies (Rausand & Øien, 1996 ). The significance of failure analysis in manufacturing has grown since Industry 3.0, as it helps mitigate defects and/or failures in production processes, thereby maximizing reliability and quality and minimizing production interruptions, associated risks, and costs (Wu et al., 2021 ; Ebeling, 2019 ).

Failure analysis methodologies have been supported by mathematical, statistical, and graph theories and tools, including MCDM theory, fuzzy theory, six-sigma, SPC, DOE, simulation, Pareto charts, and analysis of mean and variance (Oliveira et al., 2021 ; Huang et al., 2020 ; Tari & Sabater, 2004 ). Industry 4.0 is driven by (real-time) data from sensors, the Internet of Things (IoT), such as Internet-enabled machines and tools, and artificial intelligence (AI). Advances in artificial intelligence theory and technology have brought new tools to strengthen failure analysis methodologies (Oztemel & Gursev, 2020 ). Examples of such tools include Bayesian networks (BNs), case-based reasoning (CBR), neural networks, classification and clustering algorithms, principal component analysis (PCA), deep learning, decision trees, and ontology-driven methods (Zheng et al., 2021 ). These Industry 4.0 advancements enable more efficient data collection and analysis, enhancing predictive capabilities, increasing efficiency and automation, and improving collaboration and knowledge sharing.

Failure analysis methodologies can be categorized into expert-driven, data-driven, and hybrid ones. Expert-driven failure analysis methods rely on experts’ knowledge and experience (Yucesan et al., 2021 ; Huang et al., 2020 ). This approach is useful when the data is limited or when there is a high degree of uncertainty. Expert-driven methods are also useful when the failure structure is complex and difficult to understand. However, this approach is limited by the availability and expertise of the experts, and is prone to bias and subjective interpretations (Liu et al., 2013 ).

Data-driven failure analysis methods, on the other hand, rely on statistical analysis and machine learning algorithms to identify patterns in the data and predict the causes of failure (Zhang et al., 2023 ; Mazzoleni et al., 2017 ). This approach is useful when there is a large amount of data available and when the failure structure is well-defined. However, data-driven methods are limited by the quality and completeness of the data (Oliveira et al., 2021 ).

Until recently, most tools have focused on replacing humans with artificial intelligence (Yang et al., 2020 ; Filz et al., 2021b ), which removes human intellect and capabilities from intelligent systems. Hybrid intelligence creates hybrid human–machine intelligence systems, in which humans and machines collaborate synergistically, proactively, and purposefully to augment human intellect and capabilities, rather than replace them with machine intellect and capabilities, in order to achieve shared goals (Akata et al., 2020 ).

Collaboration between humans and machines can enhance the failure analysis process, allowing for analyses that were previously unattainable by either humans or machines alone. Thus, hybrid failure analysis provides a more comprehensive analysis of the failure by incorporating strengths of both expert-driven and data-driven approaches to identify the most likely causes and effects of failures (Dellermann et al., 2019 ; van der Aalst, 2021 ).

Benefits from a smart failure analysis may include reduced costs and production stoppages, improved use of human resources and knowledge, improved identification of failures, root causes, and effects, and real-time failure analysis. Yet, only a few studies have specifically addressed hybrid failure analysis (Chhetri et al., 2023 ). A case example of hybrid expert data-driven failure analysis involves using data from similar product assemblies to construct a Bayesian network for process failure mode and effects analysis (pFMEA), while also incorporating expert knowledge as constraints based on the specific product being analyzed (Chhetri et al., 2023 ).

Over the past few years, several literature reviews, as reported in Section Literature review , have been conducted from different perspectives on different failure analysis methodologies, including failure mode and effects analysis (FMEA), root cause analysis (RCA), and fault tree analysis (FTA). Currently, most existing literature does not systematically summarize the research status of these failure analysis methodologies from the perspective of Industry 4.0 and (hybrid) intelligence failure analysis with the benefits from new technologies. Therefore, this study aims to review, categorize, and analyze the literature on these three general failure analysis methodologies in production systems. The objective is to provide researchers with a comprehensive overview of these methodologies, with a specific focus on hybrid intelligence and its benefits for quality issues in production. We address two questions: "How can failure analysis methodologies benefit from hybrid intelligence?" and "Which tools are suitable for a good fusion of human and machine intelligence?" Consequently, the main contributions of this study to the failure analysis literature are as follows:

Analysis of 86 papers out of 7113 papers from FMEA, RCA, and FTA with respect to methods and data types that might be useful for a hybrid intelligence failure analysis.

Identification of data and methods to construct and detect multiple failures within different research related to FMEA, RCA, and FTA methodologies.

Identification of the most effective methods for analyzing failures, identifying their sources and effects, and assessing related risks.

Proposal of a categorization of research based on the levels of automation/intelligence, along with the identification of limitations in current research in this regard.

Provision of future research directions for hybrid intelligent failure analysis, along with other directions such as research on failure propagation and correlation.

The remainder of this paper is organized as follows. Section Literature review briefly introduces related literature reviews on FMEA, RCA, and FTA. A brief description of other failure analysis methodologies is also provided. Section Research methodology presents our review methodology, including the review scope and protocols, our primary and secondary questions, and the criteria for selecting journals and papers to be reviewed. A bibliographic summary of the selected papers is provided. The literature is categorized in Section Literature categorization based on the four general steps of a failure analysis methodology, involving failure structure detection, failure event probability detection, failure risk analysis, and outputs. Managerial insights, limitations, and future research are discussed in Section Managerial insights, limitations, and future research ; this assists researchers with applications and complexity, levels of intelligence, and how knowledge is introduced into the failure analysis. A more in-depth discussion of hybrid intelligence, failure propagation and correlation, hybrid methodologies, and other areas of future research is also included. Conclusions are presented in Section Conclusion .

Literature review

General and industry/field-specific failure analysis methodologies have been developed over the last few decades. In this section, we provide useful review papers regarding FMEA, RCA, and FTA, which are the focus of our paper. Additionally, some other general and industry/field-specific failure analysis methodologies are briefly discussed.

FMEA is among the most commonly used bottom-up, proactive, qualitative methodologies for potential quality failure analysis (Huang et al., 2020 ; Stamatis, 2003 ). Among its extensions, process FMEA (pFMEA) proactively identifies potential quality failures in production processes such as assembly lines (Johnson & Khan, 2003 ). Typically, (p)FMEA uses expert knowledge to determine potential failures, effects, and causes, and to prioritize the failures based on the risk priority number (RPN). The RPN is the product of the severity, occurrence, and detection ratings for each failure (Wu et al., 2021 ). FMEA's shortcomings include being time-consuming, subjectivity, the inability to determine multiple failures, and the neglect of failure propagation and interdependency (Liu et al., 2013 ).
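
To make the classical RPN calculation concrete, here is a minimal Python sketch; the failure modes and ratings are hypothetical and not drawn from any reviewed study:

```python
# Minimal sketch of classical (p)FMEA risk prioritization.
# Severity, occurrence, and detection are expert ratings on a 1-10 scale;
# the failure modes and ratings below are hypothetical examples.

failure_modes = [
    {"mode": "misaligned fixture", "severity": 7, "occurrence": 4, "detection": 6},
    {"mode": "missing fastener",   "severity": 9, "occurrence": 2, "detection": 3},
    {"mode": "surface scratch",    "severity": 3, "occurrence": 6, "detection": 2},
]

for fm in failure_modes:
    # RPN is the product of the three ratings, as in classical FMEA
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Rank failure modes from highest to lowest risk priority
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f'{fm["mode"]:20s} RPN = {fm["rpn"]}')
```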

RCA is a bottom-up reactive quantitative methodology that determines the causal mechanism behind a failure to prevent its recurrence in manufacturing processes (Oliveira et al., 2023 ). To locate, identify, and/or explain root causes, RCA utilizes statistical analysis tools such as regression, statistical process control (SPC), design of experiments (DOE), PCA, and cause-and-effect diagrams (Williams, 2001 ). A limited ability to predict future failures and difficulty in identifying complex or systemic issues are among RCA's limitations (Yuniarto, 2012 ).
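
As one concrete instance of the statistical tools listed above, the following sketch applies a bare-bones 3-sigma SPC control-limit check to flag measurements worth feeding into an RCA; the data are invented for illustration:

```python
# Bare-bones SPC check: estimate control limits from an in-control baseline,
# then flag new measurements outside mean +/- 3 sigma. Data are invented.
import statistics

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0]
new_samples = [10.0, 10.3, 12.7, 9.9]

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

flagged = [(i, x) for i, x in enumerate(new_samples) if not (lcl <= x <= ucl)]
print(f"mean={mean:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
print("out-of-control samples to feed into RCA:", flagged)
```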

FTA is a top-down reactive graphical method to model failure propagation through a system, i.e., how component failures lead to system failures (Kumar & Kaushik, 2020 ). FTA uses qualitative data to model the structure of a system and quantitative data, including probabilities and graph methods such as minimal cut/path sets, binary decision diagrams, simulation, and BNs, to model failure propagation. Requiring extensive data, a limited ability to identify contributing factors, and being time-consuming are among FTA's limitations (Ruijters & Stoelinga, 2015 ).
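
A minimal worked example of the quantitative side of FTA, assuming independent basic events with hypothetical probabilities and a top event defined as (A AND B) OR C:

```python
# Minimal fault tree sketch: the top event fires if (A AND B) OR C.
# Basic-event probabilities are hypothetical and assumed independent.
p = {"A": 0.02, "B": 0.10, "C": 0.005}

def p_and(*probs):   # AND gate, independent events
    out = 1.0
    for x in probs:
        out *= x
    return out

def p_or(*probs):    # OR gate, independent events: 1 - prod(1 - p_i)
    out = 1.0
    for x in probs:
        out *= (1.0 - x)
    return 1.0 - out

p_top = p_or(p_and(p["A"], p["B"]), p["C"])
print(f"P(top event) = {p_top:.4f}")   # ~0.0070
```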

In recent years, several literature reviews have been conducted on failure analysis methodologies, exploring various perspectives and approaches. Liu et al. ( 2013 ) reviewed FMEA risk evaluation tools including rule-based systems, mathematical programming, and multi-criteria decision-making (MCDM). They concluded that artificial intelligence and MCDM tools, particularly fuzzy rule-based systems, grey theory, and cost-based models, are the most cited tools for prioritizing risks in FMEA. Liu et al. ( 2019a ) and Dabous et al. ( 2021 ) reviewed the application of MCDM tools to FMEA. Papers from different areas (automotive, electronics, machinery and equipment, and steel manufacturing) were considered. The most used MCDM tools, namely the technique for order of preference by similarity to ideal solution (TOPSIS), the analytic hierarchy process (AHP), the decision-making trial and evaluation laboratory (DEMATEL), and grey theory, were identified.

Spreafico et al. ( 2017 ) provided a critical review of FMEA/failure mode, effects, and criticality analysis (FMECA) by classifying FMEA/FMECA limitations and issues and reviewing suggested improvements and solutions for those limitations. FMEA issues were classified into four groups: applicability, cause and effect analysis, risk analysis, and problem-solving. The main problems (and solutions) are, respectively, being time-consuming (integration with design tools, using more structured templates, and automation), lack of secondary-effects modeling (integration with other tools such as FTA, BN, and Petri nets), being too subjective (using statistical evaluation and cost-based approaches), and weakness in evaluating the implementation of a solution (improved presentation of the results and integration with other tools such as maintenance management tools). Huang et al. ( 2020 ) provided a bibliographic analysis of FMEA and its applications in manufacturing, marine, healthcare, aerospace, and electronics. Wu et al. ( 2021 ) sorted out potential failure mode identification approaches, such as analyzing the entry point for system failure mode identification, failure mode recognition tools, and failure mode specification description, and then provided a review of FMEA risk assessment tools.

Oliveira et al. ( 2023 ) reviewed the automatic RCA literature in manufacturing. Different commonly used data types (location-time, physical, and log-action) were identified, and the industries with the most use of RCA were ranked: semiconductor, chemical, automotive, and others. Different tools used to automate RCA, including decision trees, regression models, classification methods, clustering methods, neural networks, BNs, PCA, statistical tests, and control charts, were then discussed. Ruijters and Stoelinga ( 2015 ) provided FTA qualitative and quantitative analysis methods; different types of FTA, namely standard FTA, dynamic FTA, and other extensions, were also discussed. Zhu and Zhang ( 2022 ) also reviewed dynamic FTA. Cai et al. ( 2017 ) reviewed the application of BNs in fault diagnosis. First, an overview of BN types (static, dynamic, and object-oriented), structure modeling, parameter modeling, and inference was provided. The applicability of BNs for fault identification in process, energy, structural, manufacturing, and network systems was then discussed, and BN verification and validation methods were provided. Future prospects, including the integration of big data with BNs, real-time fault diagnosis BN inference algorithms, and hybrid fault diagnosis methods, were finally presented. More relevant BN reviews include BN application in reliability (Insua et al., 2020 ) and safety and risk assessments (Kabir & Papadopoulos, 2019 ).

The integration of FMEA, RCA, and FTA holds immense potential for quality and production managers to minimize failures and enhance system efficiency. By capitalizing on the unique strengths of each approach, the integration of these failure analysis methodologies enables a more comprehensive and effective examination of failures. However, existing studies and literature reviews have predominantly focused on individual methodologies, leading to a lack of integration and limited familiarity with the three approaches among engineers and industry experts. To address this gap and promote their integration, this study reviews the progress of intelligence failure analysis within FMEA, RCA, and FTA.

Other general failure analysis methodologies include, but are not limited to, the following methodologies. Event Tree Analysis, similar to FTA, is a graphical representation that models the progression of events following an initiating event, helping to analyze the potential consequences (Ruijters & Stoelinga, 2015 ). Bow-Tie Analysis, usually used in risk management, visualizes the relationship between different potential causes of a hazard and their possible consequences (Khakzad et al., 2012 ). Human Reliability Analysis focuses on assessing the probability of human error and its potential impact on systems and processes (French et al., 2011 ). The Fishbone Diagram visually represents potential causes of a problem to identify root causes by categorizing them into specific factors like people, process, equipment, materials, etc.

There are also industry-specific methodologies, including but not limited to the following ones. Electrostatic Discharge (ESD) Failure Analysis focuses on identifying failures caused by electrostatic discharge, a common concern in the electronics industry. The Hazard and Operability Study is widely used in the chemical industry to examine deviations from the design intent and identify potential hazards and operability issues. Incident Response and Post-Incident Analysis, in the IT industry, is used for analyzing and responding to security incidents, with a focus on preventing future occurrences. Hazard Analysis and Critical Control Points is a systematic preventive approach to food safety that identifies, evaluates, and controls hazards throughout the production process. Maximum credible accident analysis assesses and mitigates the most severe accidents that could occur in high-risk industries. For more information on industry-specific methodologies, interested readers may consult papers on the relevant industry, as these methodologies are broad and a deep discussion of them is beyond the scope of this paper.

Our review focuses on the historical progress of (hybrid) intelligence failure analysis to identify and classify the methodologies and tools used within it. In Industry 4.0, (hybrid) intelligence failure analysis can contribute to improved quality management and automated quality through an improved human cyber-physical experience. Different from the abovementioned reviews, the purpose of our study is to provide a rich, comprehensive understanding of the recent developments in these methodologies from an Industry 4.0 and hybrid intelligence perspective, the benefits of making them intelligent, i.e., (augmented) automatic and/or data-driven, and their limitations.

Research methodology

A systematic literature review analyses a particular knowledge domain’s body of literature to provide insights into research and practice and identify research gaps (Thomé et al., 2016 ). This section discusses our review scope and protocols, defining both our primary and secondary questions, and the criteria for selecting journals and papers to be reviewed. A bibliographic analysis of the selected papers is also presented, including distributions by year, affiliation, and journal.

Review scope and protocol

We follow the 8-step literature review methodology of Thomé et al. ( 2016 ) to ensure a rigorous literature review of intelligent, i.e., automated/data-driven, failure analysis methodologies for Industry 4.0.

In Step 1, our (hybrid) intelligence failure analysis problem is planned and formulated by identifying the needs, scope, and questions for this research. Our initial need for this literature review comes from a relevant industrial project entitled "assembly quality management using system intelligence" which aims to reduce the quality failures in assembly lines. The trend towards automated and data-driven methodologies in recent years signifies the need for this systematic literature review. Thus, three general failure analysis methodologies, FMEA, RCA, and FTA, are reviewed with respect to tools to make them intelligent and to derive benefits from hybrid intelligence.

Our primary questions are as follows. (i) What are the general failure analysis methodologies and what tools have been used to make them intelligent? (ii) How can these methodologies benefit from hybrid intelligence? (iii) What are the strengths and weaknesses of these methodologies and tools? Our secondary questions are as follows. (i) How intelligent are these tools? (ii) What types of data do they use? Which tools allow a good fusion of human and machine intelligence? (iii) How well do they identify the root causes of failures? (iv) What are the possible future prospects?

Fig. 1 Distribution of papers by year and affiliation

Step 2 concerns searching the literature by selecting relevant journals, databases, keywords, and criteria to include or exclude papers. We selected the SCOPUS database to scan for relevant papers from 1990 to the first half of 2022. SCOPUS contains high-quality English publications and covers other databases such as ScienceDirect and IEEE Xplore. A two-level keyword structure is used. The first level retrieves all papers that have either failure mode and effect analysis, FMEA, failure mode and effects and criticality analysis, FMECA, fault tree analysis, FTA, event tree analysis, ETA, root cause analysis, RCA, failure identification, failure analysis, or fault diagnosis in the title, abstract, and/or keywords. The second level limits the papers retrieved by the first-level keywords to those that have either Bayesian network, BN, automated, automatic, automation, smart, intelligence, or data-driven in the title, abstract, and/or keywords.
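
To make the two-level structure concrete, the snippet below assembles an illustrative Scopus-style query; this is a reconstruction for illustration, not the authors' verbatim search string:

```python
# Illustrative two-level Scopus-style query: level 1 = failure analysis terms,
# level 2 = intelligence/automation terms, both restricted to title/abstract/keywords.
level1 = ('"failure mode and effect analysis" OR FMEA OR FMECA OR '
          '"fault tree analysis" OR FTA OR "event tree analysis" OR ETA OR '
          '"root cause analysis" OR RCA OR "failure identification" OR '
          '"failure analysis" OR "fault diagnosis"')
level2 = ('"Bayesian network" OR BN OR automated OR automatic OR automation OR '
          'smart OR intelligence OR "data-driven"')

query = f"TITLE-ABS-KEY({level1}) AND TITLE-ABS-KEY({level2})"
print(query)
```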

To ensure the scientific rigor of our literature review process, we removed papers that met at least one of the following criteria:

  • Publications with concise and/or ambiguous information that would make it impossible to re-implement the tools and methodologies described in the paper later on.
  • Publications in low-level journals, i.e., journals in the third quartile (Q3) or lower in the Scimago Journal & Country Rank.
  • Papers with subject areas that are irrelevant to our research topic, such as physics and astronomy.

Steps 3 and 4 involve gathering data and evaluating data quality. We download papers and check their sources according to exclusion criteria. Step 5 concerns data analysis. Step 6 focuses on interpreting the data. The final selected papers are analyzed and interpreted in Section Managerial insights, limitations, and future research . Step 7 involves preparing the results and report. Step 8 requires the review to be updated continuously.

Discussion and statistical analysis

Here is a bibliometric analysis of our literature review. A total of 15,977 papers were found in our first search. By applying the exclusion criteria, we narrowed the search to 7113 papers. We then checked the titles of these 7113 papers, comprising 4359 conference and 2754 journal papers. We downloaded 1203 papers to read their abstracts and skim their bodies. Then, 1114 low-quality/irrelevant papers were excluded. The remaining 86 high-quality papers were examined for this study.

Distributions of papers by year and affiliation are shown in Fig. 1 . In total, 28 countries have contributed. Most affiliations are in advanced countries, including China, Germany, and the UK. Surprisingly, we found no publications from Japan and only five from the USA. Only one paper was published between 1990 and 1999 because of limited data and technology, e.g., sensors and industrial cameras. The slow growth observed between 2000 and 2014 coincides with technological advancement and the emergence of Industry 4.0. Advanced technology and researchers' focus on Industry 4.0 have led to significant growth every year since 2015. It is worth noting that the 2022 information is incomplete because this research was conducted in the middle of 2022. We expect more publications, at least equal to 2021, for 2022.

The distribution of papers by journal is shown in Fig. 2 . In total, 58 journals and conferences have contributed. Journals with a focus on production and quality, e.g., International Journal of Production Research , have published the most papers. Technology-focused journals, e.g., IEEE Access , have also contributed.

Fig. 2 Distribution of papers by journal

Literature categorization

Selected papers are now categorized based on the four general steps of a failure analysis methodology, involving failure structure detection, failure event probabilities detection, failure risk analysis, and outputs. Then, a statistical analysis of these categorizations is provided.

These four steps of a failure analysis methodology are illustrated in Fig. 3 . The first two steps deal with input data. In step 1, the failure structure is identified, encompassing all (possible) failures, the failure propagation structure, failure interdependency, and causes and effects. Step 2 involves detecting event probabilities in a failure structure. For example, classical FMEA scores each failure with severity, occurrence, and detection rates.

Fig. 3 Four general steps of a failure analysis methodology

To analyze failures in a (production) system, data should be collected to identify the failure structure and detect failures. Reactive methodologies, such as RCA, are data-driven and typically gather available data in a system, while proactive methodologies, such as FMEA, are expert-driven and gather data through expert knowledge. However, a (hybrid) intelligence failure analysis methodology should take advantage of both advanced technologies, such as sensors and Internet-enabled machines and tools, and experts to automatically gather required data, combining proactive and reactive approaches, and providing highly reliable analyses and solutions.

In step 3, all input data are processed to determine the risk value associated with each failure and the most probable causes (usually based on an observed or potential effect). Typically, a main tool, such as Bayesian networks, neural rule-based systems, statistical analysis, or expert analysis, is used to determine root causes, classify failures, and/or rank failures.

Step 4 outputs results that may include failures and sources, reasons behind the sources, and mitigation actions. The output of this tool is post-processed to provide possible solutions and information that is explainable and easy to use for both humans and machines.

Step 1: failure structure

Failure structure identification is the first step in a failure analysis methodology. (Potential) failures, causes, effects, and/or failure interdependency are identified. We categorize the literature to develop a (hybrid) intelligence failure methodology to identify failure structure, causes, effects, interdependencies, and relationships between failures, failures and causes, and failures and effects.

Traditionally, experts have defined failure structures by analyzing causes, effects, and the interdependency of failures. However, recent studies have explored alternative approaches to identifying failure structures, leveraging available data sources such as problem-solving databases, design forms, and process descriptions. Problem-solving databases include quality issue records, maintenance records, failure analysis records, and CBR databases. These records could be stored in structured databases and sheets, or unstructured texts. Design forms may include design FMEA forms, reliability characteristics, and product quality characteristics. Process descriptions may include operations, stations, and key operational characteristics. Moreover, simulation can be used to generate failures, causes, and effects (Snooke & Price, 2012 ). Design forms and process descriptions are generated by experts, usually for other purposes, and are re-used for failure analysis. Problem-solving databases could be generated by experts, such as previous FMEAs, or by an automated failure analysis methodology, such as automated RCA. Table 1 classifies studies based on the data sources used to identify the failure structure.

Data processing methods

To define failure structure from operational expert-driven data, no specific tool has been used. In the industry, failure structures are typically defined by an expert (or group of experts). When expert-driven or data-driven historical data and/or design forms and process descriptions are available, ontology-driven algorithms, including heuristics (Sayed & Lohse, 2014 ; Zhou et al., 2015 ; Steenwinckel et al., 2018 ; Xu & Dang, 2023 ) and SysML modeling language (Hecht & Baum, 2019 ), process/system decomposition (the operation, the station, and the key characteristics levels) (Zuo et al., 2016 ; Khorshidi et al., 2015 ; Zhou et al., 2015 ), rule-based algorithms that use CBR (Yang et al., 2018 ; Liu & Ke, 2007 ; Xu & Dang, 2023 ; Oliveira et al., 2022 , 2021 ), and FTA/BN modeling from FMEA/expert data (Yang et al., 2022 ; Steenwinckel et al., 2018 ; Palluat et al., 2006 ) and from Perti net (Yang & Liu, 1998 ) have been suggested. Rivera Torres et al. ( 2018 ) divided a system into components and related failures to each of the components to make a tree of components and failures.

A component-failure matrix can be generated by mining unstructured quality-problem texts from historical documents such as bills of material and failure analysis reports. Apriori algorithms were used to find synonyms in the set of failure modes (Xu et al., 2020 ). The 8D method is used to describe a failure, and ontology was used to store and retrieve data in a knowledge-based CBR system.

Yang et al. ( 2022 ), Leu and Chang ( 2013 ) and Waghen and Ouali ( 2021 ) have suggested building a BN structure from the FTA model. Wang et al. ( 2018 ) proposed using the fault feature diagram and a fault-labeled transition system based on the Kripke structure to describe system behavior. The MASON (manufacturing semantic ontology) has been used to construct the structure of the failure class by Psarommatis and Kiritsis ( 2022 ). Teoh and Case ( 2005 ) developed a functional diagram to construct a failure structure between components of a system and to identify causes and effect propagation. Yang et al. ( 2018 ) used an FMEA-style CBR to collect failures and search for similarity; they then used the CBR to build a BN using a heuristic algorithm.

Step 2: failure detection

Failure detection data are gathered to determine the strength of relationships among failures, causes, and effects.

Failure detection can be based on operational or historical expert-driven data, as well as data-driven historical and/or real-time data obtained from sensors. Such data can come from a variety of sources, including design and control parameters (such as machine age or workpiece geometry), state variables (such as power demand), performance criteria (such as process time or acoustic emission), and internal/external influencing factors (such as environmental conditions) (Filz et al., 2021b ; Dey & Stori, 2005 ). These data are usually used to determine occurrence probability of failures. To determine the severity and detection probabilities of failures, conditional severity utility data/tables may be used (Lee, 2001 ). Simulation can also be used to determine occurrence, severity, and detection (Price & Taylor, 2002 ). Table 2 summarizes types of data that are usually used to detect failures in the literature.

Processing data refers to the transformation of raw data into meaningful information. A data processing tool is needed that provides accurate and complete information about the system and relationships between data and potential failures.

First, data from different sources should be pre-processed. In a data pre-processing step, data is cleaned, edited, reduced, or wrangled to ensure or enhance performance, such as replacing a missing value with the mean value of the entire column (Filz et al., 2021b ; Schuh et al., 2021 ; Zhang et al., 2023 ; Musumeci et al., 2020 ; Jiao et al., 2020 ; Yang et al., 2015 ; Chien et al., 2017 ).

Data then may need to be processed according to the tools used in Step 3. Common data processing methods between all tools include data normalization using the min-max method (Filz et al., 2021b ; Musumeci et al., 2020 ) and other methods (Yang et al., 2018 ; Schuh et al., 2021 ; Jiao et al., 2020 ; Sariyer et al., 2021 ; Chien et al., 2017 ).

Feature selection/extraction algorithms have been used to select the most important features of data (Filz et al., 2021b ; Xu & Dang, 2020 ; Mazzoleni et al., 2017 ; Duan et al., 2020 ; Schuh et al., 2021 ; Zhang et al., 2023 ; Musumeci et al., 2020 ; Yang et al., 2015 ; Sariyer et al., 2021 ).
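
As a minimal sketch of this pre-processing chain (min-max normalization followed by univariate feature selection), assuming scikit-learn is available and using synthetic placeholder data rather than data from any cited study:

```python
# Sketch of a typical pre-processing chain: min-max normalization followed by
# univariate feature selection. Process data and labels are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                   # 200 samples, 10 process variables
y = (X[:, 2] + 0.5 * X[:, 7] > 0).astype(int)    # synthetic failure/no-failure label

X_scaled = MinMaxScaler().fit_transform(X)       # min-max normalization to [0, 1]
selector = SelectKBest(score_func=f_classif, k=3).fit(X_scaled, y)
print("selected feature indices:", selector.get_support(indices=True))
```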

For BN-based failure analysis, maximum entropy theory has been proposed to calculate failure probabilities from expert-based data (Rastayesh et al., 2019 ). Fuzzy methods have also been used to convert linguistic terms to occurrence probabilities (Yucesan et al., 2021 ; Wan et al., 2019 ; Nie et al., 2019 ; Nepal & Yadav, 2015 ; Ma & Wu, 2020 ; Li et al., 2013 ; Duan et al., 2020 ). A Euclidean distance-based similarity measure (Chang et al., 2015 ), a fuzzy rule-based RPN model (Tay et al., 2015 ), heuristic algorithms (Brahim et al., 2019 ; Dey & Stori, 2005 ; Yang et al., 2022 ), and a fuzzy probability function (Khorshidi et al., 2015 ) have been suggested to build failure probabilities.
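
The sketch below illustrates one common way such fuzzy conversions are done: linguistic occurrence ratings mapped to triangular fuzzy numbers and defuzzified by the centroid rule. The term scale and numeric supports are illustrative, not taken from any specific study cited above:

```python
# Hedged sketch: linguistic occurrence ratings -> triangular fuzzy numbers ->
# crisp probabilities via centroid defuzzification. Values are illustrative.
triangular = {           # (a, b, c): support lower bound, peak, upper bound
    "very low":  (0.00, 0.05, 0.10),
    "low":       (0.05, 0.15, 0.25),
    "moderate":  (0.20, 0.35, 0.50),
    "high":      (0.45, 0.65, 0.85),
    "very high": (0.80, 0.90, 1.00),
}

def defuzzify(term):
    # Centroid of a triangular membership function is (a + b + c) / 3
    a, b, c = triangular[term]
    return (a + b + c) / 3

for term in ["low", "moderate", "very high"]:
    print(f"{term:10s} -> occurrence probability ~ {defuzzify(term):.3f}")
```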

Failure analysis data may be incomplete, inaccurate, imprecise, and limited. Therefore, several studies have used tools to deal with uncertainty in data. The most commonly used methods are fuzzy FMEA (Yang et al., 2022 ; Nepal & Yadav, 2015 ; Ma & Wu, 2020 ), fuzzy BN (Yucesan et al., 2021 ; Wan et al., 2019 ; Nie et al., 2019 ), fuzzy MCDM (Yucesan et al., 2021 ; Nie et al., 2019 ; Nepal & Yadav, 2015 ), fuzzy neural network (Tay et al., 2015 ; Palluat et al., 2006 ), and fuzzy evidential reasoning and Petri nets (Shi et al., 2020 ).

Step 3: analysis

A failure analysis tool is essential for conducting any failure analysis. Table 3 categorizes the various data-driven tools used in the literature, such as BNs, clustering/classification, rule-based reasoning, and other tools, along with the aspects they support.

BNs model probabilistic relationships among failure causes, modes, and effects using directed acyclic graphs and conditional probabilities. Pieces of evidence, i.e., known variables, are propagated through the graph to evaluate unobserved variables (Cai et al., 2017 ). For example, Rastayesh et al. ( 2019 ) applied BNs for FMEA and performed a risk analysis of a proton exchange membrane fuel cell. Various elements and levels of the system were identified along with possible routes of failure, including failure causes, modes, and effects, and a BN was constructed to perform the failure analysis. Other examples of BN applications include an assembly system (Sayed & Lohse, 2014 ), kitchen equipment manufacturing (Yucesan et al., 2021 ), and Auxiliary Power Unit (APU) fault isolation (Yang et al., 2015 ).
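
To make the evidence-propagation idea concrete, the sketch below performs exact inference by enumeration on a tiny hypothetical cause → failure mode → observed effect chain; all probabilities are invented, and a real application would use a BN library with elicited or learned conditional probability tables:

```python
# Minimal Bayesian-network-style inference by exhaustive enumeration, using a
# tiny hypothetical cause -> failure mode -> observed effect chain.
from itertools import product

p_cause = {True: 0.05, False: 0.95}              # P(worn tool)
p_fail_given_cause = {True: 0.70, False: 0.02}   # P(dimensional defect | cause)
p_effect_given_fail = {True: 0.90, False: 0.05}  # P(rejected part | defect)

def joint(cause, fail, effect):
    pf = p_fail_given_cause[cause]
    pe = p_effect_given_fail[fail]
    return (p_cause[cause]
            * (pf if fail else 1 - pf)
            * (pe if effect else 1 - pe))

# Posterior P(cause | effect observed): propagate the evidence through the chain
evidence_effect = True
num = sum(joint(True, f, evidence_effect) for f in (True, False))
den = sum(joint(c, f, evidence_effect) for c, f in product((True, False), repeat=2))
print(f"P(worn tool | rejected part) = {num / den:.3f}")   # ~0.336 with these numbers
```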

Classification assigns predefined labels to input data based on learned patterns, while clustering organizes data into groups based on similarities. Neural networks are commonly used for failure classification and have been employed in most studies; hence, we separated these studies from those that used other clustering/classification tools. Neural networks consist of layers of interconnected nodes, with an input layer receiving data, one or more hidden layers for processing, and an output layer providing the final classification (Jiang et al., 2024 ). For example, Ma and Wu ( 2020 ) applied neural networks to assess the quality of 311 apartments in Shanghai, China, for FMEA. The input includes various APIs collected for the apartments, and the output was the risk rate of each apartment. In another study, Ma et al. ( 2021 ) applied neural networks for RCA to predict the root causes of multiple quality problems in an automobile factory. Other examples of neural network applications include industrial valve manufacturing (Pang et al., 2021 ), complex cyber–physical systems (Liu et al., 2021 ), and an electronic module designed for use in a medical device (Psarommatis & Kiritsis, 2022 ).
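
As an illustrative sketch of a neural-network failure classifier, assuming scikit-learn is available; the sensor features and failure-class labels are synthetic placeholders, not data from the studies above:

```python
# Sketch of a neural-network failure classifier on synthetic placeholder data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))        # e.g. vibration, torque, temperature features
y = rng.integers(0, 3, size=300)     # three hypothetical failure classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```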

Other clustering/classification tools include evolving tree (Chang et al., 2015 ), reinforced concrete columns (Mangalathu et al., 2020 ), K-means, random forest algorithms (Xu & Dang, 2020 ; Chien et al., 2017 ; Oliveira et al., 2022 , 2021 ), contrasting clusters (Zhang et al., 2023 ), K-nearest neighbors (Ma et al., 2021 ), self-organizing maps (Gómez-Andrades et al., 2015 ), and Naive Bayes (Schuh et al., 2021 ; Yang et al., 2015 ).

Rule-based reasoning represents knowledge in the form of "if-then" rules and involves a knowledge base containing the rules and a reasoning engine that applies these rules to incoming data or situations. For instance, Jacobo et al. ( 2007 ) utilized rule-based reasoning for analyzing failures in mechanical components. This approach serves as a knowledgeable assistant, offering guidance to less experienced users with foundational knowledge in materials science and related engineering fields throughout the failure analysis process. The application of rule-based reasoning to wind turbine FMEA has also been studied by Zhou et al. ( 2015 ).
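
A toy sketch of the rule-based idea, with hypothetical symptoms, rules, and causes (not taken from the cited studies):

```python
# Toy rule-based reasoning engine: "if-then" rules map observed symptoms to
# candidate failure causes. Rules and symptoms are hypothetical examples.
rules = [
    ({"vibration_high", "temperature_high"}, "bearing wear"),
    ({"torque_spike"},                        "tool breakage"),
    ({"vibration_high", "surface_rough"},     "unbalanced spindle"),
]

def infer(observed_symptoms):
    # A rule fires when all of its conditions are contained in the observations
    return [cause for conditions, cause in rules if conditions <= observed_symptoms]

print(infer({"vibration_high", "temperature_high", "surface_rough"}))
# -> ['bearing wear', 'unbalanced spindle']
```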

Other tools include gradient-boosted trees, logistic regression (Filz et al., 2021b ), CBR (Tönnes, 2018 ; Camarillo et al., 2018 ; Jacobo et al., 2007 ), analyzing sensitivities of the machining operation by the stream of variations and errors probability distribution determination (Zuo et al., 2016 ), causal reasoning (Teoh & Case, 2005 ), probabilistic Boolean networks with interventions (Rivera Torres et al., 2018 ), principal component analysis (PCA) (Duan et al., 2020 ; Zhang et al., 2023 ; Jiao et al., 2020 ; Sun et al., 2021 ), factor ranking algorithms (Oliveira et al., 2022 , 2021 ), heuristics and/or new frameworks (Camarillo et al., 2018 ; Yang et al., 2009 , 2020 ; Snooke & Price, 2012 ; Xu & Dang, 2023 ; Rokach & Hutter, 2012 ; Wang et al., 2018 ; Hecht & Baum, 2019 ; Yang & Liu, 1998 ; Liu & Ke, 2007 ), and mathematical optimization methods (Khorshidi et al., 2015 ).

These tools may be combined with other tools, including sequential state switching and artificial anomaly association in a neural network (Liu et al., 2021), MCDM/optimization (Yucesan et al., 2021; Jomthanachai et al., 2021; Ma et al., 2021; Sun et al., 2021), game theory (Mangalathu et al., 2020), fuzzy evidential reasoning and Petri nets (Shi et al., 2020), and maximum spanning tree, conditional Granger causality, and multivariate time series (Chen et al., 2018).

Step 4: output

A data analysis process can benefit not only humans but also machines and tools in a hybrid intelligence failure analysis methodology. Therefore, the output information should be carefully designed. Table 4 ranks the output data, and the list of studies for each output is available in Online Appendix EC.1. Most studies have focused on automatically identifying the root causes of failures, which is the primary objective of a failure analysis methodology. In addition, researchers have also focused on failure occurrence rating, ranking, and classification. While automatically finding the root causes of failures is important, a hybrid intelligence failure analysis process needs to interpret the related data and information and automatically provide mitigation actions for both operators and machines. However, only a few studies have proposed tools to automatically find possible mitigation actions, usually based on CBR databases and readable only by humans. Therefore, future studies may focus on finding possible automated mitigation actions for failures and developing a quality inspection strategy.

Data post-processing

A data post-processing step transforms data from the main tool into readable, actionable, and useful information for both humans and machines. Adapting solutions from similar failures in a database (i.e., CBR) to propose a solution for a detected failure has been proposed by Tönnes (2018), Camarillo et al. (2018), Hecht and Baum (2019), Jacobo et al. (2007), Liu and Ke (2007) and Ma et al. (2021). Simulation to analyze different scenarios (Psarommatis & Kiritsis, 2022; Jomthanachai et al., 2021; Chien et al., 2017; Oliveira et al., 2022), mathematical optimization models (Khorshidi et al., 2015; Ma et al., 2021), and a self-organizing map (SOM) neural network (Chang et al., 2017) to automatically select the best corrective action have also been proposed. Fuzzy rule-based systems to obtain the RPN (Nepal & Yadav, 2015) and visualization (Xu & Dang, 2020; Yang et al., 2009) are discussed as well.
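The CBR-style post-processing mentioned above can be sketched as a simple nearest-neighbour retrieval over a case base of past failures and their corrective actions; the cases, feature encoding, and actions below are invented for illustration.

```python
# CBR retrieval sketch: reuse the corrective action of the most similar past case.
# Cases, features, and actions are purely illustrative.
import numpy as np

case_base = [
    {"features": np.array([0.9, 0.1, 0.3]), "action": "replace seal"},
    {"features": np.array([0.2, 0.8, 0.5]), "action": "recalibrate sensor"},
    {"features": np.array([0.1, 0.2, 0.9]), "action": "adjust feed rate"},
]

def retrieve(query):
    # nearest neighbour by Euclidean distance in the feature space
    return min(case_base, key=lambda c: np.linalg.norm(c["features"] - query))

new_failure = np.array([0.85, 0.15, 0.25])
print(retrieve(new_failure)["action"])   # proposed corrective action from the closest past case
```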

The statistical analysis of the reviewed papers reveals that most FMEA-based studies rely solely on expert-based information to construct failure structures, while RCA-based papers tend to use a hybrid of problem-solving and system-related data. This is depicted in Fig. 4, which shows the distribution of papers by the data used over time. FMEA is typically used to identify potential failures when not enough data are available to construct a failure structure from system-based data. The trend shows some effort to use data from similar products/processes, instead of expert knowledge, to construct failure structures. RCA and FTA are reactive methodologies that analyze more information than FMEA. Advances in data mining techniques, along with increased data availability, have led to a growing trend of using data to construct failure structures. For a comprehensive and reliable intelligent failure analysis, a combination of all kinds of data is necessary. It is worth noting that Waghen and Ouali (2021) proposed a heuristic method to augment failure structure identification that uses both expert and historical data; they suggested engaging expert knowledge when historical data are insufficient to identify a failure structure and/or the reliability of a failure structure is low. Other studies have focused solely on failure identification through expert knowledge or historical data, without considering the potential benefits of combining different types of data.

Fig. 4 Input data statistical analysis

While most FMEA-based papers use only expert-based data to determine failure probability, the use of problem-solving data and of a hybrid of problem-solving and system-related data, i.e., production line data, has grown significantly over time. RCA and FTA usually tend to use more problem-solving and system-related data. Moreover, this figure and Fig. 5 show that the literature on RCA has been growing in recent years, while the trend for FMEA has remained stable over time. We found that Filz et al. (2021b), Mazzoleni et al. (2017), Ma and Wu (2020) and Yang et al. (2015) improved FMEA by using a combination of expert-based, problem-solving, and system-related data to determine potential failures and their causes, analyzing these data with deep learning, classification, and neural network tools. Duan et al. (2020) and Ma et al. (2021) exploited the benefits of both expert-based data and problem-solving and system-related data in the RCA context, analyzing the root causes of failures using neural networks.

The distribution of papers by the tools used is shown in Fig. 5. BNs have been used mainly within the context of FMEA methodologies, with a growing trend during recent years, while RCA researchers have used them less frequently. BNs have the potential to model failure propagation, multi-failure scenarios, and solution analysis to propose potential solutions; however, all of the studies reviewed in this paper used BNs only to identify the root causes of failures. BNs offer a clear graphical representation of failures, their causes, and their effects, which facilitates the interpretation of results by humans. They also provide an easy way for humans to intervene, analyze the sensitivity of results, and correct processed data if it appears unrealistic. BNs are a well-developed tool and have the ability to work with expert-based, historical, and system-based data, even when data are fuzzy or limited. Developing methodologies that leverage the advantages of BNs seems promising for FMEA, RCA, and FTA.

Fig. 5 Tools distribution statistical analysis

RCA and FTA have relied on a variety of tools over time, such as PCA and regression, with no clear trend toward a specific tool, due to their need for a large amount of data. However, these methods have limitations in incorporating both human and machine intelligence and mostly rely on machine intelligence. Although neural networks and classification algorithms have gained attention in both FMEA and RCA during the last few years, they are black boxes and difficult for humans to modify. Also, classification algorithms typically do not address failure propagation or multi-failure modes. BNs offer a promising alternative, as they can model failure propagation and multiple failures, and they provide a clear graphical representation of failures, causes, and effects. Furthermore, BNs can incorporate both expert-based and historical data, making them well-suited for FMEA, RCA, and FTA. Therefore, developing methodologies that fully leverage the benefits of BNs in these domains would be valuable.

Managerial insights, limitations, and future research

In this section, we discuss managerial insights, limitations, and future research related to different aspects of a hybrid intelligence failure analysis methodology. The aim is to assist researchers in focusing on relevant recommendations. Section Applications and complexity delves into the applications and complexity of each study and provides examples for each tool. Section Levels of automation/intelligence presents the levels of intelligence for a failure analysis methodology. Section Introducing knowledge into tools discusses how knowledge is introduced into failure analysis tools for an effective failure analysis. A more in-depth discussion of hybrid intelligence is given in Section Hybrid intelligence. The last three sections provide insights into failure propagation and correlation, hybrid methodologies, and other areas of future research.

Applications and complexity

Intelligent FMEA, RCA, and FTA have been applied to various applications, including production quality management, computer systems, reliability and safety, chemical systems, and others. Table 5 presents the distribution of reviewed papers by application. The list of studies per application is available in Online Appendix EC.2. Production quality management has been the most common application of intelligent failure analysis methodologies due to the significant costs associated with quality assurance. Smart failure analysis methodologies have also been impacted by the increased use of sensors and IoT to collect precise data from machines, tools, operators, and stations, as well as powerful computers to analyze the data. Computer systems failure analysis and system reliability and safety rank second, while chemical systems rank third, as these systems often require specific methodologies, such as hazard and operability analysis.

We examined the dataset of every paper to find information about the complexity of its case study and the reasons behind its good results, to help readers select a study validated on a large set of data. An enriched dataset of problem-solving data is used by Xu et al. (2020), Du et al. (2012), Oliveira et al. (2021), Gómez-Andrades et al. (2015), Leu and Chang (2013), Price and Taylor (2002), Sariyer et al. (2021), Gomez-Andrades et al. (2016) and Xu and Dang (2023). An enriched dataset of historical problem-solving and sensor data is used by Filz et al. (2021b), Sun et al. (2021), Mazzoleni et al. (2017), Hireche et al. (2018), Yang et al. (2015), Demirbaga et al. (2021), Waghen and Ouali (2021), Zhang et al. (2023) and Oliveira et al. (2022). Data from the system and processes are used by Teoh and Case (2005), Ma et al. (2021), Schuh et al. (2021) and Waghen and Ouali (2021). Other studies demonstrated their methodology on a small problem.

Levels of automation/intelligence

Failure analysis intelligence can be divided into five levels based on the data used. Level 1 involves analyzing failures using expert-based data with the use of intelligence tools. This level can be further improved by incorporating fuzzy-based tools, such as fuzzy BNs, fuzzy neural networks, and fuzzy rule-based systems. If the amount of historical data can be increased over time, we suggest using BNs in a heuristic-based algorithm, as they have the capability to work with all possible data, resulting in fewer modifications in the failure analysis methodology over time. Good examples for Level 1 include Yucesan et al. ( 2021 ) and Brahim et al. ( 2019 ).

Level 2 involves analyzing failures by using experts to identify failure structures and problem-solving and system-related data to determine failure probabilities. This level suits a professional team that can correctly and completely identify the failure structure. It can also be used by those who work with variable structures, where updating the structure would require a lot of data modification. At Level 3, both identifying failure structures and analyzing failures are automated. This level is the most applicable when a good amount of data is available. BNs, classification algorithms, and neural networks are among the best tools to analyze failures within RCA, FMEA, and FTA methodologies. Studies such as Filz et al. (2021b), Zuo et al. (2016), Dey and Stori (2005), Mangalathu et al. (2020), Yang et al. (2015) and Ma et al. (2021) are good examples for Levels 2 and 3.

At Level 4, mitigation actions are also determined automatically; this level represents full automation of failure analysis. BNs are among the few tools that can encompass all steps of failure analysis, and as such we suggest using them. CBR databases, combined with system-based data, can be used alongside BNs to provide possible corrective actions. Tönnes (2018), Zuo et al. (2016) and Hecht and Baum (2019) are among the good studies for Level 4. Chang et al. (2017) focused on automating and visualizing corrective actions using a self-organizing map (SOM) neural network in an FMEA methodology. Future research should concentrate on the development of an automated FMEA that dynamically updates the current risk priority number (RPN). This can aid in predicting failures in parts or components of a system using a "Live RPN." The predictive capability of such a tool can be utilized to optimize the overall system and enables the transformation of a manufacturing system into a self-controlling system that adjusts based on current parameters (Filz et al., 2021b).
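A "Live RPN" of the kind suggested above could, in its simplest form, recompute the risk priority number whenever the occurrence estimate is refreshed from current production data. The sketch below is a minimal illustration; the mapping from observed failure rate to the 1-10 occurrence scale is an assumption, not a standard.

```python
# "Live RPN" sketch: the occurrence rating is re-derived from current failure data.
# The rate-to-rating thresholds are illustrative assumptions.
def occurrence_from_rate(failures: int, opportunities: int) -> int:
    rate = failures / max(opportunities, 1)
    thresholds = [1e-6, 1e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1]
    return 1 + sum(rate > t for t in thresholds)        # maps the rate onto a 1..10 scale

def live_rpn(severity: int, detection: int, failures: int, opportunities: int) -> int:
    return severity * occurrence_from_rate(failures, opportunities) * detection

# recomputed whenever new production data arrive
print(live_rpn(severity=8, detection=4, failures=3, opportunities=1_000))
```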

Level 5 is a hybrid intelligence approach to failure analysis that encompasses all other levels and can be implemented within FMEA, RCA, and FTA methodologies when only a limited amount of historical and system-based data is available, until a comprehensive CBR database is built. BNs provide a good graphical representation and can work with all possible data types; their advantages are significant enough that we suggest them for hybrid intelligence failure analysis. However, we did not find any comprehensive study for this level. A combination of studies that proposed methods to use integrated expert-based, problem-solving, and system-based data, such as Waghen and Ouali (2021) and Filz et al. (2021b), is suggested. Nonetheless, this level remains open and needs to be the focus of future research. To facilitate the implementation of hybrid intelligence failure analysis, a user-friendly interface for operators to interact with is crucial. Several studies have proposed user-interface applications for this purpose (Chan & McNaught, 2008; Camarillo et al., 2018; Li et al., 2013; Jacobo et al., 2007; Yang et al., 2009, 2020; Demirbaga et al., 2021; Snooke & Price, 2012; Palluat et al., 2006).

Introducing knowledge into tools

In this section, we analyze which types of knowledge (expert-driven, data-driven, or a hybrid of both) are usually used with which tools, and what the implications are for identifying suitable tools for hybrid intelligence failure analyses.

Figure 6 shows the distribution of literature based on the input data, tools, and outputs (four general steps of a failure analysis methodology in Fig. 3 ). The first column of nodes shows various combinations of types of knowledge, expert-driven, data-driven, or a hybrid of both, that are usually used in the literature to identify the structure of failure and to detect the probability of failures. The second column of nodes shows various tools that are used to analyze the failure. The third column of nodes shows outputs of a failure analysis. The number of studies with each particular focus is shown by the thickness of an arrow. Details are in Appendix EC.1.

Fig. 6 Literature distribution based on inputs, tools, and outputs

The following studies have introduced knowledge and data from both expert-based and data-based sources into a failure analysis methodology. Filz et al. (2021b) utilized expert knowledge to identify the structure of failure, the components involved, and the necessary sensors to be used; they then employed sensors to capture data and leveraged problem-solving data from the recorded expert archive to identify failures with a deep learning model. Similarly, Musumeci et al. (2020) used supervised algorithms to classify failures. Mazzoleni et al. (2017) used sensor data to select the most effective features related to a failure, and subsequently employed sensor data and expert failure datasets within a gradient boosting tree algorithm to identify the possibility of the failure. Duan et al. (2020) used data from different sources in a similar way for a neural network to identify the root cause of a failure. Ma and Wu (2020) utilized expert knowledge to identify failures in construction projects; expert datasets were then employed in conjunction with project performance indices to predict the possibility of a failure and determine its root cause using a neural network tool.

Hireche et al. (2018) and Yang et al. (2015) gathered data from sensors to determine the conditions of each failure/component node; a BN was then used to identify the risks and causes. A multi-level tree was developed by Waghen and Ouali (2021), in which each level contains a solution, pattern, and condition layer. Solutions are retrieved from a historical failure database as a combination of certain patterns. The pattern in each problem is identified and related to the solution using a supervised machine-learning tool, and each level is linked to the next until the root cause of a failure is correctly identified.

Other useful tips for introducing knowledge from different sources into a failure analysis methodology can be found in the following studies. Zuo et al. (2016) divided a multi-operation machining process into operation, station, and key characteristics levels. Stream of variations (SoV) analysis was used to evaluate the sensitivities of the machining operations level by level, the results were used to find the sources affecting quality, and distribution techniques for each quality precision were then chosen using multi-objective optimization. Dey and Stori (2005) used a message-passing method (Pearl, 1988) to update a BN with data from sensors, estimating the condition of the system and updating the CPTs, where each sensor output is considered as a node in the BN. Chan and McNaught (2008) also used sensor data to change the probabilities in a BN; a user interface was developed to make inferences and present the results to operators.
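The essence of these sensor-driven updating schemes is a Bayesian update of the belief in a failure cause each time new evidence arrives. A minimal, library-free sketch (with illustrative prior and likelihood values) is shown below.

```python
# Bayesian update of a failure-cause probability from one sensor observation.
# Prior and likelihood values are illustrative assumptions.
def bayes_update(prior: float, p_evidence_given_cause: float,
                 p_evidence_given_no_cause: float) -> float:
    evidence = prior * p_evidence_given_cause + (1 - prior) * p_evidence_given_no_cause
    return prior * p_evidence_given_cause / evidence

p_cause = 0.05                       # prior belief that the cause is present
# sensor flags an anomaly: likely under the cause, rare otherwise
p_cause = bayes_update(p_cause, p_evidence_given_cause=0.8, p_evidence_given_no_cause=0.05)
print(round(p_cause, 3))             # posterior after one sensor observation
```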

Rokach and Hutter (2012) used the sequence of machines and a commonality graph of steps and failure-cause data to cluster failures and find commonalities between them. A GO methodology is used by Liu et al. (2019b) to model the system, and a heuristic is used to construct the BN structure and probabilities from the GO model. Teoh and Case (2005) developed an objective-oriented framework that considers conceptual design information; a hierarchy of components, an assembly tree, and a functional diagram are built to capture data from processes and feed them into FMEA. Bhardwaj et al. (2022) used historical data from a similar system to estimate failure detection probabilities. Hecht and Baum (2019) used SysML to describe components and failures.

Zhou et al. (2015) used a tree representation of a system. Two classes of knowledge, shallow knowledge and deep knowledge, were gathered to generate rules for failure analysis; the former denotes the experiential knowledge of domain experts, and the latter the knowledge about the structure and basic principles of the diagnosis system. Liu and Ke (2007) used CBR to find similar problems and solutions, text mining to find key concepts of the failure in historical failure record texts, and rule mining to find hidden patterns among system features and failures. Filz et al. (2021a) gathered process parameters after each station using a quality check station; a self-organizing map was then used to find failure propagation and cause-effect relations. Ma et al. (2021) used data from the system to determine features of problems, products, and operators, and data from problem-solving databases to find new failures and classify them using these features and historical data.

Psarommatis and Kiritsis (2022) developed a methodology that uses data-driven and knowledge-based approaches, with an ontology based on the MASON ontology to describe the production domain and enrich the available data. Wang et al. (2018) developed a data acquisition system including monitor, sensor, and filter modules, and a fault diagram models failure propagation. They extended the Kripke structure by proposing a feature-labeled transition system, which is used to distinguish the behavior of the transition relationship by adding a signature to it.

This section highlights that in the realm of failure analysis, a majority of research papers have utilized a hybrid approach, combining expert and data knowledge for tasks such as failure detection, classification, and feature selection. However, to achieve real-time failure analysis, a more effective integration of these two sources is crucial. This integration should enable operators and engineers to provide timely input to the system and observe immediate results. Furthermore, only a limited number of studies have specifically focused on the identification of failure structures using either data or a hybrid of expert and data knowledge.

The use of BNs has emerged as a highly promising approach for achieving real-time input and structure identification in the field of failure analysis. By leveraging both expert knowledge and data sources, BNs have the capability to effectively incorporate expert knowledge as constraints within structure identification algorithms. Unlike traditional classification algorithms that are primarily designed for continuous data, BNs are versatile in handling both discrete and continuous data types. Moreover, BNs possess several strengths that make them particularly suitable for failure analysis. They excel at performing real-time inferences, engaging in counterfactual reasoning, and effectively managing confounding factors. Given these advantages, it is essential to allocate more attention to the application of BNs in hybrid intelligence failure analysis. This involves further exploration of their capabilities and conducting comparative analyses with other tools to assess their effectiveness in various scenarios. By focusing on BNs and conducting comprehensive evaluations, researchers can enhance the understanding and adoption of these powerful tools for improved failure analysis in real-time settings.
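One way to realize the constraint idea mentioned above is score-based structure learning in which expert knowledge fixes or forbids specific edges. The sketch below assumes pgmpy and that its HillClimbSearch.estimate accepts fixed_edges and black_list arguments, as in recent releases; the variables and data are synthetic.

```python
# BN structure learning with expert constraints, assuming pgmpy; data are synthetic.
import numpy as np
import pandas as pd
from pgmpy.estimators import HillClimbSearch

rng = np.random.default_rng(0)
cause = rng.integers(0, 2, 500)
mode = cause & rng.integers(0, 2, 500)                    # mode mostly follows the cause
effect = mode | (rng.random(500) < 0.05).astype(int)      # effect follows the mode, with noise
data = pd.DataFrame({"Cause": cause, "Mode": mode, "Effect": effect})

search = HillClimbSearch(data)
dag = search.estimate(
    fixed_edges=[("Cause", "Mode")],     # edge required by expert knowledge
    black_list=[("Effect", "Cause")],    # edge forbidden by expert knowledge
    show_progress=False,
)
print(sorted(dag.edges()))               # learned structure respecting the expert constraints
```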

Hybrid intelligence

A collaborative failure analysis methodology is needed, in which artificial intelligence tools, machines, and humans can communicate. While hybrid intelligence has gained attention in various fields, literature on the subject for failure analysis is still limited. For example, Piller et al. (2022) discussed methods to enhance productivity in manufacturing using hybrid intelligence; they explored considerations such as task allocation between humans and machines and the degree of machine intelligence integrated into manufacturing processes. Petrescu and Krishen (2023) and the references therein delve into the benefits and future directions of hybrid intelligence for marketing analytics. Mirbabaie et al. (2021) reviewed challenges associated with hybrid intelligence, focusing particularly on conversational agents in hospital settings. Ye et al. (2022) developed a parallel cognition model that draws on both a psychological model and user behavioral data to adaptively learn an individual's cognitive knowledge. Lee et al. (2020) combined a data-driven prediction model with a rule-based system to benefit from the combination of human and machine intelligence for personalized rehabilitation assessment.

An artificial intelligence tool should provide not only its final results but also its reasoning. A human can then analyze the tool's reasoning through a user-interface application and correct possible mistakes instantly and effortlessly. To enable this capability, the use of a white-box artificial intelligence tool, such as Bayesian networks, is essential. Explainable AI aids in comprehending and trusting the decision-making process of the hybrid intelligence system by providing the reasoning behind it (Confalonieri et al., 2021). Moreover, a machine should be able to interpret and implement the solutions of an artificial intelligence tool and/or a human. Artificial intelligence tools, machines, and humans can all learn from mistakes (Correia et al., 2023).

To fully exploit the complementarity in human–machine collaborations and effectively utilize the strengths of both, it is important to recognize and understand their roles, limitations, and capabilities in the context of failure analysis. Future research should focus on developing a clear plan for their teamwork and joint actions, including determining the optimal sensor types and locations, quality inspection stations, and human/machine analysis processes. In other words, the question of how to design a decision support system that integrates both human knowledge and machine intelligence with respect to quality management should be answered. Additionally, tools should be developed to propose possible mitigation actions based on the unique characteristics of the system, environment, humans, and machines. To achieve this, system-related data along with CBR data can be analyzed to find potential mitigation actions.

A general framework for human–machine fusion could involve the following steps: identifying applicable human knowledge and machine data for the problem, determining machine intelligence tools that facilitate the integration of human–machine elements like BNs, identifying the suitable points in the decision-making process to combine human knowledge and machine intelligence effectively, designing the user interface, and incorporating online learning using input from human knowledge (Jarrahi et al., 2022 ). However, human–machine fusion is not an easy task due to the complexity of human–machine interaction, the need for effective and online methods to work with both human and machine data, and the challenge of online learning from human knowledge. For instance, while ChatGPT interacts well with humans, it currently does not update its knowledge using human knowledge input for future cases (Dellermann et al., 2019 ; Correia et al., 2023 ).

Failure propagation and correlation

Most FMEA papers concentrated on analyzing failures in individual products, processes, or machines. It is essential to acknowledge that production processes and machines are interconnected, leading to the correlation and propagation of failures among them. Consequently, addressing the challenge of analyzing failures across multiple machines becomes crucial. To tackle this issue effectively, a holistic approach is necessary: rather than focusing solely on individual machines, researchers should take a broader perspective and consider the entire production system to identify the interdependencies and interactions among different machines, multiple processes, and the system as a whole.

For an intelligent failure analysis, it is necessary to exploit detailed system-related data to carefully and comprehensively identify the relations between different parts of a system, product, and/or process. Some papers have suggested methods to identify failure propagation and correlation (Wang et al., 2021; Zhu et al., 2021; Chen et al., 2017), but they usually analyze correlations only between failures or between risk criteria using MCDM or statistical methods. An intelligent failure analysis should go beyond this and identify failure propagation and correlation among the parts of a system.

In the literature, Chen and Jiao (2017) applied finite state machine (FSM) theory to model the interactive behaviors between components, constructing the transition process of fault propagation through the extraction of the state, input, output, and state function of each component. Zuo et al. (2016) used SoV to model the propagation of variations from station to station and operation to operation; propagation from one station (operation) to the next was modeled using a regression-like formula. Ament and Goch (2001) used quality check data gathered after each station to train a neural network for failure propagation and to estimate the relationships between failures in stations, using a regression model to find patterns in the quality check data. Ma et al. (2021) used patterns in data to classify failures and identify causes.
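A minimal sketch, in the spirit of the regression-like station-to-station propagation modelling mentioned above, is given below; it assumes scikit-learn, and the deviation data are synthetic.

```python
# Regression-style propagation sketch: deviations after station 1 predict deviations
# after station 2. Assumes scikit-learn; data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
dev_station_1 = rng.normal(scale=0.10, size=(300, 1))                         # deviation after station 1
dev_station_2 = 0.8 * dev_station_1 + rng.normal(scale=0.02, size=(300, 1))   # carried over + local noise

model = LinearRegression().fit(dev_station_1, dev_station_2)
print("propagation coefficient:", model.coef_[0][0])   # close to 0.8 indicates strong carry-over
```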

To conduct an intelligent failure analysis, it is important to identify every part involved, along with its roles, characteristics, and states. The analysis should include the identification of failure propagation and effects on functions, parts, and other failures. One approach to analyzing failures is simulation, which can help assess the changes in the characteristics of every part of a system, including humans, machines, and the environment. To analyze the complexity of failure propagation and the mutual interactions among different parts of a system, data-driven tools and heuristic algorithms need to be developed. These tools should be capable of managing a large bill of materials and analyzing the failure structure beyond traditional statistical and MCDM methods. Rule mining can be a useful tool for detecting failure correlation and propagation, especially in situations where limited data are available and human interpretation is crucial.
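Rule mining for failure correlation can be approximated, in its simplest form, by counting how often failures co-occur in historical records and reporting high-confidence co-occurrence rules. The sketch below is a plain-Python stand-in for full association-rule mining; the failure labels are invented.

```python
# Simple co-occurrence rule mining over historical failure records; labels are illustrative.
from itertools import permutations
from collections import Counter

records = [
    {"seal_leak", "pressure_drop"},
    {"seal_leak", "pressure_drop", "overheating"},
    {"overheating"},
    {"seal_leak", "pressure_drop"},
]

cooccur = Counter(p for r in records for p in permutations(r, 2))   # ordered failure pairs per record
singles = Counter(f for r in records for f in r)

for (a, b), count in cooccur.items():
    confidence = count / singles[a]            # P(b also recorded | a recorded)
    if confidence >= 0.8:
        print(f"{a} -> {b} (confidence {confidence:.2f})")
```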

Hybrid methodologies

FMEA, RCA, and FTA methodologies are complementary and can improve each other's performance. Furthermore, the availability of data, advanced tools to process data, and the ability to gather online data may lead to a unified FMEA, RCA, and FTA methodology. The reason is that, while FMEA tries to find potential failures and RCA and FTA try to find the root causes of failures, they use similar data and tools to analyze those data.

In the literature, FTA has been used as an internal part of FMEA by Steenwinckel et al. (2018) and Palluat et al. (2006), and of RCA by Chen et al. (2018). Using automated mappings from FMEA data to a domain-specific ontology and rules derived from a constructed FTA, Steenwinckel et al. (2018) annotated and reasoned on sensor observations. Palluat et al. (2006) used FTA to illustrate the failure structure of a system within an FMEA methodology and developed a neuro-fuzzy network to analyze failures. Chen et al. (2018) used FTA and graph theory tools, such as the maximum spanning tree, to find the root cause of failures in an RCA methodology. However, studies on the integration of these methodologies with regard to the availability of data, tools, and applications are needed to exploit their advantages within a unified methodology that detects potential failures, finds root causes and effects, and improves the system.

Other future research

Several promising future research directions can be pursued. Cost-based and economic quantification approaches can be integrated into intelligent methodologies to enable more informed decision-making related to failures, their effects, and corrective actions. Additionally, incorporating customer satisfaction criteria, such as using the Kano model, can be useful in situations where there are several costly failures in a system, and budget constraints make it necessary to select the most effective corrective action. This approach has been successfully applied in previous studies (Madzík & Kormanec, 2020 ), and can help optimize decision-making in complex failure scenarios.

Data management is a critical aspect of intelligence methodologies, given the large volume and diverse types of data that need to be processed. Therefore, it is important to design reliable databases that can store and retrieve all necessary data. Ontology can be a valuable tool to help integrate and connect different types of data (Rajpathak & De, 2016 ; Ebrahimipour et al., 2010 ). However, it is also essential to consider issues such as data obsolescence and updates, especially when corrective actions are taken and root causes are removed. Failure to address these issues can lead to incorrect analysis and decision-making.

Traditionally, only single failures were considered in the analysis because analyzing a combination of multiple failures was impractical. However, in a system, two or more failures may occur simultaneously or sequentially, and a failure may occur as a consequence of another failure. These circumstances are complicated because each failure can have several root causes, of which another failure is only one. Therefore, a clear and powerful tool, such as Bayesian networks (BNs), should be used to analyze failures and accurately identify possible causes.

Traditional failure analysis methodologies had limitations such as poor repeatability, subjectivity, and time consumption, which have been addressed by intelligent failure analysis. However, there is a need for more focus on explainability, objective evaluation criteria, and the reliability of results, as some intelligent tools, such as neural networks, act as black boxes. Therefore, suitable tools, such as BNs, should be well developed and adapted for (hybrid) intelligence failure analysis. Details such as the time and location of the detected failure, possible factors of the causes (such as location, time, conditions, and description of the cause), and the reasons behind the causes (such as human fatigue) should be considered within a methodology. These can help go beyond CBR and propose intelligent solutions based on the reasons behind a cause. While RCA has implemented these data to a limited extent, FMEA lacks such implementation.

Conclusion

This paper has collected information on both proactive and reactive failure analysis methodologies from 86 papers that focus on FMEA, RCA, or FTA. The goal is to identify areas for improvement, trends, and open problems regarding intelligent failure analysis. This information can help researchers learn the benefits of these methodologies, use their tools, and integrate them to strengthen failure analysis. Each paper was read and analyzed to extract the data and tools used and their benefits. We observed that the literature on the three methodologies, FMEA, RCA, and FTA, is diverse. In Industry 4.0, the availability of data and advances in technology are helping these methodologies benefit from the same tools, such as BNs and neural networks, and are making them more integrated.

The literature was classified based on the data needed for a (hybrid) intelligence failure analysis methodology and the tools used for failure analysis to be data-driven and automated. In addition, trends to make these methodologies smart and possible future research in this regard were discussed.

Two main classes of data, failure structure data and failure detection data, are usually needed for a failure analysis methodology, each of which can be expert-driven or data-driven. However, a combination of all types of data can lead to a more reliable failure analysis. Most papers focused on operational and historical expert-driven and/or data-driven problem-solving data. Among the tools used within FMEA, RCA, and FTA methodologies, BNs have the capability to make a methodology smart and to interact with both humans and machines to benefit from hybrid intelligence. BNs can not only analyze failures to identify root causes but also analyze possible solutions to provide the necessary actions to prevent failures. BNs are also capable of real-time inference, counterfactual reasoning, and managing confounding factors, and they handle both discrete and continuous data types, unlike traditional classification algorithms. Besides BNs, classification by neural networks, other classification tools, rule-based algorithms, and other tools have been proposed in the literature.

Finally, managerial insights and future research directions are provided. Most studies have focused on the determination of root causes; it is also necessary to automatically find possible mitigation and corrective actions. This step of a failure analysis methodology needs more interaction with humans, so the benefits of hybrid intelligence can be more evident here. It is imperative for humans and machines to work together to properly identify and resolve failures. System-related data should be analyzed to find possible corrective actions; these data are usually available for both proactive and reactive methodologies. Our study showed that an effective tool to integrate knowledge from experts and sensors is needed, enabling operators and engineers to provide timely input and observe immediate results. There is a need to identify failure structures using a hybrid approach that combines expert and data knowledge. Real-time input and structure identification can be achieved through the use of Bayesian networks. Further exploration of BNs and comparative analyses with other tools are necessary to enhance understanding and adoption of the best tools for hybrid intelligence failure analysis in real-time scenarios to prevent failures.

Data availability

There is no data related to this paper.

Agrawal, V., Panigrahi, B. K., & Subbarao, P. (2016). Intelligent decision support system for detection and root cause analysis of faults in coal mills. IEEE Transactions on Fuzzy Systems, 25 (4), 934–944.


Akata, Z., Balliet, D., De Rijke, M., Dignum, F., Dignum, V., Eiben, G., Fokkens, A., Grossi, D., Hindriks, K., Hoos, H., et al. (2020). A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer, 53 (08), 18–28.

Al-Mamory, S. O., & Zhang, H. (2009). Intrusion detection alarms reduction using root cause analysis and clustering. Computer Communications, 32 (2), 419–430.

Ament, C., & Goch, G. (2001). A process oriented approach to automated quality control. CIRP Annals, 50 (1), 251–254.

Bhardwaj, U., Teixeira, A., & Soares, C. G. (2022). Bayesian framework for reliability prediction of subsea processing systems accounting for influencing factors uncertainty. Reliability Engineering & System Safety, 218 , 108143.

Brahim, I. B., Addouche, S. A., El Mhamedi, A., & Boujelbene, Y. (2019). Build a Bayesian network from FMECA in the production of automotive parts: Diagnosis and prediction. IFAC-PapersOnLine, 52 (13), 2572–2577.

Cai, B., Huang, L., & Xie, M. (2017). Bayesian networks in fault diagnosis. IEEE Transactions on Industrial Informatics, 13 (5), 2227–2240.

Camarillo, A., Ríos, J., & Althoff, K. D. (2018). Knowledge-based multi-agent system for manufacturing problem solving process in production plants. Journal of Manufacturing Systems, 47 , 115–127.

Chan, A., & McNaught, K. R. (2008). Using Bayesian networks to improve fault diagnosis during manufacturing tests of mobile telephone infrastructure. Journal of the Operational Research Society, 59 (4), 423–430.

Chang, W. L., Pang, L. M., & Tay, K. M. (2017). Application of self-organizing map to failure modes and effects analysis methodology. Neurocomputing, 249 , 314–320.

Chang, W. L., Tay, K. M., & Lim, C. P. (2015). Clustering and visualization of failure modes using an evolving tree. Expert Systems with Applications, 42 (20), 7235–7244.

Chen, H. S., Yan, Z., Zhang, X., Liu, Y., & Yao, Y. (2018). Root cause diagnosis of process faults using conditional Granger causality analysis and maximum spanning tree. IFAC-PapersOnLine, 51 (18), 381–386.

Chen, L., Jiao, J., Wei, Q., & Zhao, T. (2017). An improved formal failure analysis approach for safety-critical system based on mbsa. Engineering Failure Analysis, 82 , 713–725.

Chen, X., & Jiao, J. (2017). A fault propagation modeling method based on a finite state machine. Annual Reliability and Maintainability Symposium (RAMS), 2017 , 1–7.


Chhetri, T. R., Aghaei, S., Fensel, A., Göhner, U., Gül-Ficici, S., & Martinez-Gil, J. (2023). Optimising manufacturing process with Bayesian structure learning and knowledge graphs. Computer Aided Systems Theory - EUROCAST, 2022 , 594–602.

Chien, C. F., Liu, C. W., & Chuang, S. C. (2017). Analysing semiconductor manufacturing big data for root cause detection of excursion for yield enhancement. International Journal of Production Research, 55 (17), 5095–5107.

Clancy, R., O’Sullivan, D., & Bruton, K. (2023). Data-driven quality improvement approach to reducing waste in manufacturing. The TQM Journal, 35 (1), 51–72.

Confalonieri, R., Coba, L., Wagner, B., & Besold, T. R. (2021). A historical perspective of explainable artificial intelligence. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11 (1), e1391.

Correia, A., Grover, A., Schneider, D., Pimentel, A. P., Chaves, R., De Almeida, M. A., & Fonseca, B. (2023). Designing for hybrid intelligence: A taxonomy and survey of crowd-machine interaction. Applied Sciences, 13 (4), 2198.

Dabous, S. A., Ibrahim, F., Feroz, S., & Alsyouf, I. (2021). Integration of failure mode, effects, and criticality analysis with multi-criteria decision-making in engineering applications: Part I- manufacturing industry. Engineering Failure Analysis, 122 , 105264.

Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61 , 637–643.

Demirbaga, U., Wen, Z., Noor, A., Mitra, K., Alwasel, K., Garg, S., Zomaya, A. Y., & Ranjan, R. (2021). Autodiagn: An automated real-time diagnosis framework for big data systems. IEEE Transactions on Computers, 71 (5), 1035–1048.

Dey, S., & Stori, J. (2005). A Bayesian network approach to root cause diagnosis of process variations. International Journal of Machine Tools and Manufacture, 45 (1), 75–91.

Du, S., Lv, J., & Xi, L. (2012). A robust approach for root causes identification in machining processes using hybrid learning algorithm and engineering knowledge. Journal of Intelligent Manufacturing, 23 (5), 1833–1847.

Duan, P., He, Z., He, Y., Liu, F., Zhang, A., & Zhou, D. (2020). Root cause analysis approach based on reverse cascading decomposition in QFD and fuzzy weight ARM for quality accidents. Computers & Industrial Engineering, 147 , 106643.

Ebeling, C. E. (2019). An introduction to reliability and maintainability engineering . Waveland Press.

Ebrahimipour, V., Rezaie, K., & Shokravi, S. (2010). An ontology approach to support FMEA studies. Expert Systems with Applications, 37 (1), 671–677.

Filz, M. A., Gellrich, S., Lang, F., Zietsch, J., Abraham, T., & Herrmann, C. (2021). Data-driven analysis of product property propagation to support process-integrated quality management in manufacturing systems. Procedia CIRP, 104 , 900–905.

Filz, M. A., Langner, J. E. B., Herrmann, C., & Thiede, S. (2021). Data-driven failure mode and effect analysis (FMEA) to enhance maintenance planning. Computers in Industry, 129 , 103451.

French, S., Bedford, T., Pollard, S. J., & Soane, E. (2011). Human reliability analysis: A critique and review for managers. Safety Science, 49 (6), 753–763.

Gomez-Andrades, A., Barco, R., Serrano, I., Delgado, P., Caro-Oliver, P., & Munoz, P. (2016). Automatic root cause analysis based on traces for LTE self-organizing networks. IEEE Wireless Communications, 23 (3), 20–28.

Gómez-Andrades, A., Munoz, P., Serrano, I., & Barco, R. (2015). Automatic root cause analysis for lte networks based on unsupervised techniques. IEEE Transactions on Vehicular Technology, 65 (4), 2369–2386.

Hecht, M., & Baum, D. (2019). Failure propagation modeling in FMEAs for reliability, safety, and cybersecurity using SysML. Procedia Computer Science, 153 , 370–377.

Hireche, C., Dezan, C., Mocanu, S., Heller, D., & Diguet, J. P. (2018). Context/resource-aware mission planning based on BNs and concurrent MDPs for autonomous UAVs. Sensors, 18 (12), 4266.

Huang, J., You, J. X., Liu, H. C., & Song, M. S. (2020). Failure mode and effect analysis improvement: A systematic literature review and future research agenda. Reliability Engineering & System Safety, 199 , 106885.

Insua, D. R., Ruggeri, F., Soyer, R., & Wilson, S. (2020). Advances in Bayesian decision making in reliability. European Journal of Operational Research, 282 (1), 1–18.

Jacobo, V., Ortiz, A., Cerrud, Y., & Schouwenaars, R. (2007). Hybrid expert system for the failure analysis of mechanical elements. Engineering Failure Analysis, 14 (8), 1435–1443.

Jarrahi, M. H., Lutz, C., & Newlands, G. (2022). Artificial intelligence, human intelligence and hybrid intelligence based on mutual augmentation. Big Data & Society, 9 (2), 20539517221142824.

Jiang, S., Qin, S., Pulsipher, J. L., & Zavala, V. M. (2024). Convolutional neural networks: Basic concepts and applications in manufacturing. Artificial Intelligence in Manufacturing, 8 , 63–102.

Jiao, J., Zhen, W., Zhu, W., & Wang, G. (2020). Quality-related root cause diagnosis based on orthogonal kernel principal component regression and transfer entropy. IEEE Transactions on Industrial Informatics, 17 (9), 6347–6356.

Johnson, K., & Khan, M. K. (2003). A study into the use of the process failure mode and effects analysis (PFMEA) in the automotive industry in the UK. Journal of Materials Processing Technology, 139 (1–3), 348–356.

Jomthanachai, S., Wong, W. P., & Lim, C. P. (2021). An application of data envelopment analysis and machine learning approach to risk management. IEEE Access, 9 , 85978–85994.

Kabir, S., & Papadopoulos, Y. (2019). Applications of Bayesian networks and Petri nets in safety, reliability, and risk assessments: A review. Safety Science, 115 , 154–175.

Khakzad, N., Khan, F., & Amyotte, P. (2012). Dynamic risk analysis using bow-tie approach. Reliability Engineering & System Safety, 104 , 36–44.

Khorshidi, H. A., Gunawan, I., & Ibrahim, M. Y. (2015). Data-driven system reliability and failure behavior modeling using FMECA. IEEE Transactions on Industrial Informatics, 12 (3), 1253–1260.

Kumar, M., & Kaushik, M. (2020). System failure probability evaluation using fault tree analysis and expert opinions in intuitionistic fuzzy environment. Journal of Loss Prevention in the Process Industries, 67 , 104236.

Lee, B. H. (2001). Using Bayes belief networks in industrial FMEA modeling and analysis. Annual Reliability and Maintainability Symposium. 2001 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.01CH37179), pp. 7–15.

Lee, M. H., Siewiorek, D. P., Smailagic, A., Bernardino, A., & Bermúdez i Badia, S. (2020). Interactive hybrid approach to combine machine and human intelligence for personalized rehabilitation assessment. Proceedings of the ACM Conference on Health, Inference, and Learning, pp. 160–169.

Leu, S. S., & Chang, C. M. (2013). Bayesian-network-based safety risk assessment for steel construction projects. Accident Analysis & Prevention, 54 , 122–133.

Li, B., Han, T., & Kang, F. (2013). Fault diagnosis expert system of semiconductor manufacturing equipment using a Bayesian network. International Journal of Computer Integrated Manufacturing, 26 (12), 1161–1171.

Liu, C., Lore, K. G., Jiang, Z., & Sarkar, S. (2021). Root-cause analysis for time-series anomalies via spatiotemporal graphical modeling in distributed complex systems. Knowledge-Based Systems, 211 , 106527.

Liu, D. R., & Ke, C. K. (2007). Knowledge support for problem-solving in a production process: A hybrid of knowledge discovery and case-based reasoning. Expert Systems with Applications, 33 (1), 147–161.

Liu, H. C., Chen, X. Q., Duan, C. Y., & Wang, Y. M. (2019). Failure mode and effect analysis using multi-criteria decision making methods: A systematic literature review. Computers & Industrial Engineering, 135 , 881–897.

Liu, H. C., Liu, L., & Liu, N. (2013). Risk evaluation approaches in failure mode and effects analysis: A literature review. Expert Systems with Applications, 40 (2), 828–838.

Liu, L., Fan, D., Wang, Z., Yang, D., Cui, J., Ma, X., & Ren, Y. (2019). Enhanced GO methodology to support failure mode, effects and criticality analysis. Journal of Intelligent Manufacturing, 30 (3), 1451–1468.

Ma, G., & Wu, M. (2020). A big data and FMEA-based construction quality risk evaluation model considering project schedule for shanghai apartment projects. International Journal of Quality & Reliability Management, 37 (1), 18–33.

Ma, Q., Li, H., & Thorstenson, A. (2021). A big data-driven root cause analysis system: Application of machine learning in quality problem solving. Computers & Industrial Engineering, 160 , 107580.

Madzík, P., & Kormanec, P. (2020). Developing the integrated approach of Kano model and failure mode and effect analysis. Total Quality Management & Business Excellence, 31 (15–16), 1788–1810.

Mangalathu, S., Hwang, S. H., & Jeon, J. S. (2020). Failure mode and effects analysis of RC members based on machine-learning-based shapley additive explanations (shap) approach. Engineering Structures, 219 , 110927.

Mazzoleni, M., Maccarana, Y., & Previdi, F. (2017). A comparison of data-driven fault detection methods with application to aerospace electro-mechanical actuators. IFAC-PapersOnLine, 50 (1), 12797–12802.

Mirbabaie, M., Stieglitz, S., & Frick, N. R. (2021). Hybrid intelligence in hospitals: Towards a research agenda for collaboration. Electronic Markets, 31 , 365–387.

Musumeci, F., Magni, L., Ayoub, O., Rubino, R., Capacchione, M., Rigamonti, G., Milano, M., Passera, C., & Tornatore, M. (2020). Supervised and semi-supervised learning for failure identification in microwave networks. IEEE Transactions on Network and Service Management, 18 (2), 1934–1945.

Nepal, B., & Yadav, O. P. (2015). Bayesian belief network-based framework for sourcing risk analysis during supplier selection. International Journal of Production Research, 53 (20), 6114–6135.

Nie, W., Liu, W., Wu, Z., Chen, B., & Wu, L. (2019). Failure mode and effects analysis by integrating Bayesian fuzzy assessment number and extended gray relational analysis-technique for order preference by similarity to ideal solution method. Quality and Reliability Engineering International, 35 (6), 1676–1697.

Oliveira, E. E., Miguéis, V. L., & Borges, J. L. (2021). Understanding overlap in automatic root cause analysis in manufacturing using causal inference. IEEE Access, 10 , 191–201.

Oliveira, E. E., Miguéis, V. L., & Borges, J. L. (2022). On the influence of overlap in automatic root cause analysis in manufacturing. International Journal of Production Research, 60 (21), 6491–6507.

Oliveira, E. E., Miguéis, V. L., & Borges, J. L. (2023). Automatic root cause analysis in manufacturing: An overview & conceptualization. Journal of Intelligent Manufacturing, 34 , 2061–2078.

Oztemel, E., & Gursev, S. (2020). Literature review of industry 4.0 and related technologies. Journal of Intelligent Manufacturing, 31 (1), 127–182.

Palluat, N., Racoceanu, D., & Zerhouni, N. (2006). A neuro-fuzzy monitoring system: Application to flexible production systems. Computers in Industry, 57 (6), 528–538.

Pang, J., Zhang, N., Xiao, Q., Qi, F., & Xue, X. (2021). A new intelligent and data-driven product quality control system of industrial valve manufacturing process in CPS. Computer Communications, 175 , 25–34.

Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. Morgan Kaufmann.

Petrescu, M., & Krishen, A. S. (2023). Hybrid intelligence: Human-ai collaboration in marketing analytics. Journal of Marketing Analytics, 11 (3), 263–274.

Piller, F. T., Nitsch, V., & van der Aalst, W. (2022). Hybrid intelligence in next generation manufacturing: An outlook on new forms of collaboration between human and algorithmic decision-makers in the factory of the future (pp. 139–158). Forecasting Next Generation Manufacturing: Digital Shadows, Human-Machine Collaboration, and Data-driven Business Models.

Price, C. J., & Taylor, N. S. (2002). Automated multiple failure FMEA. Reliability Engineering & System Safety, 76 (1), 1–10.

Psarommatis, F., & Kiritsis, D. (2022). A hybrid decision support system for automating decision making in the event of defects in the era of zero defect manufacturing. Journal of Industrial Information Integration, 26 , 100263.

Rajpathak, D., & De, S. (2016). A data-and ontology-driven text mining-based construction of reliability model to analyze and predict component failures. Knowledge and Information Systems, 46 (1), 87–113.

Rastayesh, S., Bahrebar, S., Blaabjerg, F., Zhou, D., Wang, H., & Dalsgaard Sørensen, J. (2019). A system engineering approach using FMEA and Bayesian network for risk analysis-a case study. Sustainability, 12 (1), 77.

Rausand, M., & Øien, K. (1996). The basic concepts of failure analysis. Reliability Engineering & System Safety, 53 (1), 73–83.

Rivera Torres, P. J., Serrano Mercado, E. I., Llanes Santiago, O., & Anido Rifón, L. (2018). Modeling preventive maintenance of manufacturing processes with probabilistic Boolean networks with interventions. Journal of Intelligent Manufacturing, 29 (8), 1941–1952.

Rokach, L., & Hutter, D. (2012). Automatic discovery of the root causes for quality drift in high dimensionality manufacturing processes. Journal of Intelligent Manufacturing, 23 (5), 1915–1930.

Ruijters, E., & Stoelinga, M. (2015). Fault tree analysis: A survey of the state-of-the-art in modeling, analysis and tools. Computer Science Review, 15 , 29–62.

Sariyer, G., Mangla, S. K., Kazancoglu, Y., Ocal Tasar, C., & Luthra, S. (2021). Data analytics for quality management in Industry 4.0 from a MSME perspective. Annals of Operations Research, 23 , 1–19.

Sayed, M. S., & Lohse, N. (2014). Ontology-driven generation of Bayesian diagnostic models for assembly systems. The International Journal of Advanced Manufacturing Technology, 74 (5), 1033–1052.

Schuh, G., Gützlaff, A., Thomas, K., & Welsing, M. (2021). Machine learning based defect detection in a low automated assembly environment. Procedia CIRP, 104 , 265–270.

Shi, H., Wang, L., Li, X. Y., & Liu, H. C. (2020). A novel method for failure mode and effects analysis using fuzzy evidential reasoning and fuzzy Petri nets. Journal of Ambient Intelligence and Humanized Computing, 11 (6), 2381–2395.

Snooke, N., & Price, C. (2012). Automated FMEA based diagnostic symptom generation. Advanced Engineering Informatics, 26 (4), 870–888.

Spreafico, C., Russo, D., & Rizzi, C. (2017). A state-of-the-art review of FMEA/FMECA including patents. Computer Science Review, 25 , 19–28.

Stamatis, D. H. (2003). Failure mode and effect analysis: FMEA from theory to execution . ASQ Quality Press.

Steenwinckel, B., Heyvaert, P., De Paepe, D., Janssens, O., Vanden Hautte, S., Dimou, A., De Turck, F., Van Hoecke, S., & Ongenae, F. (2018). Towards adaptive anomaly detection and root cause analysis by automated extraction of knowledge from risk analyses. 9th International Semantic Sensor Networks Workshop, Co-Located with 17th International Semantic Web Conference (ISWC 2018), Vol. 2213, pp. 17–31.

Sun, Y., Qin, W., Zhuang, Z., & Xu, H. (2021). An adaptive fault detection and root-cause analysis scheme for complex industrial processes using moving window KPCA and information geometric causal inference. Journal of Intelligent Manufacturing, 32 (7), 2007–2021.

Tari, J. J., & Sabater, V. (2004). Quality tools and techniques: Are they necessary for quality management? International Journal of Production Economics, 92 (3), 267–280.

Tay, K. M., Jong, C. H., & Lim, C. P. (2015). A clustering-based failure mode and effect analysis model and its application to the edible bird nest industry. Neural Computing and Applications, 26 (3), 551–560.

Teoh, P. C., & Case, K. (2004). Failure modes and effects analysis through knowledge modelling. Journal of Materials Processing Technology, 153 , 253–260.

Teoh, P. C., & Case, K. (2005). An evaluation of failure modes and effects analysis generation method for conceptual design. International Journal of Computer Integrated Manufacturing, 18 (4), 279–293.

Thomé, A. M. T., Scavarda, L. F., & Scavarda, A. J. (2016). Conducting systematic literature review in operations management. Production Planning & Control, 27 (5), 408–420.

Tönnes, W. (2018). Applying data of historical defects to increase efficiency of rework in assembly. Procedia CIRP, 72 , 255–260.

van der Aalst, W. M. (2021). Hybrid intelligence: To automate or not to automate, that is the question. International Journal of Information Systems and Project Management, 9 (2), 5–20.

Waghen, K., & Ouali, M. S. (2021). Multi-level interpretable logic tree analysis: A data-driven approach for hierarchical causality analysis. Expert Systems with Applications, 178 , 115035.

Wan, C., Yan, X., Zhang, D., Qu, Z., & Yang, Z. (2019). An advanced fuzzy Bayesian-based FMEA approach for assessing maritime supply chain risks. Transportation Research Part E, 125 , 222–240.

Wang, L., Li, S., Wei, O., Huang, M., & Hu, J. (2018). An automated fault tree generation approach with fault configuration based on model checking. IEEE Access, 6 , 46900–46914.

Wang, Q., Jia, G., Jia, Y., & Song, W. (2021). A new approach for risk assessment of failure modes considering risk interaction and propagation effects. Reliability Engineering & System Safety, 216 , 108044.

Williams, P. M. (2001). Techniques for root cause analysis. Baylor University Medical Center Proceedings, 14 (2), 154–157.

Wu, Z., Liu, W., & Nie, W. (2021). Literature review and prospect of the development and application of FMEA in manufacturing industry. The International Journal of Advanced Manufacturing Technology, 112 (5), 1409–1436.

Xu, Z., & Dang, Y. (2020). Automated digital cause-and-effect diagrams to assist causal analysis in problem-solving: A data-driven approach. International Journal of Production Research, 58 (17), 5359–5379.

Xu, Z., & Dang, Y. (2023). Data-driven causal knowledge graph construction for root cause analysis in quality problem solving. International Journal of Production Research, 61 (10), 3227–3245.

Xu, Z., Dang, Y., Munro, P., & Wang, Y. (2020). A data-driven approach for constructing the component-failure mode matrix for FMEA. Journal of Intelligent Manufacturing, 31 (1), 249–265.

Yang, C., Zou, Y., Lai, P., & Jiang, N. (2015). Data mining-based methods for fault isolation with validated fmea model ranking. Applied Intelligence, 43 (4), 913–923.

Yang, S., Bian, C., Li, X., Tan, L., & Tang, D. (2018). Optimized fault diagnosis based on FMEA-style CBR and BN for embedded software system. The International Journal of Advanced Manufacturing Technology, 94 (9), 3441–3453.

Yang, S., Liu, H., Zhang, Y., Arndt, T., Hofmann, C., Häfner, B., & Lanza, G. (2020). A data-driven approach for quality analytics of screwing processes in a global learning factory. Procedia Manufacturing, 45 , 454–459.

Yang, S., & Liu, T. (1998). A Petri net approach to early failure detection and isolation for preventive maintenance. Quality and Reliability Engineering International, 14 (5), 319–330.

Yang, Y. J., Xiong, Y. L., Zhang, X. Y., Wang, G. H., & Zou, B. (2022). Reliability analysis of continuous emission monitoring system with common cause failure based on fuzzy FMECA and Bayesian networks. Annals of Operations Research, 311 , 451–467.

Yang, Z. X., Zheng, Y. Y., & Xue, J. X. (2009). Development of automatic fault tree synthesis system using decision matrix. International Journal of Production Economics, 121 (1), 49–56.

Ye, P., Wang, X., Zheng, W., Wei, Q., & Wang, F. Y. (2022). Parallel cognition: Hybrid intelligence for human-machine interaction and management. Frontiers of Information Technology & Electronic Engineering, 23 (12), 1765–1779.

Yucesan, M., Gul, M., & Celik, E. (2021). A holistic FMEA approach by fuzzy-based Bayesian network and best-worst method. Complex & Intelligent Systems, 7 (3), 1547–1564.

Yuniarto, H. (2012). The shortcomings of existing root cause analysis tools. Proceedings of the World Congress on Engineering, 3 , 186–191.

Zhang, S., Xie, X., & Qu, H. (2023). A data-driven workflow for evaporation performance degradation analysis: A full-scale case study in the herbal medicine manufacturing industry. Journal of Intelligent Manufacturing, 34 , 651–668.

Zheng, T., Ardolino, M., Bacchetti, A., & Perona, M. (2021). The applications of industry 4.0 technologies in manufacturing context: a systematic literature review. International Journal of Production Research, 59 (6), 1922–1954.

Zhou, A., Yu, D., & Zhang, W. (2015). A research on intelligent fault diagnosis of wind turbines based on ontology and FMECA. Advanced Engineering Informatics, 29 (1), 115–125.

Zhu, C., & Zhang, T. (2022). A review on the realization methods of dynamic fault tree. Quality and Reliability Engineering International, 38 (6), 3233–3251.

Zhu, J. H., Chen, Z. S., Shuai, B., Pedrycz, W., Chin, K. S., & Martínez, L. (2021). Failure mode and effect analysis: A three-way decision approach. Engineering Applications of Artificial Intelligence, 106 , 104505.

Zuo, X., Li, B., & Yang, J. (2016). Error sensitivity analysis and precision distribution for multi-operation machining processes based on error propagation model. The International Journal of Advanced Manufacturing Technology, 86 (1), 269–280.


Funding

This research is funded by Flanders Make under the project AQUME_SBO, project number 2022-0151. Flanders Make is the Flemish strategic research center for the manufacturing industry in Belgium.

Author information

Authors and Affiliations

Department of Industrial Systems Engineering and Product Design, Ghent University, 9052, Ghent, Belgium

Mahdi Mokhtarzadeh, Jorge Rodríguez-Echeverría, Ivana Semanjski & Sidharta Gautama

FlandersMake@UGent–corelab ISyE, Lommel, Belgium

Escuela Superior Politécnica del Litoral, ESPOL, Facultad de Ingeniería en Electricidad y Computación, ESPOL Polytechnic University, Campus Gustavo Galindo, Km 30.5 Vía Perimetral, P.O. Box 09-01-5863, 090112, Guayaquil, Ecuador

Jorge Rodríguez-Echeverría


Corresponding author

Correspondence to Mahdi Mokhtarzadeh .

Ethics declarations

Competing interests

The authors report there are no competing interests to declare.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 137 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Mokhtarzadeh, M., Rodríguez-Echeverría, J., Semanjski, I. et al. Hybrid intelligence failure analysis for industry 4.0: a literature review and future prospective. J Intell Manuf (2024). https://doi.org/10.1007/s10845-024-02376-5


Received : 27 June 2023

Accepted : 14 March 2024

Published : 22 April 2024

DOI : https://doi.org/10.1007/s10845-024-02376-5


Keywords

  • Automated failure analysis
  • Data-driven failure analysis
  • Human–machine cooperation


Original Research Article

Application of mixed reality navigation technology in primary brainstem hemorrhage puncture and drainage surgery: a case series and literature review


  • 1 Department of Neurosurgery, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
  • 2 Pre-hospital Emergency Department, Chongqing Emergency Medical Center, Chongqing University Central Hospital, Chongqing, China
  • 3 Qinying Technology Co., Ltd., Chongqing, China

Objective: The mortality rate of primary brainstem hemorrhage (PBH) is high, and the optimal treatment of PBH remains controversial. We used mixed reality navigation technology (MRNT) to perform brainstem hematoma puncture and drainage surgery in seven patients with PBH, and we share our practical experience to demonstrate the feasibility and safety of the technology.

Method: We describe the surgical procedure of brainstem hematoma puncture and drainage surgery with MRNT. From January 2021 to October 2022, we applied the technology to seven patients. We collected their clinical and radiographic indicators, including demographic characteristics, preoperative and postoperative hematoma volume, hematoma evacuation rate, operation time, blood loss, deviation of the drainage tube target, depth of the implantable drainage tube, postoperative complications, and preoperative and 1-month postoperative GCS.

Result: The seven patients had an average age of 56.71 ± 12.63 years; all had underlying hypertension and exhibited disturbances of consciousness. The average hematoma evacuation rate was 50.39% ± 7.71%. The average operation time was 82.14 ± 15.74 min, the average deviation of the drainage tube target was 4.58 ± 0.72 mm, and the average depth of the implantable drainage tube was 62.73 ± 0.94 mm. Four of the seven patients underwent external ventricular drainage first. There were no intraoperative deaths and no surgery-related complications in the seven patients. The 1-month postoperative GCS was improved compared with the preoperative GCS.

Conclusion: Brainstem hematoma puncture and drainage surgery with MRNT was feasible and safe. The technology could evacuate about half of the hematoma, relieving hematoma compression and helping to prevent secondary injury. Its advantages include the high precision of dual-plane navigation, low cost, and an immersive operating experience. Improving the matching registration method and performing high-quality prospective clinical research remain necessary.

Introduction

Primary brainstem hemorrhage (PBH) is spontaneous brainstem bleeding associated with hypertension and unrelated to cavernous hemangioma, arteriovenous malformation, and other diseases. Hypertension is the leading risk factor for PBH; other factors include anticoagulant therapy and cerebral amyloid angiopathy, among others. PBH is the deadliest subtype of intracerebral hemorrhage (ICH), accounting for 6%–10% of all ICH with an annual incidence of approximately 2–4/100,000 people [ 1 – 3 ]. The clinical characteristics of PBH are acute onset, rapid deterioration, poor prognosis, and high mortality (30%–90%) [ 1 , 4 , 5 ].

Previous ICH trials, such as the STICH and MISTIE trials, excluded PBH from their inclusion criteria. There is no clear evidence for the optimal treatment of PBH, and views on surgical treatment differ markedly by region. European and North American countries generally hold that severe disability or survival in a vegetative state imposes a heavy mental and economic burden on PBH patients and their families, and therefore do not favor surgical treatment. However, many PBH surgical treatments have been carried out in China, Japan, and South Korea, where surgical methods, surgical outcomes, monitoring methods, and complications have been investigated and much experience has been accumulated.

In 1998, Korean scholars performed the first craniotomy to evacuate a brainstem hematoma [ 6 ], while as early as 1989 the Japanese scholar Takahama had performed stereotactic brainstem hematoma aspiration surgery [ 7 ]. In our opinion, microsurgical craniotomy demands advanced electrophysiological monitoring and surgical skills, and these requirements are not conducive to wide adoption. Minimally invasive surgery, by contrast, is simple to perform, causes little additional trauma, and requires a short operation time; it is believed to reduce damage to critical brainstem structures and to protect brainstem function as much as possible. More and more minimally invasive techniques have been adopted to improve the precision of PBH puncture, including stereotactic frames, robot-assisted navigation systems, 3D printing techniques, and even laser combined with CT navigation techniques.

Mixed reality navigation technology (MRNT) builds on virtual and augmented reality. The technology uses CT images to construct a 3D head model and to design an individualized hematoma puncture trajectory. During surgery, a camera captures the actual environment and fuses it with the 3D head model in real time. MRNT not only displays the model image superimposed on the actual environment but also navigates the puncture trajectory in real time, allowing the surgeon to control the puncture angle and depth precisely. The technology effectively makes the head transparent during surgery and provides an immersive experience for the surgeon.

MRNT has broad application prospects. However, it is still in its infancy, and its application in neurosurgery has rarely been reported; there is no previous report on the application of MRNT in the surgical treatment of PBH. In this study, we used MRNT to perform brainstem hematoma puncture and drainage surgery in seven patients with PBH and share our practical experience to verify the feasibility and safety of the technology.

Materials and methods

General information

With the approval of the Ethics Committee of the Chongqing Emergency Medical Center, we included seven patients diagnosed with PBH from January 2021 to October 2022. All underwent brainstem hematoma puncture and drainage surgery with MRNT under general anesthesia. Indications for surgery were patients who 1) were 18–80 years of age; 2) had hematoma volume greater than 5 mL and less than 15 mL; 3) had a diameter of the hematoma greater than 2 cm; 4) had hematoma deviating toward one side or the dorsal side; 5) had GCS less than 8; and 6) had surgery within 6–24 h after onset. Family members were informed and signed the consent form [ 8 ]. Exclusion criteria were patients who had 1) brainstem hemorrhage caused by cavernous hemangioma, arteriovenous malformation, and other diseases; 2) GCS >12; 3) bilateral pupil dilation; 4) unstable vital signs; 5) severe underlying disease; or 6) coagulation dysfunction.
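For illustration only, the sketch below encodes the inclusion criteria above as a simple screening check in Python; the data-class fields and function name are our own assumptions rather than part of the study protocol, and the exclusion criteria would be checked in the same way.

```python
# Minimal sketch (not from the paper): the stated surgical indications
# expressed as a screening check. Field names are hypothetical.
from dataclasses import dataclass


@dataclass
class Candidate:
    age_years: int
    hematoma_volume_ml: float
    hematoma_diameter_cm: float
    hematoma_lateral_or_dorsal: bool  # deviates to one side or the dorsal side
    gcs: int
    hours_since_onset: float
    consent_signed: bool


def meets_surgical_indications(c: Candidate) -> bool:
    """Return True only if every stated inclusion criterion is satisfied."""
    return (
        18 <= c.age_years <= 80
        and 5.0 < c.hematoma_volume_ml < 15.0
        and c.hematoma_diameter_cm > 2.0
        and c.hematoma_lateral_or_dorsal
        and c.gcs < 8
        and 6.0 <= c.hours_since_onset <= 24.0
        and c.consent_signed
    )
```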

Mixed reality navigation technology (MRNT)

All patients preparing for surgery were required to wear adhesive markers in the parieto-occipital region and to undergo a CT scan before surgery. CT image scanning was performed with a 64-slice CT scanner (Lightspeed VCT 6, General Electric Company, United States of America). The imaging parameters were an exposure of 3 mAs, a slice thickness of 5 mm, and an image size of 512 × 512. The DICOM data were used to construct the 3D model of the hematoma and head, and the preoperative brainstem hematoma volume was calculated with dedicated software (Medical Modeling and Design System). In addition, the hematoma puncture trajectory was designed on the constructed head model.
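The study used its own Medical Modeling and Design System software for volumetry. Purely as an illustrative sketch of the underlying arithmetic, the snippet below estimates a hematoma volume from a binary segmentation mask by voxel counting; the mask, spacing values, and function name are assumptions, not the authors' implementation.

```python
# A minimal sketch, not the authors' software: hematoma volume estimated
# from a binary segmentation mask. Assumes the mask and the voxel spacing
# (in mm) were obtained elsewhere, e.g. from the DICOM series.
import numpy as np


def hematoma_volume_ml(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Volume in millilitres = voxel count x voxel volume (mm^3) / 1000."""
    voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.astype(bool).sum()) * voxel_volume_mm3 / 1000.0


# Example with synthetic data: a 20 x 20 x 10 voxel blob at 0.5 x 0.5 x 5 mm spacing.
mask = np.zeros((100, 100, 30), dtype=bool)
mask[40:60, 40:60, 10:20] = True
print(round(hematoma_volume_ml(mask, (0.5, 0.5, 5.0)), 2))  # 5.0 mL
```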

After general anesthesia, the adhesive markers were replaced with bone nail markers at the same positions [ 9 ]. Based on the principle of near-infrared optical navigation, the camera captured the actual spatial positions in real time, fused them with the markers of the 3D head model (HSCM3D DICOM), and transmitted the information to the wearable device (HoloLens). During surgery, the camera continuously tracked the position of the puncture needle to provide the navigation function. In short, the image processing software matched and fused information from the camera system and the wearable device through multiple markers; when surgical tools were moved, the software also processed the dynamic tool position data and fused it with the virtual model via wireless transmission.
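The marker-based fusion described above amounts to a point-based rigid registration between the CT-derived model space and the camera (patient) space. The sketch below shows one standard way to compute such a registration, the SVD-based Kabsch method, together with the fiducial registration error; it is a generic illustration under our own assumptions, not the software actually used in the study.

```python
# Illustrative sketch only: point-based rigid registration between
# corresponding marker coordinates in model space and camera space.
import numpy as np


def rigid_registration(model_pts: np.ndarray, camera_pts: np.ndarray):
    """Find rotation R and translation t so that R @ model + t ~= camera.

    Both inputs are (N, 3) arrays of corresponding marker coordinates.
    Returns (R, t, fre), where fre is the root-mean-square fiducial
    registration error after alignment.
    """
    mc, cc = model_pts.mean(axis=0), camera_pts.mean(axis=0)
    H = (model_pts - mc).T @ (camera_pts - cc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cc - R @ mc
    residuals = (model_pts @ R.T + t) - camera_pts
    fre = float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
    return R, t, fre
```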

Surgical procedures

Patients with hydrocephalus were first treated with external ventricular drainage (EVD), with the frontal Kocher point selected as the cranial entry point. The procedure consisted of cutting the skin, drilling the skull, opening the dura mater, puncturing toward the plane of the binaural connection, fixing the drainage tube, and suturing layer by layer.

The patient was placed in a prone position with the head fixed in a frame. The puncture point was 2 cm below the transverse sinus and 3 cm lateral to the midline on the hematoma side. After the skin was cut, the muscle was separated, and the dura mater was opened through a drilled hole. Wearing the HoloLens, the surgeon synchronously observed the actual head structure and the fused puncture trajectory from multiple angles and used dual-plane navigation technology [ 9 ] for the hematoma puncture. After confirming that the drainage tube was in place, the puncture needle was removed and a 5 mL empty syringe was connected for aspiration. The drainage tube was fixed, and the wound was sutured layer by layer. Head CT was reviewed immediately after surgery, and the decision whether to inject urokinase was made according to the drainage tube's position and the residual hematoma volume. Urokinase (2–3 w units) was injected through the drainage tube every 12 h, usually 4–6 times, with the tube clamped for 1.5 h before reopening. The retention time of the drainage tube was no more than 72 h after surgery. The surgical procedure with MRNT is shown in Figure 1 .


Figure 1. Surgical procedure for brainstem hematoma puncture and drainage surgery with MRNT. (A) Patients were required to wear adhesive markers in the parieto-occipital region. (B) The camera captured the real spatial positions of the calibration plate, puncture needle, and head. (C) Wearing the HoloLens, the surgeon viewed the two planes of the image. (D) MRNT displayed the model image and the actual environment synchronously, allowing the surgeon to perform precise surgery. (E) The real-time navigation of MRNT showed that the puncture needle was close to the hematoma target. (F) The surgeon aspirated the hematoma.

Clinical and radiographic indicators

The indicators for analysis included: demographic indicators, preoperative and postoperative hematoma volume, hematoma evacuation rate, operation time, blood loss, deviation of the drainage tube target, depth of implantable drainage tube, postoperative complications, and preoperative and 1-month postoperative GCS, etc.

The deviation of the drainage tube target was defined as the distance between the tip of the drainage tube and the planned puncture hematoma target. The deviation calculation was done with the BLENDER 2.93.3 software, which used the 3D global coordinate system to visualize the distance.

The head CT examination was reviewed within 24 h after surgery, and the postoperative hematoma volume was measured by non-operators using the same software (Medical Modeling and Design System). Hematoma evacuation rate = (preoperative hematoma volume − postoperative hematoma volume)/preoperative hematoma volume.
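As a minimal illustration of the two derived metrics, the snippet below computes the hematoma evacuation rate from the formula above and the target deviation as a Euclidean distance; the example volumes come from representative case 2, while the coordinates are hypothetical.

```python
# A minimal sketch (ours, for illustration): the two per-patient metrics.
# Inputs are assumed to be in mL and mm respectively.
import math


def evacuation_rate(pre_ml: float, post_ml: float) -> float:
    """(preoperative - postoperative) / preoperative, as a fraction."""
    return (pre_ml - post_ml) / pre_ml


def target_deviation_mm(tip_xyz, target_xyz) -> float:
    """Euclidean distance between drainage-tube tip and planned target."""
    return math.dist(tip_xyz, target_xyz)


# Representative case 2: 5.45 mL pre, 3.18 mL post -> about 41.7 %.
print(f"{evacuation_rate(5.45, 3.18):.1%}")
# Hypothetical tip and target coordinates in a common mm frame.
print(f"{target_deviation_mm((10.0, 2.0, -3.0), (8.5, 1.0, -1.2)):.2f} mm")
```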

Statistical analysis

All statistical analyses were performed with SPSS (version 21, IBM, Chicago, IL, United States). Quantitative variables are presented as means ± standard deviations. The normality of quantitative variables was assessed with the Kolmogorov-Smirnov test; if the distribution was normal, paired t-tests were performed. Categorical variables are presented as percentages and were compared with the χ² test or Fisher's exact test. A p-value less than 0.05 was considered statistically significant.
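The following sketch reproduces the described workflow for the GCS comparison, a Kolmogorov-Smirnov normality check followed by a paired t-test, using SciPy rather than SPSS; the GCS values are hypothetical placeholders, and the nonparametric fallback is our addition.

```python
# Illustrative sketch of the described analysis (not the authors' SPSS code):
# K-S normality check on the paired differences, then a paired t-test on
# preoperative vs. 1-month postoperative GCS. Values below are placeholders.
import numpy as np
from scipy import stats

pre_gcs = np.array([5, 6, 7, 8, 6, 7, 7], dtype=float)
post_gcs = np.array([8, 10, 12, 14, 9, 11, 12], dtype=float)

diff = post_gcs - pre_gcs
# K-S test against a normal distribution fitted to the differences.
ks_stat, ks_p = stats.kstest(diff, "norm", args=(diff.mean(), diff.std(ddof=1)))

if ks_p > 0.05:  # no evidence against normality -> paired t-test
    t_stat, p_value = stats.ttest_rel(post_gcs, pre_gcs)
else:            # otherwise fall back to a non-parametric alternative
    t_stat, p_value = stats.wilcoxon(post_gcs, pre_gcs)

print(f"normality p = {ks_p:.3f}, test p = {p_value:.3f}")
```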

Results

From January 2021 to October 2022, seven patients were diagnosed with PBH and underwent brainstem hematoma puncture and drainage surgery with MRNT. A summary of the demographic and clinical characteristics of the patients is provided in Table 1 . Five of the seven patients were men, and the average age was 56.71 ± 12.63 years (range, 37–74 years). All seven patients had underlying hypertension, and four also had diabetes. The average time from onset to admission was 4.2 ± 1.47 h. All seven patients had prominent disturbances of consciousness, four required ventilator assistance, and three had a high fever.


Table 1 . Demographic and clinical characteristics of seven patients.

According to the brainstem hematoma classification advocated by Chung [ 10 ], two cases were of the small unilateral tegmental type, four of the basal-tegmental type, and one of the bilateral tegmental type. The average preoperative brainstem hematoma volume was 8.47 ± 2.22 mL (range, 5.45–12.2 mL), the average postoperative volume was 4.16 ± 1.17 mL (range, 3.14–5.95 mL), and the difference was statistically significant. The average hematoma evacuation rate was 50.39% ± 7.71% (range, 41.65%–63.23%). Four of the seven patients underwent EVD first (57.1%), and one underwent EVD 2 days after hematoma puncture and drainage surgery. The average operation time was 82.14 ± 15.74 min, the average blood loss was 32.2 ± 8.14 mL, the average deviation of the drainage tube target was 4.58 ± 0.72 mm (range, 3.36–5.32 mm), and the average depth of the implantable drainage tube was 62.73 ± 0.94 mm (range, 61.42–64.23 mm). Three patients received urokinase injections after surgery, and the average retention time of the drainage tube was 53.56 ± 7.83 h.

There were no intraoperative deaths among the seven patients; two patients had slight intraoperative fluctuations in vital signs. The most common postoperative comorbidity was pneumonia (7/7, 100%), followed by gastrointestinal bleeding (5/7, 71.43%). There was no rebleeding, ischemic stroke, intracranial infection, or epilepsy within 2 weeks after surgery, and preoperative high fever was relieved after surgery. One patient died of pneumonia 12 days after surgery, and one patient abandoned treatment 20 days after surgery. Two patients were conscious and three were still in a coma 1 month after surgery.

The average preoperative GCS was 6.57 ± 1.51, and the average GCS 1 month after surgery was 10.00 ± 2.83; the improvement was statistically significant. Representative cases are shown in Figures 2 and 3 .


Figure 2. Representative case 2. (A) Preoperative CT showed PBH in the axial, sagittal, and coronal planes. (B) The 3D model constructed from CT images showed the hematoma and the designed puncture trajectory in the axial, sagittal, and coronal positions. (C) Postoperative CT in the axial plane showed that the drainage tube location was precise. The yellow circle indicates the tip of the drainage tube. (D) Fusion of the preoperative and postoperative 3D models showed that the preoperative hematoma volume was 5.45 mL, the postoperative hematoma volume was 3.18 mL, the hematoma evacuation rate was 41.65%, the deviation of the drainage tube target was 4.22 mm, and the depth of the implantable drainage tube was 63.42 mm.


Figure 3. Representative case 5. (A) Preoperative CT showed PBH in the axial, sagittal, and coronal planes. (B) The 3D model constructed from CT images showed the hematoma, the lateral ventricle, and the designed puncture trajectory in the axial, sagittal, and coronal positions. (C) Postoperative CT in the axial plane showed that the drainage tube location was precise. The yellow circle indicates the tip of the drainage tube. (D) Fusion of the preoperative and postoperative 3D models showed that the preoperative hematoma volume was 10.21 mL, the postoperative hematoma volume was 5.95 mL, the hematoma evacuation rate was 41.72%, and the deviation of the drainage tube target was 3.36 mm. The depth of the implantable drainage tube was 61.84 mm.

Discussion

The brainstem is small, lies deep in the skull, and comprises the midbrain, pons, and medulla oblongata. It is a vital center that controls respiration, heart rate, blood pressure, and body temperature. About 60%–80% of PBH occurs in the pons owing to rupture of the perforating vessels of the basilar artery [ 1 , 2 ]. Hypertension is one of the most common causes of severe cerebrovascular disease. By causing mechanical and chemical damage to essential brainstem structures, such as the nuclei and the reticular system, the hematoma quickly induces clinical symptoms such as coma, central hyperthermia, tachycardia, abnormal pupils, and hypotension. The prognosis is extremely poor, which presents a challenge to existing treatment methods.

The conservative treatment strategy for PBH largely follows the hypertensive treatment strategy for ICH [ 11 ]. Since the primary damage of PBH is irreversible, surgical treatment is believed to relieve mechanical compression by the hematoma and prevent secondary injury, thereby improving prognosis [ 1 , 12 , 13 ]. However, surgical treatment remains controversial, and because of the high mortality and disability rates of PBH, the indications for surgery must be evaluated strictly. The indications proposed by Shrestha included a hematoma volume greater than 5 mL, a relatively concentrated hematoma, GCS less than 8, progressive neurological dysfunction, and unstable vital signs, particularly the need for ventilatory assistance [ 14 ]. Huang established a brainstem hemorrhage scoring system and suggested that patients with a score of 2–3 might benefit from surgical treatment, whereas a score of 4 was a contraindication [ 15 ]. A review of 10 cohort studies showed that patients in the surgical group were 45–65 years old and unconscious, with a GCS of 3–8 and a hematoma volume of approximately 8 mL; the surgical group had a better prognosis and lower mortality than the conservative treatment group. That research also suggested that older age and coma were not contraindications for brainstem hemorrhage surgery [ 16 ]. According to the Chinese guidelines for brainstem hemorrhage, we specified the following surgical indications: age 18–80 years, hematoma volume greater than 5 mL and less than 15 mL, hematoma diameter greater than 2 cm, hematoma deviated to one side or the dorsal side, GCS less than 8, surgery performed within 6–24 h after onset, and family consent [ 8 ].

Surgical treatments for PBH include microsurgical craniotomy to evacuate the hematoma, which removes as much hematoma as possible, achieves hemostasis, and clears the fourth ventricular hematoma to restore cerebrospinal fluid circulation. However, this approach requires multiple intraoperative monitoring methods and proficient surgical skills. The most widely chosen method is stereotactic hematoma puncture and drainage surgery. To achieve precise puncture of the brainstem hematoma, surgeons have used invasive stereotactic frames [ 17 ], robot-assisted navigation systems [ 18 ], 3D-printed navigation molds [ 19 ], and laser combined with CT navigation technology [ 13 ]. These techniques have shortcomings, including the invasive placement of a positioning frame, the risk of skull bleeding and infection, the high cost of robot-assisted and neuronavigation systems, and the lengthy workflow of 3D printing.

We innovatively used MRNT to perform brainstem hematoma puncture and drainage surgery. Our team has previously used this technology to perform intracranial foreign body removal [ 20 ] and minimally invasive puncture surgery for deep ICH, with a deviation of the drainage tube target of 5.76 ± 0.80 mm [ 9 ]. Based on that experience and subsequent technical improvements, we applied the technology to brainstem hematoma puncture and drainage surgery. The average preoperative brainstem hematoma volume was 8.47 ± 2.22 mL, the postoperative volume was 4.16 ± 1.17 mL, and the average hematoma evacuation rate was 50.39% ± 7.71%, which relieved the primary compression by the hematoma and helped prevent secondary injury. The procedure under general anesthesia took an average of 82.14 ± 15.74 min, the average target deviation was 4.58 ± 0.72 mm, and the average depth of the implantable drainage tube was 62.73 ± 0.94 mm. The drainage tube was inserted deeper than in our application for deep ICH, which demands higher precision. Moreover, MRNT proved safe in the seven patients.

The precision of augmented reality technology, mixed reality technology, and traditional stereotactic methods has been compared in previous literature. Van Doormaal et al. conducted a holographic navigation study using augmented reality technology and found a fiducial registration error of 7.2 mm in a plastic head model and 4.4 mm in three patients [ 21 ]. A meta-analysis systematically reviewed the accuracy of augmented reality neuronavigation and compared it with conventional infrared neuronavigation: across 35 studies, the average target registration error of 2.5 mm with augmented reality technology did not differ from the 2.6 mm of traditional infrared navigation [ 22 ]. Moreover, in studies of neuronavigation using mixed reality technology, researchers reported target deviations of 4–6 mm [ 23 – 25 ].

Augmented reality technology has mainly been applied to intracranial tumors and rarely to ICH. Qi et al. used mixed reality navigation technology to perform ICH surgery, also using markers for point registration and image fusion; the occipital hematoma puncture deviation was 5.3 mm, attributable to the change between prone and supine positions, and the deviation in the basal ganglia was 4.0 mm [ 26 ]. Zhou et al. presented a novel multi-model mixed reality navigation system for hypertensive ICH surgery; phantom experiments revealed a mean registration error of 1.03 mm, and the registration error was 1.94 mm in clinical use, showing that the system was sufficiently accurate and effective for clinical application [ 27 ]. A summary of the deviations reported for MR or AR applications is provided in Table 2 .


Table 2 . Reported cases of deviations in the application of MR or AR in neurosurgery.

In addition to precise puncture and hematoma drainage, the surgical treatment of PBH also requires further discussion of the timing of surgery, external ventricular drainage, and fibrinolytic drugs. Shrestha et al. found that surgical treatment within 6 h after onset was associated with a good prognosis [ 14 ]. Ultra-early operation alleviates the hematoma mass effect and reduces secondary injury; in particular, for patients in severe condition, early hematoma aspiration can immediately eliminate harmful effects and prevent worse clinical outcomes [ 17 ]. However, many primary hospitals are not equipped to treat PBH surgically, and patients lose considerable time in transfer, which is a major challenge in clinical treatment. PBH can also cause cerebrospinal fluid circulation disorders that impair consciousness. External ventricular drainage is beneficial for improving cerebrospinal fluid circulation, managing intracranial pressure, and facilitating patient recovery [ 17 ]. In our study, external ventricular drainage was performed in five of the seven patients. Previous research investigating the effects of rtPA on ICH and ventricular hemorrhage in the MISTIE and CLEAR trials demonstrated that fibrinolytic drug administration did not increase the risk of hemorrhage [ 30 – 33 ]. Currently, there is no evidence or consensus on the effects of thrombolytic drugs in PBH. We likewise found that urokinase did not increase the risk of bleeding and improved drainage efficiency, as reported in previous literature [ 13 , 18 ].

Compared with expensive neuronavigation systems, the mixed reality navigation technology was developed independently, its equipment is simple, and its cost is low. Its performance meets the requirements of clinical intracerebral hemorrhage surgery, which makes it suitable for adoption in primary hospitals.

There are also some limitations to our technology. First, in order to introduce our mixed reality navigation technology sooner, we report only a few cases, so the data are insufficient to verify the advantages of the technology. A cohort study is currently difficult because of the small number of patients enrolled; we plan to carry out a multicenter clinical study in the future. Second, the navigation is mainly based on point-matching technology, which fuses the image model with the actual space through markers. Placing invasive markers in the skull may carry risks of bleeding or infection, and the procedure requires a CT examination before surgery, which delays surgery and increases cost. Some researchers have proposed surface (face) registration, but its target deviation is higher than that of point registration and its clinical practicability is poor [ 34 ]. Clinical practice still needs a precise, simple, fast, and noninvasive matching and fusion solution.

Conclusion

It was feasible and safe to perform brainstem hematoma puncture and drainage surgery with MRNT. Early minimally invasive precise surgery can relieve primary hematoma compression, prevent secondary injury, and improve the prognosis of patients with PBH. The advantages include the high precision of dual-plane navigation, low cost, and an immersive operating experience. Improving the matching registration method and performing high-quality prospective clinical research remain necessary.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.

Ethics statement

The studies involving humans were approved by Ethics Committee of the Chongqing Emergency Medical Center. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

XT: Writing–original draft, Data curation, Software. YaW: Writing–original draft. GT: Conceptualization, Project administration, Writing–original draft. YiW: Investigation, Resources, Software, Writing–original draft. WX: Resources, Formal Analysis, Writing–original draft, Writing–review and editing. YL: Methodology, Writing–original draft. YD: Writing–review and editing. PC: Writing–review and editing, Conceptualization, Writing–original draft.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This study was financially supported by the Fundamental Research Funds for the Central Universities (2022CDJYGRH-015) and the Medical Research Project of the Science and Technology Bureau and Health Commission, Chongqing, China (2023MSXM076).

Conflict of interest

Author YiW was employed by Qinying Technology Co., Ltd.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1. Chen P, Yao H, Tang X, Wang Y, Zhang Q, Liu Y, et al. Management of primary brainstem hemorrhage: a review of outcome prediction, surgical treatment, and animal model. Dis Markers (2022) 2022:1–8. doi:10.1155/2022/4293590


2. Chen D, Tang Y, Nie H, Zhang P, Wang W, Dong Q, et al. Primary brainstem hemorrhage: a review of prognostic factors and surgical management. Front Neurol (2021) 12:727962. doi:10.3389/fneur.2021.727962


3. van Asch CJ, Luitse MJ, Rinkel GJ, van der Tweel I, Algra A, Klijn CJ. Incidence, case fatality, and functional outcome of intracerebral haemorrhage over time, according to age, sex, and ethnic origin: a systematic review and meta-analysis. Lancet Neurol (2010) 9:167–76. doi:10.1016/s1474-4422(09)70340-0

4. Behrouz R. Prognostic factors in pontine haemorrhage: a systematic review. Eur Stroke J (2018) 3:101–9. doi:10.1177/2396987317752729

5. Balci K, Asil T, Kerimoglu M, Celik Y, Utku U. Clinical and neuroradiological predictors of mortality in patients with primary pontine hemorrhage. Clin Neurol Neurosurg (2005) 108:36–9. doi:10.1016/j.clineuro.2005.02.007

6. Hong JT, Choi SJ, Kye DK, Park CK, Lee SW, Kang JK. Surgical outcome of hypertensive pontine hemorrhages: experience of 13 cases. J Korean Neurosurg Soc (1998) 27:59–65.


7. Takahama H, Morii K, Sato M, Sekiguchi K, Sato S. Stereotactic aspiration in hypertensive pontine hemorrhage: comparative study with conservative therapy. No Shinkei Geka (1989) 17:733–9.


8. Chen L, Chen T, Mao G, Chen B, Li M, Zhang H, et al. Clinical neurorestorative therapeutic guideline for brainstem hemorrhage (2020 China version). J Neurorestoratology (2020) 8:232–40. doi:10.26599/jnr.2020.9040024

9. Peng C, Yang L, Yi W, Yidan L, Yanglingxi W, Qingtao Z, et al. Application of fused reality holographic image and navigation technology in the puncture treatment of hypertensive intracerebral hemorrhage. Front Neurosci (2022) 16:850179. doi:10.3389/fnins.2022.850179

10. Chung CS, Park CH. Primary pontine hemorrhage: a new CT classification. Neurology (1992) 42(4):830–4. doi:10.1212/wnl.42.4.830

11. Greenberg SM, Ziai WC, Cordonnier C, Dowlatshahi D, Francis B, Goldstein JN, et al. 2022 guideline for the management of patients with spontaneous intracerebral hemorrhage: a guideline from the American heart association/American stroke association. Stroke (2022) 53:e282–e361. doi:10.1161/str.0000000000000407

12. Balami JS, Buchan AM. Complications of intracerebral haemorrhage. Lancet Neurol (2012) 11:101–18. doi:10.1016/s1474-4422(11)70264-2

13. Wang Q, Guo W, Zhang T, Wang S, Li C, Yuan Z, et al. Laser navigation combined with XperCT technology assisted puncture of brainstem hemorrhage. Front Neurol (2022) 13:905477. doi:10.3389/fneur.2022.905477

14. Shrestha BK, Ma L, Lan Z, Li H, You C. Surgical management of spontaneous hypertensive brainstem hemorrhage. Interdiscip Neurosurg (2015) 2:145–8. doi:10.1016/j.inat.2015.06.005

15. Huang K, Ji Z, Sun L, Gao X, Lin S, Liu T, et al. Development and validation of a grading Scale for primary pontine hemorrhage. Stroke (2017) 48:63–9. doi:10.1161/strokeaha.116.015326

16. Zheng WJ, Shi SW, Gong J. The truths behind the statistics of surgical treatment for hypertensive brainstem hemorrhage in China: a review. Neurosurg Rev (2022) 45:1195–204. doi:10.1007/s10143-021-01683-2

17. Du L, Wang JW, Li CH, Gao BL. Effects of stereotactic aspiration on brainstem hemorrhage in a case series. Front Surg (2022) 9:945905. doi:10.3389/fsurg.2022.945905

18. Zhang S, Chen T, Han B, Zhu W. A retrospective study of puncture and drainage for primary brainstem hemorrhage with the assistance of a surgical robot. Neurologist (2023) 28:73–9. doi:10.1097/nrl.0000000000000445

19. Wang Q, Guo W, Liu Y, Shao W, Li M, Li Z, et al. Application of a 3D-printed navigation mold in puncture drainage for brainstem hemorrhage. J Surg Res (2020) 245:99–106. doi:10.1016/j.jss.2019.07.026

20. Li Y, Huang J, Huang T, Tang J, Zhang W, Xu W, et al. Wearable mixed-reality holographic navigation guiding the management of penetrating intracranial injury caused by a nail. J Digit Imaging (2021) 34:362–6. doi:10.1007/s10278-021-00436-3

21. van Doormaal TPC, van Doormaal JAM, Mensink T. Clinical accuracy of holographic navigation using point-based registration on augmented-reality glasses. Oper Neurosurg (Hagerstown) (2019) 17:588–93. doi:10.1093/ons/opz094

22. Fick T, van Doormaal JAM, Hoving EW, Willems PWA, van Doormaal TPC. Current accuracy of augmented reality neuronavigation systems: systematic review and meta-analysis. World Neurosurg (2021) 146:179–88. doi:10.1016/j.wneu.2020.11.029

23. Incekara F, Smits M, Dirven C, Vincent A. Clinical feasibility of a wearable mixed-reality device in neurosurgery. World Neurosurg (2018) 118:e422–7. doi:10.1016/j.wneu.2018.06.208

24. McJunkin JL, Jiramongkolchai P, Chung W, Southworth M, Durakovic N, Buchman CA, et al. Development of a mixed reality platform for lateral skull base anatomy. Otol Neurotol (2018) 39:e1137–42. doi:10.1097/mao.0000000000001995

25. Li Y, Chen X, Wang N, Zhang W, Li D, Zhang L, et al. A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside. J Neurosurg (2018) 1–8. doi:10.3171/2018.4.JNS18124

26. Qi Z, Li Y, Xu X, Zhang J, Li F, Gan Z, et al. Holographic mixed-reality neuronavigation with a head-mounted device: technical feasibility and clinical application. Neurosurg Focus (2021) 51:E22. doi:10.3171/2021.5.focus21175

27. Zhou Z, Yang Z, Jiang S, Zhuo J, Zhu T, Ma S. Surgical navigation system for hypertensive intracerebral hemorrhage based on mixed reality. J Digit Imaging (2022) 35:1530–43. doi:10.1007/s10278-022-00676-x

28. Zhu T, Jiang S, Yang Z, Zhou Z, Li Y, Ma S, et al. A neuroendoscopic navigation system based on dual-mode augmented reality for minimally invasive surgical treatment of hypertensive intracerebral hemorrhage. Comput Biol Med (2022) 140:105091. doi:10.1016/j.compbiomed.2021.105091

29. Hou Y, Ma L, Zhu R, Chen X, Zhang J. A low-cost iPhone-assisted augmented reality solution for the localization of intracranial lesions. PLoS One (2016) 11(7):e0159185. doi:10.1371/journal.pone.0159185

30. Hanley DF, Thompson RE, Rosenblum M, Yenokyan G, Lane K, McBee N, et al. Efficacy and safety of minimally invasive surgery with thrombolysis in intracerebral haemorrhage evacuation (MISTIE III): a randomised, controlled, open-label, blinded endpoint phase 3 trial. Lancet (2019) 393:1021–32. doi:10.1016/s0140-6736(19)30195-3

31. Hanley DF, Lane K, McBee N, Ziai W, Tuhrim S, Lees KR, et al. Thrombolytic removal of intraventricular haemorrhage in treatment of severe stroke: results of the randomised, multicentre, multiregion, placebo-controlled CLEAR III trial. Lancet (2017) 389:603–11. doi:10.1016/s0140-6736(16)32410-2

32. Montes JM, Wong JH, Fayad PB, Awad IA. Stereotactic computed tomographic-guided aspiration and thrombolysis of intracerebral hematoma: protocol and preliminary experience. Stroke (2000) 31:834–40. doi:10.1161/01.str.31.4.834

33. Vespa P, McArthur D, Miller C, O'Phelan K, Frazee J, Kidwell C, et al. Frameless stereotactic aspiration and thrombolysis of deep intracerebral hemorrhage is associated with reduction of hemorrhage volume and neurological improvement. Neurocrit Care (2005) 2:274–81. doi:10.1385/ncc:2:3:274

34. Mongen MA, Willems PWA. Current accuracy of surface matching compared to adhesive markers in patient-to-image registration. Acta Neurochir (Wien) (2019) 161:865–70. doi:10.1007/s00701-019-03867-8

Keywords: primary brainstem hemorrhage, mixed reality navigation technology, brainstem hematoma puncture and drainage surgery, neuronavigation, deviation

Citation: Tang X, Wang Y, Tang G, Wang Y, Xiong W, Liu Y, Deng Y and Chen P (2024) Application of mixed reality navigation technology in primary brainstem hemorrhage puncture and drainage surgery: a case series and literature review. Front. Phys. 12:1390236. doi: 10.3389/fphy.2024.1390236

Received: 23 February 2024; Accepted: 26 March 2024; Published: 17 April 2024.


Copyright © 2024 Tang, Wang, Tang, Wang, Xiong, Liu, Deng and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yongbing Deng, [email protected] ; Peng Chen, [email protected]

† These authors share first authorship

This article is part of the Research Topic

Multi-Sensor Imaging and Fusion: Methods, Evaluations, and Applications – Volume II
