  • Research article
  • Open access
  • Published: 17 January 2020

The TRANSFER Approach for assessing the transferability of systematic review findings

  • Heather Munthe-Kaas 1,
  • Heid Nøkleby 1,
  • Simon Lewin 1,2 &
  • Claire Glenton 1,3

BMC Medical Research Methodology, volume 20, Article number: 11 (2020)


Abstract

Systematic reviews are a key input to health and social welfare decisions. Studies included in systematic reviews often vary with respect to contextual factors that may affect how transferable review findings are to the review context. However, many review authors do not consider the transferability of review findings until the end of the review process, for example when assessing confidence in the evidence using GRADE or GRADE-CERQual. This paper describes the TRANSFER Approach, a novel approach for supporting collaboration between review authors and stakeholders from the beginning of the review process to systematically and transparently consider factors that may influence the transferability of systematic review findings.

We developed the TRANSFER Approach in three stages: (1) discussions with stakeholders to identify current practices and needs regarding the use of methods to consider transferability, (2) systematic search for and mapping of 25 existing checklists related to transferability, and (3) using the results of stage two to develop a structured conversation format which was applied in three systematic review processes.

None of the identified existing checklists related to transferability provided detailed guidance for review authors on how to assess transferability in systematic reviews in collaboration with decision makers. The content analysis uncovered seven categories of factors to consider when discussing transferability. We used these to develop a structured conversation guide for discussing potential transferability factors with stakeholders at the beginning of the review process. In response to feedback, and through trial and error, the TRANSFER Approach has evolved beyond the initial conversation guide and now comprises seven stages, which are described in this article.

Conclusions

The TRANSFER Approach supports review authors in collaborating with decision makers to ensure an informed consideration, from the beginning of the review process, of the transferability of the review findings to the review context. Further testing of TRANSFER is needed.


Background

Evidence-informed decision making has become a common ideal within healthcare, and increasingly also within social welfare. Consequently, systematic reviews of research evidence (sometimes called evidence syntheses) have become an expected basis for practice guidelines and policy decisions in these sectors. Methods for evidence synthesis have matured, and there is now an increasing focus on considering the transferability of evidence to end users’ settings (context) in order to make systematic reviews more useful in decision making [ 1 , 2 , 3 , 4 ]. End users can include individual, or groups of, decision makers who commission or use the findings from a systematic review, such as policymakers, health/welfare systems managers, and policy analysts [ 3 ]. The term stakeholders in this paper may also refer to potential stakeholders, or those individuals who have knowledge of, or experience with, the intervention being reviewed and whose input may be considered valuable where the review includes a wide range of contexts, not all of which are well understood by the review team.

Concerns regarding the interaction between context and the effect of interventions are not new: the realist approach to systematic reviews emerged in order to address this issue [ 5 ]. However, while there appears to be an increasing amount of interest, and literature, related to context and its role in systematic reviews, it has been noted that “the importance of context in principle has not yet been translated into widespread good practice” within systematic reviews [ 6 ]. Context has been defined in a number of different ways, with the common characteristic being a set of factors external to an intervention (but which may interact with the intervention) that may influence the effects of the intervention [ 6 , 7 , 8 , 9 ]. Within the TRANSFER Approach, and this paper, “context” refers to the multi-level environment (not just the physical setting) in which an intervention is developed, implemented and assessed: the circumstances that interact, influence and even modify the implementation of an intervention and its effects.

Responding to an identified need from end users

We began this project in response to concerns from end users regarding the relevance of the systematic reviews they had commissioned from us. Many of our systematic reviews deal with questions within the field of social welfare and health systems policy and practice. Interventions in this area tend to be complex in a number of ways – for example, they may include multiple components and be context-dependent [ 10 ]. Commissioners have at times expressed frustration with reviews that (a) did not completely address the question in which they were originally interested, or (b) included few studies, or studies that came from seemingly very different settings. In one case, the commissioners wished to limit the review to primary studies from their own geographical area (Scandinavia) because of doubts regarding the relevance of studies coming from other settings, despite the fact that there was no clear evidence that this intervention would have different effects across settings. Although we regularly engage in dialogue with stakeholders (including commissioners, decision makers, clients/patients) at the beginning of each review process, including a discussion of the review question and context, these discussions have varied in how structured and systematic they have been, and in the degree to which they have influenced the final review question and inclusion criteria.

For the purpose of this paper, we will define stakeholders as anyone who has an interest in the findings from a systematic review, including clients/patients, practitioners, policy/decision makers, commissioners of systematic reviews and other end users. Furthermore, we will define transferability as an assessment of the degree to which the context of the review question and the context of studies contributing data to the review finding differ according to a priori identified characteristics (transfer factors). This is similar to the definition proposed by Wang and colleagues (2006), whereby transferability is the extent to which the measured effectiveness of an applicable intervention could be achieved in another setting ([ 11 ] p. 77). Other terms related to transferability include applicability, generalizability, transportability and relevance, and these are discussed at length elsewhere [ 12 , 13 , 14 ].

Context matters

Context is important for making decisions about the feasibility and acceptability of an intervention. Systematic reviews typically include studies from many contexts and then draw conclusions, for example about the effects of an intervention, based on the total body of evidence. When context – including that of both the contributing studies and the end user – is not considered, there can be serious, costly and potentially even fatal consequences.

The case of antenatal corticosteroids for women at risk of pre-term birth illustrates the importance of context: a Cochrane review published in 2006 concluded that “A single course of antenatal corticosteroids should be considered routine for preterm delivery with few exceptions” [ 15 ]. However, a large multi-site cluster randomized implementation trial looking at interventions to increase antenatal corticosteroid use in six low- and middle-income countries, and published in 2015, showed contrasting results. The trial found that: “Despite increased use of antenatal corticosteroids in low-birthweight infants in the intervention groups, neonatal mortality did not decrease in this group, and increased in the population overall” [ 16 ]. The trial authors concluded that “the beneficial effects of antenatal corticosteroids in preterm neonates seen in the efficacy trials when given in hospitals with newborn intensive care were not confirmed in our study in low-income and middle-income countries” and hypothesized that this could be due to, among other things, a lack of neonatal intensive care for the majority of preterm/small babies in the study settings [ 16 ]. While there are multiple possible explanations for these two contrasting conclusions (see Vogel 2017 [ 17 ];), the issue of context seems to be critical: “It seems reasonable to assume that the level of maternal and newborn care provided reflected the best available at the time the studies were conducted, including the accuracy of gestational age estimation for recruited women. Comparatively, no placebo-controlled efficacy trials of ACS have been conducted in low-income countries, where the rates of maternal and newborn mortality and morbidity are higher, and the level of health and human resources available to manage pregnant women and preterm infants substantially lower” [ 17 ]. 
The results from the Althabe (2015) trial highlighted that (in retrospect) the lack of efficacy trials of ACS from low-resource settings was a major limitation of the evidence base.

An updated version of the Cochrane review was published in 2017 and includes a discussion on the importance of context when interpreting the results: “The issue of generalisability of the current evidence has also been highlighted in the recent cluster-randomised trial (Althabe [2015]). This trial suggested harms from better compliance with antenatal corticosteroid administration in women at risk of delivering preterm in communities of low-resource settings” [ 18 ]. The WHO guidelines on interventions to improve preterm birth outcomes (2015) also include a number of issues, developed by the Guideline Development Group and informed by both the Roberts (2006) review and the Althabe (2015) trial, that should be considered before the guideline’s recommendations are applied [ 19 ]. This example illustrates the importance of considering and discussing context when interpreting the findings of systematic reviews and using these findings to inform decision making.

Considering context – current approaches

Studies included in a systematic review may vary considerably in terms of who was involved, where the studies took place and when they were conducted; or according to broader factors such as the political environment, organization of the health or social welfare system, or organization of the society or family. These factors may affect how transferable the studies are to the context specified in the review, and how transferable the review findings are to the end users’ context [ 20 ]. Transferability is often assessed by end users based on the information provided in a systematic review, and tools such as the one proposed by Schloemer and Schröder-Bäck (2018) can assist them in doing so [ 21 ]. However, review authors can also assist in making such assessments by addressing issues related to context in a systematic review.

There are currently two main approaches for review authors to address issues related to context and the relevance of primary studies to a context specified in the review. One approach to responding to stakeholders’ questions about transferability is to highlight these concerns in the final review product or summaries of the review findings. Cochrane recommends that review authors “describe the relevance of the evidence to the review question” [ 22 ] in the review section entitled Overall completeness and applicability of evidence, which is written at the end of the review process. Consideration of issues related to applicability (transferability) is thus only done at a late stage of the review process. SUPPORT summaries are an example of a product intended to present summaries of review findings [ 23 ] and were originally designed to present the results of systematic reviews to decision makers in low- and middle-income countries. The summaries explicitly examine whether there are differences between the studies included in the review that is the focus of the summary and low- and middle-income settings [ 23 ]. These summaries have been received positively by decision makers, particularly the section on the relevance of the review findings [ 23 ]. In evaluations of other, similar products, such as Evidence Aid summaries for decision makers in emergency contexts, and evidence summaries created by The National Institute for Health and Care Excellence (NICE) [ 24 , 25 , 26 , 27 ], content related to context and applicability was reported as being especially valuable [ 28 , 29 ].

While these products are useful, the authors of such review summaries would be better able to summarize issues related to context and applicability if these assessments were already present in the systematic review being summarized, rather than needing to be made post hoc by the summary authors. However, many reviews include only relatively superficial discussions of context, relevance or applicability, and do not present systematic assessments of how these factors could influence the transferability of findings.

There are potential challenges to considering issues of context and relevance only after the review is finished, or even after the analysis is concluded. Firstly, if review authors have not considered factors related to context at the review protocol stage, they may not have defined potential subgroup analyses and explanatory factors which could be used to explain heterogeneity of results from a meta-analysis. Secondly, relevant contextual information that could inform the review authors’ discussion of relevance may not have been extracted from included primary studies. To date, though, there is little guidance for review authors on how to systematically or transparently consider the applicability of the evidence to the review context [ 30 ]. Not surprisingly, a review of 98 systematic reviews showed that only one in ten review teams discussed the applicability of results [ 31 ].

The second approach, which also comes late in the review process, is to consider relevance as part of an overall assessment of confidence in review findings. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) Approach for effectiveness evidence and the corresponding GRADE-CERQual approach for qualitative evidence [ 32 , 33 ] both support review authors in making judgments about how confident they are that the review finding is “true” (GRADE: “the true effect lies within a particular range or on one side of a threshold”; GRADE-CERQual: “the review finding is a reasonable representation of the phenomenon of interest” [ 33 , 34 ]). GRADE and GRADE-CERQual involve an assessment of a number of domains or components, including methodological strengths and weaknesses of the evidence base, and heterogeneity or coherence, among others [ 32 , 33 ]. However, the domain related to relevance of the evidence base to the review context (GRADE indirectness domain, GRADE-CERQual relevance component) appears to be of special concern for decision makers [ 3 , 35 ]. Too often, the assessments of indirectness or relevance that review teams make are relatively crude – for example, based on the age of participants or the countries where the studies were carried out, features that are usually easy to assess but not necessarily the most important. This may be due to a lack of guidance for review authors on which factors to consider and how to assess them.

Furthermore, many review authors first begin to consider indirectness and relevance only once the review findings have been developed. An earlier systematic and transparent consideration of transferability could influence many stages of the systematic review process and, in collaboration with stakeholders, could lead to a more thoughtful assessment of the GRADE indirectness domain and GRADE-CERQual relevance component. In Table  1 we describe a scenario where issues related to transferability are not adequately considered during the review process.

By engaging with stakeholders at an early stage of planning the review, review authors could ascertain which factors stakeholders judge to be important for their context and use this knowledge throughout the review process. Previous research indicates that decision makers’ perceptions of the relevance of review results and their applicability to policy facilitate the ultimate use of findings from a review [ 3 , 23 ]. These decision makers explicitly stated that summaries of reviews should include sections on relevance, impact and applicability for decision making [ 3 , 23 ]. Stakeholders are not the only source for identifying transferability factors, as other systematic reviews, implementation studies and qualitative studies may also provide relevant information regarding the transferability of findings to specific contexts. However, this paper and the TRANSFER Approach focus on stakeholders specifically, as it is our experience that stakeholders are often an underused resource for identifying and discussing transferability.

Working toward collaboration

Involving stakeholders in systematic review processes has long been advocated by research institutions and stakeholders alike as a necessary step in producing relevant and timely systematic reviews [ 36 , 37 , 38 ]. Dialogue with stakeholders is key for (a) defining a clear review question, (b) developing a common understanding of, for instance, the population, intervention, comparison and outcomes of interest, (c) understanding the review context, and (d) increasing acceptance among stakeholders of evidence-informed practice and of systematic reviews as methods for producing evidence [ 38 ]. Stakeholders themselves have indicated that improved collaboration with researchers could facilitate the (increased) use of review findings in decision making [ 3 ]. However, in practice, few review teams actively seek collaboration with relevant stakeholders [ 39 ]. This could be due to time or resource constraints or access issues [ 40 ]. There is currently work underway looking at how to identify and engage relevant stakeholders in the systematic review process (for example, Haddaway 2017 [ 41 ];).

For those review teams who do seek collaboration, there is little guidance available on how to collaborate in a structured manner, and we are not aware of any guidance specifically focussed on considering the transferability of review findings from the beginning of the review process (i.e. before the findings have been developed) [ 42 ]. The guidance that is available either focuses on a narrow subset of research questions (e.g. healthcare), is intended to be used at the end of a review process [ 12 , 43 ], focuses on primary research rather than systematic reviews [ 44 ], or is theoretical in nature without any concrete stepwise guidance for review authors on how to consider and assess transferability [ 21 ]. Previous work has pointed out that stakeholders “need systematic and practically relevant knowledge on transferability. This may be supported through more practical tools, useful information about transferability, and close collaboration between research, policy, and practice” [ 21 ]. Other studies have also discussed the need for such practical tools, including more guidance for review authors that focuses on methods for (1) collaborating with end users to develop more precise and relevant review questions and identify a priori factors related to the transferability of review findings, and (2) systematically and transparently assessing the transferability of review findings to the review context, or a specific stakeholder’s context, as part of the review process [ 12 , 45 , 46 ].

The aim of the TRANSFER Approach is to support review authors in developing systematic reviews that are more useful for decision makers. TRANSFER provides guidance for review authors on how to consider and assess the transferability of review findings by collaborating with stakeholders to (a) define the review question, (b) identify factors a priori which may influence the transferability of review findings, and (c) define the characteristics of the context specified in the review with respect to the identified transferability factors.

The aim of this paper is to describe the development and application of the TRANSFER Approach, a novel approach for supporting collaboration between review authors and stakeholders from the beginning of the review process to systematically and transparently consider factors that may influence the transferability of systematic review findings.

Methods

We developed the TRANSFER Approach in three stages. In the first stage we held informal discussions with stakeholders to ascertain the usefulness of guidance on assessing and considering the transferability of review findings. An email invitation to participate in a focus group discussion was sent to nine representatives from five Norwegian directorates that regularly commission systematic reviews from the Norwegian Institute of Public Health. In the email we explained that the aim would be to discuss the possible usefulness of a tool to assess the applicability of systematic review findings to the Norwegian context. Four representatives from three directorates attended the meeting. The agenda for the discussion was a brief introduction to the terms and concepts “transferability” and “applicability”, followed by an overview of the TRANSFER Approach as a method for addressing transferability and applicability. Finally, we undertook an exercise to brainstorm factors that may influence the transferability of a specific intervention to the Norwegian context. Participants provided verbal consent to participate in the discussion. We did not use a structured conversation guide. We took notes from the meeting and collated the transferability issues that were discussed. We also collated, as simple yes or no responses (along with any details provided), participants’ views on whether spending time discussing transferability with review authors during a project would be useful.

In the second stage we conducted a systematic mapping to uncover any existing checklists or other guidance for assessing the transferability of review findings, and conducted a content analysis of the identified checklists. We began by consulting systematic review authors in our network in March 2016 to get suggestions as to existing checklists or tools to assess transferability. In June 2016 we designed and conducted a systematic search of eight databases using search terms such as “transferability”, “applicability”, “generalizability”, etc., combined with “checklist”, “guideline”, “tool”, “criteria”, etc. We also conducted a grey literature search and searched the EQUATOR repository of checklists for relevant documents. Documents were included if they described a checklist or tool to assess transferability (or related concepts such as applicability, generalizability, etc.). We had no limitations related to publication type/status, language or date of publication. Documents that discussed transferability at a theoretical level or assessed the transferability of guidelines to local contexts were not included. The methods and results of this work are described in detail elsewhere [ 30 ]. The output from this stage was a list of transferability factors, which became the basis for the initial version of a ‘conversation guide’ for use with stakeholders in identifying and prioritizing factors related to transferability.

In the third stage, we undertook meetings with stakeholders to explore the use of a structured conversation guide (based on the results of the second stage) to discuss the transferability of review findings. We used the draft guide in meetings with stakeholders in three separate systematic review processes. Through these meetings we became aware of redundancies and confusing language in the conversation guide. Based on this feedback and our notes from these meetings, we revised the conversation guide. The result of this process was a refined conversation guide, as well as guidance for review authors on how to improve collaboration with stakeholders to consider transferability and on how to make and present assessments of transferability.

Results

In this section we begin by presenting the results of the exploratory work around transferability, including the discussions with stakeholders and the experiences of using a structured conversation guide in meetings with stakeholders. We then present the TRANSFER Approach that we subsequently developed, including the purpose of the TRANSFER Approach, how to use TRANSFER, and a worked example of TRANSFER in action.

Findings of the exploratory work to develop the TRANSFER Approach

Discussions with stakeholders

The majority of the 3-h discussion with stakeholders was spent on the exercise. We described for participants a systematic review that had recently been commissioned (by one of the directorates represented) on the effect of supported employment interventions for disabled people on employment outcomes. The participants brainstormed the potential differences between the Norwegian context and other contexts, and how these differences might influence how the review findings could be used in the Norwegian context. The participants identified a number of issues related to the population (e.g., proportion of immigrants, education level, etc.), the intervention (e.g., the length of the intervention), the social setting (e.g., work culture, union culture, rural versus urban, etc.) and the comparison interventions (e.g., components of interventions given as part of “usual services”). After the exercise was completed, the participants debriefed on the usefulness of such an approach for thinking about the transferability of review findings at the beginning of the review process, in a meeting setting with review authors. All participants agreed that the discussion was (a) useful, and (b) worth a 2 to 3 h meeting at the beginning of the review process. There was, however, discussion regarding the terminology related to transferability, specifically who is responsible for determining transferability. One participant felt that the “applicability” of review findings should be determined by stakeholders, including decision makers, while “transferability” was a question that can be assessed by review authors. There was no consensus among participants regarding the most appropriate terms to use. We believe that the opinions expressed within this discussion may be related to language, for instance, how the Norwegian terms for ‘applicability’ and ‘transferability’ are used and interpreted.
The main findings from the focus group discussion were that stakeholders considered meeting with review authors early in the review process to discuss transferability factors to be a good use of time and resources.

Systematic mapping and content analysis of existing checklists

We identified 25 existing checklists that assess transferability or related concepts. Only four of these were intended for use in the context of a systematic review [ 14 , 43 , 45 , 47 ]. We did not identify any existing tools that covered our specific aims. Our analysis of the existing checklists identified seven overarching categories of factors related to transferability in the included checklists: population, intervention, implementation context (immediate), comparison condition, outcomes, environmental context, and researcher conduct [ 30 ]. The results of this mapping are reported elsewhere [ 30 ].

Using a structured conversation guide to discuss transferability

Both the review authors and stakeholders involved in the three systematic review processes where an early version of the conversation guide was piloted were favorable to the idea of using a structured approach to discussing transferability. The initial conversation guide that was used in meetings with the stakeholders was found to be too long and repetitive to use easily. The guide was subsequently refined to be shorter and to better reflect the natural patterns of discussion with stakeholders around a systematic review question (i.e. population, intervention, comparison, outcome).

The TRANSFER Approach: purpose

The exploratory work described above resulted in the TRANSFER Approach. The TRANSFER Approach aims to support review authors in systematically and transparently considering transferability of review findings from the beginning of the review process. It does this by providing review authors with structured guidance on how to collaborate with stakeholders to identify transferability factors, and how to assess the transferability of the review findings to the review context or other local contexts (see Fig.  1 ).

figure 1

TRANSFER diagram

The TRANSFER Approach is intended for use in all types of reviews. However, as of now, it has only been tested in reviews of effectiveness related to population level interventions.

How to use TRANSFER in a systematic review

The TRANSFER Approach is divided into seven stages that mirror the systematic review process. Table  2 outlines the stages of the TRANSFER Approach and the corresponding guidance and templates that support review authors in considering transferability at each stage (see Table  3 ). During these seven stages, review authors make use of the two main components of the TRANSFER Approach: (1) guidance for review authors on how to consider and assess transferability of review findings (including templates), and (2) a Conversation Guide to use with stakeholders in identifying and prioritizing factors related to transferability.

Once systematic review authors have gone through the seven stages outlined in Table  3 , they arrive at assessments of concern regarding each transferability factor. This assessment should be expressed as no, minor, moderate or serious concerns regarding the influence of each transferability factor on an individual review finding. The assessment is made for each individual review finding because TRANSFER assessments are intended to support GRADE/GRADE-CERQual assessments of indirectness/relevance, and these approaches require the review author to make assessments for each individual outcome (for effectiveness reviews) or review finding (for qualitative evidence syntheses). Assessments must be made for each review finding individually because they may vary across outcomes. One transferability factor may affect a number of review findings (e.g., years of experience of mentors in a mentoring program), in the same way that one risk of bias factor (e.g., selection bias as a consequence of inadequate concealment of allocation before assignment) may affect multiple review findings. However, one transferability factor may also affect review findings differently (e.g., the average education level of the population may influence one finding and not another), in the same way that one risk of bias factor may affect review findings differently (e.g., detection bias, due to lack of blinding of outcome assessment, may be less important for objective findings, such as death). An overall TRANSFER assessment of transferability (also expressed as no, minor, moderate or serious concerns) is then made by the review authors, based on the assessments for the individual transferability factors. Review authors should then provide an explanation for the overall TRANSFER assessment and an indication of how each transferability factor may influence the finding (e.g. the direction and/or size of the effect estimate).
Guidance on making assessments is discussed in greater detail below. For simplicity, we have described transferability factors in this paper as individual and mutually exclusive constructs. Through our experience in applying TRANSFER, however, we have seen that transferability factors can influence and amplify each other. While the current paper does not address these potential interactions, review authors will need to consider when transferability factors influence each other or when one factor amplifies the influence of another. For example, primary care health facilities in rural settings may have both fewer resources and poorer access to referral centres, and these factors may interact to negatively impact health outcomes.
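As a minimal illustration of the judgement structure described above, the sketch below records per-factor concerns for a single review finding and derives an overall assessment. The four concern levels come from the TRANSFER Approach itself, but the "worst concern wins" aggregation rule, the factor names and the finding are hypothetical assumptions for illustration: in practice the overall TRANSFER assessment is a considered judgement, not a mechanical maximum.

```python
# Hypothetical sketch of recording per-finding TRANSFER assessments.
# The "worst concern wins" rule below is an illustrative assumption,
# not part of the published approach.

LEVELS = ["no", "minor", "moderate", "serious"]

def overall_assessment(factor_concerns):
    """Aggregate per-factor concerns into an overall TRANSFER assessment."""
    if not factor_concerns:
        return "no"
    return max(factor_concerns.values(), key=LEVELS.index)

# Example: one review finding, three hypothetical transferability factors
finding_1 = {
    "mentor_experience": "minor",
    "population_education": "moderate",
    "climate": "no",
}
print(overall_assessment(finding_1))  # -> "moderate"
```

In a real review the per-factor concerns would be accompanied by a written explanation and an indication of the likely direction of influence on the finding.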

TRANSFER in action

In the following section we present the stages of the TRANSFER Approach using a worked example. The scenario is based on a real review [ 48 ]. However, the TRANSFER Approach was not available when this review started, and the conversation with decision makers was therefore conducted post hoc. Furthermore, while the TRANSFER factors are those that the stakeholders identified, details related to both the review finding and the assessment of transferability were adapted for the purposes of this worked example in order to illustrate how TRANSFER could be applied to a review process. The scenario focuses on a situation where a review is commissioned and the stakeholders' context is known. Where the decision makers and/or their context are not well known to the review team, the review team can still engage potential stakeholders with knowledge of or experience with the intervention being reviewed and the relevant contexts.

Stage 1: Establish the need for a systematic review

Either stakeholders (in commissioning a review) or a review team (if initiating a review themselves) can establish the need for a systematic review (see example provided in Table  4 ). The process of defining the review question and context begins only after some need for a systematic review is established.

Stage 2a: Collaborate with stakeholders to refine the review question

After establishing the need for a systematic review, the review team and stakeholders need to meet to refine the review question (see example provided in Table 5). Part of this discussion will need to focus on establishing the type of review question being asked, and the corresponding review methodology that will be used (e.g., a review to examine intervention effectiveness or a qualitative evidence synthesis to examine barriers and facilitators to implementing an intervention). The group will then need to define the review question including, for example, the population, intervention, comparison and outcomes. A secondary objective of this discussion is to ensure common understanding of the review question, including how the systematic review is intended to be used. During this meeting the review team and stakeholders can discuss and agree upon, for example, the type of population and intervention(s) they are interested in, the comparison(s) they think are the most relevant, and the outcomes they think are the most important. By using a structured template to guide this discussion, the review team can be sure they cover all topics and questions in a systematic fashion. We have developed and used a basic template for reviews of intervention effectiveness that review authors can use to lead this type of discussion with stakeholders (see Appendix 1). Future work will involve adapting this template to different types of review questions and processes.

In some situations, such as in the example we provide, the scope of the review is broader (in this case, global) than the stakeholders' decision-making context (in this case, Norway). The review may therefore include a broader set of interventions, population groups, or settings than the decision-making context. Where the review scope is broader than the decision-making context, a secondary review question can be added – for example: How do the results from this review transfer to a pre-specified context? Alternatively, where the context specified in the review is the same as the end users' context, such a secondary question is unnecessary. When the review context or the local context is defined at a country level, the review authors and stakeholders will likely be aware of heterogeneity within that context (e.g., across states, neighbourhoods, etc.). However, it is still often possible (and necessary) to ascertain and describe a national context. We need to further explore how decision makers apply review findings to the multitude of local contexts within, for example, their national context. Finally, in a global review initiated by a review team rather than commissioned for a specific context, a secondary question on the transferability of the review findings to a pre-specified context is unlikely to be needed.

Stage 2b. Identify and prioritize TRANSFER factors

In the scenario discussed in Table  6 , stakeholders are invited to identify transferability factors through a structured discussion using the TRANSFER Conversation Guide (see Appendix 2 ). The identified factors are essentially hypotheses which need to be tested later in the review process. The aim of the type of consultation described above is to gather input from stakeholders regarding which contextual factors are believed to influence how/whether an intervention works. Where the review is initiated by the review team, the same process would be used, but with experts and people who are thought to represent stakeholders, rather than actual commissioners.

The review authors may identify and use an existing logic model describing how the intervention under review works or another framework to initiate the discussion on transferability, for example to identify components of the intervention that could be especially susceptible to transferability factors or to highlight at what point in the course of the intervention transferability may become an issue [ 49 , 50 ]. More work is needed to examine how logic models can be used at the beginning of the systematic review in order to identify potential transferability factors.

During this stage, the group may identify multiple transferability factors. However, we suggest that the review team, together with stakeholders, prioritize these factors and only include the most important three to five factors in order to keep data extraction and subgroup analyses manageable. Limiting the number of factors to be examined is based on our experience of piloting the framework in systematic reviews, as well as on guidance for conducting and reporting subgroup analyses [ 51 ]. Guidance on prioritizing transferability factors is still to be developed.

In accordance with guidance for conducting subgroup analyses in effectiveness reviews, the review team should search for evidence to support the hypotheses that these factors influence transferability, and indicate what effect they are hypothesised to have on the review outcomes [ 51 ]. We do not yet know how best to do this in an efficient way. To date, the search for evidence to support hypothesised transferability factors has involved a grey literature search of key terms related to the identified TRANSFER factors together with key terms related to the intervention, as well as searching Epistemonikos for qualitative systematic reviews on the intervention being studied. Other approaches may include searching databases such as Epistemonikos for systematic reviews related to the hypotheses, and/or focused searches of databases of primary studies such as MEDLINE, EMBASE, etc. The assistance of an information specialist may be helpful in designing these searches, and it may be possible to narrow them to specific contexts, which would reduce the number of records that need to be screened. The effort invested will need to be calibrated to the resources available, and the approach used should be described clearly to enhance transparency. In cases where no evidence is available for a transferability factor that stakeholders believe to be important, the review team will need to decide whether or not to include that factor (depending, for example, on how many other factors have been identified), and provide a justification for this decision in the protocol. The identified factors should be included in the review protocol as the basis for potential subgroup analyses. Such subgroup analyses will assist the review team in determining whether, or to what extent, differences with respect to the identified factor influence the effect of the intervention. This is discussed in more detail under Stage 4.
In qualitative evidence syntheses, the review team may predefine subgroups according to transferability factors and contrast and compare perceptions/experiences/barriers/facilitators of different groups of participants according to the transferability factors.
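To make the role of subgroup analysis concrete, the sketch below pools invented study effects within subgroups defined by one hypothetical transferability factor ("cold" vs. "warm" climate settings) using standard fixed-effect inverse-variance weighting. The data, factor and subgroup labels are assumptions for illustration only; a real review would follow the subgroup analysis guidance cited above [ 51 ].

```python
# Illustrative fixed-effect subgroup analysis for one hypothetical
# transferability factor. Effect estimates and standard errors are invented.
import math

studies = [
    # (subgroup, effect estimate, standard error)
    ("cold", 0.30, 0.10),
    ("cold", 0.25, 0.12),
    ("warm", 0.05, 0.11),
    ("warm", 0.10, 0.09),
]

def pooled(rows):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1 / se ** 2 for _, _, se in rows]
    est = sum(w * e for w, (_, e, _) in zip(weights, rows)) / sum(weights)
    return est, math.sqrt(1 / sum(weights))

for subgroup in ("cold", "warm"):
    rows = [r for r in studies if r[0] == subgroup]
    est, se = pooled(rows)
    print(f"{subgroup}: {est:.2f} (SE {se:.2f})")
```

A marked difference between subgroup estimates would suggest that the factor matters for transferability; similar estimates would reduce concern.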

Stage 2c: Define characteristics of the review context related to TRANSFER factors

In an intervention effectiveness review, the review context is typically defined in the review question according to inclusion criteria related to the population, intervention, comparison and outcomes (see example provided in Table  7 ). We recommend that this be extended to include the transferability factors identified in Stage 2, so that an assessment of transferability can be made later in the review process. If the review context does not include details related to the transferability factors, the review authors will be unable to assess whether or not the included studies are transferable to the review context. In this stage the review team works with the stakeholders to specify how the identified transferability factors manifest themselves in the context specified in the review (e.g., global context and Norwegian context).

In cases where the review context is global, it may be challenging to specify characteristics of the global context for each transferability factor. In that case, the focus may be on assessing whether a sufficiently wide range of contexts are represented with respect to each transferability factor. Using the example above, the stakeholders and review team could decide that the transferability of the review findings would be strengthened if studies represented a range of usual housing services conditions in terms of quality and comprehensiveness, or if studies from both warm and cold climate settings are included.

Stage 3: Conduct the systematic review

Several stages of the systematic review process may be influenced by the discussions with stakeholders that took place in Stage 2 and the transferability factors that have been identified (see example in Table 8). These include defining the inclusion criteria, developing the search strategy and developing the data extraction form. In addition to standard data extraction fields, the review authors will need to extract data related to the identified transferability factors. This should be done systematically, with review authors also noting explicitly where the information is not reported. For some transferability factors, such as environmental context, additional information may be identified through external sources. For other types of factors it may be necessary to contact study authors for further information.
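The extraction step described above can be sketched as follows: each included study receives a record containing the standard fields plus one field per prioritized transferability factor, with an explicit "not reported" sentinel where a study gives no information. The field names, factor names and study are hypothetical.

```python
# Minimal sketch of a data extraction record extended with fields for
# hypothetical transferability factors. Recording "not reported" explicitly
# keeps gaps visible, as the approach recommends.

NOT_REPORTED = "not reported"

def extract(study, transfer_factors):
    """Return an extraction record, filling gaps with an explicit sentinel."""
    record = {"study_id": study["id"]}
    for factor in transfer_factors:
        record[factor] = study.get(factor, NOT_REPORTED)
    return record

factors = ["housing_quality", "climate", "welfare_comprehensiveness"]
study = {"id": "Smith 2014", "climate": "cold"}
print(extract(study, factors))
```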

Stage 4: Compare the included studies to the context specified in the review (global and/or local) with respect to TRANSFER factors

This stage is about organizing the included studies according to their characteristics related to the identified transferability factors. The review authors should record these characteristics in a table – this makes it easy to get an overview of the contexts of the studies included in the review (see example in Table 9). There are many ways to organize and present such an overview. In the scenario above, the review authors created simple dichotomous subcategories for each transferability factor, related to the local context specified in the secondary review question.
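A minimal sketch of this organizing step, assuming invented studies and two hypothetical transferability factors with dichotomous subcategories:

```python
# Sketch of grouping included studies into dichotomous subcategories for
# each transferability factor. Study data and category labels are invented.

included_studies = [
    {"id": "A 2010", "climate": "cold", "welfare": "comprehensive"},
    {"id": "B 2012", "climate": "warm", "welfare": "limited"},
    {"id": "C 2015", "climate": "cold", "welfare": "limited"},
]

def by_subcategory(studies, factor):
    """Group study ids by their subcategory for one transferability factor."""
    table = {}
    for study in studies:
        table.setdefault(study[factor], []).append(study["id"])
    return table

for factor in ("climate", "welfare"):
    print(factor, by_subcategory(included_studies, factor))
```

Such a grouping gives a quick overview of how well each subcategory of the review context is represented among the included studies.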

Stage 5: Assess the transferability of review findings

Review authors should assess the transferability of a review finding to the review context, and in some cases may also consider a local context (see example in Table  10 ). When a review context is global, the review team may have fewer concerns regarding transferability if the data come from studies from a range of contexts, and the results from the individual studies are consistent. If there is an aspect of context for which there is no evidence, this can be highlighted in the discussion.

In summary, when assessing transferability to a secondary context, the review team may:

Consider conducting a subgroup, or regression, analysis for each transferability factor to explore the extent to which this is likely to influence the transferability of the review finding. The review team should follow standards for conducting subgroup analyses [ 51 , 53 , 54 ].

Interpret the results of the subgroup or regression analysis for each transferability factor and record whether they have no, minor, moderate or serious concerns regarding the transferability of the review finding to the local context.

Make an overall assessment (no, minor, moderate or serious concerns) regarding the transferability of the review finding based on the concerns identified for each individual transferability factor. At the time of publication, we are developing more examples for review authors and guidance on how to make this overall assessment.

The overall TRANSFER assessment involves subjective judgements and it is therefore important for review authors to be consistent and transparent in how they make these assessments (see Appendix 4 ).

Stage 6: Apply GRADE for effectiveness or GRADE-CERQual to assess certainty/confidence in review findings

TRANSFER assessments can be used alone to present assessments of the transferability of a review finding in cases where the review authors have chosen not to assess certainty in the evidence. However, we propose that TRANSFER assessments can also be used to support indirectness assessments in GRADE (see example in Table 11). Similar to how the Risk of Bias tool and other critical appraisal tools support the assessment of risk of bias in GRADE, the TRANSFER Approach can be used to increase the transparency of judgements made for the indirectness domain [ 55 ]. The advantages of using the TRANSFER Approach to support this assessment are:

Factors that may influence transferability are carefully considered a priori, in collaboration with stakeholders;

The GRADE table is supported by a transparent and systematic assessment of these transferability factors for each outcome, and the evidence available for these;

Stakeholders in other contexts are able to clearly see the basis for the indirectness assessment, make an informed decision regarding whether the indirectness assessment would change for their context, and make their own assessment of transferability related to these factors. In some cases the transferability factors identified and assessed in the systematic review may differ from factors which may be considered important to other stakeholders adapting the review findings to their local context (e.g., in the scenario described above, stakeholders using the review findings in a low income, warmer country with a less comprehensive welfare system).

Future work will be needed to develop methods of communicating the transferability assessment, how it is expressed in relation to a GRADE assessment and how to ensure that a clear distinction is made between TRANSFER assessments for a global context and, where relevant, a pre-specified local context.

Stage 7: Discuss transferability of review findings

In some instances it will be possible to discuss the transferability of the review findings with stakeholders prior to publication of the systematic review in order to ensure that the review team has adequately considered the TRANSFER factors as they relate to the context specified in the review (see example in Table  12 ). In many cases this will not be possible, and any input from stakeholders will be post-publication, if at all.

To our knowledge, the TRANSFER Approach is the first attempt to consider the transferability of review findings to the context(s) specified in the review in a systematic and transparent way, from the beginning of the review process through to supporting assessments of certainty and confidence in the evidence for a review finding. Furthermore, it is the only framework we know of that gives clear guidance on how to collaborate with stakeholders to assess transferability. This guidance can be used in systematic reviews of effectiveness and qualitative evidence syntheses and could be applied to any kind of decision making [ 43 ].

The framework is under development and more user testing is needed to refine the conversation guide, transferability assessment methods, and presentation. Furthermore, it has not yet been applied in a qualitative evidence synthesis, and further guidance may be needed in order to support that process.

Using TRANSFER in a systematic review

We have divided the framework into seven stages, and have provided guidance and templates for review authors for each stage. The first two stages are intended to support the development of the protocol, while stages three through seven are intended to be incorporated into the systematic review process.

The experience of the review teams in the three reviews where TRANSFER has been applied (at the time this article is published) has uncovered potential challenges in applying TRANSFER. One challenge relates to reporting: the detail in which interventions, contexts and population characteristics are reported in primary studies is not always sufficient for the purposes of TRANSFER, as has been noted by others [ 56 , 57 ]. With the availability of tools such as the TIDieR checklist and a number of CONSORT extensions, we hope that reporting will improve and that the information review authors seek will become more readily available [ 58 , 59 , 60 ].

Our experience thus far has been that details concerning many of the TRANSFER factors prioritized by the stakeholders are not reported in the studies included in systematic reviews. In one systematic review on the effect of digital couples therapy compared to in-person therapy or no therapy, digital competence was identified as a TRANSFER factor [ 61 ]. The individual studies did not report this, so the review team examined national statistics for each of the studies included and reported this in the data extraction form [ 61 ]. The review team was unable to conduct a subgroup analysis for the TRANSFER factor. However, by comparing Norway’s national level of digital competence to that of the countries where the included studies were conducted, the authors were able to discuss transferability with respect to digital competence in the discussion section of the review [ 61 ]. They concluded that since the level of digital competence was similar in the countries of the included studies and Norway, the review authors had few concerns that this would be likely to influence the transferability of the review findings [ 61 ]. Without having identified this with stakeholders at the beginning of the process, there likely would have been no discussion of transferability, specifically the importance of digital competence in the population. Thus, even when it is not possible to do a subgroup analysis using TRANSFER factors, or even extract data related to these factors, the act of identifying these factors can contribute meaningfully to subsequent discussions of transferability.

Using TRANSFER in a qualitative evidence synthesis

Although we have not yet used TRANSFER as part of a qualitative evidence synthesis, we believe that the process would be similar to that described above. The overall TRANSFER assessment could inform the GRADE-CERQual component relevance. A research agenda is in place to examine this further.

TRANSFER for decision making

The TRANSFER Approach has two important potential impacts for stakeholders, especially decision makers: an assessment of the transferability of review findings, and a close(r) collaboration with review authors in refining the systematic review question and scope. A TRANSFER assessment provides stakeholders with (a) an overall assessment of the transferability of the review finding to the context(s) of interest in the review, and details regarding (b) whether and how the studies contributing data to the review finding differ from the context(s) of interest in the review, and (c) how any differences between the contexts of the included studies and the context(s) of interest in the review could influence the transferability of the review finding(s) (e.g., the direction or size of the effect). The TRANSFER assessment can also be used by stakeholders from other contexts to assess the transferability of the review findings to their own local context. Linked to this, TRANSFER assessments provide systematic and transparent support for assessments of the indirectness domain within GRADE and the relevance component within GRADE-CERQual. TRANSFER is a work in progress, and there are numerous avenues which need to be further investigated (see Table 13).

The TRANSFER Approach also supports a closer collaboration between review authors and stakeholders early in the review process, which may result in more relevant and precise review questions, greater consideration of issues important to the decision maker, and better buy-in from stakeholders in the use of systematic reviews in evidence-based decision making [ 2 ].

The TRANSFER Approach is intended to support review authors in collaborating with stakeholders to ensure that review questions are framed in the way that is most relevant for decision making, and to systematically and transparently consider the transferability of review findings. Many review authors already consider issues related to the transferability of findings, especially those applying the GRADE ( indirectness domain) or GRADE-CERQual ( relevance domain) approaches, and many review authors may engage with stakeholders. However, current approaches to considering and assessing transferability appear to be ad hoc at best. Consequently, it often remains unclear to stakeholders how issues related to transferability were considered by review authors. By collaborating with stakeholders early in the systematic review process, review authors can ensure more precise and relevant review questions and an informed consideration of issues related to the transferability of the review findings. The TRANSFER Approach may therefore help to ensure that systematic reviews are relevant to and useful for decision making.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

GRADE: The Grading of Recommendations Assessment, Development and Evaluation approach

CERQual: The Confidence in the Evidence from Reviews of Qualitative research approach

NICE: The National Institute for Health and Care Excellence

Moher D, Glasziou P, Chalmers I, Nasser M, Bossuyt P, Korevaar D, Graham I, Ravaud P, Boutron I. Increasing value and reducing waste in biomedical research: who’s listening? Lancet. 2016;387:1573–86.

Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2.

Tricco A, Cardoso R, Thomas S, Motiwala S, Sullivan S, Kealey M, Hemmelgarn B, Ouimet M, Hillmer M, Perrier L, et al. Barriers and facilitators to uptake of systematic reviews by policy makers and health care managers: a scoping review. Implement Sci. 2016;11:4.

Wallace J, Byrne C, Clarke M. Improving the uptake of systematic reviews: a systematic review of intervention effectiveness and relevance. BMJ Open. 2014;4:e005834.

Pawson R, Tilley N. Realist Evaluation. London: Sage; 1997.

Craig P, Di Ruggiero E, Frohlich K, Mykhalovskiy E, White M, on behalf of the Canadian Institutes of Health Research (CIHR)-National Institute for Health Research (NIHR) Context Guidance Authors Group. Taking account of context in population health intervention research: guidance for producers, users and funders of research. Southampton, UK: NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Damschroder L, Aron D, Keith R, Krish S, Alexander J, Lowery J. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;7:50.

Moore G, Audrey S, Barker M, Bond L, Bonell C, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.

Pfadenhauer L, Gerhardus A, Mozygemba K, Lysdahl K, Booth A, et al. Making sense of complexity in context and implementation: The context and implementation of Complex Interventions (CICI) Framework. Implement Sci. 2017;12(1):21.

Lewin S, Hendry M, Chandler J, Oxman A, Michie S, Shepperd S, Reeves B, Tugwell P, Hannes K, Rehfuess E, et al. Assessing the complexity of interventions within systematic reviews: development, content and use of a new tool (iCAT_SR). BMC Med Res Methodol. 2017;17:76.

Wang S, Moss JR, Hiller JE. Applicability and transferability of interventions in evidence-based public health. Health Promot Int. 2006;21(1):76–83.

Burford B, Lewin S, Welch V, Rehfuess E, Waters E. Assessing the applicability of findings in systematic reviews of complex interventions can enhance the utility of reviews for decision making. J Clin Epidemiol. 2013;66(11):1251–61.

Cambon L, Minary L, Ridde V, Alla F. Transferability of interventions in health education: a review. BMC Public Health. 2012;12:497.

Schunemann H, Tugwell P, Reeves B, Akl E, Santesso N, Spencer F, Shea B, Wells G, Helfand M. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Res Synth Methods. 2013;4:49–62.

Roberts D, Dalziel S. Antenatal corticosteroids for accelerating fetal lung maturation for women at risk of preterm birth. Cochrane Database Syst Rev. 2006;19:3.

Althabe F, Belizán J, McClure E, Hemingway-Foday J, Berrueta M, Mazzoni A, et al. A population-based, multifaceted strategy to implement antenatal corticosteroid treatment versus standard care for the reduction of neonatal mortality due to preterm birth in low-income and middle-income countries: the ACT cluster-randomised trial. Lancet. 2015;385(9968):629–39.

Vogel J, Oladapo O, Pileggi-Castro C, et al. Antenatal corticosteroids for women at risk of imminent preterm birth in low-resource countries: the case for equipoise and the need for efficacy trials. BMJ Glob Health. 2017;2:e000398.

Roberts D, Brown J, Medley N, Dalziel S. Antenatal corticosteroids for accelerating fetal lung maturation for women at risk of preterm birth. Cochrane Database Syst Rev. 2017;21:3.

World Health Organization. WHO recommendations on interventions to improve preterm birth outcomes. 2015.

Guyatt G, Oxman A, Kunz R, Woodcock J, Brozek J, Helfand M, Alonso-Coello P, Falck-Ytter Y, Jaeschke R, Vist G, et al. GRADE guidelines: 8. Rating the quality of evidence—indirectness. J Clin Epidemiol. 2011;64(12):1303–10.

Schloemer T, Schröeder-Bäck P. Criteria for evaluating transferability of health interventions: a systematic review and thematic synthesis. Implement Sci. 2018;13:88.

Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011. Available from www.handbook.cochrane.org .

Rosenbaum S, Glenton C, Wiysonge C, Abalos E, Migini L, Young T, Althabe F, Ciapponi A, Marti S, Meng Q, et al. Evidence summaries tailored to health policy-makers in low- and middle-income countries. Bull World Health Organ. 2011;89(1):54–61.

Evidence Aid. Evidence Aid Resources. 2017. http://www.evidenceaid.org/ . Accessed 23 June 2017.

National Institute for Health and Care Excellence. Evidence summaries: process guide. National Institute for Health and Care Excellence 2017; 2017. https://www.nice.org.uk/process/pmg31/chapter/introduction . Accessed 23 June 2017.

Lavis J. How can we support the use of systematic reviews in policymaking? PLoS Med. 2009;6:e1000141.

Robeson P, Dobbins M, DeCorby K, Tirilis D. Facilitating access to pre-processed research evidence in public health. BMC Public Health. 2010;10:95.

Turner T, Green S, Harris C. Supporting evidence-based health care in crises: what information do humanitarian organisations need? Disaster Med Public Health Prep. 2011;5(1):69–72.

Lavis J, Wilson M, Grimshaw J, Haynes R, Ouimet M, Raina P, Gruen R, Graham I. Supporting the use of health technology assessments in policy making about health systems. Int J Technol Assess Health Care. 2010;26(4):405–14.

Munthe-Kaas H, Nøkleby H, Nguyen L. Systematic mapping of checklists for assessing transferability. Syst Rev. 2019;8(1):22.

Ahmad N, Boutron I, Dechartres A, Durieux P, Ravaud P. Applicability and generalisability of the results of systematic reviews to public health practice and policy: a systematic review. Trials. 2010;11(1):20.

Guyatt G, Oxman A, Kunz R, Vist G, Falck-Ytter Y, Schunemann H, for the GRADE Working Group. What is “quality of evidence” and why is it important to clinicians? BMJ. 2008;336:995–8.

Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin C, Gülmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895.

Hultcrantz M, Rind D, Akl EA, Treweek S, Mustafa RA, Iorio A, Alper BS, Meerpohl JJ, Murad MH, Ansari MT, et al. The GRADE working group clarifies the construct of certainty of evidence. J Clin Epidemiol. 2017;87:4.

Wallace J, Nwosu B, Clarke M. Barriers to the uptake of evidence from systematic reviews and meta-analyses: a systematic review of decision makers’ perceptions. BMJ Open. 2012;2:e001220.

Sakala C, Gyte G, Henderson S, Nieilson J, Horey D. Consumer-professional partnership to improve research: the experience of the Cochrane Collaboration's pregnancy and childbirth group. Birth Issues Perinat Care. 2001;28(2):133–7.

Wale J, Colombi C, Belizan M, Nadel J. International health consumers in the Cochrane Collaboration: fifteen years on. J Ambul Care Manage. 2010;33(3):182–9.

Cottrell E, Whitlock E, Kato E, Uhl S, Belinson S, Chang C, Hoomans T, Meltzer D, Noorani H, Robinson K, et al. Defining the benefits of stakeholder engagement in systematic reviews. In: Research White Papers. Rockville: Agency for Healthcare Research and Quality; 2014.

Boote J, Baird W, Sutton A. Public involvement in the systematic review process in health and social care: a narrative review of case examples. Health Policy. 2011;102(2–3):105–16.

Kreis J, Puhan M, Schunemann H, Dickersin K. Consumer involvement in systematic reviews of comparative effectiveness research. Health Expect. 2013;16(4):323–37.

Haddaway N, Kohl C, Rebelo da Silva N, Sciemann J, Spök A, Stewart R, Sweet J, Wilhelm R. A framework for stakeholder engagement during systematic reviews and maps in environmental management. Environ Evid. 2017;6:11.

Pollock A, Campbell P, Baer G, Choo P, Morris J, Forster A. User involvement in a Cochrane systematic review: using structured methods to enhance the clinical relevance, usefulness and usability of a systematic review update. Syst Rev. 2015;4:55.

Atkins D, Chang S, Gartlehner G, Buckley DI, Whitlock EP, Berliner E, Matchar DB. Assessing the applicability of studies when comparing medical interventions. In: Methods Guide for Effectiveness and Comparative Effectiveness Reviews. Rockville: Agency for Healthcare Research and Quality (US); 2010. Available from: http://www.ncbi.nlm.nih.gov/books/NBK53480/ .

Craig P, Di Ruggiero E, Frohlich K, et al. Chapter 3, taking account of context in the population health intervention research process. In: Taking account of context in population health intervention research: guidance for producers, users and funders of research. Southampton: NIHR Journals Library; 2018.

Gruen R, Morris P, McDonald E, Bailie R. Making systematic reviews more useful for policy-makers. Bull World Health Organ. 2005;83(6):480–1.

Burchett HED, Blanchard L, Kneale D, Thomas H. Assessing the applicability of public health intervention evaluations from one setting to another: a methodological study of the usability and usefulness of assessment tools and frameworks. Health Res Policy Syst. 2018;16:88.

Taylor B, Dempster M, Donnelly M. Grading gems: appraising the quality of research for social work and social care. Br J Soc Work. 2007;37:335–54.

Munthe-Kaas H, Berg R, Blaasvær N. Effectiveness of interventions to reduce homelessness. A systematic review. Oslo: Norwegian Institute of Public Health; 2016.

Anderson L, Petticrew M, Rehfuess E, Armstrong R, Ueffing E, Baker P, Francis D, Tugwell P. Using logic models to capture complexity in systematic reviews. Res Synth Methods. 2011;2(1):33–42.

Kneale D, Thomas J, Harris K. Developing and optimising the use of logic models in systematic reviews: exploring practice and good practice in the use of programme theory in reviews. PLoS One. 2015;10(11):e0142187.

Oxman A, Guyatt GH. A consumer's guide to subgroup analyses. Ann Intern Med. 1992;116:78–84.


Dyb E, Johannessen K. Bostedsløse i Norge 2012 - en kartlegging [Homeless in Norway 2012 - a survey]. Oslo: Norsk institutt for by- og regionforskning; 2013.

Sun X, Ioannidis J, Agoritsas T, Alba A, Guyatt G. How to use a subgroup analysis: Users’ guides to the medical literature. JAMA. 2014;311(4):405–11.

Sun X, Briel M, Walter S, Guyatt G. Is a subgroup effect believable? Updating criteria to evaluate the credibility of subgroup analyses. BMJ. 2010;340:c117.

Higgins J, Altman D, Gøtzsche P, Jüni P, Moher D, Oxman A, Savović J, Schulz K, Weeks L, Sterne J, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Glasziou P, Chalmers I, Altman D, Bastian H, Boutron I, Brice A, Jamtvedt G, Garmer A, Ghersi D, Groves T, et al. Taking healthcare interventions from trial to practice. BMJ. 2010;341:c3852.

Harper R, Lewin S, Glenton C, Peña-Rosas J. Completeness of reporting of setting and health worker cadre among trials on antenatal iron and folic acid supplementation in pregnancy: an assessment based on two Cochrane reviews. Syst Rev. 2013;2:42.

Hoffmann T, Glasziou P, Milne R, Moher D, Altman D, Barbour V, Macdonald H, Johnston M, Lamb S, Dixon-Woods M, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.

Montgomery P, Grant S, Mayo-Wilson E, Macdonald G, Michie S, Hopewell S, Moher D. Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 extension. Trials. 2018;19(1):407.

Zwarenstein M, Treweek S, Gagnier J, Altman D, Tunis S, Haynes B, Oxman A, Moher D. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390.

Nøkleby H, Flodgren G, Munthe-Kaas H, Said M. Digital interventions for couples with relationship problems: a systematic review. Oslo: Norwegian Institute of Public Health; 2018.

Cochrane Effective Practice and Organisation of Care (EPOC). Synthesizing results when it does not make sense to do a meta-analysis. EPOC Resources for review authors, 2017. epoc.cochrane.org/resources/epoc-resources-review-authors (accessed 02 January 2020).


Acknowledgements

We would like to acknowledge the intellectual support and guidance of Rigmor Berg from the Norwegian Institute of Public Health, as well as researchers from the Division for Health Services and representatives from various Norwegian welfare directorates who participated in the focus group for stakeholders. We would also like to acknowledge Eva Rehfuess who identified the example used under “Context matters” and Josh Vogel for his assistance regarding the example. We would like to thank Sarah Rosenbaum from the Norwegian Institute of Public Health for designing the diagram in Fig. 1 .

The authors’ research time was funded by the Norwegian Institute of Public Health. The authors also received funding from the Campbell Collaboration Methods Grant to support the development of the TRANSFER Approach. SL receives additional funding from the South African Medical Research Council. The funding bodies played no role in the design of the study, the collection, analysis, or interpretation of data or in writing the manuscript.

Author information

Authors and affiliations

Norwegian Institute of Public Health, Oslo, Norway

Heather Munthe-Kaas, Heid Nøkleby, Simon Lewin & Claire Glenton

Health Systems Research Unit, South African Medical Research Council, Cape Town, South Africa

Simon Lewin

Cochrane Norway, Oslo, Norway

Claire Glenton


Contributions

HMK and HN developed the framework and wrote the manuscript. CG and SL provided guidance on the development of the framework and gave feedback on the manuscript. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Heather Munthe-Kaas .

Ethics declarations

Ethics approval and consent to participate

Not applicable. This study did not undertake any formal data collection involving humans or animals. Participants in the informal focus group discussion provided verbal consent to participate. According to the Norwegian Centre for Research Data's online portal for determining whether a study must be registered (https://nsd.no/personvernombud/en/notify/index.html), this study did not require review by an ethics committee.

Consent for publication

Not applicable.

Competing interests

HMK, CG and SL are co-authors of the GRADE-CERQual approach and lead co-ordinators of the GRADE-CERQual coordinating group. HN has no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

TRANSFER characteristics of context

TRANSFER table of included studies

TRANSFER assessment

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Munthe-Kaas, H., Nøkleby, H., Lewin, S. et al. The TRANSFER Approach for assessing the transferability of systematic review findings. BMC Med Res Methodol 20, 11 (2020). https://doi.org/10.1186/s12874-019-0834-5


Received : 30 November 2018

Accepted : 12 September 2019

Published : 17 January 2020

DOI : https://doi.org/10.1186/s12874-019-0834-5


  • Transferability
  • Applicability
  • Indirectness
  • Systematic review methodology
  • GRADE-CERQual
  • Stakeholder engagement

BMC Medical Research Methodology

ISSN: 1471-2288


To Clarity and Beyond: Situating Higher-Order, Critical, and Critical-Analytic Thinking in the Literature on Learning from Multiple Texts

  • REVIEW ARTICLE
  • Published: 24 March 2023
  • Volume 35, article number 40 (2023)


  • Alexandra List   ORCID: orcid.org/0000-0003-1125-9811 1 &
  • Yuting Sun 2  

1069 Accesses

5 Citations

1 Altmetric


For this systematic review, learning from multiple texts served as the specific context for investigating the constructs of higher-order (HOT), critical (CT), and critical-analytic (CAT) thinking. Examining the manifestations of HOT, CT, and CAT within the specific context of learning from multiple texts allowed us to clarify and disentangle these valued modes of thought. We begin by identifying the mental activities underlying the processes and outcomes of learning from multiple texts. We then juxtapose these mental activities with definitions of HOT, CT, and CAT drawn from the literature. Through this juxtaposition, we define HOT as multi-componential, including evaluation; CT as requiring both evaluation and its justification or substantiation; and CAT as considering the extent to which evaluation and justification may be consistently and systematically applied. We further generate a number of insights, described in the final section of this article. These include the frequent manifestations of HOT, CT, and CAT within the context of students learning from multiple texts and the co-occurring demand for these valued modes of thinking. We propose an additional mode of valued thought, which we refer to as devising, in which learners synthetically and systematically use knowledge and strategies gained within one multiple text learning situation to produce an original product or solution in another, novel learning situation. We consider such devising to demand HOT, CT, and CAT.


Data availability

Data available upon request.

Although comprehension has been referred to as the result of both bottom-up (or passive) and top-down (i.e., purposeful and active) knowledge activation processes (Kurby et al., 2005 ; Wolfe & Goldman, 2005 ), we refer here to top-down processes or students’ deliberate engagement of prior knowledge, as reflective of HOT (Richter & Maier, 2017 ).

Bloom et al. ( 1956 ), in their original introduction of this taxonomy, did not refer to higher vis-à-vis lower levels.

This study was beyond the scope of our review because it was a dissertation; however, it serves as an illustrative example here.

The multiple text literature can also benefit from considering objectives specified within Marzano and Kendall's (2008) self-system. These include asking students to consider the importance of tasks to them, their efficacy for task completion, their emotional responses to tasks or texts, and their overall motivation. As suggested by a recent review from Anmarkrud et al. (2021), these self-system components have rarely been examined in the literature on learning from multiple texts, with interest constituting the motivational construct most often analyzed. Recent work has started to look at the role of epistemic emotions in learning from multiple texts (Danielson et al., 2022; Muis et al., 2015).

This paper was excluded from our review as it did not have a learning outcome.

Kammerer et al. (2013) were unique in asking students to appraise their certainty in a solution to a medical controversy described across multiple texts. We considered students' certainty appraisals to reflect metacognition; however, this was a mental activity not included among the multiple text outcomes we coded for (it was therefore placed into the Other category), as Kammerer et al. (2013) were unique in including this assessment.

*Indicates references included in the review

Adams, N. E. (2015). Bloom’s Taxonomy of cognitive learning objectives. Journal of the Medical Library Association, 103 (3), 152–153. https://doi.org/10.3163/1536-5050.103.3.010


Afflerbach, P., Cho, B.-Y., & Kim, J.-Y. (2011). The assessment of higher order thinking in reading. In G. Schraw & D. R. Robinson (Eds.), Assessment of higher order thinking skills (pp. 185–217). IAP Information Age Publishing.


Afflerbach, P., Cho, B.-Y., & Kim, J.-Y. (2015). Conceptualizing and assessing higher-order thinking in reading. Theory into Practice, 54 (3), 203–212. https://doi.org/10.1080/00405841.2015.1044367

Alexander, P. A. (2014). Thinking critically and analytically about critical-analytic thinking: An introduction. Educational Psychology Review, 26 , 469–476. https://doi.org/10.1007/s10648-014-9283-1

Alexander, P. A., Dinsmore, D. L., Fox, E., Grossnickle, E. M., Loughlin, S. M., Maggioni, L., Parkinson, M. M., & Winters, F. I. (2011). Higher order thinking and knowledge: Domain-general and domain-specific trends and future directions. In G. Schraw & D. R. Robinson (Eds.), Assessment of higher order thinking skills (pp. 47–88). IAP Information Age Publishing.

Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Raths, J., & Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of Educational Objectives . Longman.

Anmarkrud, Ø., Bråten, I., Florit, E., & Mason, L. (2021). The role of individual differences in sourcing: A systematic review.  Educational Psychology Review , 1–44. https://doi.org/10.1007/s10648-021-09640-7

*Anmarkrud, Ø., Bråten, I., & Strømsø, H. I. (2014). Multiple-documents literacy: Strategic processing, source awareness, and argumentation when reading multiple conflicting documents.  Learning and Individual Differences ,  30 , 64–76. https://doi.org/10.1016/j.lindif.2013.01.007

*Anmarkrud, Ø., McCrudden, M. T., Bråten, I., & Strømsø, H. I. (2013). Task-oriented reading of multiple documents: Online comprehension processes and offline products.  Instructional Science ,  41 (5), 873–894. https://doi.org/10.1007/s11251-013-9263-8

*Barzilai, S., Tal-Savir, D., Abed, F., Mor-Hagani, S., & Zohar, A. R. (2021). Mapping multiple documents: From constructing multiple document models to argumentative writing.  Reading and Writing: An Interdisciplinary Journal . https://doi.org/10.1007/s11145-021-10208-8

*Barzilai, S., Tzadok, E., & Eshet-Alkalai, Y. (2015). Sourcing while reading divergent expert accounts: Pathways from views of knowing to written argumentation.  Instructional Science ,  43 (6), 737–766. https://doi.org/10.1007/s11251-015-9359-4

Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online sources. Cognition and Instruction, 30 (1), 39–85. https://doi.org/10.1080/07370008.2011.636495

Barzilai, S., Zohar, A. R., & Mor-Hagani, S. (2018). Promoting integration of multiple texts: A review of instructional approaches and practices. Educational Psychology Review, 30 (3), 973–999. https://doi.org/10.1007/s10648-018-9436-8

Bloom, B. S. (Ed.), Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain . David McKay.

*Brand‐Gruwel, S., Kammerer, Y., van Meeuwen, L., & van Gog, T. (2017). Source evaluation of domain experts and novices during Web search.  Journal of Computer Assisted Learning ,  33 (3), 234–251. https://doi.org/10.1111/jcal.12162

*Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solving by experts and novices: Analysis of a complex cognitive skill.  Computers in Human Behavior ,  21 (3), 487–508. https://doi.org/10.1016/j.chb.2004.10.005

Brand-Gruwel, S., Wopereis, I., & Walraven, A. (2009). A descriptive model of information problem solving while using internet. Computers & Education, 53 (4), 1207–1217. https://doi.org/10.1016/j.compedu.2009.06.004

Brante, E. W., & Strømsø, H. I. (2018). Sourcing in text comprehension: A review of interventions targeting sourcing skills. Educational Psychology Review, 30 (3), 773–799. https://doi.org/10.1007/s10648-017-9421-7

Bråten, I., Britt, M. A., Strømsø, H. I., & Rouet, J.-F. (2011). The role of epistemic beliefs in the comprehension of multiple expository texts: Toward an integrated model. Educational Psychologist, 46 (1), 48–70. https://doi.org/10.1080/00461520.2011.538647

*Bråten, I., Ferguson, L. E., Anmarkrud, Ø., & Strømsø, H. I. (2013). Prediction of learning and comprehension when adolescents read multiple texts: The roles of word-level processing, strategic approach, and reading motivation.  Reading and Writing: An Interdisciplinary Journal ,  26 (3), 321–348. https://doi.org/10.1007/s11145-012-9371-x

*Bråten, I., Ferguson, L. E., Strømsø, H. I., & Anmarkrud, Ø. (2014). Students working with multiple conflicting documents on a scientific issue: Relations between epistemic cognition while reading and sourcing and argumentation in essays.  British Journal of Educational Psychology ,  84 (1), 58–85. https://doi.org/10.1111/bjep.12005

*Bråten, I., & Strømsø, H. I. (2003). A longitudinal think-aloud study of spontaneous strategic processing during the reading of multiple expository texts.  Reading and Writing ,  16 :195–218. https://doi.org/10.1023/A:1022895207490

Bråten, I., & Strømsø, H. I. (2011). Measuring strategic processing when students read multiple texts. Metacognition and Learning, 6 (2), 111–130. https://doi.org/10.1007/s11409-011-9075-7

*Britt, M. A., & Aglinskas, C. (2002). Improving students’ ability to identify and use source information.  Cognition and Instruction ,  20 (4), 485–522. https://doi.org/10.1207/S1532690XCI2004_2

Britt, M. A., Perfetti, C. A., Sandak, R., & Rouet, J. F. (1999). Content integration and source separation in learning from multiple texts. In S. R. Goldman, A. C. Graesser, & P. van den Broek (Eds.), Narrative comprehension, causality, and coherence: Essays in honor of Tom Trabasso (pp. 209–233). Lawrence Erlbaum Associates.

Brown, N. J., Afflerbach, P. P., & Croninger, R. G. (2014). Assessment of critical-analytic thinking. Educational Psychology Review, 26 (4), 543–560. https://doi.org/10.1007/s10648-014-9280-4

Buehl, M. M., & Alexander, P. A. (2005). Motivation and performance differences in students’ domain-specific epistemological belief profiles. American Educational Research Journal, 42 (4), 697–726. https://doi.org/10.3102/00028312042004697

Butterfuss, R., & Kendeou, P. (2021). KReC-MD: Knowledge revision with multiple documents. Educational Psychology Review, 33 (4), 1475–1497. https://doi.org/10.1007/s10648-021-09603-y

Byrnes, J. P., & Dunbar, K. N. (2014). The nature and development of critical-analytic thinking. Educational Psychology Review, 26 (4), 477–493. https://doi.org/10.1007/s10648-014-9284-0

*Cerdán, R., & Vidal-Abarca, E. (2008). The effects of tasks on integrating information from multiple documents.  Journal of Educational Psychology ,  100 (1), 209–222. https://doi.org/10.1037/0022-0663.100.1.209

*Cho, B.-Y., Han, H., & Kucan, L. L. (2018). An exploratory study of middle-school learners’ historical reading in an Internet environment.  Reading and Writing: An Interdisciplinary Journal ,  31 (7), 1525–1549. https://doi.org/10.1007/s11145-018-9847-4

*Cho, B.-Y., Woodward, L., Li, D., & Barlow, W. (2017). Examining adolescents’ strategic processing during online reading with a question-generating task.  American Educational Research Journal ,  54 (4), 691–724

Cleary, T. J., Callan, G. L., & Zimmerman, B. J. (2012). Assessing self-regulation as a cyclical, context-specific phenomenon: Overview and analysis of SRL microanalytic protocols. Education Research International, 2012 , 428639. https://doi.org/10.1155/2012/428639

*Daher, T. A., & Kiewra, K. A. (2016). An investigation of SOAR study strategies for learning from multiple online resources.  Contemporary Educational Psychology ,  46 , 10–21. https://doi.org/10.1016/j.cedpsych.2015.12.004

Danielson, R. W., Sinatra, G. M., Trevors, G., Muis, K. R., Pekrun, R., & Heddy, B. C. (2022). Can multiple texts prompt causal thinking? The role of epistemic emotions.  The Journal of Experimental Education , 1–15. https://doi.org/10.1080/00220973.2022.2107604

Danvers, E. C. (2016). Criticality’s affective entanglements: Rethinking emotion and critical thinking in higher education. Gender and Education, 28 (2), 282–297. https://doi.org/10.1080/09540253.2015.1115469

Dinsmore, D. L., & Alexander, P. A. (2012). A critical discussion of deep and surface processing: What it means, how it is measured, the role of context, and model specification. Educational Psychology Review, 24 (4), 499–567. https://doi.org/10.1007/s10648-012-9198-7

*Du, H., & List, A. (2020). Researching and writing based on multiple texts. Learning and Instruction, 66 , 101297. https://doi.org/10.1016/j.learninstruc.2019.101297

*Du, H., & List, A. (2021). Evidence use in argument writing based on multiple texts.  Reading Research Quarterly ,  56 (4), 715–735.  https://doi.org/10.1002/rrq.366

Dumas, D., & Dong, Y. (2021). Focusing the relational lens on critical thinking: How can relational reasoning support critical and analytic thinking? In D. Fasko & F. Fair (Eds.), Critical thinking and reasoning: Theory development, instruction, and assessment (pp. 47–63). Brill. https://doi.org/10.1163/9789004444591_004

Dwyer, C. P., Hogan, M. J., & Stewart, I. (2014). An integrated critical thinking framework for the 21st century. Thinking Skills and Creativity, 12 , 43–52. https://doi.org/10.1016/j.tsc.2013.12.004

Eber, P. A., & Parker, T. S. (2007). Assessing student learning: Applying Bloom's Taxonomy. Human Service Education , 27 (1), 45–53. Retrieved February 20, 2023, from https://go.gale.com/ps/i.do?id=GALE%7CA280993786&sid=googleScholar&v=2.1&it=r&linkaccess=abs&issn=08905428&p=AONE&sw=w&userGroupName=anon%7E9c8f4eb

Elder, L., & Paul, R. W. (2013). Critical thinking: Intellectual standards essential to reasoning well within every domain of thought. Journal of Developmental Education, 36 (3), 34–35. Retrieved February 20, 2023, from  https://files.eric.ed.gov/fulltext/EJ1067273.pdf

Ennis, R. H. (1962). A concept of critical thinking. Harvard Educational Review, 32 (1), 81–111.

Ennis, R. H. (1993). Critical thinking assessment. Theory into Practice, 32 (3), 179–186.

Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction . Executive Summary: The Delphi Report. The California Academic Press.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34 (10), 906–911. https://doi.org/10.1037/0003-066X.34.10.906

Frerejean, J., Velthorst, G. J., van Strien, J. L. H., Kirschner, P. A., & Brand-Gruwel, S. (2019). Embedded instruction to learn information problem solving: Effects of a whole task approach. Computers in Human Behavior, 90 , 117–130. https://doi.org/10.1016/j.chb.2018.08.043

*Gerjets, P., Kammerer, Y., & Werner, B. (2011). Measuring spontaneous and instructed evaluation processes during Web search: Integrating concurrent thinking-aloud protocols and eye-tracking data.  Learning and Instruction ,  21 (2), 220–231. https://doi.org/10.1016/j.learninstruc.2010.02.005

Gil, L., Bråten, I., Vidal-Abarca, E., & Strømsø, H. I. (2010a). Summary versus argument tasks when working with multiple documents: Which is better for whom? Contemporary Educational Psychology, 35 (3), 157–173. https://doi.org/10.1016/j.cedpsych.2009.11.002

Gil, L., Bråten, I., Vidal-Abarca, E., & Strømsø, H. I. (2010b). Understanding and integrating multiple science texts: Summary tasks are sometimes better than argument tasks. Reading Psychology, 31 (1), 30–68. https://doi.org/10.1080/02702710902733600

*Goldman, S. R., Braasch, J. L. G., Wiley, J., Graesser, A. C., & Brodowinska, K. (2012). Comprehending and learning from internet sources: Processing patterns of better and poorer learners.  Reading Research Quarterly ,  47 (4), 356–381. https://doi.org/10.1002/RRQ.027

Granello, D. H. (2001). Promoting cognitive complexity in graduate written work: Using Bloom’s taxonomy as a pedagogical tool to improve literature reviews. Counselor Education and Supervision, 40 (4), 292–307. https://doi.org/10.1002/j.1556-6978.2001.tb01261.x

Greene, J. A., Muis, K. R., & Pieschl, S. (2010). The role of epistemic beliefs in students’ self-regulated learning with computer-based learning environments: Conceptual and methodological issues. Educational Psychologist, 45 (4), 245–257. https://doi.org/10.1080/00461520.2010.515932

*Grossnickle Peterson, E., & Alexander, P. A. (2020). Navigating print and digital sources: Students’ selection, use, and integration of multiple sources across mediums.  Journal of Experimental Education ,  88 (1), 27–46. https://doi.org/10.1080/00220973.2018.1496058

*Hagen, Å. M., Braasch, J. L. G., & Bråten, I. (2014). Relationships between spontaneous note-taking, self-reported strategies and comprehension when reading multiple texts in different task conditions.  Journal of Research in Reading ,  37 (S1), S141–S157. https://doi.org/10.1111/j.1467-9817.2012.01536.x

Hofer, B. K., & Bendixen, L. D. (2012). Personal epistemology: Theory, research, and future directions. In K. R. Harris, S. Graham, T. Urdan, C. B. McCormick, G. M. Sinatra, & J. Sweller (Eds.), APA educational psychology handbook, Vol. 1. Theories, constructs, and critical issues (pp. 227–256). American Psychological Association. https://doi.org/10.1037/13273-009

Jiménez-Aleixandre, M. P., & Puig, B. (2012). Argumentation, evidence evaluation and critical thinking. In B. J. Fraser, K. Tobin, & C. J. McRobbie (Eds.), Second international handbook of science education (pp. 1001–1015). Springer. https://doi.org/10.1007/978-1-4020-9041-7_66

Jones, K. O., Harland, J., Reid, J. M. V., & Bartlett, R. (2009). Relationship between examination questions and Bloom's taxonomy. IEEE Frontiers in Education Conference (pp. 1–6). San Antonio, Texas. https://doi.org/10.1109/FIE.2009.5350598

*Kammerer, Y., Bråten, I., Gerjets, P., & Strømsø, H. I. (2013). The role of Internet-specific epistemic beliefs in laypersons’ source evaluations and decisions during Web search on a medical issue.  Computers in human behavior ,  29 (3), 1193–1203. https://doi.org/10.1016/j.chb.2012.10.012

*Kammerer, Y., Gottschling, S., & Bråten, I. (2021). The role of internet-specific justification beliefs in source evaluation and corroboration during web search on an unsettled socio-scientific issue.  Journal of Educational Computing Research ,  59 (2), 342–378. https://doi.org/10.1177/0735633120952731

*Kammerer, Y., Kalbfell, E., & Gerjets, P. (2016). Is this information source commercially biased? How contradictions between web pages stimulate the consideration of source information.  Discourse Processes ,  53 (5-6), 430–456. https://doi.org/10.1080/0163853X.2016.1169968

Kiili, C., & Leu, D. J. (2019). Exploring the collaborative synthesis of information during online reading. Computers in Human Behavior, 95 , 146–157. https://doi.org/10.1016/j.chb.2019.01.033

Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95 (2), 163–182. https://doi.org/10.1037/0033-295X.95.2.163

Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85 (5), 363–394. https://doi.org/10.1037/0033-295X.85.5.363

*Kobayashi, K. (2009a). Comprehension of relations among controversial texts: Effects of external strategy use.  Instructional Science ,  37 (4), 311–324. https://doi.org/10.1007/s11251-007-9041-6

*Kobayashi, K. (2009b). The influence of topic knowledge, external strategy use, and college experience on students’ comprehension of controversial texts.  Learning and Individual Differences ,  19 (1), 130–134. https://doi.org/10.1016/j.lindif.2008.06.001

*Kobayashi, K. (2014). Students’ consideration of source information during the reading of multiple texts and its effect on intertextual conflict resolution.  Instructional Science ,  42 (2), 183–205. https://doi.org/10.1007/s11251-013-9276-3

Krathwohl, D.R., Bloom, B.S., & Masia, B.B. (1964). Taxonomy of educational objectives: The classification of educational goals. Handbook II: The affective domain . David McKay.

Kuhn, D. (2019). Critical thinking as discourse. Human Development, 62 (3), 146–164. https://doi.org/10.1159/000500171

Kurby, C. A., Britt, M. A., & Magliano, J. P. (2005). The role of top-down and bottom-up processes in between-text integration. Reading Psychology, 26 (4–5), 335–362. https://doi.org/10.1080/02702710500285870

Lai, E. R. (2011). Critical thinking: A literature review. Pearson’s Research Reports, 6 (1), 40–41.

Lee, Y. (2022). Examining students’ help-seeking when learning from multiple texts . Pennsylvania State University.

Lewis, A., & Smith, D. (1993). Defining higher order thinking. Theory into Practice, 32 (3), 131–137. https://doi.org/10.1080/00405849309543588

*Linderholm, T., Therriault, D. J., & Kwon, H. (2014). Multiple science text processing: Building comprehension skills for college student readers.  Reading Psychology ,  35 (4), 332–356. https://doi.org/10.1080/02702711.2012.726696

List, A., & Alexander, P. A. (2015). Examining response confidence in multiple text tasks. Metacognition and Learning, 10 , 407–436. https://doi.org/10.1007/s11409-015-9138-2

List, A., & Alexander, P. A. (2017). Analyzing and integrating models of multiple text comprehension. Educational Psychologist, 52 (3), 143–147. https://doi.org/10.1080/00461520.2017.1328309

List, A., & Alexander, P. A. (2018). Corroborating students’ self-reports of source evaluation. Behaviour & Information Technology, 37 (3), 198–216. https://doi.org/10.1080/0144929X.2018.1430849

List, A., & Alexander, P. A. (2019). Toward an integrated framework of multiple text use. Educational Psychologist, 54 (1), 20–39. https://doi.org/10.1080/00461520.2018.1505514

*List, A., Alexander, P. A., & Stephens, L. A. (2017). Trust but verify: Examining the association between students’ sourcing behaviors and ratings of text trustworthiness.  Discourse Processes ,  54 (2), 83–104. https://doi.org/10.1080/0163853X.2016.1174654

*List, A., Campos Oaxaca, G. S., Lee, E., Du, H., & Lee, H. Y. (2021). Examining perceptions, selections, and products in undergraduates’ learning from multiple resources.  British Journal of Educational Psychology ,  91 (4), 1555–1584. https://doi.org/10.1111/bjep.12435

*List, A., & Du, H. (2021). Reasoning beyond history: Examining students’ strategy use when completing a multiple text task addressing a controversial topic in education. Reading and Writing: An Interdisciplinary Journal . https://doi.org/10.1007/s11145-020-10095-5

List, A., Du, H., & Wang, Y. (2019a). Understanding students’ conceptions of task assignments. Contemporary Educational Psychology, 59 , 101801. https://doi.org/10.1016/j.cedpsych.2019.101801

*List, A., Du, H., Wang, Y., & Lee, H. Y. (2019b). Toward a typology of integration: Examining the documents model framework. Contemporary Educational Psychology , 58 , 228–242. https://doi.org/10.1016/j.cedpsych.2019.03.003

*List, A., Grossnickle, E. M., & Alexander, P. A. (2016a). Profiling students’ multiple source use by question type.  Reading Psychology ,  37 (5), 753–797. https://doi.org/10.1080/02702711.2015.1111962

*List, A., Grossnickle, E. M., & Alexander, P. A. (2016b). Undergraduate students’ justifications for source selection in a digital academic context.  Journal of Educational Computing Research ,  54 (1), 22–61. https://doi.org/10.1177/0735633115606659

Marzano, R. J., & Kendall, J. S. (2008). Designing and assessing educational objectives: Applying the new taxonomy . Corwin Press.

Mason, L., Boldrin, A., & Ariasi, N. (2010). Searching the Web to learn about a controversial topic: Are students epistemically active? Instructional Science, 38 , 607–633. https://doi.org/10.1007/s11251-008-9089-y

*Mason, L., Junyent, A. A., & Tornatora, M. C. (2014). Epistemic evaluation and comprehension of web-source information on controversial science-related topics: Effects of a short-term instructional intervention.  Computers & Education ,  76 , 143–157. https://doi.org/10.1016/j.compedu.2014.03.016

*Mason, L., Zaccoletti, S., Scrimin, S., Tornatora, M. C., Florit, E., & Goetz, T. (2020). Reading with the eyes and under the skin: Comprehending conflicting digital texts.  Journal of Computer Assisted Learning ,  36 (1), 89–101. https://doi.org/10.1111/jcal.12399

*Mateos, M., & Solé, I. (2009). Synthesising information from various texts: A study of procedures and products at different educational levels.  European Journal of Psychology of Education ,  24 (4), 435–451. https://doi.org/10.1007/BF03178760

Mayer, R. E. (2002). A taxonomy for computer-based assessment of problem solving. Computers in Human Behavior, 18 (6), 623–632. https://doi.org/10.1016/S0747-5632(02)00020-1

*McCrudden, M. T., Kulikowich, J. M., Lyu, B., & Huynh, L. (2022). Promoting integration and learning from multiple complementary texts. Journal of Educational Psychology. Advance online publication. https://doi.org/10.1037/edu0000746

Miri, B., David, B. C., & Uri, Z. (2007). Purposely teaching for the promotion of higher-order thinking skills: A case of critical thinking. Research in Science Education, 37 (4), 353–369. https://doi.org/10.1007/s11165-006-9029-2

Muis, K. R., Chevrier, M., Denton, C. A., & Losenno, K. M. (2021). Epistemic emotions and epistemic cognition predict critical thinking about socio-scientific issues. Frontiers in Education, 6 , Article 669908. https://doi.org/10.3389/feduc.2021.669908

*Muis, K. R., Pekrun, R., Sinatra, G. M., Azevedo, R., Trevors, G., Meier, E., & Heddy, B. C. (2015). The curious case of climate change: Testing a theoretical model of epistemic beliefs, epistemic emotions, and complex learning.  Learning and Instruction ,  39 , 168–183. https://doi.org/10.1016/j.learninstruc.2015.06.003

Murphy, P. K., Rowe, M. L., Ramani, G., & Silverman, R. (2014). Promoting critical-analytic thinking in children and adolescents at home and in school. Educational Psychology Review, 26 (4), 561–578. https://doi.org/10.1007/s10648-014-9281-3

Newmann, F. M. (1991). Promoting higher order thinking in social studies: Overview of a study of 16 high school departments. Theory & Research in Social Education, 19 (4), 324–340. https://doi.org/10.1080/00933104.1991.10505645

Paul, R., & Elder, L. (2006). The miniature guide to critical thinking concepts and tools (4th ed.) . The Foundation for Critical Thinking. Retrieved February 20, 2023, from https://www.criticalthinking.org/files/Concepts_Tools.pdf

Paul, R. W., & Nosich, G. M. (1991). A proposal for the national assessment of higher-order thinking at the community college, college, and university levels . National Center for Education Statistics, Office of Educational Research and Improvement, the United States Department of Education. Retrieved February 20, 2023, from https://files.eric.ed.gov/fulltext/ED340762.pdf

Perfetti, C. A., Rouet, J.-F., & Britt, M. A. (1999). Towards a theory of documents representation. In H. van Oostendorp & S. R. Goldman (Eds.), The Construction of mental representations during reading (pp. 99–122). Lawrence Erlbaum Associates.

Petty, R. E., & Briñol, P. (2015). Emotion and persuasion: Cognitive and meta-cognitive processes impact attitudes. Cognition and Emotion, 29 (1), 1–26. https://doi.org/10.1080/02699931.2014.967183

Rapp, D. N., & Mensink, M. C. (2011). Focusing effects from online and offline reading tasks. In M. T. McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text relevance and learning from text (pp. 141–164). IAP Information Age Publishing.

Resnick, L. B. (1987). Education and learning to think . National Academies Press. https://doi.org/10.17226/1032

Reznitskaya, A., Anderson, R. C., Dong, T., Li, Y., Kim, I.-H., & Kim, S.-Y. (2008). Learning to think well: Application of argument schema theory to literacy instruction. In C. C. Block & S. R. Parris (Eds.), Comprehension instruction: Research-based best practices (pp. 196–213). The Guilford Press.

Richland, L. E., & Simms, N. (2015). Analogy, higher order thinking, and education. Wiley Interdisciplinary Reviews: Cognitive Science, 6 (2), 177–192. https://doi.org/10.1002/wcs.1336

Richter, T., & Maier, J. (2017). Comprehension of multiple documents with conflicting information: A two-step model of validation. Educational Psychologist, 52 (3), 148–166. https://doi.org/10.1080/00461520.2017.1322968

*Rodicio, H. G. (2015). Students’ evaluation strategies in a Web research task: Are they sensitive to relevance and reliability?  Journal of Computing in Higher Education ,  27 (2), 134–157. https://doi.org/10.1007/s12528-015-9098-1

Roeser, S., & Todd, C. (2015). Emotion and value: Introduction. In S. Roeser & C. Todd (Eds.), Emotion and Value (pp. 1–4). Oxford University Press.

Rouet, J.-F. (2006). The skills of document use: From text comprehension to web-based learning . Lawrence Erlbaum Associates.

Rouet, J.-F., & Britt, M. A. (2011). Relevance processes in multiple document comprehension. In M. T. McCrudden, J. P. Magliano, & G. Schraw (Eds.), Relevance instructions and goal-focusing in text learning (pp. 19–52). Information Age.

*Rouet, J.-F., Britt, M. A., Mason, R. A., & Perfetti, C. A. (1996). Using multiple sources of evidence to reason about history.  Journal of Educational Psychology ,  88 (3), 478–493. https://doi.org/10.1037/0022-0663.88.3.478

*Rouet, J.-F., Favart, M., Britt, M. A., & Perfetti, C. A. (1997). Studying and using multiple documents in history: Effects of discipline expertise.  Cognition and Instruction ,  15 (1), 85–106. https://doi.org/10.1207/s1532690xci1501_3

Rudd, R., Baker, M., & Hoover, T. (2000). Undergraduate agriculture student learning styles and critical thinking abilities: Is there a relationship? Journal of Agricultural Education, 41 (3), 2–12.

Sadler, T. D., & Zeidler, D. L. (2004). The morality of socioscientific issues: Construal and resolution of genetic engineering dilemmas. Science Education, 88 (1), 4–27. https://doi.org/10.1002/sce.10101

*Salmerón, L., Gil, L., Bråten, I., & Strømsø, H. (2010). Comprehension effects of signalling relationships between documents in search engines.  Computers in Human Behavior ,  26 (3), 419–426. https://doi.org/10.1016/j.chb.2009.11.013

Samuelstuen, M. S., & Bråten, I. (2007). Examining the validity of self-reports on scales measuring students’ strategic processing. British Journal of Educational Psychology, 77 (2), 351–378. https://doi.org/10.1348/000709906X106147

Schoor, C., Rouet, J. F., Artelt, C., Mahlow, N., Hahnel, C., Kroehne, U., & Goldhammer, F. (2021). Readers’ perceived task demands and their relation to multiple document comprehension strategies and outcome. Learning and Individual Differences, 88 , 102018. https://doi.org/10.1016/j.lindif.2021.102018

Schraw, G., & Robinson, D. R. (2011). Conceptualizing and assessing higher order thinking skills. In G. Schraw & D. R. Robinson (Eds.), Assessment of higher order thinking skills (pp. 47–88). IAP Information Age Publishing.

Scriven, M., & Paul, R. (1987, August). Critical thinking as defined by the National Council for Excellence in Critical Thinking. In 8th Annual International Conference on Critical Thinking and Education Reform, Rohnert Park, CA (pp. 25–30).

Seaman, M. (2011). Bloom’s Taxonomy: Its evolution, revision, and use in the field of education. In D. J. Flinders & P. B. Uhrmacher (Eds.), Curriculum & Teaching Dialogue (pp. 29–45). Information Age Publishing Inc.

Sockett, H. (1971). Bloom’s Taxonomy: A philosophical critique (I). Cambridge Journal of Education, 1 (1), 16–25. https://doi.org/10.1080/0305764710010103

*Solé, I., Miras, M., Castells, N., Espino, S., & Minguela, M. (2013). Integrating information: An analysis of the processes involved and the products generated in a written synthesis task.  Written Communication ,  30 (1), 63–90. https://doi.org/10.1177/0741088312466532

Stromer-Galley, J., & Muhlberger, P. (2009). Agreement and disagreement in group deliberation: Effects on deliberation satisfaction, future engagement, and decision legitimacy. Political Communication, 26 (2), 173–192. https://doi.org/10.1080/10584600902850775

*Strømsø, H. I., Bråten, I., Britt, M. A., & Ferguson, L. E. (2013). Spontaneous sourcing among students reading multiple documents.  Cognition and Instruction ,  31 (2), 176–203. https://doi.org/10.1080/07370008.2013.769994

Tarchi, C., & Mason, L. (2020). Effects of critical thinking on multiple-document comprehension. European Journal of Psychology of Education, 35 (2), 289–313. https://doi.org/10.1007/s10212-019-00426-8

Tsai, C.-C. (2004). Beyond cognitive and metacognitive tools: The use of the Internet as an “epistemological” tool for instruction. British Journal of Educational Technology, 35 (5), 525–536. https://doi.org/10.1111/j.0007-1013.2004.00411.x

*Tsai, M.-J., & Wu, A.-H. (2021). Visual search patterns, information selection strategies, and information anxiety for online information problem solving.  Computers & Education ,  172 , 104236. https://doi.org/10.1016/j.compedu.2021.104236

*van Strien, J. L. H., Kammerer, Y., Brand-Gruwel, S., & Boshuizen, H. P. A. (2016). How attitude strength biases information processing and evaluation on the web.  Computers in Human Behavior ,  60 , 245–252. https://doi.org/10.1016/j.chb.2016.02.057

*Vandermeulen, N., van den Broek, B., van Steendam, E., & Rijlaarsdam, G. (2020). In search of an effective source use pattern for writing argumentative and informative synthesis texts.  Reading and Writing ,  33 (2), 239–266. https://doi.org/10.1007/s11145-019-09958-3

Vijayaratnam, P. (2012). Developing higher order thinking skills and team commitment via group problem solving: A bridge to the real world. Procedia-Social and Behavioral Sciences, 66 , 53–63. https://doi.org/10.1016/j.sbspro.2012.11.247

*Walraven, A., Brand-Gruwel, S., & Boshuizen, H. P. A. (2009). How students evaluate information and sources when searching the World Wide Web for information.  Computers & Education ,  52 (1), 234–246. https://doi.org/10.1016/j.compedu.2008.08.003

*Walraven, A., Brand-Gruwel, S., & Boshuizen, H. P. A. (2010). Fostering transfer of websearchers’ evaluation skills: A field test of two transfer theories.  Computers in Human Behavior ,  26 (4), 716–728. https://doi.org/10.1016/j.chb.2010.01.008

Wang, Y., & List, A. (2019). Calibration in multiple text use. Metacognition and Learning, 14 (2), 131–166. https://doi.org/10.1007/s11409-019-09201-y

Wentzel, K. R. (2014). Commentary: The role of goals and values in critical-analytic thinking. Educational Psychology Review, 26 (4), 579–582. https://doi.org/10.1007/s10648-014-9285-z

Weiss, R. E. (2003). Designing problems to promote higher-order thinking. New Directions for Teaching and Learning, 2003 (95), 25–31. https://doi.org/10.1002/tl.109

*Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A. (2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks.  American Educational Research Journal ,  46 (4), 1060–1106. https://doi.org/10.3102/0002831209333183

Wiley, J., Griffin, T. D., Steffens, B., & Britt, M. A. (2020). Epistemic beliefs about the value of integrating information across multiple documents in history. Learning and Instruction, 65 , 101266. https://doi.org/10.1016/j.learninstruc.2019.101266

*Wiley, J., & Voss, J. F. (1999). Constructing arguments from multiple sources: Tasks that promote understanding and not just memory for text.  Journal of Educational Psychology ,  91 (2), 301–311. https://doi.org/10.1037/0022-0663.91.2.301

Willingham, D. T. (2007). Critical thinking: Why is it so hard to teach? American Educator, 31 (2), 8–19. https://www.aft.org/sites/default/files/media/2014/Crit_Thinking.pdf

Wolf, A. B. (2017). “Tell me how that makes you feel”: Philosophy’s reason/emotion divide and epistemic pushback in philosophy classrooms. Hypatia, 32 (4), 893–910. https://doi.org/10.1111/hypa.12378

*Wolfe, M. B. W., & Goldman, S. R. (2005). Relations between adolescents’ text processing and reasoning.  Cognition and Instruction ,  23 (4), 467–502. https://doi.org/10.1207/s1532690xci2304_2

*Yang, F. (2017). Examining the reasoning of conflicting science information from the information processing perspective—An eye movement analysis.  Journal of Research in Science Teaching ,  54 (10), 1347–1372. https://doi.org/10.1002/tea.21408

Zeidler, D.L., & Lewis, J. (2003). Unifying themes in moral reasoning on socioscientific issues and discourse. In D. L. Zeidler (ed.), The Role of Moral Reasoning on Socioscientific Issues and Discourse in Science Education (pp. 289–306). Springer. https://doi.org/10.1007/1-4020-4996-X_15

Zimmerman, B. J. (1989). A social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81 (3), 329–339. https://doi.org/10.1037/0022-0663.81.3.329

Zimmerman, B. J. (2013). From cognitive modeling to self-regulation: A social cognitive career path. Educational Psychologist, 48 (3), 135–147. https://doi.org/10.1080/00461520.2013.794676

Zohar, A., & Dori, Y. J. (2003). Higher-order thinking and low-achieving students: Are they mutually exclusive? Journal of the Learning Sciences, 12 (2), 145–181. https://doi.org/10.1207/S15327809JLS1202_1

Braasch, J. L., Rouet, J. F., Vibert, N., & Britt, M. A. (2012). Readers’ use of source information in text comprehension. Memory & Cognition, 40 , 450–465. https://doi.org/10.3758/s13421-011-0160-6

Muis, K. R. (2007). The role of epistemic beliefs in selfregulated learning. Educational Psychologist, 42 (3), 173–190. https://doi.org/10.1080/00461520701416306

Facione, P. A. (2000). The disposition toward critical thinking: Its character, measurement, and relation to critical thinking skill. Informal Logic, 20 (1), 61–84. https://doi.org/10.22329/il.v20i1.2254

Author information

Authors and Affiliations

Department of Educational Psychology, Counseling, and Special Education, The Pennsylvania State University, 227 Cedar Building, University Park, PA 16820, USA

Alexandra List

Department of Human Development and Quantitative Methodology, University of Maryland, 3242 Benjamin Building, College Park, MD 20742, USA

Corresponding author

Correspondence to Alexandra List.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix 2. Literature search and screening process

Literature Search

We conducted keyword searches in the PsycINFO and Educational Resources Information Center (ERIC) databases. We restricted the searches to empirical, quantitative studies published in peer-reviewed, English-language journals. To capture examinations of HOT, CT, and CAT within the multiple text literature, we combined two sets of search terms. The first set included keywords or phrases for multiple text research, such as “multiple source*”, “multiple text*”, “multiple document*”, “intertext*”, “conflicting texts”, “complementary texts”, and “information problem solving”. The second set included keywords and phrases pertaining to HOT, CT, and CAT. These included broader terms such as “higher order thinking”, “critical thinking”, “critical analytic*”, “reasoning”, and “metacognit*”, as well as terms for more specific cognitive processes relevant for multiple text learning that potentially manifest HOT, CT, or CAT, such as “integrat*”, “synthes*”, “analys*”, “corroborat*”, “validat*”, “evaluat*”, “justif*”, “sourcing”, “argument*”, and “refutation*”. The full search string is as follows:

(multiple source* OR multiple text* OR multiple document* OR intertext* OR conflicting information OR conflicting views OR conflicting texts OR conflicting sources OR complementary text* OR information problem solving) AND (higher order thinking OR critical thinking OR critical analytic OR metacogni* OR analys* OR analytical OR reasoning OR analog* OR refutation* OR integrat* OR synthes* OR corroborat* OR argument* OR validat* OR justif* OR evaluat* OR sourcing)
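The two-set structure of this query (a record must match at least one term from each set) can be sketched programmatically. The term lists below are taken from the appendix text; the helper function is purely illustrative and is not part of the authors' actual search tooling:

```python
# Illustrative sketch: assembling the boolean search string from the two
# term sets described above. Term lists come from the appendix text.

multiple_text_terms = [
    "multiple source*", "multiple text*", "multiple document*", "intertext*",
    "conflicting information", "conflicting views", "conflicting texts",
    "conflicting sources", "complementary text*", "information problem solving",
]

thinking_terms = [
    "higher order thinking", "critical thinking", "critical analytic",
    "metacogni*", "analys*", "analytical", "reasoning", "analog*",
    "refutation*", "integrat*", "synthes*", "corroborat*", "argument*",
    "validat*", "justif*", "evaluat*", "sourcing",
]

def or_group(terms):
    """Join one set of terms into a single parenthesized OR group."""
    return "(" + " OR ".join(terms) + ")"

# Records must match at least one term from EACH set: an AND of two OR groups.
query = or_group(multiple_text_terms) + " AND " + or_group(thinking_terms)
```

This reproduces the AND-of-ORs logic of the string above: the outer AND requires topical relevance (multiple texts) and conceptual relevance (HOT/CT/CAT) simultaneously.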

These initial searches yielded 1814 non-duplicate records. We narrowed this initial pool by applying research area classification filters to constrain the literature to areas more relevant to educational and academic contexts (e.g., curriculum & programs & teaching methods, academic learning & achievement, cognitive processes, educational psychology, human experimental psychology, learning and memory). The 482 records remaining after this narrowing were then subjected to title and abstract screening.

Title and Abstract Screening

We screened titles and abstracts against the following inclusion criteria: the study (a) involved a main task that required reading two or more texts and (b) was conducted in an educational or academically relevant context; and the participants (c) were proficient in the language in which the task was conducted and (d) had no psychological, physiological, or neurological disorders or disabilities that could affect their text processing. This screening left us with 161 records.

Additional Searches

From these 161 records, we identified authors with five or more included articles on multiple text learning, as well as journals that had published five or more included articles. We then searched manually for additional studies by going through these authors’ Google Scholar pages and the tables of contents of the selected journals over the past five years. We also identified additional relevant studies from the works referenced in the remaining studies, following a backward snowballing procedure. These additional searches yielded another 30 non-duplicate records, for a total of 191 records assessed in full-text form for eligibility.

Full-Text Assessment

As we read the full texts of these remaining 191 articles, we ensured that the studies met the following eligibility criteria: (a) the multiple-text task was completed by participants independently rather than as a collaborative group activity; (b) the study included at least one process measure that was not a self-report strategy questionnaire; and (c) the study included at least one (non-self-report) outcome measure that assessed some form of higher-order cognition potentially reflective of HOT, CT, or CAT. Studies employing self-report questionnaires to capture process data were included only if they also used other forms of process or outcome measures (e.g., think-alouds, eye-tracking). This left us with a final set of 54 records, comprising 57 studies eligible for the systematic review.
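For readers reconstructing the screening flow, the record counts reported across this appendix can be summarized and sanity-checked in a few lines. This is a sketch only; the stage labels are our own, and all figures come from the text above:

```python
# Record counts at each screening stage, as reported in this appendix.
records = {
    "non_duplicates_from_databases": 1814,
    "after_research_area_filters": 482,
    "after_title_abstract_screening": 161,
    "added_by_additional_searches": 30,
    "assessed_in_full_text": 191,
    "final_included_records": 54,
}

# Consistency check: the full-text pool combines the records surviving
# title/abstract screening with those found via the additional searches.
assert records["assessed_in_full_text"] == (
    records["after_title_abstract_screening"]
    + records["added_by_additional_searches"]
)

# Every other stage should shrink the pool.
assert records["non_duplicates_from_databases"] > records["after_research_area_filters"]
assert records["after_research_area_filters"] > records["after_title_abstract_screening"]
assert records["assessed_in_full_text"] > records["final_included_records"]
```

Note that the 54 final records contain 57 studies, since some articles report more than one eligible study.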

Appendix 3. Coding for processes and outcomes of multiple text learning

About this article

List, A., Sun, Y. To Clarity and Beyond: Situating Higher-Order, Critical, and Critical-Analytic Thinking in the Literature on Learning from Multiple Texts. Educ Psychol Rev 35 , 40 (2023). https://doi.org/10.1007/s10648-023-09756-y

Accepted: 23 February 2023

Published: 24 March 2023

DOI: https://doi.org/10.1007/s10648-023-09756-y


Keywords

  • Critical thinking
  • Critical-analytic thinking
  • Higher-order thinking
  • Multiple text comprehension
  • Bloom et al.’s Taxonomy

The TRANSFER Approach for assessing the transferability of systematic review findings

Heather Munthe-Kaas

1 Norwegian Institute of Public Health, Oslo, Norway

Heid Nøkleby

Simon Lewin

2 Health Systems Research Unit, South African Medical Research Council, Cape Town, South Africa

Claire Glenton

3 Cochrane Norway, Oslo, Norway

Associated Data

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Background

Systematic reviews are a key input to health and social welfare decisions. Studies included in systematic reviews often vary with respect to contextual factors that may impact on how transferable review findings are to the review context. However, many review authors do not consider the transferability of review findings until the end of the review process, for example when assessing confidence in the evidence using GRADE or GRADE-CERQual. This paper describes the TRANSFER Approach, a novel approach for supporting collaboration between review authors and stakeholders from the beginning of the review process to systematically and transparently consider factors that may influence the transferability of systematic review findings.

Methods

We developed the TRANSFER Approach in three stages: (1) discussions with stakeholders to identify current practices and needs regarding the use of methods to consider transferability, (2) systematic search for and mapping of 25 existing checklists related to transferability, and (3) using the results of stage two to develop a structured conversation format which was applied in three systematic review processes.

Results

None of the identified existing checklists related to transferability provided detailed guidance for review authors on how to assess transferability in systematic reviews, in collaboration with decision makers. The content analysis uncovered seven categories of factors to consider when discussing transferability. We used these to develop a structured conversation guide for discussing potential transferability factors with stakeholders at the beginning of the review process. In response to feedback and trial and error, the TRANSFER Approach has developed, expanding beyond the initial conversation guide, and is now made up of seven stages which are described in this article.

Conclusions

The TRANSFER Approach supports review authors in collaborating with decision makers to ensure an informed consideration, from the beginning of the review process, of the transferability of the review findings to the review context. Further testing of TRANSFER is needed.

Background

Evidence-informed decision making has become a common ideal within healthcare, and increasingly also within social welfare. Consequently, systematic reviews of research evidence (sometimes called evidence syntheses) have become an expected basis for practice guidelines and policy decisions in these sectors. Methods for evidence synthesis have matured, and there is now an increasing focus on considering the transferability of evidence to end users’ settings (context) in order to make systematic reviews more useful in decision making [ 1 – 4 ]. End users can include individual, or groups of, decision makers who commission or use the findings from a systematic review, such as policymakers, health/welfare systems managers, and policy analysts [ 3 ]. The term stakeholders in this paper may also refer to potential stakeholders, or those individuals who have knowledge of, or experience with, the intervention being reviewed and whose input may be considered valuable where the review includes a wide range of contexts, not all of which are well understood by the review team.

Concerns regarding the interaction between context and the effect of interventions are not new: the realist approach to systematic reviews emerged in order to address this issue [ 5 ]. However, while there appears to be an increasing amount of interest, and literature, related to context and its role in systematic reviews, it has been noted that “the importance of context in principle has not yet been translated into widespread good practice” within systematic reviews [ 6 ]. Context has been defined in a number of different ways, with the common characteristic being a set of factors external to an intervention (but which may interact with the intervention) that may influence the effects of the intervention [ 6 – 9 ]. Within the TRANSFER Approach, and this paper, “context” refers to the multi-level environment (not just the physical setting) in which an intervention is developed, implemented and assessed: the circumstances that interact, influence and even modify the implementation of an intervention and its effects.

Responding to an identified need from end users

We began this project in response to concerns from end users regarding the relevance of the systematic reviews they had commissioned from us. Many of our systematic reviews deal with questions within the field of social welfare and health systems policy and practice. Interventions in this area tend to be complex in a number of ways – for example, they may include multiple components and be context-dependent [ 10 ]. Commissioners have at times expressed frustration with reviews that (a) did not completely address the question in which they were originally interested, or (b) included few studies that came from seemingly very different settings. In one case, the commissioners wished to limit the review to only include primary studies from their own geographical area (Scandinavia) because of doubts regarding the relevance of studies coming from other settings despite the fact that there was no clear evidence that this intervention would have different effects across settings. Although we regularly engage in dialogue with stakeholders (including commissioners, decision makers, clients/patients) at the beginning of each review process, including a discussion of the review question and context, these discussions have varied in how structured and systematic they have been, and the degree to which they have influenced the final review question and inclusion criteria.

For the purpose of this paper, we will define stakeholders as anyone who has an interest in the findings from a systematic review, including client/patients, practitioners, policy/decision makers, commissioners of systematic reviews and other end users. Furthermore, we will define transferability as an assessment of the degree to which the context of the review question and the context of studies contributing data to the review finding differ according to a priori identified characteristics (transfer factors). This is similar to the definition proposed by Wang and colleagues (2006) whereby transferability is the extent to which the measured effectiveness of an applicable intervention could be achieved in another setting ([ 11 ] p. 77). Other terms related to transferability include applicability, generalizability, transportability and relevance and are discussed at length elsewhere [ 12 – 14 ].

Context matters

Context is important for making decisions about the feasibility and acceptability of an intervention. Systematic reviews typically include studies from many contexts and then draw conclusions, for example about the effects of an intervention, based on the total body of evidence. When context – including that of both the contributing studies and the end user – is not considered, there can be serious, costly and potentially even fatal consequences.

The case of antenatal corticosteroids for women at risk of pre-term birth illustrates the importance of context: a Cochrane review published in 2006 concluded that “A single course of antenatal corticosteroids should be considered routine for preterm delivery with few exceptions” [ 15 ]. However, a large multi-site cluster randomized implementation trial looking at interventions to increase antenatal corticosteroid use in six low- and middle-income countries, and published in 2015, showed contrasting results. The trial found that: “Despite increased use of antenatal corticosteroids in low-birthweight infants in the intervention groups, neonatal mortality did not decrease in this group, and increased in the population overall” [ 16 ]. The trial authors concluded that “the beneficial effects of antenatal corticosteroids in preterm neonates seen in the efficacy trials when given in hospitals with newborn intensive care were not confirmed in our study in low-income and middle-income countries” and hypothesized that this could be due to, among other things, a lack of neonatal intensive care for the majority of preterm/small babies in the study settings [ 16 ]. While there are multiple possible explanations for these two contrasting conclusions (see Vogel 2017 [ 17 ];), the issue of context seems to be critical: “It seems reasonable to assume that the level of maternal and newborn care provided reflected the best available at the time the studies were conducted, including the accuracy of gestational age estimation for recruited women. Comparatively, no placebo-controlled efficacy trials of ACS have been conducted in low-income countries, where the rates of maternal and newborn mortality and morbidity are higher, and the level of health and human resources available to manage pregnant women and preterm infants substantially lower” [ 17 ]. 

The results from the Althabe (2015) trial highlighted that, in retrospect, the lack of efficacy trials of ACS from low-resource settings was a major limitation of the evidence base.

An updated version of the Cochrane review was published in 2017, and includes a discussion on the importance of context when interpreting the results: “The issue of generalisability of the current evidence has also been highlighted in the recent cluster-randomised trial (Althabe [2015]). This trial suggested harms from better compliance with antenatal corticosteroid administration in women at risk of delivering preterm in communities of low-resource settings” [ 18 ]. The WHO guidelines on interventions to improve preterm birth outcomes (2015) also include a number of issues to be considered before recommendations in the guideline are applied, that were developed by the Guideline Development Group and informed by both the Roberts (2006) review and the Althabe (2015) trial [ 19 ]. This example illustrates the importance of considering and discussing context when interpreting the findings of systematic reviews and using these findings to inform decision making.

Considering context – current approaches

Studies included in a systematic review may vary considerably in terms of who was involved, where the studies took place and when they were conducted; or according to broader factors such as the political environment, organization of the health or social welfare system, or organization of the society or family. These factors may impact how transferable the studies are to the context specified in the review, and how transferable the review findings are to the end users’ context [ 20 ]. Transferability is often assessed by end users based on the information provided in a systematic review, and tools such as the one proposed by Schloemer and Schröder-Bäck (2018) can assist them in doing so [ 21 ]. However, review authors can also assist in making such assessments by addressing issues related to context in a systematic review.

There are currently two main approaches for review authors to address issues related to context and the relevance of primary studies to a context specified in the review. One approach to responding to stakeholders’ questions about transferability is to highlight these concerns in the final review product or summaries of the review findings. Cochrane recommends that review authors “describe the relevance of the evidence to the review question” [ 22 ] in the review section entitled Overall completeness and applicability of evidence , which is written at the end of the review process. Consideration of issues related to applicability (transferability) is thus only done at a late stage of the review process. SUPPORT summaries are an example of a product intended to present summaries of review findings [ 23 ] and were originally designed to present the results of systematic reviews to decision makers in low and middle income countries. The summaries examine explicitly whether there are differences between the studies included in the review that is the focus of the summary and low- and middle-income settings [ 23 ]. These summaries have been received positively by decision makers, particularly the section on the relevance of the review findings [ 23 ]. In evaluations of other, similar products, such as Evidence Aid summaries for decision makers in emergency contexts, and evidence summaries created by The National Institute for Health and Care Excellence (NICE) [ 24 – 27 ], content related to context and applicability was reported as being especially valuable [ 28 , 29 ].

While these products are useful, the authors of such review summaries would be better able to summarize issues related to context and applicability if these assessments were already present in the systematic review being summarized rather than needing to be made post hoc by the summary authors. However, many reviews include only relatively superficial discussions of context, relevance or applicability, and do not present systematic assessments of how these factors could influence the transferability of findings.

There are potential challenges related to considering issues related to context and relevance after the review is finished, or even after the analysis is concluded. Firstly, if review authors have not considered factors related to context at the review protocol stage, they may not have defined potential subgroup analyses and explanatory factors which could be used to explain heterogeneity of results from a meta-analysis. Secondly, relevant contextual information that could inform the review authors’ discussion of relevance may not have been extracted from included primary studies. To date, though, there is little guidance for a review author on how to systematically or transparently consider applicability of the evidence to the review context [ 30 ]. Not surprisingly, a review of 98 systematic reviews showed that only one in ten review teams discussed the applicability of results [ 31 ].

The second approach, which also comes late in the review process, is to consider relevance as part of an overall assessment of confidence in review findings. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) Approach for effectiveness evidence and the corresponding GRADE-CERQual approach for qualitative evidence [ 32 , 33 ] both support review authors in making judgments about how confident they are that the review finding is “true” (GRADE: “the true effect lies within a particular range or on one side of a threshold”; GRADE-CERQual: “the review finding is a reasonable representation of the phenomenon of interest” [ 33 , 34 ]). GRADE and GRADE-CERQual involve an assessment of a number of domains or components, including methodological strengths and weaknesses of the evidence base, and heterogeneity or coherence, among others [ 32 , 33 ]. However, the domain related to relevance of the evidence base to the review context (GRADE indirectness domain, GRADE-CERQual relevance component) appears to be of special concern for decision makers [ 3 , 35 ]. Too often, the assessments of indirectness or relevance that the review team makes are relatively crude – for example, based on the age of participants or the countries where the studies were carried out, features that are usually easy to assess but not necessarily the most important. This may be due to a lack of guidance for review authors on which factors to consider and how to assess them.

Furthermore, many review authors first begin to consider indirectness and relevance once the review findings have been developed. An earlier systematic and transparent consideration of transferability could influence many stages of the systematic review process and, in collaboration with stakeholders, could lead to a more thoughtful assessment of the GRADE indirectness domain and GRADE-CERQual relevance component. In Table  1 we describe a scenario where issues related to transferability are not adequately considered during the review process.

The need for contextualizing evidence

By engaging with stakeholders at an early stage of planning the review, review authors could ascertain what factors stakeholders judge to be important for their context and use this knowledge throughout the review process. Previous research indicates that decision makers’ perceptions of the relevance of the results and their applicability to policy facilitate the ultimate use of findings from a review [ 3 , 23 ]. These decision makers explicitly stated that summaries of reviews should include sections on relevance, impact and applicability for decision making [ 3 , 23 ]. Stakeholders are not the only source for identifying transferability factors, as other systematic reviews, implementation studies and qualitative studies may also provide relevant information regarding transferability of findings to specific contexts. However, this paper and the TRANSFER Approach focus on stakeholders specifically as it is our experience that stakeholders are often an underused resource for identifying and discussing transferability.

Working toward collaboration

Involving stakeholders in systematic review processes has long been advocated by research institutions and stakeholders alike as a necessary step in producing relevant and timely systematic reviews [ 36 – 38 ]. Dialogue with stakeholders is key for (a) defining a clear review question, (b) developing a common understanding of, for instance, the population, intervention, comparison and outcomes of interest, (c) understanding the review context, and (d) increasing acceptance among stakeholders of evidence-informed practice and of systematic reviews as methods for producing evidence [ 38 ]. Stakeholders themselves have indicated that improved collaboration with researchers could facilitate the (increased) use of review findings in decision making [ 3 ]. However, in practice, few review teams actively seek collaboration with relevant stakeholders [ 39 ]. This could be due to time or resource constraints or access issues [ 40 ]. There is currently work underway looking at how to identify and engage relevant stakeholders in the systematic review process (for example, Haddaway 2017 [ 41 ];).

For those review teams who do seek collaboration, there is little guidance available on how to collaborate in a structured manner, and we are not aware of any guidance specifically focused on supporting systematic review authors in considering the transferability of review findings from the beginning of the review process (i.e. before the findings have been developed) [ 42 ]. The guidance that is available either focuses on a narrow subset of research questions (e.g. healthcare), is intended to be used at the end of a review process [ 12 , 43 ], focuses on primary research rather than systematic reviews [ 44 ], or is theoretical in nature without any concrete stepwise guidance for review authors on how to consider and assess transferability [ 21 ]. Previous work has pointed out that stakeholders “need systematic and practically relevant knowledge on transferability. This may be supported through more practical tools, useful information about transferability, and close collaboration between research, policy, and practice” [ 21 ]. Other studies have also discussed the need for such practical tools, including more guidance for review authors that focuses on methods for (1) collaborating with end users to develop more precise and relevant review questions and identify a priori factors related to the transferability of review findings, and (2) systematically and transparently assessing the transferability of review findings to the review context, or a specific stakeholder’s context, as part of the review process [ 12 , 45 , 46 ].

The aim of the TRANSFER Approach is to support review authors in developing systematic reviews that are more useful for decision makers. TRANSFER provides guidance for review authors on how to consider and assess the transferability of review findings by collaborating with stakeholders to (a) define the review question, (b) identify factors a priori which may influence the transferability of review findings, and (c) define the characteristics of the context specified in the review with respect to the identified transferability factors.

The aim of this paper is to describe the development and application of the TRANSFER Approach, a novel approach for supporting collaboration between review authors and stakeholders from the beginning of the review process to systematically and transparently consider factors that may influence the transferability of systematic review findings.

We developed the TRANSFER Approach in three stages. In the first stage we held informal discussions with stakeholders to ascertain the usefulness of guidance on assessing and considering the transferability of review findings. An email invitation to participate in a focus group discussion was sent to nine representatives from five Norwegian directorates that regularly commission systematic reviews from the Norwegian Institute of Public Health. In the email we described that the aim of the discussion would be to discuss the possible usefulness of a tool to assess the applicability of systematic review findings to the Norwegian context. Four representatives from three directorates attended the meeting. The agenda for the discussion was a brief introduction to the terms and concepts “transferability” and “applicability”, followed by an overview of the TRANSFER Approach as a method for addressing transferability and applicability. Finally, we undertook an exercise to brainstorm factors that may influence the transferability of a specific intervention to the Norwegian context. Participants provided verbal consent to participate in the discussion. We did not use a structured conversation guide. We took notes from the meeting, and collated the transferability issues that were discussed. We also collated responses regarding the usefulness of spending time discussing transferability with review authors during a project as simple yes or no responses (as well as any details provided with the responses).

In the second stage we conducted a systematic mapping to uncover any existing checklists or other guidance for assessing the transferability of review findings, and conducted a content analysis of the identified checklists. We began by consulting systematic review authors in our network in March 2016 to get suggestions as to existing checklists or tools to assess transferability. In June 2016 we designed and conducted a systematic search of eight databases using search terms such as “transferability”, “applicability”, “generalizability”, etc. and “checklist”, “guideline”, “tool”, “criteria”, etc. We also conducted a grey literature search and searched the EQUATOR repository of checklists for relevant documents. Documents were included if they described a checklist or tool to assess transferability (or related concepts such as applicability, generalizability, etc.). We had no limitations related to publication type/status, language or date of publication. Documents that discussed transferability at a theoretical level or assessed the transferability of guidelines to local contexts were not included. The methods and results of this work are described in detail elsewhere (Munthe-Kaas H, Nøkleby H: The TRANSFER Framework for assessing transferability of systematic review findings, forthcoming). The output from this stage was a list of transferability factors, which became the basis for the initial version of a ‘conversation guide’ for use with stakeholders in identifying and prioritizing factors related to transferability.

In the third stage, we undertook meetings with stakeholders to explore the use of a structured conversation guide (based on results of the second stage) to discuss the transferability of review findings. We used the draft guide in meetings with stakeholders in three separate systematic review processes. We became aware of redundancies in the conversation guide through these meetings, and also of confusing language in the conversation guide. Based on this feedback and our notes from these meetings we then revised the conversation guide. The result of this process was a refined conversation guide as well as guidance for review authors on how to improve collaboration with stakeholders to consider transferability, and guidance on how to assess and present assessments of transferability.

In this section we begin by presenting the results of the exploratory work around transferability, including the discussions with stakeholders, and experiences of using a structured conversation guide in meetings with stakeholders. We then present the TRANSFER Approach that we subsequently developed, including the purpose of the TRANSFER Approach, how to use TRANSFER, and a worked example of TRANSFER in action.

Findings of the exploratory work to develop the TRANSFER Approach

Discussions with stakeholders

The majority of the 3 h discussion with stakeholders was spent on the exercise. We described for participants a systematic review that had recently been commissioned (by one of the directorates represented) on the effect of supported employment interventions for disabled people on employment outcomes. The participants brainstormed the potential differences between the Norwegian context and other contexts and how these differences might influence how the review findings could be used in the Norwegian context. The participants identified a number of issues related to the population (e.g., proportion of immigrants, education level, etc.), the intervention (the length of the intervention, etc.), the social setting (e.g., work culture, union culture, rural versus urban, etc.) and the comparison interventions (e.g., components of interventions given as part of “usual services”). After the exercise was completed, the participants debriefed on the usefulness of such an approach for thinking about the transferability of review findings at the beginning of the review process, in a meeting setting with review authors. All participants agreed that the discussion was (a) useful, and (b) worth a 2 to 3 h meeting at the beginning of the review process. There was, however, discussion regarding terminology, specifically who is responsible for determining transferability. One participant felt that the “applicability” of review findings should be determined by stakeholders, including decision makers, while “transferability” is a question that can be assessed by review authors. There was no consensus among participants regarding the most appropriate terms to use. We believe that the opinions expressed within this discussion may be related to language, for instance, how the Norwegian terms for ‘applicability’ and ‘transferability’ are used and interpreted.
The main findings from the focus group discussion were that stakeholders considered meeting with review authors early in the review process to discuss transferability factors to be a good use of time and resources.

Systematic mapping and content analysis of existing checklists

We identified 25 existing checklists that assess transferability or related concepts. Only four of these were intended for use in the context of a systematic review [ 14 , 43 , 45 , 47 ]. We did not identify any existing tools that covered our specific aims. Our analysis of the existing checklists identified seven overarching categories of factors related to transferability in the included checklists: population, intervention, implementation context (immediate), comparison condition, outcomes, environmental context, and researcher conduct [ 30 ]. The results of this mapping are reported elsewhere [ 30 ].

Using a structured conversation guide to discuss transferability

Both the review authors and stakeholders involved in the three systematic review processes where an early version of the conversation guide was piloted were favorable to the idea of using a structured approach to discussing transferability. The initial conversation guide that was used in meetings with the stakeholders was found to be too long and repetitive to use easily. The guide was subsequently refined to be shorter and to better reflect the natural patterns of discussion with stakeholders around a systematic review question (i.e. population, intervention, comparison, outcome).

The TRANSFER Approach: purpose

The exploratory work described above resulted in the TRANSFER Approach. The TRANSFER Approach aims to support review authors in systematically and transparently considering transferability of review findings from the beginning of the review process. It does this by providing review authors with structured guidance on how to collaborate with stakeholders to identify transferability factors, and how to assess the transferability of the review findings to the review context or other local contexts (see Fig.  1 ).

Fig. 1 TRANSFER diagram

The TRANSFER Approach is intended for use in all types of reviews. However, as of now, it has only been tested in reviews of effectiveness related to population level interventions.

How to use TRANSFER in a systematic review

The TRANSFER Approach is divided into seven stages that mirror the systematic review process. Table  2 outlines the stages of the TRANSFER Approach and the corresponding guidance and templates that support review authors in considering transferability at each stage (see Table  3 ). During these seven stages, review authors make use of the two main components of the TRANSFER Approach: (1) guidance for review authors on how to consider and assess transferability of review findings (including templates), and (2) a Conversation Guide to use with stakeholders in identifying and prioritizing factors related to transferability.

What is new and what are the implications of the TRANSFER Approach?

Table 2 TRANSFER Approach in the systematic review process – overview of relevant people and components involved in each stage

Once systematic review authors have gone through the seven stages outlined in Table  3 , they come up with assessments of concern regarding each transferability factor. This assessment should be expressed as no, minor, moderate or serious concerns regarding the influence of each transferability factor for an individual review finding. This assessment is made for each individual review finding because TRANSFER assessments are intended to support GRADE/CERQual assessments of indirectness/relevance, and the GRADE/CERQual approaches require the review author to make assessments for each individual outcome (for effectiveness reviews) or review finding (for qualitative evidence syntheses). Assessments must be made for each review finding individually because they may vary across outcomes. One transferability factor may affect a number of review findings (e.g., years of experience of mentors in a mentoring program), in the same way that one risk of bias factor (e.g., selection bias as a consequence of inadequate concealment of allocations before assignment) may affect multiple review findings. However, it is also the case that one transferability factor can affect these review findings differently (e.g., average education level of the population may influence one finding and not another), in the same way that one risk of bias factor may affect review findings differently (e.g., detection bias, due to lack of blinding of outcome assessment, may be less important for objective findings, such as death). An overall TRANSFER assessment of transferability is then made by the review authors (also expressed as no, minor, moderate or serious concerns), based on the assessment for each transferability factor. Review authors should then provide an explanation for the overall TRANSFER assessment and an indication of how each transferability factor may influence the finding (e.g. direction and/or size of the effect estimate).
Guidance on making assessments is discussed in greater detail below. In this paper, we have, for simplicity, described transferability factors as individual and mutually exclusive constructs. Through our experience in applying TRANSFER, however, we have seen that transferability factors can influence and amplify each other. While the current paper does not address these potential interactions, review authors will need to consider whether transferability factors influence each other or whether one factor amplifies the influence of another factor. For example, primary care health facilities in rural settings may have both fewer resources and poorer access to referral centres, both of which may interact to negatively impact on health outcomes.
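The per-finding assessment structure described above can be sketched as a small data model. This is an illustrative sketch only, not part of the TRANSFER Approach itself: the factor names are hypothetical, and the rule of taking the most serious per-factor concern as the starting point for the overall judgment is our assumption — the Approach leaves the overall judgment, and its justification, to the review authors.

```python
from enum import IntEnum

class Concern(IntEnum):
    """TRANSFER concern levels, ordered from least to most serious."""
    NO = 0
    MINOR = 1
    MODERATE = 2
    SERIOUS = 3

def suggest_overall(factor_concerns: dict[str, Concern]) -> Concern:
    """Suggest a starting point for the overall TRANSFER assessment.

    Taking the worst per-factor concern is an assumption of this sketch;
    review authors make the final judgment and explain how each factor
    may influence the finding (e.g. direction/size of the effect).
    """
    return max(factor_concerns.values(), default=Concern.NO)

# One review finding, assessed against two hypothetical transferability factors.
finding_1 = {
    "mentor experience": Concern.MODERATE,
    "population education level": Concern.MINOR,
}
print(suggest_overall(finding_1).name)  # prints MODERATE
```

Representing the levels as an ordered enumeration mirrors the paper's point that assessments are made per finding: the same factor can carry different concern levels for different findings, so each finding gets its own mapping.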

TRANSFER in action

In the following section we present the stages of the TRANSFER Approach using a worked example. The scenario is based on a real review [ 48 ]. However, the TRANSFER Approach was not available when this review started and thus the conversation with decision makers was conducted post hoc. Furthermore, while the TRANSFER factors are those that the stakeholders identified, details related to both the review finding and the assessment of transferability were adapted for the purposes of this worked example in order to illustrate how TRANSFER could be applied to a review process. The scenario focuses on a situation where a review is commissioned and the stakeholders’ context is known. Where the decision makers and/or their context are not well known to the review team, the review team can still engage potential stakeholders with knowledge or experience related to the intervention being reviewed and the relevant contexts.

Stage 1: Establish the need for a systematic review

Either stakeholders (in commissioning a review) or a review team (if initiating a review themselves) can establish the need for a systematic review (see example provided in Table  4 ). The process of defining the review question and context begins only after some need for a systematic review is established.

Scenario – establishing the need for a systematic review

Stage 2a: Collaborate with stakeholders to refine the review question

After defining the need for a systematic review, the review team, together with stakeholders, needs to meet to refine the review question (see example provided in Table  5 ). Part of this discussion will need to focus on establishing the type of review question being asked, and the corresponding review methodology that will be used (e.g., a review to examine intervention effectiveness or a qualitative evidence synthesis to examine barriers and facilitators to implementing an intervention). The group will then need to define the review question including, for example, the population, intervention, comparison and outcomes. A secondary objective of this discussion is to ensure common understanding of the review question, including how the systematic review is intended to be used. During this meeting the review team and stakeholders can discuss and agree upon, for example, the type of population and intervention(s) they are interested in, the comparison(s) they think are the most relevant, and the outcomes they think are the most important. By using a structured template to guide this discussion, the review team can be sure they cover all topics and questions in a systematic fashion. We have developed and used a basic template for reviews of intervention effectiveness that review authors can use to lead this type of discussion with stakeholders (see Appendix 1 ). Future work will involve adapting this template to different types of review questions and processes.

Scenario – refining the review question

In some situations, such as in the example we provide, the scope of the review is broader (in this case, global) than the actual context specified in the review (in this case, Norway). The review may therefore include a broader set of interventions, population groups, or settings than the decision making context. Where the review scope is broader than the context specified in the review, a secondary review question can be added – for example, How do the results from this review transfer to a pre-specified context? Alternatively, where the context specified in the review is the same as the end users’ context, such a secondary question would be unnecessary. When the review context or the local context is defined at a country level, the review authors and stakeholders will likely be aware of heterogeneity within that context (e.g., states, neighbourhoods, etc.). However, it is still often possible (and necessary) to ascertain and describe a national context. We need to further explore how decision makers apply review findings to the multitude of local contexts within, for example, their national context. Finally, in a global review initiated by a review team rather than commissioned for a specific context, a secondary question on the transferability of the review findings to a pre-specified context is unlikely to be needed.

Stage 2b. Identify and prioritize TRANSFER factors

In the scenario discussed in Table  6 , stakeholders are invited to identify transferability factors through a structured discussion using the TRANSFER Conversation Guide (see Appendix 2 ). The identified factors are essentially hypotheses which need to be tested later in the review process. The aim of the type of consultation described above is to gather input from stakeholders regarding which contextual factors are believed to influence how/whether an intervention works. Where the review is initiated by the review team, the same process would be used, but with experts and people who are thought to represent stakeholders, rather than actual commissioners.

Scenario – identifying TRANSFER factors

The review authors may identify and use an existing logic model describing how the intervention under review works or another framework to initiate the discussion on transferability, for example to identify components of the intervention that could be especially susceptible to transferability factors or to highlight at what point in the course of the intervention transferability may become an issue [ 49 , 50 ]. More work is needed to examine how logic models can be used at the beginning of the systematic review in order to identify potential transferability factors.

During this stage, the group may identify multiple transferability factors. However, we suggest that the review team, together with stakeholders, prioritize these factors and only include the most important three to five factors in order to keep data extraction and subgroup analyses manageable. Limiting the number of factors to be examined is based on our experience of piloting the framework in systematic reviews, as well as on guidance for conducting and reporting subgroup analyses [ 51 ]. Guidance on prioritizing transferability factors is still to be developed.

In accordance with guidance for conducting subgroup analyses in effectiveness reviews, the review team should search for evidence to support the hypotheses that these factors influence transferability, and indicate what effect they are hypothesised to have on the review outcomes [ 51 ]. We do not yet know how best to do this in an efficient way. To date, the search for evidence to support hypothetical transferability factors has involved a grey literature search of key terms related to the identified TRANSFER factors together with key terms related to the intervention, as well as searching Epistemonikos for qualitative systematic reviews on the intervention being studied. Other approaches, however, may include searching databases such as Epistemonikos for systematic reviews related to the hypotheses, and/or focused searches of databases of primary studies such as MEDLINE, EMBASE, etc. Assistance of an information specialist may be helpful in designing these searches, and it may be possible to narrow the searches to specific contexts, which would reduce the number of records that need to be screened. The efforts made will need to be calibrated to the resources available and the approach used should be described clearly to enhance transparency. In the case where no evidence is available for a transferability factor that stakeholders believe to be important, the review team will need to decide whether or not to include that transferability factor (depending, for example, on how many other factors have been identified), and provide justification for its inclusion in the protocol. The identified factors should be included in the review protocol as the basis for potential subgroup analyses. Such subgroup analyses will assist the review team in determining whether or not, or to what extent, differences with respect to the identified factor influence the effect of the intervention. This is discussed in more detail under Stage 4.
In qualitative evidence syntheses, the review team may predefine subgroups according to transferability factors and contrast and compare perceptions/experiences/barriers/facilitators of different groups of participants according to the transferability factors.
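To make the link between transferability factors and subgroup analyses concrete, the sketch below pools hypothetical study effects within two levels of a single transferability factor using simple fixed-effect inverse-variance weighting. All study data and the factor levels are invented for illustration; a real review would use dedicated meta-analysis software and consider random-effects models and formal tests for subgroup differences.

```python
import math

def pooled_estimate(effects, ses):
    """Fixed-effect inverse-variance pooled estimate and standard error."""
    weights = [1.0 / se**2 for se in ses]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Hypothetical studies: (effect estimate, standard error, factor level).
# The factor here stands in for a prioritized TRANSFER factor, e.g. the
# comprehensiveness of "usual services" in the study setting.
studies = [
    (0.40, 0.10, "high-resource"),
    (0.35, 0.12, "high-resource"),
    (0.05, 0.11, "low-resource"),
    (0.10, 0.15, "low-resource"),
]

# Subgroup the studies by the transferability factor and pool each subgroup.
# A clear difference between subgroup estimates would support the hypothesis
# that the factor influences transferability of the finding.
for level in ("high-resource", "low-resource"):
    sub = [(e, s) for e, s, lvl in studies if lvl == level]
    est, se = pooled_estimate([e for e, _ in sub], [s for _, s in sub])
    print(f"{level}: pooled effect {est:.2f} (SE {se:.2f})")
```

In this invented example the two subgroup estimates diverge, which is exactly the pattern the protocol-stage hypotheses are meant to detect; when the estimates agree, the factor raises less concern in the later TRANSFER assessment.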

Stage 2c: Define characteristics of the review context related to TRANSFER factors

In an intervention effectiveness review, the review context is typically defined in the review question according to inclusion criteria related to the population, intervention, comparison and outcomes (see example provided in Table  7 ). We recommend that this be extended to include the transferability factors identified in Stage 2, so that an assessment of transferability can be made later in the review process. If the review context does not include details related to the transferability factors, the review authors will be unable to assess whether or not the included studies are transferable to the review context. In this stage the review team works with the stakeholders to specify how the identified transferability factors manifest themselves in the context specified in the review (e.g., global context and Norwegian context).

Scenario – defining characteristics of the review context related to TRANSFER factors

In cases where the review context is global, it may be challenging to specify characteristics of the global context for each transferability factor. Instead, the focus may be on assessing whether a sufficiently wide range of contexts is represented with respect to each transferability factor. Using the example above, the stakeholders and review team could decide that the transferability of the review findings would be strengthened if the included studies represented a range of usual housing services conditions, in terms of quality and comprehensiveness, or if studies from both warm and cold climate settings were included.

Stage 3: conduct the systematic review

Several stages of the systematic review process may be influenced by discussions with stakeholders that took place in Stage 2 and the transferability factors that have been identified (see example in Table  8 ). These include defining the inclusion criteria, developing the search strategy and developing the data extraction form. In addition to standard data extraction fields, the review authors will need to extract data related to the identified transferability factors. This is done in a systematic manner where review authors also note where the information is not reported. For some transferability factors, such as environmental context, additional information may be identified through external sources. For other types of factors it may be necessary to contact study authors for further information.
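One possible way to operationalize this kind of extraction is sketched below: each record carries the standard fields plus, for every transferability factor, a value and the source of that value, with "not reported" recorded explicitly rather than left blank. The factor names, study details and helper function are illustrative assumptions, not a prescribed TRANSFER template:

```python
# Hypothetical data-extraction sketch adding TRANSFER factors to the
# standard fields. Missing data is marked explicitly, and externally
# obtained information (Stage 3) records its source.
NOT_REPORTED = "not reported"
TRANSFER_FACTORS = ["comprehensiveness_of_usual_services", "climate"]

def extract(study_id, country, effect, transfer_data):
    """transfer_data maps factor name -> (value, source) for reported factors."""
    record = {"study": study_id, "country": country, "effect": effect}
    for factor in TRANSFER_FACTORS:
        value, source = transfer_data.get(factor, (NOT_REPORTED, None))
        record[factor] = value
        record[factor + "_source"] = source or "primary study"
    return record

row = extract(
    "Smith 2018", "Canada", -0.35,
    {"climate": ("cold", "external source: national statistics")},
)
print(row["comprehensiveness_of_usual_services"])  # not reported
```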

Scenario – conducting the systematic review

Stage 4: compare the included studies to the context specified in the review (global and/or local) with respect to TRANSFER factors

This stage involves organizing the included studies according to their characteristics related to the identified transferability factors. The review authors should record these characteristics in a table, which makes it easy to get an overview of the contexts of the studies included in the review (see example in Table  9 ). There are many ways to organize and present such an overview. In the scenario above, the review authors created simple dichotomous subcategories for each transferability factor, related to the local context specified in the secondary review question.
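A minimal sketch of such an overview, assuming hypothetical studies and two illustrative dichotomous factors, might look like this:

```python
from collections import defaultdict

# Hypothetical Stage 4 overview: included studies organised by dichotomous
# subcategories of each TRANSFER factor (all names and data illustrative).
studies = [
    {"id": "A", "climate": "cold", "services": "comprehensive"},
    {"id": "B", "climate": "warm", "services": "comprehensive"},
    {"id": "C", "climate": "cold", "services": "basic"},
]

def overview(studies, factors):
    """Map each factor -> subcategory -> list of study IDs."""
    table = {f: defaultdict(list) for f in factors}
    for s in studies:
        for f in factors:
            table[f][s[f]].append(s["id"])
    return table

tab = overview(studies, ["climate", "services"])
for factor, groups in tab.items():
    for level, ids in groups.items():
        print(f"{factor:10s} {level:14s} {', '.join(ids)}")
```

The resulting grouping makes it immediately visible which subcategories are well covered by the included studies and which are represented by few or no studies.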

Scenario – Comparing the contexts of the included studies to the context specified in the review

Stage 5: assess the transferability of review findings

Review authors should assess the transferability of a review finding to the review context, and in some cases may also consider a local context (see example in Table  10 ). When a review context is global, the review team may have fewer concerns regarding transferability if the data come from studies from a range of contexts, and the results from the individual studies are consistent. If there is an aspect of context for which there is no evidence, this can be highlighted in the discussion.

Scenario – assessing the transferability of review findings to the context specified in the review

[Table 10]

In summary, when assessing transferability to a secondary context, the review team may:

  • Consider conducting a subgroup, or regression, analysis for each transferability factor to explore the extent to which this is likely to influence the transferability of the review finding. The review team should follow standards for conducting subgroup analyses [ 51 , 53 , 54 ].
  • Interpret the results of the subgroup or regression analysis for each transferability factor and record whether they have no, minor, moderate or serious concerns regarding the transferability of the review finding to the local context.
  • Make an overall assessment (no, minor, moderate or serious concerns) regarding the transferability of the review finding based on the concerns identified for each individual transferability factor. At the time of publication, we are developing more examples for review authors and guidance on how to make this overall assessment.
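The article notes that guidance on making the overall assessment is still being developed. Purely as an illustrative assumption, the sketch below combines per-factor concern levels by taking the most serious one; this worst-case rule is one simple possibility, not the authors' prescribed method:

```python
# Hypothetical sketch: deriving an overall TRANSFER assessment from
# per-factor concern levels. Taking the most serious per-factor concern
# is an assumed rule for illustration only.
LEVELS = ["no", "minor", "moderate", "serious"]

def overall_assessment(per_factor_concerns):
    """per_factor_concerns: mapping of TRANSFER factor -> concern level."""
    return max(per_factor_concerns.values(), key=LEVELS.index)

concerns = {"climate": "minor", "services": "moderate"}
print(overall_assessment(concerns))  # moderate
```

Whatever rule is chosen, applying it consistently and reporting it explicitly supports the transparency that the overall TRANSFER assessment requires.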

The overall TRANSFER assessment involves subjective judgements and it is therefore important for review authors to be consistent and transparent in how they make these assessments (see Appendix 4 ).

Stage 6: Apply GRADE for effectiveness or GRADE-CERQual to assess certainty/confidence in review findings

TRANSFER assessments can be used alone to present assessments of the transferability of a review finding in cases where the review authors have chosen not to assess certainty in the evidence. However, we propose that TRANSFER assessments can also be used to support indirectness assessments in GRADE (see example in Table  11 ). Similar to how the Risk of Bias tool or other critical appraisal tools support the assessment of risk of bias in GRADE, the TRANSFER Approach can be used to increase the transparency of judgements made for the indirectness domain [ 55 ]. The advantages of using the TRANSFER Approach to support this assessment are:

  • Factors that may influence transferability are carefully considered a priori, in collaboration with stakeholders;
  • The GRADE table is supported by a transparent and systematic assessment of these transferability factors for each outcome, and the evidence available for these;
  • Stakeholders in other contexts are able to clearly see the basis for the indirectness assessment, make an informed decision regarding whether the indirectness assessment would change for their context, and make their own assessment of transferability related to these factors. In some cases the transferability factors identified and assessed in the systematic review may differ from factors which may be considered important to other stakeholders adapting the review findings to their local context (e.g., in the scenario described above, stakeholders using the review findings in a low income, warmer country with a less comprehensive welfare system).

Scenario – assessing certainty in the review findings

[Table 11]

Future work will be needed to develop methods of communicating the transferability assessment, how it is expressed in relation to a GRADE assessment and how to ensure that a clear distinction is made between TRANSFER assessments for a global context and, where relevant, a pre-specified local context.

Stage 7: Discuss transferability of review findings

In some instances it will be possible to discuss the transferability of the review findings with stakeholders prior to publication of the systematic review in order to ensure that the review team has adequately considered the TRANSFER factors as they relate to the context specified in the review (see example in Table  12 ). In many cases this will not be possible, and any input from stakeholders will be post-publication, if at all.

Scenario – discussing transferability of the review findings

To our knowledge, the TRANSFER Approach is the first attempt to consider the transferability of review findings to the context(s) specified in the review in a systematic and transparent way, from the beginning of the review process through to supporting assessments of certainty and confidence in the evidence for a review finding. Furthermore, it is the only known framework that gives clear guidance on how to collaborate with stakeholders to assess transferability. This guidance can be used in systematic reviews of effectiveness and qualitative evidence syntheses and could be applied to any kind of decision making [ 43 ].

The framework is under development and more user testing is needed to refine the conversation guide, transferability assessment methods, and presentation. Furthermore, it has not yet been applied in a qualitative evidence synthesis, and further guidance may be needed in order to support that process.

Using TRANSFER in a systematic review

We have divided the framework into seven stages, and have provided guidance and templates for review authors for each stage. The first two stages are intended to support the development of the protocol, while stages three through seven are intended to be incorporated into the systematic review process.

The experience of review teams in the three reviews where TRANSFER has been applied (at the time this article is published) has uncovered potential challenges when applying TRANSFER. One challenge is related to reporting: the detail in which interventions, context and population characteristics are reported in primary studies is not always sufficient for the purposes of TRANSFER, as has been noted by others [ 56 , 57 ]. With the availability of tools such as the TIDieR checklist and a number of CONSORT extensions, we hope that reporting will improve and that the information review authors seek will be more readily available [ 58 – 60 ].

Our experience thus far has been that details concerning many of the TRANSFER factors prioritized by the stakeholders are not reported in the studies included in systematic reviews. In one systematic review on the effect of digital couples therapy compared to in-person therapy or no therapy, digital competence was identified as a TRANSFER factor [ 61 ]. The individual studies did not report this, so the review team examined national statistics for each of the studies included and reported this in the data extraction form [ 61 ]. The review team was unable to conduct a subgroup analysis for the TRANSFER factor. However, by comparing Norway’s national level of digital competence to that of the countries where the included studies were conducted, the authors were able to discuss transferability with respect to digital competence in the discussion section of the review [ 61 ]. They concluded that since the level of digital competence was similar in the countries of the included studies and Norway, the review authors had few concerns that this would be likely to influence the transferability of the review findings [ 61 ]. Without having identified this with stakeholders at the beginning of the process, there likely would have been no discussion of transferability, specifically the importance of digital competence in the population. Thus, even when it is not possible to do a subgroup analysis using TRANSFER factors, or even extract data related to these factors, the act of identifying these factors can contribute meaningfully to subsequent discussions of transferability.

Using TRANSFER in a qualitative evidence synthesis

Although we have not yet used TRANSFER as part of a qualitative evidence synthesis, we believe that the process would be similar to that described above. The overall TRANSFER assessment could inform the GRADE-CERQual component relevance. A research agenda is in place to examine this further.

TRANSFER for decision making

The TRANSFER Approach has two important potential impacts for stakeholders, especially decision makers: an assessment of the transferability of review findings, and a close(r) collaboration with review authors in refining the systematic review question and scope. A TRANSFER assessment provides stakeholders with (a) an overall assessment of the transferability of the review finding to the context(s) of interest in the review, and details regarding (b) whether and how the studies contributing data to the review finding differ from the context(s) of interest in the review, and (c) how any differences between the contexts of the included studies and the context(s) of interest in the review could influence the transferability of the review finding(s) (e.g. direction or size of effect). The TRANSFER assessment can also be used by stakeholders from other contexts to make an assessment of the transferability of the review findings to their own local context. Linked to this, TRANSFER assessments provide systematic and transparent support for assessments of the indirectness domain within GRADE and the relevance component within GRADE-CERQual. TRANSFER is a work in progress, and there are numerous avenues which need to be further investigated (see Table  13 ).

TRANSFER in progress – priorities for further research

The TRANSFER Approach also supports a closer collaboration between review authors and stakeholders early in the review process, which may result in more relevant and precise review questions, greater consideration of issues important to the decision maker, and better buy-in from stakeholders in the use of systematic reviews in evidence-based decision making [ 2 ].

The TRANSFER Approach is intended to support review authors in collaborating with stakeholders to ensure that review questions are framed in a way that is most relevant for decision making and to systematically and transparently consider the transferability of review findings. Many review authors already consider issues related to the transferability of findings, especially those applying the GRADE for effectiveness (indirectness domain) or GRADE-CERQual (relevance domain) approaches, and many review authors may engage with stakeholders. However, current approaches to considering and assessing transferability appear to be ad hoc at best. Consequently, it often remains unclear to stakeholders how issues related to transferability were considered by review authors. By collaborating with stakeholders early in the systematic review process, review authors can ensure more precise and relevant review questions and an informed consideration of issues related to the transferability of the review findings. The TRANSFER Approach may therefore help to ensure that systematic reviews are relevant to and useful for decision making.

Acknowledgements

We would like to acknowledge the intellectual support and guidance of Rigmor Berg from the Norwegian Institute of Public Health, as well as researchers from the Division for Health Services and representatives from various Norwegian welfare directorates who participated in the focus group for stakeholders. We would also like to acknowledge Eva Rehfuess, who identified the example used under "Context matters", and Josh Vogel for his assistance regarding the example. We would like to thank Sarah Rosenbaum from the Norwegian Institute of Public Health for designing the diagram in Fig. 1.

Abbreviations

Sample PICO clarification template

TRANSFER Conversation Guide

This is the most current version of the conversation guide and was developed based on feedback from review teams and stakeholders who used the previous tested version. Further testing of this version is planned

TRANSFER characteristics of context

TRANSFER Characteristics of review context

TRANSFER Characteristics of secondary context

TRANSFER Table of Included Studies

TRANSFER Table of Included Studies for review context

TRANSFER Table of Included Studies for secondary context

TRANSFER assessment

TRANSFER Assessment –context specified in the review

TRANSFER Assessment – secondary context

Authors’ contributions

HMK and HN developed the framework and wrote the manuscript. CG and SL provided guidance on the development of the framework and gave feedback on the manuscript. All authors have read and approved the manuscript.

Funding

The authors’ research time was funded by the Norwegian Institute of Public Health. The authors also received funding from the Campbell Collaboration Methods Grant to support the development of the TRANSFER Approach. SL receives additional funding from the South African Medical Research Council. The funding bodies played no role in the design of the study, the collection, analysis, or interpretation of data or in writing the manuscript.

Availability of data and materials

Ethics approval and consent to participate

Not applicable. This study did not undertake any formal data collection involving humans or animals. Participants in the informal focus group discussion provided verbal consent to participate. According to the Norwegian Centre for Research Data (nsd.no) online portal for determining registration of studies, this study did not warrant application for review by an ethical committee https://nsd.no/personvernombud/en/notify/index.html .

Consent for publication

Not applicable.

Competing interests

HMK, CG and SL are co-authors of the GRADE-CERQual approach and lead co-ordinators of the GRADE-CERQual coordinating group. HN has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Heather Munthe-Kaas, Email: [email protected] .

Heid Nøkleby, Email: [email protected] .

Simon Lewin, Email: [email protected] .

Claire Glenton, Email: [email protected] .


Literature reviews


A standard section of a thesis or major project is the literature review. As the name suggests, if you're completing a literature review, you will be examining the existing literature on a chosen topic, which will allow you:

to identify gaps in current knowledge;

to avoid reinventing the wheel (at the very least this will save time and it can stop you from making the same mistakes as others);

to carry on from where others have already reached (reviewing the field allows you to build on the platform of existing knowledge and ideas);

to identify other people working in the same and related fields (they provide you with a researcher network, which is a valuable resource indeed);

to increase your breadth of knowledge of the area in which your subject is located;

to identify seminal works in your area;

to provide the intellectual context for your own work (this will enable you to position your project in terms of related work);

to identify opposing views;

to put your own work in perspective;

to provide evidence that you can access the previous significant work in an area;

to discover transferable information and ideas (information and insights that may be relevant to your own project); and 

to discover transferable research methods (research methods that could be relevant to your own project). (Bourner & Greener, 2016, pp. 8-9) 

This narrated whiteboard video aims to demystify the process of writing a literature review and provide suggestions for how to get organized to write. The video uses a cocktail party analogy to illustrate the approach.

Bourner, T., & Greener, S. (2016). The research journey: Four steps to success. In T. Greenfield & S. Greener (Eds.), Research methods for postgraduates (pp. 7-12). John Wiley & Sons.

Literature review resources

Get Lit: The Literature Review (Dr. Candace Hastings, Texas A&M University Writing Center)

  • As suggested in the video, visit Theses and Projects for RRU examples.

Key Takeaways From the Psi Chi Webinar: So You Need to Write a Literature Review (APA Style)

Literature Review Tutorial (American University Library)

Literature Reviews (University of Waterloo)

SAGE Research Methods

  • Reviewing the Literature
  • Writing Up: How Do I Write My Literature Review? (Log in with RRU username and password)

Sample literature review: Critical Thinking and Transferability: A Review of the Literature (Reece, 2002)

Sample outline: See the sections relating to the literature review section of a major research paper in  Outlining a Research Paper  (©2011  Amy L. Stuart , Associate Professor, University of South Florida)

Synthesis Table for Literature Reviews (CSUMB Library)


Critical Thinking and Transferability: A Review of the Literature

By Gwendolyn Reece, April 9, 2002

Since the 1960s, concern that American students may not be capable of transferring the skills they have gained from their education to the practical problems of life has troubled educators. Of greatest concern is whether students have mastered critical thinking, or higher order thinking skills, and can apply them outside of school curricula. These concerns have given rise to the Critical Thinking movement. To demonstrate that the movement is successful, it must prove that its efforts not only increase the critical thinking of students in school, but that students can transfer critical thinking to novel situations, including those encountered in daily life. The primary purpose of this review is to ascertain if there is compelling evidence that efforts to teach critical thinking have had this result.



What became apparent in the process of this review, however, was that several subsidiary problems must first be answered before the problem of evaluating the effectiveness of critical thinking transfer can be approached. The first of these problems is whether the movement has a common theme or definition of critical thinking. Second is the question: does critical thinking encompass creative thinking, or is it antithetical to it? The third problem might be formulated thus: is critical thinking generalizable, or is it tied to subject matter? The fourth problem is whether adequate evaluative measures of critical thinking are available to measure the effectiveness of efforts to teach critical thinking. Answering these prior questions is essential before inquiring whether there is compelling evidence that teaching critical thinking results in a transfer of skills or dispositions that students can use in other arenas.

This line of inquiry supplies the structure for this review of the relevant literature. The scope of this review is limited. Most critical thinking literature provides program and instructional technique description. This material is out of scope for this review except as it bears directly upon the question concerning subject-dependence in relation to critical thinking. Furthermore, although this review addresses the works of most seminal thinkers in the Critical Thinking movement, constraints and limited access to information mean that some major figures, such as Harvey Siegel, have not been included. Finally, although philosophical literature on this subject abounds, evaluative studies using either qualitative or quantitative methods to measure the effectiveness of whole programs are comparatively scarce.

I have included relevant examples of these studies, yet it can be said at the outset that the dearth of such studies needs to be redressed by the research community.

The Common Theme of the Critical Thinking Movement

[Margin notes: States why subject is important. States purpose of this review. Establishes limits of project's scope. Identifies gaps in existing research. Previews logical order the review will follow. Long reviews are often broken by headings and subheadings corresponding to the organization outlined in the introduction. Gwendolyn Reece might have written a traditional research paper describing the history of critical thinking in American education and containing quotations from scholarly sources to support her ideas. Instead, she presents a literature review focusing on the scholarly sources themselves. Reece explains how scholarly opinion has developed over time, where leading scholars agree and disagree, what sources carry more authority than others, and finally where more research is needed.]

The first step in understanding the Critical Thinking Movement is to uncover the essential characteristics of critical thought and examine the commonality of agendas for the Critical Thinking Movement. Proponents of the movement posit numerous reasons for teaching critical thinking. A common reason is a reflection of the shift in economic patterns away from an industrial society into arenas in which laborers must solve complex problems (Bloom; Reich; Paul; Nickerson). Another reason frequently proposed is that critical thinking skills are necessary for effective citizenship in a democracy, for example, in selecting leaders and being a juror (Ennis, Taxonomy; Paul; Nickerson).

Paul and Nickerson also call attention to the capacity of human beings for self-delusion and note that irrational human behavior causes great suffering in the world. They see critical thinking as the antidote. Finally, both these thinkers uphold the notion that thinking is a significant part of being human; therefore, mastery of critical thinking is necessary for being a fully developed human being. Another premise of the proponents of the Critical Thinking movement is that critical thinking does not always unfold naturally as a part of growth. Furthermore, critical thinking is not effectively taught in traditional school settings that rely heavily upon rote memorization and didactic teaching methods (Kennedy; Paul; Nickerson; Schrag). Therefore, leaders of the movement have developed numerous programs to teach critical thinking.

The common theme of the Critical Thinking movement is that critical thinking skills involve the ability to make reasonable decisions in complex situations, such as those found in a rapidly changing and complex society. The movement emphasizes knowing how more than knowing that (Roland). Furthermore, helping individuals gain these abilities requires a self-conscious attempt on the part of educators to address the cultivation of critical thinking by utilizing methods other than simply rote memorization and didactic instruction.

What is Critical Thinking?

The unity of the movement disintegrates once the question "what do you mean by critical thinking?" is asked. There is a significant divergence of opinion about what constitutes critical thinking. Some scholars identify critical thinking with the mastery of specific skill sets and provide schematics or taxonomies to express their inter-relationships.

A Committee of College and University Examiners created one of the early taxonomies (Bloom). Bloom and his colleagues identified six major classes of cognitive skills: Knowledge (by which they mean recall); Comprehension; Application; Analysis; Synthesis; and Evaluation. One reason for this construction is that the lower skills are required in order for the higher skills to be used. Comprehension requires Knowledge, or recall. Therefore, critical thinking, in Bloom's view, is gaining mastery of these skill sets and selecting the appropriate techniques when encountering a novel situation. The primary strength of Bloom's taxonomy is that it is logical and hierarchical, guiding the educator in a process leading from the most simple to the most complex form of cognitive skills. It is also comparatively easy to evaluate the mastery of these skills because they link to particular behaviors (Bloom 12).

Bloom supplies numerous evaluative techniques linked to the taxonomy. [Margin notes: A position held by a number of experts is summarized, and those experts are included in an in-text citation following MLA style. Note difference from an annotated bibliography: the same source may be cited more than once. Points out strengths and weaknesses of leaders in the field.] There are, however, disadvantages with Bloom's taxonomy. Historically, many teachers have used it as a cookbook without demonstrating critical thinking skills themselves (Paul 375-383). Paul also criticizes Bloom for overemphasizing recall and for insisting on neutrality. Paul believes that critical thinking should be used to reach substantial value judgments. Finally, Paul conceives Bloom's taxonomy as neglecting the dialectical dynamic of critical thinking.

With regard to his first point, Paul overstates his case; misuse of the taxonomy does not invalidate the design itself. The emphasis given to Knowledge, or recall, is more controversial, relating to the question of whether or not critical thinking is subject-dependent. I do believe that Paul is correct in criticizing Bloom's view that critical thinking is value neutral, since real life decisions are never value neutral; but again, that does not invalidate the structure of his taxonomy. The neglect of the dialectical process in critical thinking, however, is a substantial criticism that does seem borne out in the construction of the taxonomy, which is designed to flow from simple to complex. Another general criticism of defining critical thinking as being comprised of a set of skills is that critical thought also requires particular dispositions or habits to use those skills.


