
  • Review Article
  • Open access
  • Published: 11 January 2023

The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature

  • Enwei Xu   ORCID: orcid.org/0000-0001-6424-8169 1 ,
  • Wei Wang 1 &
  • Qingxia Wang 1  

Humanities and Social Sciences Communications, volume 10, Article number: 16 (2023)


Subjects: Science, technology and society

Collaborative problem-solving has been widely embraced in classroom instruction on critical thinking, which is regarded both as the core of competency-based curriculum reform and as a key competence for learners in the 21st century. However, the effectiveness of collaborative problem-solving in promoting students’ critical thinking remains uncertain. This research presents the major findings of a meta-analysis of 36 empirical studies published in international educational journals during the 21st century, conducted to identify the effectiveness of collaborative problem-solving in promoting students’ critical thinking and to determine, based on evidence, whether and to what extent collaborative problem-solving can raise or lower critical thinking. The findings show that (1) collaborative problem-solving is an effective teaching approach for fostering students’ critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]); (2) with respect to the dimensions of critical thinking, collaborative problem-solving significantly and successfully enhances students’ attitudinal tendency (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), whereas it falls short in improving students’ cognitive skills, having only an upper-middle impact (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]); and (3) the teaching type (χ² = 7.20, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), and learning scaffold (χ² = 9.03, P < 0.01) all moderate the effect on critical thinking and can be viewed as important factors that affect how critical thinking develops. On the basis of these results, recommendations are made for further research and instruction to better support students’ critical thinking in the context of collaborative problem-solving.


Introduction

Although critical thinking has a long history in research, the concept, regarded as an essential competence for learners in the 21st century, has recently attracted increased attention from researchers and teaching practitioners (National Research Council, 2012). Critical thinking should be at the core of competency-based curriculum reform (Peng and Deng, 2017) because students with critical thinking can not only understand the meaning of knowledge but also effectively solve practical problems in real life even after the knowledge itself is forgotten (Kek and Huijser, 2011). There is no universal definition of critical thinking (Ennis, 1989; Castle, 2009; Niu et al., 2013). In general, critical thinking is a self-aware and self-regulated thought process (Facione, 1990; Niu et al., 2013): it refers to the cognitive skills needed to interpret, analyze, synthesize, reason, and evaluate information, as well as the attitudinal tendency to apply these abilities (Halpern, 2001). The view that critical thinking can be taught and learned through curriculum teaching has been widely supported by researchers (e.g., Kuncel, 2011; Leng and Lu, 2020), leading educators to work to foster it among students. In teaching practice, there are three types of courses for teaching critical thinking (Ennis, 1989): an independent curriculum, in which critical thinking is taught and cultivated without involving the knowledge of specific disciplines; an integrated curriculum, in which critical thinking is integrated into the teaching of other disciplines as an explicit teaching goal; and a mixed curriculum, in which critical thinking is taught in parallel with the teaching of other disciplines. Furthermore, researchers and educators have developed numerous instruments to measure critical thinking in teaching practice, including standardized tools such as the WGCTA, CCTST, CCTT, and CCTDI, which have been verified by repeated experiments and are considered valid and reliable by international scholars (Facione and Facione, 1992). In short, these descriptions of critical thinking, its two dimensions of attitudinal tendency and cognitive skills, the different types of teaching courses, and the standardized measurement tools provide a normative framework for understanding, teaching, and evaluating critical thinking.

Cultivating critical thinking in curriculum teaching can start with a problem, and one of the most popular instructional approaches is problem-based learning (Liu et al., 2020). Duch et al. (2001) noted that problem-based learning in group collaboration is a progressive form of active learning that can improve students’ critical thinking and problem-solving skills. Collaborative problem-solving is the organic integration of collaborative learning and problem-based learning: it takes learners as the center of the learning process and uses ill-structured problems in real-world situations as its starting point (Liang et al., 2017). Students learn the knowledge needed to solve problems in a collaborative group, reach a consensus on problems in the field, and form solutions through social methods such as dialogue, interpretation, questioning, debate, negotiation, and reflection, thus promoting the development of learners’ domain knowledge and critical thinking (Cindy, 2004; Liang et al., 2017).

Collaborative problem-solving has been widely used in the teaching practice of critical thinking, and several studies have attempted systematic reviews and meta-analyses of the empirical literature on critical thinking from various perspectives. However, little attention has been paid to the impact of collaborative problem-solving on critical thinking, and how best to implement critical thinking instruction within collaborative problem-solving remains unexplored, which means that many teachers lack guidance for teaching critical thinking effectively (Leng and Lu, 2020; Niu et al., 2013). For example, Huber (2016) reported meta-analysis findings from 71 publications on gains in critical thinking over various time frames in college, with the aim of determining whether critical thinking is truly teachable. That study found that learners significantly improve their critical thinking while in college and that gains differ with factors such as teaching strategy, intervention duration, subject area, and teaching type; however, it did not determine the usefulness of collaborative problem-solving in fostering students’ critical thinking, nor did it reveal whether there were significant variations among the different elements. Liu et al. (2020) conducted a meta-analysis of 31 educational studies to assess the impact of problem-based learning on college students’ critical thinking; they found that problem-based learning could promote the development of critical thinking among college students and proposed establishing a reasonable group structure for problem-solving in follow-up studies. Additionally, previous empirical studies have reached inconclusive and even contradictory conclusions about whether and to what extent collaborative problem-solving increases or decreases critical thinking levels. As an illustration, Yang et al. (2008) carried out an experiment on integrated curriculum teaching for college students based on a web bulletin board, with the goal of fostering participants’ critical thinking in the context of collaborative problem-solving; they found that through sharing, debating, examining, and reflecting on various experiences and ideas, collaborative problem-solving can considerably enhance students’ critical thinking in real-life problem situations. In contrast, research by Naber and Wyatt (2014) and Sendag and Odabasi (2009) on undergraduate and high school students, respectively, found that collaborative problem-solving had a positive impact on learners’ interaction and could improve learning interest and motivation but could not significantly improve students’ critical thinking compared to traditional classroom teaching.

The above studies show that findings regarding the effectiveness of collaborative problem-solving in promoting students’ critical thinking are inconsistent. Therefore, it is essential to conduct a thorough and trustworthy review to determine whether and to what degree collaborative problem-solving raises or lowers critical thinking. Meta-analysis is a quantitative approach for synthesizing data from separate studies focused on the same research topic: it characterizes an intervention’s effectiveness by averaging the effect sizes of numerous quantitative studies, reducing the uncertainty of individual studies and producing more conclusive findings (Lipsey and Wilson, 2001).
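To make the averaging idea concrete, the sketch below computes an inverse-variance weighted mean effect size, the basic building block of the pooled estimates reported later. The effect sizes and variances are hypothetical and are not data from this review.

```python
import numpy as np

def fixed_effect_mean(effect_sizes, variances):
    """Inverse-variance weighted mean effect size (fixed-effect model).

    Each study is weighted by the reciprocal of its sampling variance,
    so more precise studies contribute more to the pooled estimate.
    """
    w = 1.0 / np.asarray(variances)
    es = np.asarray(effect_sizes)
    pooled = np.sum(w * es) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))  # standard error of the pooled estimate
    return pooled, se

# Hypothetical effect sizes and variances from three studies
pooled, se = fixed_effect_mean([0.6, 0.9, 0.4], [0.04, 0.09, 0.02])
print(f"pooled ES = {pooled:.2f}, SE = {se:.2f}")
```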

This paper therefore carried out a meta-analysis of the effectiveness of collaborative problem-solving in promoting students’ critical thinking, in order to contribute to both research and practice. The following research questions were addressed:

What is the overall effect size of collaborative problem-solving in promoting students’ critical thinking and its impact on the two dimensions of critical thinking (i.e., attitudinal tendency and cognitive skills)?

If the effects of the various experimental designs in the included studies are heterogeneous, how do the moderating variables account for the disparities between study conclusions?

This research followed the strict procedures (e.g., database searching, identification, screening, eligibility checking, merging, duplicate removal, and analysis of included studies) of the meta-analysis approach proposed by Cooper (2010). The relevant empirical research published in international educational journals during the 21st century was meta-analyzed using RevMan 5.4. The consistency of the data extracted separately by two researchers was tested using Cohen’s kappa coefficient, and a publication bias test and a heterogeneity test were run on the sample data to ascertain the quality of this meta-analysis.
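As an illustration of the agreement check, the minimal sketch below computes Cohen’s kappa for two coders’ categorical decisions. The include/exclude labels and data are hypothetical, not the actual screening records of this study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical decisions on the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # observed agreement: proportion of items both coders labeled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # chance agreement expected from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude decisions on ten candidate articles
a = ["include", "exclude", "include", "include", "exclude",
     "include", "exclude", "exclude", "include", "include"]
b = ["include", "exclude", "include", "exclude", "exclude",
     "include", "exclude", "exclude", "include", "include"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.80 for these labels
```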

Data sources and search strategies

There were three stages to the data collection process for this meta-analysis, as shown in Fig. 1, which gives the number of articles included and eliminated during the selection process according to the study eligibility criteria.

Fig. 1 Flowchart of the number of records identified, included, and excluded during study selection.

First, the databases systematically searched for relevant articles were the Web of Science Core Collection and, within CNKI, the Chinese core journals and the Chinese Social Science Citation Index (CSSCI) source journals. These databases were selected because they are credible sources of scholarly, peer-reviewed literature relevant to our topic and offer advanced search tools. The Boolean search string used in the Web of Science was “TS = (((“critical thinking” or “ct” and “pretest” or “posttest”) or (“critical thinking” or “ct” and “control group” or “quasi experiment” or “experiment”)) and (“collaboration” or “collaborative learning” or “CSCL”) and (“problem solving” or “problem-based learning” or “PBL”))”. The research area was “Education Educational Research”, and the search period was January 1, 2000, to December 30, 2021; a total of 412 papers were obtained. The Boolean search string used in CNKI was “SU = (‘critical thinking’*‘collaboration’ + ‘critical thinking’*‘collaborative learning’ + ‘critical thinking’*‘CSCL’ + ‘critical thinking’*‘problem solving’ + ‘critical thinking’*‘problem-based learning’ + ‘critical thinking’*‘PBL’ + ‘critical thinking’*‘problem oriented’) AND FT = (‘experiment’ + ‘quasi experiment’ + ‘pretest’ + ‘posttest’ + ‘empirical study’)” (translated into Chinese when searching); a total of 56 studies were found over the search period of January 2000 to December 2021. All duplicates and retractions were eliminated before exporting the references into EndNote, a program for managing bibliographic references. In all, 466 studies were found.

Second, the studies that matched the inclusion and exclusion criteria for the meta-analysis were chosen by two researchers after they had reviewed the abstracts and titles of the gathered articles, yielding a total of 126 studies.

Third, two researchers thoroughly reviewed the full text of each remaining article in accordance with the inclusion and exclusion criteria. Meanwhile, a snowball search of the references and citations of the included articles was performed to ensure complete coverage. Ultimately, 36 articles were kept.

Two researchers carried out this entire process together, reaching a consensus rate of 94.7%, with any emerging differences clarified through discussion and negotiation.

Eligibility criteria

Since not all the retrieved studies matched the criteria for this meta-analysis, eligibility criteria for both inclusion and exclusion were developed as follows:

The publication language of the included studies was limited to English and Chinese, and the full text had to be obtainable. Articles that did not meet the language requirement, or that were not published between 2000 and 2021, were excluded.

The research design of the included studies had to be empirical and quantitative so that the effect of collaborative problem-solving on the development of critical thinking could be assessed. Articles that could not identify the causal mechanisms by which collaborative problem-solving affects critical thinking, such as review articles and theoretical articles, were excluded.

The research method of the included studies had to be a randomized controlled experiment, a quasi-experiment, or a natural experiment; these designs have a higher degree of internal validity and can plausibly provide evidence that collaborative problem-solving and critical thinking are causally related. Articles using non-experimental methods, such as purely correlational or observational studies, were excluded.

The participants of the included studies were only students in school, including K-12 students and college students. Articles in which the participants were non-school students, such as social workers or adult learners, were excluded.

The research results of the included studies had to report the statistics needed to gauge the impact on critical thinking (e.g., sample size, mean, and standard deviation). Articles that lacked specific measurement indicators for critical thinking, so that an effect size could not be calculated, were excluded.

Data coding design

In order to perform a meta-analysis, it is necessary to collect the most important information from the articles, codify that information’s properties, and convert descriptive data into quantitative data. Therefore, this study designed a data coding template (see Table 1 ). Ultimately, 16 coding fields were retained.

The data-coding template consisted of three parts. The descriptive information covered basic facts about each paper: publishing year, author, serial number, and title.

The variable information for the experimental design covered three kinds of variables: the independent variable (instruction method), the dependent variable (critical thinking), and the moderating variables (learning stage, teaching type, intervention duration, learning scaffold, group size, measuring tool, and subject area). In line with the topic of this study, the intervention strategy, as the independent variable, was coded as collaborative or non-collaborative problem-solving. The dependent variable, critical thinking, was coded as cognitive skills or attitudinal tendency. Seven moderating variables were created by grouping and combining the experimental design variables found in the 36 studies (see Table 1): learning stages were encoded as higher education, high school, middle school, and primary school or lower; teaching types were encoded as mixed, integrated, and independent courses; intervention durations were encoded as 0–1 weeks, 1–4 weeks, 4–12 weeks, and more than 12 weeks; group sizes were encoded as 2–3 persons, 4–6 persons, 7–10 persons, and more than 10 persons; learning scaffolds were encoded as teacher-supported, technique-supported, and resource-supported; measuring tools were encoded as standardized measurement tools (e.g., WGCTA, CCTT, CCTST, and CCTDI) or self-adapted measurement tools (i.e., modified or made by researchers); and subject areas were encoded according to the specific subjects used in the 36 included studies.
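To illustrate the coding scheme, one record following this template might look as sketched below. The field names and values are ours for illustration only; they are not drawn verbatim from Table 1 or Supplementary Table S1.

```python
# One hypothetical coding record following the template described above.
record = {
    # descriptive information
    "serial_number": 7,
    "author": "Doe",
    "year": 2015,
    "title": "An example study",
    # variable information
    "independent_variable": "collaborative problem-solving",  # vs. non-collaborative
    "dependent_variable": "cognitive skills",                 # or "attitudinal tendency"
    "learning_stage": "higher education",
    "teaching_type": "integrated course",
    "intervention_duration": "4-12 weeks",
    "group_size": "4-6 persons",
    "learning_scaffold": "teacher-supported",
    "measuring_tool": "standardized (CCTST)",
    "subject_area": "science",
    # data information used to compute the effect size
    "n_treatment": 45, "n_control": 43,
    "mean_treatment": 76.2, "mean_control": 70.5,
    "sd_treatment": 8.1, "sd_control": 8.8,
}
```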

The data information contained the three statistics used to measure critical thinking: sample size, mean, and standard deviation. It is important to note that studies with different experimental designs frequently require different formulas for the effect size; this paper used the standardized mean difference (SMD) formula proposed by Morris (2008, p. 369; see Supplementary Table S3).
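The sketch below implements the pretest-posttest-control-group SMD that Morris (2008) recommends (often labeled d_ppc2), assuming that is the variant used here: the treatment gain minus the control gain, standardized by the pooled pretest SD with a small-sample bias correction. The example values are hypothetical.

```python
import math

def morris_smd(m_pre_t, m_post_t, sd_pre_t, n_t,
               m_pre_c, m_post_c, sd_pre_c, n_c):
    """Pretest-posttest-control-group SMD (Morris, 2008, d_ppc2)."""
    df = n_t + n_c - 2
    # pooled pretest standard deviation across treatment and control groups
    sd_pre_pooled = math.sqrt(((n_t - 1) * sd_pre_t**2 +
                               (n_c - 1) * sd_pre_c**2) / df)
    c_p = 1 - 3 / (4 * df - 1)  # small-sample bias correction
    return c_p * ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pre_pooled

# Hypothetical pre/post means and pretest SDs for treatment and control groups
print(round(morris_smd(70.1, 78.4, 8.2, 40, 69.8, 72.0, 8.6, 38), 2))  # ~0.72
```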

Procedure for extracting and coding data

According to the data coding template (see Table 1), the information from the 36 papers was retrieved by two researchers, who then entered it into Excel (see Supplementary Table S1). In the data extraction procedure, results were extracted separately if an article contained several studies on critical thinking or if a study assessed different critical thinking dimensions. For instance, Tiwari et al. (2010) examined critical thinking outcomes at four time points, which were treated as separate studies, and Chen (2013) included the two outcome variables of attitudinal tendency and cognitive skills, which were regarded as two studies. After discussion and negotiation during data extraction, the consistency coefficient between the two researchers was 93.27%. Supplementary Table S2 details the key characteristics of the 36 included articles with 79 effect quantities, including descriptive information (e.g., publishing year, author, serial number, and title), variable information (e.g., independent, dependent, and moderating variables), and data information (e.g., means, standard deviations, and sample sizes). Following that, publication bias and heterogeneity tests were run on the sample data using RevMan 5.4, and the test results were used to conduct the meta-analysis.

Publication bias test

Publication bias arises when the sample of studies included in a meta-analysis does not accurately reflect the general state of research on the relevant subject; because it can impair the reliability and accuracy of a meta-analysis, the sample data must be checked for it (Stewart et al., 2006). A popular check is the funnel plot: publication bias is unlikely when the data points are dispersed symmetrically on either side of the average effect size and concentrated toward the top of the plot. The funnel plot for this analysis (see Fig. 2) shows the data evenly dispersed within the upper portion of the funnel, indicating that publication bias is unlikely here.

Fig. 2 Funnel plot for the publication bias test of the 79 effect quantities across the 36 included studies.

Heterogeneity test

The results of a heterogeneity test on the effect sizes guide the choice of effect model for a meta-analysis. It is common practice to gauge the degree of heterogeneity using the I² statistic: I² ≥ 50% is typically understood to denote medium-to-high heterogeneity, which calls for a random-effect model; otherwise, a fixed-effect model should be applied (Lipsey and Wilson, 2001). The heterogeneity test in this paper (see Table 2) yielded an I² of 86%, indicating significant heterogeneity (P < 0.01). To ensure accuracy and reliability, the overall effect size was therefore calculated using the random-effect model.
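A minimal sketch of the statistics behind this decision (Cochran’s Q and I²), with hypothetical inputs rather than the data of this review:

```python
import numpy as np

def heterogeneity(es, var):
    """Cochran's Q and the I^2 statistic for a set of effect sizes."""
    w = 1 / np.asarray(var)
    es = np.asarray(es)
    fixed = np.sum(w * es) / np.sum(w)  # fixed-effect pooled estimate
    q = np.sum(w * (es - fixed) ** 2)   # Cochran's Q
    df = len(es) - 1
    i2 = max(0.0, (q - df) / q) * 100   # % of variance beyond sampling error
    return q, i2

# Hypothetical effect sizes and variances
q, i2 = heterogeneity([0.3, 0.8, 1.4, 0.6, 1.1], [0.02, 0.05, 0.04, 0.03, 0.06])
model = "random" if i2 >= 50 else "fixed"
print(f"Q = {q:.2f}, I^2 = {i2:.0f}% -> use a {model}-effect model")
```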

The analysis of the overall effect size

Given the observed heterogeneity, this meta-analysis used a random-effect model to examine the 79 effect quantities from the 36 studies. In accordance with Cohen’s criterion (Cohen, 1992), the analysis results, shown in the forest plot of the overall effect (see Fig. 3), make clear that the cumulative effect size of collaborative problem-solving is 0.82, which is statistically significant (z = 12.78, P < 0.01, 95% CI [0.69, 0.95]) and indicates that collaborative problem-solving can encourage learners to practice critical thinking.

Fig. 3 Forest plot of the overall effect size across the 36 included studies.

In addition, this study examined the two distinct dimensions of critical thinking to better understand the precise contributions that collaborative problem-solving makes to its growth. The findings (see Table 3) indicate that collaborative problem-solving improves both cognitive skills (ES = 0.70) and attitudinal tendency (ES = 1.17), with significant intergroup differences (χ² = 7.95, P < 0.01). Although both dimensions improve, the improvement in students’ attitudinal tendency is much more pronounced, with a significant comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), whereas the gain in learners’ cognitive skills is more modest, at an upper-middle level (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).

The analysis of moderator effect size

A two-tailed test of the 79 effect quantities in the full forest plot revealed significant heterogeneity (I² = 86%, z = 12.78, P < 0.01), indicating differences between effect sizes that may have been influenced by moderating factors beyond sampling error. Therefore, subgroup analysis was used to explore the moderating factors that might produce this considerable heterogeneity, namely the learning stage, learning scaffold, teaching type, group size, intervention duration, measuring tool, and subject area covered by the 36 experimental designs, in order to further identify the key factors that influence critical thinking (a computational sketch of the subgroup test is given after these results). The findings (see Table 4) indicate that the moderating factors all have advantageous effects on critical thinking. The subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), learning scaffold (χ² = 9.03, P < 0.01), and teaching type (χ² = 7.20, P < 0.05) are all significant moderators that can be applied to support the cultivation of critical thinking. However, since the learning stage and the measuring tool did not differ significantly between subgroups (χ² = 3.15, P = 0.21 > 0.05, and χ² = 0.08, P = 0.78 > 0.05, respectively), we are unable to explain why these two factors matter for cultivating critical thinking in the context of collaborative problem-solving. The detailed results are as follows:

Various learning stages influenced critical thinking positively, without significant intergroup differences (χ² = 3.15, P = 0.21 > 0.05). High school showed the largest effect size (ES = 1.36, P < 0.01), followed by higher education (ES = 0.78, P < 0.01) and middle school (ES = 0.73, P < 0.01). These results show that, despite the learning stage’s beneficial influence on cultivating learners’ critical thinking, we are unable to explain why it is essential for cultivating critical thinking in the context of collaborative problem-solving.

Different teaching types had varying degrees of positive impact on critical thinking, with significant intergroup differences (χ² = 7.20, P < 0.05). The effect sizes were ranked as follows: mixed courses (ES = 1.34, P < 0.01), integrated courses (ES = 0.81, P < 0.01), and independent courses (ES = 0.27, P < 0.01). These results indicate that mixed courses are the most effective teaching type for cultivating critical thinking through collaborative problem-solving.

Various intervention durations significantly improved critical thinking, with significant intergroup differences (χ² = 12.18, P < 0.01). The effect sizes tended to increase with longer intervention durations, and the improvement in critical thinking reached a significant level (ES = 0.85, P < 0.01) after more than 12 weeks of training. These findings indicate that the effect on critical thinking is positively correlated with intervention duration, with longer interventions having greater effects.

Different learning scaffolds influenced critical thinking positively, with significant intergroup differences (χ² = 9.03, P < 0.01). The resource-supported learning scaffold (ES = 0.69, P < 0.01) and the technique-supported learning scaffold (ES = 0.63, P < 0.01) attained medium-to-high levels of impact, while the teacher-supported learning scaffold (ES = 0.92, P < 0.01) displayed a high level of significant impact. These results show that the teacher-supported learning scaffold has the greatest impact on cultivating critical thinking.

Various group sizes influenced critical thinking positively, and the intergroup differences were statistically significant (χ² = 8.77, P < 0.05). The effect on critical thinking showed a general declining trend with increasing group size: the overall effect size for groups of 2–3 persons was the largest (ES = 0.99, P < 0.01), and when the group size exceeded 7 persons, the improvement in critical thinking fell to the lower-middle level (ES < 0.5, P < 0.01). These results show that the impact on critical thinking is negatively related to group size; as group size grows, the overall impact declines.

Various measuring tools registered positive influences on critical thinking, without significant intergroup differences (χ² = 0.08, P = 0.78 > 0.05). The self-adapted measurement tools obtained an upper-medium level of effect (ES = 0.78), while the overall effect size of the standardized measurement tools was slightly larger and significant (ES = 0.84, P < 0.01). These results show that, despite the beneficial influence of the measuring tool on cultivating critical thinking, we are unable to explain why it matters for fostering the growth of critical thinking through collaborative problem-solving.

Different subject areas had varying degrees of positive impact on critical thinking, and the intergroup differences were statistically significant (χ² = 13.36, P < 0.05). Mathematics had the greatest overall impact, reaching a significant level of effect (ES = 1.68, P < 0.01), followed by science (ES = 1.25, P < 0.01) and medical science (ES = 0.87, P < 0.01), both of which also reached significant levels. Programming technology was the least effective (ES = 0.39, P < 0.01), with only a medium-low degree of effect compared to education (ES = 0.72, P < 0.01) and other fields such as language, art, and the social sciences (ES = 0.58, P < 0.01). These results suggest that scientific fields (e.g., mathematics, science) may be the most effective subject areas for cultivating critical thinking through collaborative problem-solving.
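The between-group χ² values in Table 4 come from a subgroup heterogeneity test. The sketch below shows one common form of that test (Q_between compared against a χ² distribution with k − 1 degrees of freedom), using fixed-effect pooling within subgroups for simplicity and hypothetical data for the "group size" moderator.

```python
import numpy as np
from scipy import stats

def subgroup_test(groups):
    """Between-group heterogeneity test for a moderator.

    `groups` maps each subgroup label to (effect_sizes, variances).
    Q_between is referred to a chi-square with (k - 1) df, where k is
    the number of subgroups.
    """
    pooled, weights = [], []
    for es, var in groups.values():
        w = 1 / np.asarray(var)
        pooled.append(np.sum(w * np.asarray(es)) / np.sum(w))
        weights.append(np.sum(w))
    pooled, weights = np.array(pooled), np.array(weights)
    grand = np.sum(weights * pooled) / np.sum(weights)
    q_between = np.sum(weights * (pooled - grand) ** 2)
    df = len(groups) - 1
    return q_between, df, stats.chi2.sf(q_between, df)

# Hypothetical subgroups for the "group size" moderator
groups = {
    "2-3 persons": ([1.1, 0.9, 1.0], [0.04, 0.05, 0.03]),
    "4-6 persons": ([0.7, 0.6], [0.04, 0.06]),
    ">7 persons":  ([0.4, 0.3], [0.05, 0.05]),
}
q, df, p = subgroup_test(groups)
print(f"Q_between = {q:.2f}, df = {df}, p = {p:.3f}")
```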

The effectiveness of collaborative problem solving with regard to teaching critical thinking

According to this meta-analysis, using collaborative problem-solving as an intervention strategy has a considerable impact on cultivating learners’ critical thinking as a whole and a favorable promotional effect on its two dimensions. Several studies have argued that collaborative problem-solving, the most frequently used critical thinking teaching strategy in curriculum instruction, can considerably enhance students’ critical thinking (e.g., Liang et al., 2017; Liu et al., 2020; Cindy, 2004), and this meta-analysis provides convergent data in support of that view. Thus, the findings not only effectively answer the first research question, regarding the overall effect on critical thinking and on its two dimensions (i.e., attitudinal tendency and cognitive skills), but also strengthen our confidence in cultivating critical thinking through collaborative problem-solving in classroom teaching.

Furthermore, the associated improvements in attitudinal tendency are much stronger, whereas the corresponding improvements in cognitive skills are only marginally better. According to some studies, cognitive skills differ from attitudinal tendency in classroom instruction: the former, as a key ability, is cultivated through gradual accumulation, while the latter, as an attitude, is affected by the teaching context (e.g., a novel and exciting teaching approach, challenging and rewarding tasks) (Halpern, 2001; Wei and Hong, 2022). Collaborative problem-solving as a teaching approach is exciting and interesting, as well as rewarding and challenging: because it takes learners as the focus and examines ill-structured problems in real situations, it can inspire students to fully realize their potential for problem-solving, which significantly improves their attitudinal tendency toward solving problems (Liu et al., 2020). Just as collaborative problem-solving influences attitudinal tendency, attitudinal tendency affects cognitive skills when solving a problem (Liu et al., 2020; Zhang et al., 2022), and stronger attitudinal tendencies are associated with better learning achievement and cognitive ability in students (Sison, 2008; Zhang et al., 2022). It can be seen that collaborative problem-solving affects both specific dimensions of critical thinking as well as critical thinking as a whole, and this study illuminates the nuanced links between cognitive skills and attitudinal tendencies. To fully develop students’ capacity for critical thinking, future empirical research should pay closer attention to cognitive skills.

The moderating effects of collaborative problem solving with regard to teaching critical thinking

To further explore the key factors that influence critical thinking, subgroup analysis was used to examine the moderating effects that might produce the considerable heterogeneity. The findings show that the moderating factors, namely the teaching type, learning stage, group size, learning scaffold, intervention duration, measuring tool, and subject area covered by the 36 experimental designs, could all support the cultivation of critical thinking through collaborative problem-solving. Among them, the effect size differences for the learning stage and the measuring tool are not significant, so we cannot explain why these two factors matter for cultivating critical thinking through collaborative problem-solving.

In terms of the learning stage, the various learning stages influenced critical thinking positively without significant intergroup differences, so we cannot explain why this factor is crucial for fostering the growth of critical thinking. Although higher education accounts for 70.89% of all the included empirical studies, high school may be the most suitable learning stage for fostering students’ critical thinking through collaborative problem-solving, since it has the largest overall effect size. This phenomenon may be related to students’ cognitive development and needs to be examined in follow-up research.

With regard to teaching type, mixed course teaching may be the best teaching method for cultivating students’ critical thinking. Relevant studies have shown that, in the actual teaching process, if students are trained in thinking methods alone, the methods they learn are isolated and divorced from subject knowledge, which hinders transfer; conversely, if students’ thinking is trained only within subject teaching, without systematic method training, it is challenging to apply to real-world circumstances (Ruggiero, 2012; Hu and Liu, 2015). Teaching critical thinking as a mixed course, in parallel with other subject teaching, achieves the best effect on learners’ critical thinking, and explicit critical thinking instruction is more effective than less explicit instruction (Bensley and Spero, 2014).

In terms of intervention duration, the overall effect size trends upward with longer intervention times; thus, intervention duration and the impact on critical thinking are positively correlated. Critical thinking, as a key competency for students in the 21st century, is difficult to improve meaningfully within a brief intervention; instead, it develops over a lengthy period through consistent teaching and the progressive accumulation of knowledge (Halpern, 2001; Hu and Liu, 2015). Therefore, future empirical studies ought to take these constraints into account and provide longer periods of critical thinking instruction.

With regard to group size, a group of 2–3 persons has the highest effect size, and the comprehensive effect size generally decreases as group size increases. This outcome is in line with earlier findings; for example, a group of two to four members has been found most appropriate for collaborative learning (Schellens and Valcke, 2006). The meta-analysis results also indicate that once the group size exceeds 7 persons, groups no longer produce better interaction and performance, although the improvement in critical thinking remains positive. This may be because learning scaffolds of technique support, resource support, and teacher support improve the frequency and effectiveness of interaction among group members, and a collaborative group with more members may increase the diversity of views, which is helpful for cultivating critical thinking through collaborative problem-solving.

With regard to the learning scaffold, all three kinds of learning scaffolds can enhance critical thinking. Among them, the teacher-supported learning scaffold has the largest overall effect size, demonstrating the interdependence of effective learning scaffolds and collaborative problem-solving. This outcome is in line with earlier findings: a successful strategy is to encourage learners to collaborate, come up with solutions, and develop critical thinking skills by using learning scaffolds (Reiser, 2004; Xu et al., 2022); learning scaffolds can lower task complexity and unpleasant feelings while enticing students to engage in learning activities (Wood et al., 2006); and learning scaffolds help students use learning approaches more successfully to adapt to the collaborative problem-solving process. The teacher-supported learning scaffold has the greatest influence on critical thinking in this process because it is more targeted, informative, and timely (Xu et al., 2022).

With respect to the measuring tool, although standardized instruments (such as the WGCTA, CCTT, and CCTST) have been acknowledged as reliable and valid by experts worldwide, only 54.43% of the studies included in this meta-analysis adopted them, and the results indicated no intergroup differences. These results suggest that not all teaching circumstances are appropriate for measuring critical thinking with standardized tools. As Simpson and Courtney (2002, p. 91) note, measuring tools for thinking ability have limits in assessing learners in educational situations and should be adapted appropriately to accurately capture changes in learners’ critical thinking. As a result, to gauge more fully and precisely how learners’ critical thinking has evolved, standardized measuring tools should be properly modified for collaborative problem-solving learning contexts.

With regard to the subject area, the comprehensive effect size of science subjects (e.g., mathematics, science, medical science) is larger than that of language arts and the social sciences. Recent international education reforms have noted that critical thinking is a basic component of scientific literacy. Students with scientific literacy can justify their judgments with accurate evidence and reasonable standards when facing challenges or ill-structured problems (Kyndt et al., 2013), which makes critical thinking crucial for developing scientific understanding and applying it to practical problem-solving related to science, technology, and society (Yore et al., 2007).

Suggestions for critical thinking teaching

Beyond the points discussed above, the following suggestions are offered for teaching critical thinking through collaborative problem-solving.

First, teachers should place special emphasis on the two core elements of collaboration and problem-solving and design real problems based on collaborative situations. This meta-analysis provides evidence that collaborative problem-solving has a strong synergistic effect on promoting students’ critical thinking. Asking questions about real situations and allowing learners to take part in critical discussions on real problems during class are key ways to teach critical thinking, rather than simply reading speculative articles without practice (Mulnix, 2012). Furthermore, students’ critical thinking improves through cognitive conflict with other learners in the problem situation (Yang et al., 2008). Consequently, teachers should design real problems and encourage students to discuss, negotiate, and argue in collaborative problem-solving situations.

Second, teachers should design and implement mixed courses to cultivate learners’ critical thinking through collaborative problem-solving. Critical thinking can be taught through curriculum instruction (Kuncel, 2011; Leng and Lu, 2020), with the goal of enabling flexible transfer and application in real problem-solving situations. This meta-analysis shows that mixed course teaching has a highly substantial impact on the cultivation and promotion of learners’ critical thinking. Therefore, teachers should combine real collaborative problem-solving situations with the knowledge content of specific disciplines in conventional teaching, teach the methods and strategies of critical thinking through ill-structured problems, and provide practical activities in which students interact to develop knowledge construction and critical thinking.

Third, teachers, particularly preservice teachers, should receive more training in critical thinking, and they should be conscious of the ways in which teacher-supported learning scaffolds can promote it. The teacher-supported learning scaffold had the greatest impact on learners’ critical thinking, being more directive, targeted, and timely (Wood et al., 2006). Critical thinking can only be effectively taught when teachers recognize its significance for students’ growth and use appropriate approaches in designing instructional activities (Forawi, 2016). Therefore, to enable teachers to create learning scaffolds that cultivate learners’ critical thinking through collaborative problem-solving, it is essential to concentrate on teacher-supported learning scaffolds and to enhance the training of teachers, especially preservice teachers, in teaching critical thinking.

Implications and limitations

There are certain limitations in this meta-analysis that future research can address. First, the search languages were restricted to English and Chinese, so pertinent studies written in other languages may have been overlooked, limiting the number of articles reviewed. Second, some data in the included studies are missing, such as whether teachers were trained in the theory and practice of critical thinking, the average age and gender of learners, and differences in critical thinking among learners of various ages and genders. Third, as is typical for review articles, more studies were published while this meta-analysis was being conducted, so it is bounded in time. As relevant research develops, future studies addressing these issues are highly relevant and needed.

Conclusions

This study addressed the question of the effectiveness of collaborative problem-solving in promoting students’ critical thinking, a topic that had received scant attention in earlier research. The following conclusions can be made:

Collaborative problem-solving is an effective teaching approach for fostering learners’ critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]). With respect to the dimensions of critical thinking, collaborative problem-solving significantly and effectively improves students’ attitudinal tendency, with a significant comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]); it falls short in improving students’ cognitive skills, having only an upper-middle impact (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).

As demonstrated by the results and discussion, all seven moderating factors identified across the 36 studies have varying degrees of beneficial effect on students’ critical thinking. The teaching type (χ² = 7.20, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), and learning scaffold (χ² = 9.03, P < 0.01) all significantly moderate the impact on critical thinking and can be viewed as important factors affecting how critical thinking develops. Since the learning stage (χ² = 3.15, P = 0.21 > 0.05) and the measuring tool (χ² = 0.08, P = 0.78 > 0.05) did not demonstrate significant intergroup differences, we are unable to explain why these two factors are crucial in supporting the cultivation of critical thinking in the context of collaborative problem-solving.

Data availability

All data generated or analyzed during this study are included within the article and its supplementary information files, and the supplementary information files are available in the Dataverse repository: https://doi.org/10.7910/DVN/IPFJO6 .

Bensley DA, Spero RA (2014) Improving critical thinking skills and meta-cognitive monitoring through direct infusion. Think Skills Creat 12:55–68. https://doi.org/10.1016/j.tsc.2014.02.001


Castle A (2009) Defining and assessing critical thinking skills for student radiographers. Radiography 15(1):70–76. https://doi.org/10.1016/j.radi.2007.10.007

Chen XD (2013) An empirical study on the influence of PBL teaching model on critical thinking ability of non-English majors. J PLA Foreign Lang College 36 (04):68–72


Cohen A (1992) Antecedents of organizational commitment across occupational groups: a meta-analysis. J Organ Behav. https://doi.org/10.1002/job.4030130602

Cooper H (2010) Research synthesis and meta-analysis: a step-by-step approach, 4th edn. Sage, London, England

Cindy HS (2004) Problem-based learning: what and how do students learn? Educ Psychol Rev 51(1):31–39

Duch BJ, Gron SD, Allen DE (2001) The power of problem-based learning: a practical “how to” for teaching undergraduate courses in any discipline. Stylus Educ Sci 2:190–198

Ennis RH (1989) Critical thinking and subject specificity: clarification and needed research. Educ Res 18(3):4–10. https://doi.org/10.3102/0013189x018003004

Facione PA (1990) Critical thinking: a statement of expert consensus for purposes of educational assessment and instruction. Research findings and recommendations. Eric document reproduction service. https://eric.ed.gov/?id=ed315423

Facione PA, Facione NC (1992) The California Critical Thinking Dispositions Inventory (CCTDI) and the CCTDI test manual. California Academic Press, Millbrae, CA

Forawi SA (2016) Standard-based science education and critical thinking. Think Skills Creat 20:52–62. https://doi.org/10.1016/j.tsc.2016.02.005

Halpern DF (2001) Assessing the effectiveness of critical thinking instruction. J Gen Educ 50(4):270–286. https://doi.org/10.2307/27797889

Hu WP, Liu J (2015) Cultivation of pupils’ thinking ability: a five-year follow-up study. Psychol Behav Res 13(05):648–654. https://doi.org/10.3969/j.issn.1672-0628.2015.05.010

Huber K (2016) Does college teach critical thinking? A meta-analysis. Rev Educ Res 86(2):431–468. https://doi.org/10.3102/0034654315605917

Kek MYCA, Huijser H (2011) The power of problem-based learning in developing critical thinking skills: preparing students for tomorrow’s digital futures in today’s classrooms. High Educ Res Dev 30(3):329–341. https://doi.org/10.1080/07294360.2010.501074

Kuncel NR (2011) Measurement and meaning of critical thinking (Research report for the NRC 21st Century Skills Workshop). National Research Council, Washington, DC

Kyndt E, Raes E, Lismont B, Timmers F, Cascallar E, Dochy F (2013) A meta-analysis of the effects of face-to-face cooperative learning. Do recent studies falsify or verify earlier findings? Educ Res Rev 10(2):133–149. https://doi.org/10.1016/j.edurev.2013.02.002

Leng J, Lu XX (2020) Is critical thinking really teachable?—A meta-analysis based on 79 experimental or quasi experimental studies. Open Educ Res 26(06):110–118. https://doi.org/10.13966/j.cnki.kfjyyj.2020.06.011

Liang YZ, Zhu K, Zhao CL (2017) An empirical study on the depth of interaction promoted by collaborative problem solving learning activities. J E-educ Res 38(10):87–92. https://doi.org/10.13811/j.cnki.eer.2017.10.014

Lipsey M, Wilson D (2001) Practical meta-analysis. International Educational and Professional, London, pp. 92–160

Liu Z, Wu W, Jiang Q (2020) A study on the influence of problem based learning on college students’ critical thinking-based on a meta-analysis of 31 studies. Explor High Educ 03:43–49

Morris SB (2008) Estimating effect sizes from pretest-posttest-control group designs. Organ Res Methods 11(2):364–386. https://doi.org/10.1177/1094428106291059


Mulnix JW (2012) Thinking critically about critical thinking. Educ Philos Theory 44(5):464–479. https://doi.org/10.1111/j.1469-5812.2010.00673.x

Naber J, Wyatt TH (2014) The effect of reflective writing interventions on the critical thinking skills and dispositions of baccalaureate nursing students. Nurse Educ Today 34(1):67–72. https://doi.org/10.1016/j.nedt.2013.04.002

National Research Council (2012) Education for life and work: developing transferable knowledge and skills in the 21st century. The National Academies Press, Washington, DC

Niu L, Behar HLS, Garvan CW (2013) Do instructional interventions influence college students’ critical thinking skills? A meta-analysis. Educ Res Rev 9(12):114–128. https://doi.org/10.1016/j.edurev.2012.12.002

Peng ZM, Deng L (2017) Towards the core of education reform: cultivating critical thinking skills as the core of skills in the 21st century. Res Educ Dev 24:57–63. https://doi.org/10.14121/j.cnki.1008-3855.2017.24.011

Reiser BJ (2004) Scaffolding complex learning: the mechanisms of structuring and problematizing student work. J Learn Sci 13(3):273–304. https://doi.org/10.1207/s15327809jls1303_2

Ruggiero VR (2012) The art of thinking: a guide to critical and creative thought, 4th edn. Harper Collins College Publishers, New York

Schellens T, Valcke M (2006) Fostering knowledge construction in university students through asynchronous discussion groups. Comput Educ 46(4):349–370. https://doi.org/10.1016/j.compedu.2004.07.010

Sendag S, Odabasi HF (2009) Effects of an online problem based learning course on content knowledge acquisition and critical thinking skills. Comput Educ 53(1):132–141. https://doi.org/10.1016/j.compedu.2009.01.008

Sison R (2008) Investigating Pair Programming in a Software Engineering Course in an Asian Setting. 2008 15th Asia-Pacific Software Engineering Conference, pp. 325–331. https://doi.org/10.1109/APSEC.2008.61

Simpson E, Courtney M (2002) Critical thinking in nursing education: literature review. Int J Nurs Pract 8(2):89–98

Stewart L, Tierney J, Burdett S (2006) Do systematic reviews based on individual patient data offer a means of circumventing biases associated with trial publications? Publication bias in meta-analysis. John Wiley and Sons Inc, New York, pp. 261–286

Tiwari A, Lai P, So M, Yuen K (2010) A comparison of the effects of problem-based learning and lecturing on the development of students’ critical thinking. Med Educ 40(6):547–554. https://doi.org/10.1111/j.1365-2929.2006.02481.x

Wood D, Bruner JS, Ross G (2006) The role of tutoring in problem solving. J Child Psychol Psychiatry 17(2):89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x

Wei T, Hong S (2022) The meaning and realization of teachable critical thinking. Educ Theory Practice 10:51–57

Xu EW, Wang W, Wang QX (2022) A meta-analysis of the effectiveness of programming teaching in promoting K-12 students’ computational thinking. Educ Inf Technol. https://doi.org/10.1007/s10639-022-11445-2

Yang YC, Newby T, Bill R (2008) Facilitating interactions through structured web-based bulletin boards: a quasi-experimental study on promoting learners’ critical thinking skills. Comput Educ 50(4):1572–1585. https://doi.org/10.1016/j.compedu.2007.04.006

Yore LD, Pimm D, Tuan HL (2007) The literacy component of mathematical and scientific literacy. Int J Sci Math Educ 5(4):559–589. https://doi.org/10.1007/s10763-007-9089-4

Zhang T, Zhang S, Gao QQ, Wang JH (2022) Research on the development of learners’ critical thinking in online peer review. Audio Visual Educ Res 6:53–60. https://doi.org/10.13811/j.cnki.eer.2022.06.08


Acknowledgements

This research was supported by the graduate scientific research and innovation project of Xinjiang Uygur Autonomous Region named “Research on in-depth learning of high school information technology courses for the cultivation of computing thinking” (No. XJ2022G190) and the independent innovation fund project for doctoral students of the College of Educational Science of Xinjiang Normal University named “Research on project-based teaching of high school information technology courses from the perspective of discipline core literacy” (No. XJNUJKYA2003).

Author information

Authors and affiliations

College of Educational Science, Xinjiang Normal University, 830017, Urumqi, Xinjiang, China

Enwei Xu, Wei Wang & Qingxia Wang


Corresponding authors

Correspondence to Enwei Xu or Wei Wang .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary tables

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Xu, E., Wang, W. & Wang, Q. The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature. Humanit Soc Sci Commun 10 , 16 (2023). https://doi.org/10.1057/s41599-023-01508-1


Received: 07 August 2022

Accepted: 04 January 2023

Published: 11 January 2023

DOI: https://doi.org/10.1057/s41599-023-01508-1



Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

critical thinking peer reviewed articles

Critical thinking in nursing clinical practice, education and research: From attitudes to virtue

Affiliations

  • 1 Department of Fundamental Care and Medical-Surgical Nursing, Faculty of Medicine and Health Sciences, School of Nursing, Consolidated Research Group Quantitative Psychology (2017-SGR-269), University of Barcelona, Barcelona, Spain.
  • 2 Department of Fundamental Care and Medical-Surgical Nursing, Faculty of Medicine and Health Sciences, School of Nursing, Consolidated Research Group on Gender, Identity and Diversity (2017-SGR-1091), University of Barcelona, Barcelona, Spain.
  • 3 Department of Fundamental Care and Medical-Surgical Nursing, Faculty of Medicine and Health Sciences, School of Nursing, University of Barcelona, Barcelona, Spain.
  • 4 Multidisciplinary Nursing Research Group, Vall d'Hebron Research Institute (VHIR), Vall d'Hebron Hospital, Barcelona, Spain.
  • PMID: 33029860
  • DOI: 10.1111/nup.12332

Critical thinking is a complex, dynamic process formed by attitudes and strategic skills, with the aim of achieving a specific goal or objective. Attitudes, including critical thinking attitudes, constitute an important part of the idea of good care and of the good professional; it could be said that they become a virtue of the nursing profession. In this context, virtue ethics is a theoretical framework that is essential for analysing the concept of critical thinking in nursing care and nursing science, because virtue ethics considers how cultivating virtues is necessary to understand and justify decisions and to guide actions. Based on a selective analysis of the descriptive and empirical literature that addresses the conceptual review of critical thinking, we conducted an analysis of this topic in the settings of clinical practice, training and research from the virtue ethics framework. Following the JBI critical appraisal checklist for text and opinion papers, we argue the need for critical thinking as an essential element for true excellence in care and that it should be encouraged among professionals. The importance of developing critical thinking skills in education is well substantiated; however, greater efforts are required to implement educational strategies directed at developing critical thinking in students and professionals undergoing training, along with measures that demonstrate their success. Lastly, we show that critical thinking constitutes a fundamental component of the research process and can improve research competencies in nursing. We conclude that future research and actions must go further in the search for new evidence and open new horizons, to ensure a positive effect on clinical practice, patient health, student education and the growth of nursing science.

Keywords: critical thinking; critical thinking attitudes; nurse education; nursing care; nursing research.

© 2020 John Wiley & Sons Ltd.

  • Attitude of Health Personnel*
  • Education, Nursing / methods
  • Nursing Process
  • Nursing Research / methods

Grants and funding

  • PREI-19-007-B/School of Nursing. Faculty of Medicine and Health Sciences. University of Barcelona

OPINION article

Redefining Critical Thinking: Teaching Students to Think like Scientists

Rodney M. Schmaltz*

  • Department of Psychology, MacEwan University, Edmonton, AB, Canada

From primary to post-secondary school, critical thinking (CT) is an oft-cited focus or key competency (e.g., DeAngelo et al., 2009; California Department of Education, 2014; Alberta Education, 2015; Australian Curriculum Assessment and Reporting Authority, n.d.). Unfortunately, the definition of CT has become so broad that it can encompass nearly anything and everything (e.g., Hatcher, 2000; Johnson and Hamby, 2015). From discussion of Foucault, critique and the self (Foucault, 1984) to Lawson's (1999) definition of CT as the ability to evaluate claims using psychological science, the term critical thinking has come to refer to an ever-widening range of skills and abilities. We propose that educators need to clearly define CT, and that in addition to teaching CT, a strong focus should be placed on teaching students how to think like scientists. Scientific thinking is the ability to generate, test, and evaluate claims, data, and theories (e.g., Bullock et al., 2009; Koerber et al., 2015). Simply stated, the basic tenets of scientific thinking provide students with the tools to distinguish good information from bad. Students have access to nearly limitless information, and the skills to recognize misinformation and questionable scientific claims are crucially important (Smith, 2011); these skills may not necessarily be included in the general teaching of critical thinking (Wright, 2001).

This is an issue of more than semantics. While some definitions of CT include key elements of the scientific method (e.g., Lawson, 1999; Lawson et al., 2015), this emphasis is not consistent across all interpretations of CT (Huber and Kuncel, 2016). In an attempt to provide a comprehensive, detailed definition of CT, the American Philosophical Association (APA) outlined six CT skills, 16 subskills, and 19 dispositions (Facione, 1990). Skills include interpretation, analysis, and inference; dispositions include inquisitiveness and open-mindedness. 1 From our perspective, definitions of CT such as those provided by the APA or operationally defined by researchers in the context of a scholarly article (e.g., Forawi, 2016) are not problematic, since the authors clearly define what they are referring to as CT. Potential problems arise when educators are using different definitions of CT, or when the banner of CT is applied to nearly any topic or pedagogical activity. Definitions such as those provided by the APA offer a comprehensive framework for understanding the multi-faceted nature of CT; however, the definition is complex and may be difficult to work with at a policy level for educators, especially those who work primarily with younger students.

The need to develop scientific thinking skills is evident in studies showing that 55% of undergraduate students believe that a full moon causes people to behave oddly, and an estimated 67% of students believe creatures such as Bigfoot and Chupacabra exist, despite the lack of scientific evidence supporting these claims (Lobato et al., 2014). Additionally, despite overwhelming evidence supporting the existence of anthropogenic climate change, and the dire need to mitigate its effects, many people remain skeptical of climate change and its impact (Feygina et al., 2010; Lewandowsky et al., 2013). One of the goals of education is to help students foster the skills necessary to be informed consumers of information (DeAngelo et al., 2009), and providing students with the tools to think scientifically is a crucial component of reaching this goal. By focusing on scientific thinking in conjunction with CT, educators may be better able to design specific policies that aim to build the skills students should have when they enter post-secondary training or the workforce. In other words, students should leave secondary school with the ability to rule out rival hypotheses, an understanding that correlation does not equal causation, an appreciation of the importance of falsifiability and replicability, the ability to recognize extraordinary claims, and the capacity to apply the principle of parsimony (e.g., Lett, 1990; Bartz, 2002).

Teaching scientific thinking is challenging, as people are vulnerable to trusting their intuitions and subjective observations and tend to prioritize them over objective scientific findings (e.g., Lilienfeld et al., 2012). Students and the public at large are prone to naïve realism, or the tendency to believe that our experiences and observations constitute objective reality (Ross and Ward, 1996), when in fact our experiences and observations are subjective and prone to error (e.g., Kahneman, 2011). Educators at the post-secondary level tend to prioritize scientific thinking (Lilienfeld, 2010); however, many students do not continue on to a post-secondary program after they have completed high school. Further, students who are told they are learning critical thinking may believe they possess the skills to accurately assess the world around them. However, if they are not taught the specific skills needed to be scientifically literate, they may still fall prey to logical fallacies and biases. People tend to underestimate or not understand fallacies that can prevent them from making sound decisions (Lilienfeld et al., 2001; Pronin et al., 2004; Lilienfeld, 2010). Thus, it is reasonable to think that a person who has not been adequately trained in scientific thinking could nonetheless consider themselves a strong critical thinker, and would therefore be even less likely to consider their own personal biases. Another concern is that when teaching scientific thinking there is always the risk that students become overly critical or cynical (e.g., Mercier et al., 2017); that is, a student may become skeptical of nearly all findings, regardless of the supporting evidence. By incorporating and focusing on cognitive biases, instructors can help students understand their own biases, and demonstrate how the rigor of the scientific method can, at least partially, control for these biases.

Teaching CT remains controversial and confusing for many instructors ( Bensley and Murtagh, 2012 ). This is partly due to the lack of clarity in the definition of CT and the wide range of methods proposed to best teach CT ( Abrami et al., 2008 ; Bensley and Murtagh, 2012 ). For instance, Bensley and Spero (2014) found evidence for the effectiveness of direct approaches to teaching CT, a claim echoed in earlier research ( Abrami et al., 2008 ; Marin and Halpern, 2011 ). Despite their positive findings, some studies have failed to find support for measures of CT ( Burke et al., 2014 ) and others have found variable, yet positive, support for instructional methods ( Dochy et al., 2003 ). Unfortunately, there is a lack of research demonstrating the best pedagogical approaches to teaching scientific thinking at different grade levels. More research is needed to provide an empirically grounded approach to teach scientific thinking, and there is also a need to develop evidence based measures of scientific thinking that are grade and age appropriate. One approach to teaching scientific thinking may be to frame the topic in its simplest terms—the ability to “detect baloney” ( Sagan, 1995 ).

Sagan (1995) has promoted the tools necessary to recognize poor arguments, fallacies to avoid, and how to approach claims using the scientific method. The basic tenets of Sagan's argument apply to most claims, and have the potential to be an effective teaching tool across a range of abilities and ages. Sagan discusses the idea of a baloney detection kit, which contains the “tools” for skeptical thinking. The development of “baloney detection kits” which include age-appropriate scientific thinking skills may be an effective approach to teaching scientific thinking. These kits could include the style of exercises that are typically found under the banner of CT training (e.g., group discussions, evaluations of arguments) with a focus on teaching scientific thinking. An empirically validated kit does not yet exist, though there is much to draw from in the literature on pedagogical approaches to correcting cognitive biases, combatting pseudoscience, and teaching methodology (e.g., Smith, 2011 ). Further research is needed in this area to ensure that the correct, and age-appropriate, tools are part of any baloney detection kit.

Teaching Sagan's idea of baloney detection in conjunction with CT provides educators with a clear focus—to employ a pedagogical approach that helps students create sound and cogent arguments while avoiding falling prey to “baloney”. This is not to say that all of the information taught under the current banner of “critical thinking” is without value. In fact, many of the topics taught under the current approach of CT are important, even though they would not fit within the framework of some definitions of critical thinking. If educators want to ensure that students have the ability to be accurate consumers of information, a focus should be placed on including scientific thinking as a component of the science curriculum, as well as part of the broader teaching of CT.

Educators need to be provided with evidence-based approaches to teach the principles of scientific thinking. These principles should be taught in conjunction with evidence-based methods that mitigate the potential for fallacious reasoning and false beliefs. At a minimum, when students first learn about science, there should also be an introduction to the basic tenets of scientific thinking. Courses dedicated to promoting scientific thinking may also be effective. A course focused on cognitive biases, logical fallacies, and the hallmarks of scientific thinking adapted for each grade level may provide students with the foundation of solid scientific thinking skills to produce and evaluate arguments, and allow expansion of scientific thinking into other scholastic areas and classes. Evaluations of the efficacy of these courses would be essential, along with research to determine the best approach to incorporate scientific thinking into the curriculum.

If instructors know that students have at least some familiarity with the fundamental tenets of scientific thinking, the ability to expand and build upon these ideas in a variety of subject-specific areas would further foster and promote these skills. For example, when discussing climate change, an instructor could add a brief discussion of why some people reject the science of climate change by relating this back to the information students will be familiar with from their scientific thinking courses. In terms of an issue like climate change, many students may have heard in political debates or popular culture that global warming trends are not real, or a "hoax" (Lewandowsky et al., 2013). In this case, only teaching the data and facts may not be sufficient to change a student's mind about the reality of climate change (Lewandowsky et al., 2012). Instructors would have more success by presenting students with the data on global warming trends as well as information on the biases that could lead some people to reject the data (Kowalski and Taylor, 2009; Lewandowsky et al., 2012). This type of instruction helps educators create informed citizens who are better able to guide future decision making and ensure that students enter the job market with the skills needed to be valuable members of the workforce and society as a whole.

By promoting scientific thinking, educators can ensure that students are at least exposed to the basic tenets of what makes a good argument, how to create their own arguments, recognize their own biases and those of others, and how to think like a scientist. There is still work to be done, as there is a need to put in place educational programs built on empirical evidence, as well as research investigating specific techniques to promote scientific thinking for children in earlier grade levels and develop measures to test if students have acquired the necessary scientific thinking skills. By using an evidence based approach to implement strategies to promote scientific thinking, and encouraging researchers to further explore the ideal methods for doing so, educators can better serve their students. When students are provided with the core ideas of how to detect baloney, and provided with examples of how baloney detection relates to the real world (e.g., Schmaltz and Lilienfeld, 2014 ), we are confident that they will be better able to navigate through the oceans of information available and choose the right path when deciding if information is valid.

Author Contribution

RS was the lead author of this paper; EJ and NW contributed equally.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

1. There is some debate about the role of dispositional factors in the ability for a person to engage in critical thinking, specifically that dispositional factors may mitigate any attempt to learn CT. The general consensus is that while dispositional traits may play a role in the ability to think critically, the general skills to be a critical thinker can be taught (Niu et al., 2013; Abrami et al., 2015).

Abrami, P. C., Bernard, R. M., Borokhovski, E., Waddington, D. I., Wade, C. A., and Persson, T. (2015). Strategies for teaching students to think critically: a meta-analysis. Rev. Educ. Res. 85, 275–314. doi: 10.3102/0034654314551063


Abrami, P. C., Bernard, R. M., Borokhovski, E., Wade, A., Surkes, M. A., Tamim, R., et al. (2008). Instructional interventions affecting critical thinking skills and dispositions: a stage 1 meta-analysis. Rev. Educ. Res. 78, 1102–1134. doi: 10.3102/0034654308326084

Alberta Education (2015). Ministerial Order on Student Learning . Available online at: https://education.alberta.ca/policies-and-standards/student-learning/everyone/ministerial-order-on-student-learning-pdf/

Australian Curriculum Assessment and Reporting Authority (n.d.). Available online at: http://www.australiancurriculum.edu.au

Bartz, W. R. (2002). Teaching skepticism via the CRITIC acronym and the skeptical inquirer. Skeptical Inquirer 17, 42–44.


Bensley, D. A., and Murtagh, M. P. (2012). Guidelines for a scientific approach to critical thinking assessment. Teach. Psychol. 39, 5–16. doi: 10.1177/0098628311430642

Bensley, D. A., and Spero, R. A. (2014). Improving critical thinking skills and metacognitive monitoring through direct infusion. Think. Skills Creativ. 12, 55–68. doi: 10.1016/j.tsc.2014.02.001

Bullock, M., Sodian, B., and Koerber, S. (2009). “Doing experiments and understanding science: development of scientific reasoning from childhood to adulthood,” in Human Development from Early Childhood to Early Adulthood: Findings from a 20 Year Longitudinal Study , eds W. Schneider and M. Bullock (New York, NY: Psychology Press), 173–197.

Burke, B. L., Sears, S. R., Kraus, S., and Roberts-Cady, S. (2014). Critical analysis: a comparison of critical thinking changes in psychology and philosophy classes. Teach. Psychol. 41, 28–36. doi: 10.1177/0098628313514175

California Department of Education (2014). Standard for Career Ready Practice . Available online at: http://www.cde.ca.gov/nr/ne/yr14/yr14rel22.asp

DeAngelo, L., Hurtado, S., Pryor, J. H., Kelly, K. R., Santos, J. L., and Korn, W. S. (2009). The American College Teacher: National Norms for the 2007-2008 HERI Faculty Survey . Los Angeles, CA: Higher Education Research Institute.

Dochy, F., Segers, M., Van den Bossche, P., and Gijbels, D. (2003). Effects of problem-based learning: a meta-analysis. Learn. Instruct. 13, 533–568. doi: 10.1016/S0959-4752(02)00025-7

Facione, P. A. (1990). Critical thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction. Research Findings and Recommendations. Newark, DE: American Philosophical Association.

Feygina, I., Jost, J. T., and Goldsmith, R. E. (2010). System justification, the denial of global warming, and the possibility of ‘system-sanctioned change’. Pers. Soc. Psychol. Bull. 36, 326–338. doi: 10.1177/0146167209351435


Forawi, S. A. (2016). Standard-based science education and critical thinking. Think. Skills Creativ. 20, 52–62. doi: 10.1016/j.tsc.2016.02.005

Foucault, M. (1984). The Foucault Reader . New York, NY: Pantheon.

Hatcher, D. L. (2000). Arguments for another definition of critical thinking. Inquiry 20, 3–8. doi: 10.5840/inquiryctnews20002016

Huber, C. R., and Kuncel, N. R. (2016). Does college teach critical thinking? A meta-analysis. Rev. Educ. Res. 86, 431–468. doi: 10.3102/0034654315605917

Johnson, R. H., and Hamby, B. (2015). A meta-level approach to the problem of defining “Critical Thinking”. Argumentation 29, 417–430. doi: 10.1007/s10503-015-9356-4

Kahneman, D. (2011). Thinking, Fast and Slow . New York, NY: Farrar, Straus and Giroux.

Koerber, S., Mayer, D., Osterhaus, C., Schwippert, K., and Sodian, B. (2015). The development of scientific thinking in elementary school: a comprehensive inventory. Child Dev. 86, 327–336. doi: 10.1111/cdev.12298

Kowalski, P., and Taylor, A. K. (2009). The effect of refuting misconceptions in the introductory psychology class. Teach. Psychol. 36, 153–159. doi: 10.1080/00986280902959986

Lawson, T. J. (1999). Assessing psychological critical thinking as a learning outcome for psychology majors. Teach. Psychol. 26, 207–209. doi: 10.1207/S15328023TOP260311


Lawson, T. J., Jordan-Fleming, M. K., and Bodle, J. H. (2015). Measuring psychological critical thinking: an update. Teach. Psychol. 42, 248–253. doi: 10.1177/0098628315587624

Lett, J. (1990). A field guide to critical thinking. Skeptical Inquirer , 14, 153–160.

Lewandowsky, S., Ecker, U. H., Seifert, C. M., Schwarz, N., and Cook, J. (2012). Misinformation and its correction: continued influence and successful debiasing. Psychol. Sci. Public Interest 13, 106–131. doi: 10.1177/1529100612451018

Lewandowsky, S., Oberauer, K., and Gignac, G. E. (2013). NASA faked the moon landing—therefore, (climate) science is a hoax: an anatomy of the motivated rejection of science. Psychol. Sci. 24, 622–633. doi: 10.1177/0956797612457686

Lilienfeld, S. O. (2010). Can psychology become a science? Pers. Individ. Dif. 49, 281–288. doi: 10.1016/j.paid.2010.01.024

Lilienfeld, S. O., Ammirati, R., and David, M. (2012). Distinguishing science from pseudoscience in school psychology: science and scientific thinking as safeguards against human error. J. Sch. Psychol. 50, 7–36. doi: 10.1016/j.jsp.2011.09.006

Lilienfeld, S. O., Lohr, J. M., and Morier, D. (2001). The teaching of courses in the science and pseudoscience of psychology: useful resources. Teach. Psychol. 28, 182–191. doi: 10.1207/S15328023TOP2803_03

Lobato, E., Mendoza, J., Sims, V., and Chin, M. (2014). Examining the relationship between conspiracy theories, paranormal beliefs, and pseudoscience acceptance among a university population. Appl. Cogn. Psychol. 28, 617–625. doi: 10.1002/acp.3042

Marin, L. M., and Halpern, D. F. (2011). Pedagogy for developing critical thinking in adolescents: explicit instruction produces greatest gains. Think. Skills Creativ. 6, 1–13. doi: 10.1016/j.tsc.2010.08.002

Mercier, H., Boudry, M., Paglieri, F., and Trouche, E. (2017). Natural-born arguers: teaching how to make the best of our reasoning abilities. Educ. Psychol. 52, 1–16. doi: 10.1080/00461520.2016.1207537

Niu, L., Behar-Horenstein, L. S., and Garvan, C. W. (2013). Do instructional interventions influence college students' critical thinking skills? A meta-analysis. Educ. Res. Rev. 9, 114–128. doi: 10.1016/j.edurev.2012.12.002

Pronin, E., Gilovich, T., and Ross, L. (2004). Objectivity in the eye of the beholder: divergent perceptions of bias in self versus others. Psychol. Rev. 111, 781–799. doi: 10.1037/0033-295X.111.3.781

Ross, L., and Ward, A. (1996). “Naive realism in everyday life: implications for social conflict and misunderstanding,” in Values and Knowledge , eds E. S. Reed, E. Turiel, T. Brown, E. S. Reed, E. Turiel and T. Brown (Hillsdale, NJ: Lawrence Erlbaum Associates Inc.), 103–135.

Sagan, C. (1995). Demon-Haunted World: Science as a Candle in the Dark . New York, NY: Random House.

Schmaltz, R., and Lilienfeld, S. O. (2014). Hauntings, homeopathy, and the Hopkinsville Goblins: using pseudoscience to teach scientific thinking. Front. Psychol. 5:336. doi: 10.3389/fpsyg.2014.00336

Smith, J. C. (2011). Pseudoscience and Extraordinary Claims of the Paranormal: A Critical Thinker's Toolkit . New York, NY: John Wiley and Sons.

Wright, I. (2001). Critical thinking in the schools: why doesn't much happen? Inform. Logic 22, 137–154. doi: 10.22329/il.v22i2.2579

Keywords: scientific thinking, critical thinking, teaching resources, skepticism, education policy

Citation: Schmaltz RM, Jansen E and Wenckowski N (2017) Redefining Critical Thinking: Teaching Students to Think like Scientists. Front. Psychol . 8:459. doi: 10.3389/fpsyg.2017.00459

Received: 13 December 2016; Accepted: 13 March 2017; Published: 29 March 2017.


Copyright © 2017 Schmaltz, Jansen and Wenckowski. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Rodney M. Schmaltz, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.


Open Access

Peer-reviewed

Research Article

Fostering Critical Thinking, Reasoning, and Argumentation Skills through Bioethics Education

* E-mail: [email protected]

Affiliation Northwest Association for Biomedical Research, Seattle, Washington, United States of America

Affiliation Center for Research and Learning, Snohomish, Washington, United States of America

  • Jeanne Ting Chowning, 
  • Joan Carlton Griswold, 
  • Dina N. Kovarik, 
  • Laura J. Collins


  • Published: May 11, 2012
  • https://doi.org/10.1371/journal.pone.0036791

Developing a position on a socio-scientific issue and defending it using a well-reasoned justification involves complex cognitive skills that are challenging to both teach and assess. Our work centers on instructional strategies for fostering critical thinking skills in high school students using bioethical case studies, decision-making frameworks, and structured analysis tools to scaffold student argumentation. In this study, we examined the effects of our teacher professional development and curricular materials on the ability of high school students to analyze a bioethical case study and develop a strong position. We focused on student ability to identify an ethical question, consider stakeholders and their values, incorporate relevant scientific facts and content, address ethical principles, and consider the strengths and weaknesses of alternate solutions. 431 students and 12 teachers participated in a research study using teacher cohorts for comparison purposes. The first cohort received professional development and used the curriculum with their students; the second did not receive professional development until after their participation in the study and did not use the curriculum. In order to assess the acquisition of higher-order justification skills, students were asked to analyze a case study and develop a well-reasoned written position. We evaluated statements using a scoring rubric and found highly significant differences (p<0.001) between students exposed to the curriculum strategies and those who were not. Students also showed highly significant gains (p<0.001) in self-reported interest in science content, ability to analyze socio-scientific issues, awareness of ethical issues, ability to listen to and discuss viewpoints different from their own, and understanding of the relationship between science and society. Our results demonstrate that incorporating ethical dilemmas into the classroom is one strategy for increasing student motivation and engagement with science content, while promoting reasoning and justification skills that help prepare an informed citizenry.

Citation: Chowning JT, Griswold JC, Kovarik DN, Collins LJ (2012) Fostering Critical Thinking, Reasoning, and Argumentation Skills through Bioethics Education. PLoS ONE 7(5): e36791. https://doi.org/10.1371/journal.pone.0036791

Editor: Julio Francisco Turrens, University of South Alabama, United States of America

Received: February 7, 2012; Accepted: April 13, 2012; Published: May 11, 2012

Copyright: © 2012 Chowning et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: The “Collaborations to Understand Research and Ethics” (CURE) program was supported by a Science Education Partnership Award grant ( http://ncrrsepa.org ) from the National Center for Research Resources and the Division of Program Coordination, Planning, and Strategic Initiatives of the National Institutes of Health through Grant Number R25OD011138. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

While the practice of argumentation is a cornerstone of the scientific process, students at the secondary level have few opportunities to engage in it [1] . Recent research suggests that collaborative discourse and critical dialogue focused on student claims and justifications can increase student reasoning abilities and conceptual understanding, and that strategies are needed to promote such practices in secondary science classrooms [2] . In particular, students need structured opportunities to develop arguments and discuss them with their peers. In scientific argument, the data, claims and warrants (that relate claims to data) are strictly concerned with scientific data; in a socio-scientific argument, students must consider stakeholder perspectives and ethical principles and ideas, in addition to relevant scientific background. Regardless of whether the arguments that students employ point towards scientific or socio-scientific issues, the overall processes students use in order to develop justifications rely on a model that conceptualizes arguments as claims to knowledge [3] .

Prior research in informal student reasoning and socio-scientific issues also indicates that most learners are not able to formulate high-quality arguments (as defined by the ability to articulate justifications for claims and to rebut contrary positions), and highlights the challenges related to promoting argumentation skills. Research suggests that students need experience and practice justifying their claims, recognizing and addressing counter-arguments, and learning about elements that contribute to a strong justification [4] , [5] .

Proponents of Socio-scientific Issues (SSI) education stress that the intellectual development of students in ethical reasoning is necessary to promote understanding of the relationship between science and society [4] , [6] . The SSI approach emphasizes three important principles: (a) because science literacy should be a goal for all students, science education should be broad-based and geared beyond imparting relevant content knowledge to future scientists; (b) science learning should involve students in thinking about the kinds of real-world experiences that they might encounter in their lives; and (c) when teaching about real-world issues, science teachers should aim to include contextual elements that are beyond traditional science content. Sadler and Zeidler, who advocate a SSI perspective, note that “people do not live their lives according to disciplinary boundaries, and students approach socio-scientific issues with diverse perspectives that integrate science and other considerations” [7] .

Standards for science literacy emphasize not only the importance of scientific content and processes, but also the need for students to learn about science that is contextualized in real-world situations that involve personal and community decision-making [7] – [10] . The National Board for Professional Teaching Standards stresses that students need “regular exposure to the human contexts of science [and] examples of ethical dilemmas, both current and past, that surround particular scientific activities, discoveries, and technologies” [11] . Teachers are mandated by national science standards and professional teaching standards to address the social dimensions of science, and are encouraged to provide students with the tools necessary to engage in analyzing bioethical issues; yet they rarely receive training in methods to foster such discussions with students.

The Northwest Association for Biomedical Research (NWABR), a non-profit organization that advances the understanding and support of biomedical research, has been engaging students and teachers in bringing the discussion of ethical issues in science into the classroom since 2000 [12] . The mission of NWABR is to promote an understanding of biomedical research and its ethical conduct through dialogue and education. The sixty research institutions that constitute our members include academia, industry, non-profit research organizations, research hospitals, professional societies, and volunteer health organizations. NWABR connects the scientific and education communities across the Northwestern United States and helps the public understand the vital role of research in promoting better health outcomes. We have focused on providing teachers with both resources to foster student reasoning skills (such as activities in which students practice evaluating arguments using criteria for strong justifications), as well as pedagogical strategies for fostering collaborative discussion [13] – [15] . Our work draws upon socio-scientific elements of functional scientific literacy identified by Zeidler et al. [6] . We include support for teachers in discourse issues, nature of science issues, case-based issues, and cultural issues – which all contribute to cognitive and moral development and promote functional scientific literacy. Our Collaborations to Understand Research and Ethics (CURE) program, funded by a Science Education Partnership Award from the National Institutes of Health (NIH), promotes understanding of translational biomedical research as well as the ethical considerations such research raises.

Many teachers find a principles-based approach most manageable for introducing ethical considerations. The principles include respect for persons (respecting the inherent worth of an individual and his or her autonomy), beneficence/nonmaleficence (maximizing benefits/minimizing harms), and justice (distributing benefits/burdens equitably across a group of individuals). These principles, which are articulated in the Belmont Report [16] in relation to research with human participants (and which are clarified and defended by Beauchamp and Childress [17] ), represent familiar concepts and are widely used. In our professional development workshops and in our support resources, we also introduce teachers to care, feminist, virtue, deontological and consequentialist ethics. Once teachers become familiar with principles, they often augment their teaching by incorporating these additional ethical approaches.

The Bioethics 101 materials that were the focus of our study were developed in conjunction with teachers, ethicists, and scientists. The curriculum contains a series of five classroom lessons and a culminating assessment [18] and is described in more detail in the Program Description below. For many years, teachers have shared with us the dramatic impacts that the teaching of bioethics can have on their students; this research study was designed to investigate the relationship between explicit instruction in bioethical reasoning and resulting student outcomes. In this study, teacher cohorts and student pre/post tests were used to investigate whether CURE professional development and the Bioethics 101 curriculum materials made a significant difference in high school students’ abilities to analyze a case study and justify their positions. Our research strongly indicates that such reasoning approaches can be taught to high school students and can significantly improve their ability to develop well-reasoned justifications to bioethical dilemmas. In addition, student self-reports provide additional evidence of the extent to which bioethics instruction impacted their attitudes and perceptions and increased student motivation and engagement with science content.

Program Description

Our professional development program, Ethics in the Science Classroom, spanned two weeks. The first week, a residential program at the University of Washington (UW) Pack Forest Conference Center, focused on our Bioethics 101 curriculum, which is summarized in Table S1 and is freely available at http://www.nwabr.org . The curriculum, a series of five classroom lessons and a culminating assessment, was implemented by all teachers who were part of our CURE treatment group. The lessons explore the following topics: (a) characteristics of an ethical question; (b) bioethical principles; (c) the relationship between science and ethics and the roles of objectivity/subjectivity and evidence in each; (d) analysis of a case study (including identifying an ethical question, determining relevant facts, identifying stakeholders and their concerns and values, and evaluating options); and (e) development of a well-reasoned justification for a position.

Additionally, the first week focused on effective teaching methods for incorporating ethical issues into science classrooms. We shared specific pedagogical strategies for helping teachers manage classroom discussion, such as asking students to consider the concerns and values of individuals involved in the case while in small single and mixed stakeholder groups. We also provided participants with background knowledge in biomedical research and ethics. Presentations from colleagues affiliated with the NIH Clinical and Translational Science Award program, from the Department of Bioethics and Humanities at the UW, and from NWABR member institutions helped participants develop a broad appreciation for the process of biomedical research and the ethical issues that arise as a consequence of that research. Topics included clinical trials, animal models of disease, regulation of research, and ethical foundations of research. Participants also developed materials directly relevant and applicable to their own classrooms, and shared them with other educators. Teachers wrote case studies and then used ethical frameworks to analyze the main arguments surrounding the case, thereby gaining experience in bioethical analysis. Teachers also developed Action Plans to outline their plans for implementation.

The second week provided teachers with first-hand experiences in NWABR research institutions. Teachers visited research centers such as the Tumor Vaccine Group and Clinical Research Center at the UW. They also had the opportunity to visit several of the following institutions: Amgen, Benaroya Research Institute, Fred Hutchinson Cancer Research Center, Infectious Disease Research Institute, Institute for Stem Cells and Regenerative Medicine at the UW, Pacific Northwest Diabetes Research Institute, Puget Sound Blood Center, HIV Vaccine Trials Network, and Washington National Primate Research Center. Teachers found these experiences in research facilities extremely valuable in helping make concrete the concepts and processes detailed in the first week of the program.

We held two follow-up sessions during the school year to deepen our relationship with the teachers, promote a vibrant ethics in science education community, provide additional resources and support, and reflect on challenges in implementation of our materials. We also provided the opportunity for teachers to share their experiences with one another and to report on the most meaningful longer-term impacts from the program. Another feature of our CURE program was the school-year Institutional Review Board (IRB) and Institutional Animal Care and Use Committee (IACUC) follow-up sessions. Teachers chose to attend one of NWABR’s IRB or IACUC conferences, attend a meeting of a review board, or complete NIH online ethics training. Some teachers also visited the UW Embryonic Stem Cell Research Oversight Committee. CURE funding provided substitutes in order for teachers to be released during the workday. These opportunities further engaged teachers in understanding and appreciating the actual process of oversight for federally funded research.

Participants

Most of the educators who have been through our intensive summer workshops teach secondary level science, but we have welcomed teachers at the college, community college, and even elementary levels. Our participants are primarily biology teachers; however, chemistry and physical science educators, health and career specialists, and social studies teachers have also used our strategies and materials with success.

The research design used teacher cohorts for comparison purposes and recruited teachers who expressed interest in participating in a CURE workshop in either the summer of 2009 or the summer of 2010. We assumed that all teachers who applied to the CURE workshop for either year would be similarly interested in ethics topics. Thus, Cohort 1 included teachers participating in CURE during the summer of 2009 (the treatment group). Their students received CURE instruction during the following 2009–2010 academic year. Cohort 2 (the comparison group) included teachers who were selected to participate in CURE during the summer of 2010. Their students received a semester of traditional classroom instruction in science during the 2009–2010 academic year. In order to track participation of different demographic groups, questions pertaining to race, ethnicity, and gender were also included in the post-tests.

Using an online sample size calculator ( http://www.surveysystem.com/sscalc.htm ) with a 95% confidence level and a confidence interval of ±5, it was calculated that a sample size of 278 students would be needed for the research study. For that reason, six Cohort 1 teachers were impartially chosen to be in the study. For the comparison group, the study design also required six teachers from Cohort 2. The external evaluator contacted all Cohort 2 teachers to explain the research study and obtain their consent, and successfully recruited six to participate.
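For readers who want to reproduce that target figure, the calculator linked above applies the standard sample-size formula for estimating a proportion, together with a finite-population correction. The Python sketch below is a minimal reconstruction under one labeled assumption: the paper does not report the underlying population size, and a population of roughly 1,000 students is assumed here because it reproduces the stated target of 278.

```python
import math

def required_sample_size(population: int, z: float = 1.96,
                         margin: float = 0.05, p: float = 0.5) -> int:
    """Sample size for estimating a proportion at a given confidence level.

    Uses the standard formula n0 = z^2 * p * (1 - p) / e^2, then applies a
    finite-population correction, as online calculators typically do.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # ~384.16 for 95% CL, +/-5
    n = n0 / (1 + (n0 - 1) / population)          # finite-population correction
    return math.ceil(n)

# Assumed population of ~1,000 students (not stated in the paper); this
# choice reproduces the reported requirement of 278 respondents.
print(required_sample_size(1000))  # -> 278
```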

Ethics Statement

This study was conducted according to the principles expressed in the Declaration of Helsinki. Prior to the study, research processes and materials were reviewed and approved by the Western Institutional Review Board (WIRB Study #1103180). CURE staff and evaluators received written permission from parents to have their minor children participate in the Bioethics 101 curriculum, for the collection and subsequent analysis of students’ written responses to the assessment, and for permission to collect and analyze student interview responses. Teachers also provided written informed consent prior to study participation. All study participants and/or their legal guardians provided written informed consent for the collection and subsequent analysis of verbal and written responses.

Research Study

Analyzing a case study: CURE and comparison students.

Teacher cohorts and pre/post tests were used to investigate whether CURE professional development and curriculum materials made a significant difference in high school students’ abilities to analyze a case study and justify their positions. Cohort 1 teachers (N = 6) received CURE professional development and used the Bioethics 101 curriculum with their students (N = 323); Cohort 2 teachers (N = 6) did not receive professional development until after their participation in the study and did not use the curriculum with their students (N = 108). Cohort 2 students were given the test case study and questions, but with only traditional science instruction during the semester. Each Cohort was further divided into two groups (A and B). Students in Group A were asked to complete a pre-test prior to the case study, while students in Group B did not. All four student groups completed a post-test after analysis of the case study. This four-group model ( Table 1 ) allowed us to assess: 1) the effect of CURE treatment relative to conventional education practices, 2) the effect of the pre-test relative to no pre-test, and 3) the interaction between the pre-test and CURE treatment condition. Random assignment of students to treatment and comparison groups was not possible; consequently we used existing intact classes. In all, 431 students and 12 teachers participated in the research study ( Table 2 ).

[Table 1: https://doi.org/10.1371/journal.pone.0036791.t001]

[Table 2: https://doi.org/10.1371/journal.pone.0036791.t002]

In order to assess the acquisition of higher-order justification skills, students used the summative assessment provided in our curriculum as the pre- and post-test. We designed the curriculum to scaffold students’ ability to write a persuasive bioethical position; by the time they participated in the assessment, Cohort 1 students had opportunities to discuss the elements of a strong justification as well as practice in analyzing case studies. For our research, both Cohort 1 and 2 students were asked to analyze the case study of “Ashley X” ( Table S2 ), a young girl with a severe neurological impairment whose parents wished to limit her growth through a combination of interventions so that they could better care for her. Students were asked to respond to the ethical question: “Should one or more medical interventions be used to limit Ashley’s growth and physical maturation? If so, which interventions should be used and why?” In their answer, students were encouraged to develop a well-reasoned written position by responding to five questions that reflected elements of a strong justification. One difficulty in evaluating a multifaceted science-related learning task (analyzing a bioethical case study and justifying a position) is that a traditional multiple-choice assessment may not adequately reflect the subtlety and depth of student understanding. We used a rubric to assess student responses to each of the following questions (Q) on a scale of 1 to 4; these questions represent key elements of a strong justification for a bioethical argument:

  • Q1: Student Position: What is your decision?
  • Q2: Factual Support: What facts support your decision? Is there missing information that could be used to make a better decision?
  • Q3: Interests and Views of Others: Who will be impacted by the decision and how will they be impacted?
  • Q4: Ethical Considerations: What are the main ethical considerations?
  • Q5: Evaluating Alternative Options: What are some strengths and weaknesses of alternate solutions?

In keeping with our focus on the process of reasoning rather than on having students draw any particular conclusion, we did not assess students on which position they took, but on how well they stated and justified the position they chose.

We used a rubric scoring guide to assess student learning, which aligned with the complex cognitive challenges posed by the task ( Table S3 ). Assessing complex aspects of student learning is often difficult, especially evaluating how students represent their knowledge and competence in the domain of bioethical reasoning. Using a scoring rubric helped us more authentically score dimensions of students' learning and their depth of thinking. An outside scorer, who had previously participated in CURE workshops, has secondary science teaching experience, and holds a Master's degree in Bioethics, blindly scored all student pre- and post-tests. Development of the rubric was an iterative process, refined after analyzing a subset of surveys. Once finalized, we confirmed the consistency and reliability of the rubric and grading process by re-testing a subset of student surveys randomly selected from all participating classes. The Cronbach's alpha reliability result was 0.80 [19].
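As a rough illustration of how a reliability figure like the reported 0.80 can be computed, the sketch below applies the standard Cronbach's alpha formula to a matrix of rubric scores. The scores here are invented for demonstration; only the formula reflects the procedure, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of composite score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative rubric scores (1-4) on questions Q1-Q5 for six students;
# these are made-up values, not data from the study.
rubric = np.array([
    [3, 2, 2, 1, 2],
    [4, 3, 3, 2, 2],
    [2, 2, 1, 1, 1],
    [3, 3, 2, 2, 2],
    [4, 4, 3, 3, 3],
    [2, 1, 2, 1, 1],
])
print(round(cronbach_alpha(rubric), 2))
```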

The rubric closely followed the framework introduced through the curricular materials and reinforced through other case study analyses. For example, under Q2, Factual Support, a student rated 4 out of 4 if their response demonstrated the following:

  • The justification uses the relevant scientific reasons to support student’s answer to the ethical question.
  • The student demonstrates a solid understanding of the context in which the case occurs, including a thoughtful description of important missing information.
  • The student shows logical, organized thinking. Both facts supporting the decision and missing information are presented at levels exceeding standard (as described above).

An example of a student response that received the highest rating for Q2 asking for factual support is: “Her family has a history of breast cancer and fibrocystic breast disease. She is bed-bound and completely dependent on her parents. Since she is bed-bound, she has a higher risk of blood clots. She has the mentality of an infant. Her parents’ requests offer minimal side effects. With this disease, how long is she expected to live? If not very long then her parents don’t have to worry about growth. Are there alternative measures?”

In contrast, a student rated a 1 for responses that had the following characteristics:

  • Factual information relevant to the case is incompletely described or is missing.
  • Irrelevant information may be included and the student demonstrates some confusion.

An example of a student response that rated a 1 for Q2 is: “She is unconscious and doesn’t care what happens.”

All data were entered into SPSS (Statistical Package for the Social Sciences) and analyzed for means, standard deviations, and statistically significant differences. An Analysis of Variance (ANOVA) was used to test for significant overall differences between the two cohort groups. Pre-test and post-test composite scores were calculated for each student by adding individual scores for each item on the pre- and post-tests. The composite score on the post-test was identical in form and scoring to the composite score on the pre-test. The effect of the CURE treatment on post-test composite scores is referred to as the Main Effect, and was determined by comparing the post-test composite scores of the Cohort 1 (CURE) and Cohort 2 (Comparison) groups. In addition, Cohort 1 and Cohort 2 mean scores for each test question (Questions 1–5) were compared within and between cohorts using t-tests.
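Although the authors used SPSS, the same pipeline (composite scores, a one-way ANOVA for the main effect, and per-question t-tests) can be sketched in Python with SciPy. The values below are synthetic, drawn to echo the reported group means; they illustrate the analysis, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic post-test composite scores (sum of five rubric items), with
# means echoing the reported values (~10.7 CURE, ~9.2 comparison).
cure_post = rng.normal(10.7, 2.5, size=323)
comparison_post = rng.normal(9.2, 2.5, size=108)

# Main effect of treatment: one-way ANOVA on post-test composites.
f_stat, p_val = stats.f_oneway(cure_post, comparison_post)
df_within = len(cure_post) + len(comparison_post) - 2
print(f"ANOVA: F(1, {df_within}) = {f_stat:.2f}, p = {p_val:.4g}")

# Between-cohort comparison on a single question (here, Q3), as in the
# per-item t-tests; again with illustrative means.
cure_q3 = rng.normal(2.2, 0.8, size=323)
comp_q3 = rng.normal(1.6, 0.8, size=108)
t_stat, p_q3 = stats.ttest_ind(cure_q3, comp_q3)
print(f"Q3: t = {t_stat:.2f}, p = {p_q3:.4g}")
```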

CURE student perceptions of curriculum effect.

During prior program evaluations, we asked teachers to identify what they believed to be the main impacts of bioethics instruction on students. From this earlier work, we identified several themes. These themes, listed below, were further tested in our current study by asking students in the treatment group to assess themselves in these five areas after participation in the lesson, using a retrospective pre-test design to measure self-reported changes in perceptions and abilities [20] .

  • Interest in the science content of class (before/after) participating in the Ethics unit.
  • Ability to analyze issues related to science and society and make well-justified decisions (before/after) participating in the Ethics unit.
  • Awareness of ethics and ethical issues (before/after) participating in the Ethics unit.
  • Understanding of the connection between science and society (before/after) participating in the Ethics unit.
  • Ability to listen to and discuss different viewpoints (before/after) participating in the Ethics unit.

After Cohort 1 (CURE) students participated in the Bioethics 101 curriculum, we asked them to indicate the extent to which they had changed in each of the theme areas we had identified using Likert-scale items on a retrospective pre-test design [21], with 1 = None and 5 = A lot! We used paired t-tests to examine self-reported changes in their perceptions and abilities. The retrospective design avoids response-shift bias that results from overestimation or underestimation of change, since both before and after information is collected at the same time [20].
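A minimal sketch of that retrospective analysis follows. Because both the "before" and "after" ratings come from the same post-unit survey, each student contributes a matched pair of responses, which is why a paired t-test is the appropriate test. The Likert data below are simulated, with means echoing the values later reported for the "awareness of ethical issues" item.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated 1-5 Likert responses for one retrospective item; means echo
# the reported values (~2.7 "before", ~4.1 "after"). Each row index is
# one student, so the two arrays form matched pairs.
before = np.clip(np.round(rng.normal(2.7, 0.9, size=309)), 1, 5)
after = np.clip(np.round(rng.normal(4.1, 0.7, size=309)), 1, 5)

# Paired t-test on the self-reported change.
t_stat, p_val = stats.ttest_rel(before, after)
print(f"paired t({len(before) - 1}) = {t_stat:.2f}, p = {p_val:.2e}")
```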

Student Demographics

Demographic information is provided in Table 3 . Of those students who reported their gender, a larger number were female (N = 258) than male (N = 169), 60% and 40%, respectively, though female students represented a larger proportion of Cohort 1 than Cohort 2. Students ranged in age from 14 to 18 years old; the average age of the students in both cohorts was 15. Students were enrolled in a variety of science classes (mostly Biology or Honors Biology). Because NIH recognizes a difference between race and ethnicity, students were asked to respond to both demographic questions. Students in both cohorts were from a variety of ethnic and racial backgrounds.

[Table 3: https://doi.org/10.1371/journal.pone.0036791.t003]

Pre- and Post-Test Results for CURE and Comparison Students

Post-test composite means for each cohort (1 and 2) and group (A and B) are shown in Table 4. Students receiving CURE instruction earned significantly higher (p<0.001) composite mean scores than students in comparison classrooms. The post-test composite mean for Cohort 1 (CURE) students (N = 323) was 10.73, while Cohort 2 (Comparison) students (N = 108) had a post-test composite mean of 9.16. The ANOVA results (Table 5) showed significant differences in the ability to craft strong justifications between Cohort 1 (CURE) and Cohort 2 (Comparison) students, F(1, 429) = 26.64, p<0.001.

[Table 4: https://doi.org/10.1371/journal.pone.0036791.t004]

[Table 5: https://doi.org/10.1371/journal.pone.0036791.t005]

We also examined whether the pre-test had a priming effect on the students' scores, because it provides an opportunity to practice or think about the content. The pre-test would not have this effect on the comparison group because they were not exposed to CURE teaching or materials. If the pre-test provided a practice or priming effect, this would result in higher post-test performance by CURE students receiving the pre-test than by CURE students not receiving the pre-test. For this comparison, F(1, 321) = 0.10, p = 0.92. This result suggests that the differences between the CURE and comparison groups are attributable to the treatment condition and not a priming effect of the pre-test.
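The four-group layout behind this check is essentially a Solomon-style factorial design (treatment crossed with pre-test exposure), so the priming question can equivalently be framed as a two-way ANOVA in which a significant treatment-by-pretest interaction, or a pretest effect within the CURE cohort, would signal priming. The sketch below uses statsmodels on synthetic scores; the per-group sizes are assumed splits of the reported cohort totals, not the study's actual group counts.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)

# Synthetic post-test composites for the four design groups; group sizes
# are assumed splits of the reported cohort totals (323 and 108).
groups = [("CURE", "pretest", 10.7, 160), ("CURE", "none", 10.7, 163),
          ("comparison", "pretest", 9.2, 54), ("comparison", "none", 9.2, 54)]
data = pd.concat(
    [pd.DataFrame({"score": rng.normal(mean, 2.5, size=n),
                   "treatment": treat, "pretest": pre})
     for treat, pre, mean, n in groups],
    ignore_index=True,
)

# Two-way ANOVA with interaction; with no simulated priming effect, the
# pretest term and the interaction should come out non-significant,
# mirroring the pattern reported in the paper.
model = ols("score ~ C(treatment) * C(pretest)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```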

After differences in main effects were investigated, we analyzed differences between and within cohorts on individual items (Questions 1–5) using t-tests. The mean scores of individual questions for each cohort are shown in Figure 1. There were no significant differences between Cohort 1 (CURE) and Cohort 2 (Comparison) on pre-test scores. In fact, for Q5, the mean pre-test score for the Cohort 2 (Comparison) group was slightly higher (1.8) than that of the Cohort 1 (CURE) group (1.6). On the post-test, the Cohort 1 (CURE) students significantly outscored the Cohort 2 (Comparison) students on all questions; Q1, Q3, and Q4 were significant at p<0.001, Q2 was significant at p<0.01, and Q5 was significant at p<0.05. The largest post-test difference between Cohort 1 (CURE) students and Cohort 2 (Comparison) students was for Q3, with an increase of 0.6; all the other questions showed changes of 0.3 or less. Comparing Cohort 1 (CURE) post-test performance on individual questions yields the following results: scores were highest for Q1 (mean = 2.8), followed by Q3 (mean = 2.2), Q2 (mean = 2.1), and Q5 (mean = 1.9). The lowest Cohort 1 (CURE) post-test scores were associated with Q4 (mean = 1.8).


Mean scores for individual items of the pre-test for each cohort revealed no differences between groups for any of the items (Cohort 1, CURE, N = 323; Cohort 2, Comparison, N = 108). Post-test gains of Cohort 1 (CURE) relative to Cohort 2 (Comparison) were statistically significant for all questions. (Question (Q) 1) What is your decision? (Q2) What facts support your decision? Is there missing information that could be used to make a better decision? (Q3) Who will be impacted by the decision and how will they be impacted? (Q4) What are the main ethical considerations? and (Q5) What are some strengths and weaknesses of alternate solutions? Specifically: (Q1), (Q3), and (Q4) were significant at p<0.001 (***); (Q2) was significant at p<0.01 (**); and (Q5) was significant at p<0.05 (*). Lines represent standard deviations.

https://doi.org/10.1371/journal.pone.0036791.g001
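The item-level between-cohort comparisons are independent-samples t-tests; a minimal sketch under the same caveat (simulated scores with a hypothetical spread, not the study's data) follows:

```python
# Illustrative sketch: independent-samples t-test on a single item (Q3),
# comparing the CURE and comparison cohorts. The scores are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
q3_cure = rng.normal(loc=2.2, scale=0.8, size=323)  # Cohort 1 (CURE) Q3 post-test
q3_comp = rng.normal(loc=1.6, scale=0.8, size=108)  # Cohort 2 (Comparison) Q3 post-test

t_stat, p_value = stats.ttest_ind(q3_cure, q3_comp)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```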

Overall, across all four groups, mean scores for Q1 were highest (2.6), while scores for Q4 were lowest (1.6). When comparing within-cohort scores on the pre-test versus post-test, Cohort 2 (Comparison Group) showed little to no change, while CURE students improved on all test questions.

CURE Student Perceptions of Curriculum Effect

After using our resources, Cohort 1 (CURE) students showed highly significant gains (p<0.001) in all areas examined: interest in science content, ability to analyze socio-scientific issues and make well-justified decisions, awareness of ethical issues, understanding of the connection between science and society, and the ability to listen to and discuss viewpoints different from their own (Figure 2). Overall, students gave the highest score to their ability to listen to and discuss viewpoints different from their own after participating in the CURE unit (mean = 4.2). Also highly rated were the changes in understanding of the connection between science and society (mean = 4.1) and the awareness of ethical issues (mean = 4.1); these two perceptions also showed the largest pre-post change (from 2.8 to 4.1 and 2.7 to 4.1, respectively).


Mean scores for individual items of the retrospective items on the post-test for Cohort 1 students revealed significant gains (p<0.001) in all self-reported items: Interest in science (N = 308), ability to Analyze issues related to science and society and make well-justified decisions (N = 306), Awareness of ethics and ethical issues (N = 309), Understanding of the connection between science and society (N = 308), and the ability to Listen and discuss different viewpoints (N = 308). Lines represent standard deviations.

https://doi.org/10.1371/journal.pone.0036791.g002

NWABR’s teaching materials support both general ethics and bioethics education and specific topics such as embryonic stem cell research. These resources were developed to provide teachers with classroom strategies, ethics background, and decision-making frameworks. Teachers are then prepared to share their understanding with their students, and to support their students in using analysis tools and participating in effective classroom discussions. Our current research grew out of a desire to measure the effectiveness of our professional development and teaching resources in fostering student ability to analyze a complex bioethical case study and to justify their positions.

Consistent with the findings of SSI researchers and our own prior anecdotal observations of teacher classrooms and student work, we found that students improve in their analytical skill when provided with reasoning frameworks and background in concepts such as beneficence, respect, and justice. Our research demonstrates that structured reasoning approaches can be effectively taught at the secondary level and that they can improve student thinking skills. After teachers participated in a two-week professional development workshop and utilized our Bioethics 101 curriculum, within a relatively short time period (five lessons spanning approximately one to two weeks), students grew significantly in their ability to analyze a complex case and justify their position compared to students not exposed to the program. Often, biology texts present a controversial issue and ask students to “justify their position,” but teachers have shared with us that students frequently do not understand what makes a position or argument well-justified. By providing students with opportunities to evaluate sample justifications, and by explicitly introducing a set of elements that students should include in their justifications, we have facilitated the development of this important cognitive skill.

The first part of our research examined the impact of CURE instruction on students’ ability to analyze a case study. Although students grew significantly in all areas, the highest scores for the Cohort 1 (CURE) students were found in response to Q1 of the case analysis, which asked them to clearly state their own position, and represented a relatively easy cognitive task. This question also received the highest score in the comparison group. Not surprisingly, students struggled most with Q4 and Q5, which asked for the ethical considerations and the strengths and weaknesses of different solutions, respectively, and which tested specialized knowledge and sophisticated analytical skills. The area in which we saw the most growth in Cohort 1 (CURE) (both in comparison to the pre-test and in relation to the comparison group) was in students’ ability to identify stakeholders in a case and state how they might be impacted by a decision (Q3). Teachers have shared with us that secondary students are often focused on their own needs and perspectives; stepping into the perspectives of others helps enlarge their understanding of the many views that can be brought to bear upon a socio-scientific issue.

Many of our teachers go far beyond these introductory lessons, revisiting key concepts throughout the year as new topics are presented in the media or as new curricular connections arise. Although we have observed this phenomenon for many years, it has been difficult to evaluate these types of interventions, as so many teachers implement the concepts and ideas differently in response to their unique needs. Some teachers have used the Bioethics 101 curriculum as a means for setting the tone and norms for the entire year in their classes and fostering an atmosphere of respectful discussion. These teachers note that the “opportunity cost” of investing time in teaching basic bioethical concepts, decision-making strategies, and justification frameworks pays off over the long run. Students’ understanding of many different science topics is enhanced by their ability to analyze issues related to science and society and make well-justified decisions. Throughout their courses, teachers are able to refer back to the core ideas introduced in Bioethics 101, reinforcing the wide utility of the curriculum.

The second part of our research focused on changes in students’ self-reported attitudes and perceptions as a result of CURE instruction. Obtaining accurate and meaningful data to assess students’ self-reported perceptions can be difficult, especially when a program is distributed across multiple schools. The traditional pretest-posttest design assumes that students use the same internal standard to judge attitudes or perceptions. Considerable empirical evidence suggests that program effects based on pre-posttest self-reports are masked because people either overestimate or underestimate their pre-program perceptions [20], [22]–[26]. Moore and Tananis [27] report that response shift can occur in educational programs, especially when they are designed to increase students’ awareness of a specific construct that is being measured. The retrospective pre-test design (RPT), which was used in this study, has gained increasing prominence as a convenient and valid method for measuring self-reported change. Because both before and after information is collected at the same time, RPT has been shown to reduce response-shift bias, providing a more accurate assessment of actual effect [20]. It is also convenient to implement, provides comparison data, and may be more appropriate in some situations [26]. Using student self-reported measures concerning perceptions and attitudes is also a meta-cognitive strategy that allows students to think about their learning and justify where they believe they are at the end of a project or curriculum compared to where they were at the beginning.

Our approach resulted in a significant increase in students’ own perceived growth in several areas related to awareness, understanding, and interest in science. Our finding that student interest in science can be significantly increased through a case-study based bioethics curriculum has implications for instruction. Incorporating ethical dilemmas into the classroom is one strategy for increasing student motivation and engagement with science content. Students noted the greatest changes in their own awareness of ethical issues and in understanding the connection between science and society. Students gave the highest overall rating to their ability to listen to and discuss viewpoints different from their own after participation in the bioethics unit. This finding also has implications for our future citizenry; in an increasingly diverse and globalized society, students need to be able to engage in civil and rational dialogue with others who may not share their views.

Conducting research studies about ethical learning in secondary schools is challenging; recruiting teachers for Cohort 2 and obtaining consent from students, parents, and teachers for participation was particularly difficult, and many teachers faced constraints from district regulations about curriculum content. Additional studies are needed to clarify the extent to which our curricular materials alone, without accompanying teacher professional development, can improve student reasoning skills.

Teacher pre-service training programs rarely incorporate discussion of how to address ethical issues in science with prospective educators. Likewise, with some notable exceptions, such as the work of the University of Pennsylvania High School Bioethics Project, the Genetic Science Learning Center at the University of Utah, and the Kennedy Institute of Ethics at Georgetown University, relatively few resources exist for high school curricular materials in this area. Teachers have shared with us that they know such issues are important and engaging for students, but they do not have the experience in either ethical theory or in managing classroom discussion to feel comfortable teaching bioethics topics. After participating in our workshops or using our teaching materials, teachers shared that they are better prepared to address such issues with their students, and that students are more engaged in science topics and better able to see the real-world context of what they are learning.

Preparing students for a future in which they have access to personalized genetic information, or need to vote on proposals for stem cell research funding, necessitates providing them with the tools required to reason through a complex decision containing both scientific and ethical components. Students begin to realize that, although there may not be an absolute “right” or “wrong” decision to be made on an ethical issue, neither is ethics purely relative (“my opinion versus yours”). They come to realize that all arguments are not equal; there are stronger and weaker justifications for positions. Strong justifications are built upon accurate scientific information and solid analysis of ethical and contextual considerations. An informed citizenry that can engage in reasoned dialogue about the role science should play in society is critical to ensure the continued vitality of the scientific enterprise.

“I now bring up ethical issues regularly with my students, and use them to help students see how the concepts they are learning apply to their lives…I am seeing positive results from my students, who are more clearly able to see how abstract science concepts apply to them.” – CURE Teacher

“In ethics, I’ve learned to start thinking about the bigger picture. Before, I based my decisions on how they would affect me. Also, I made decisions depending on my personal opinions, sometimes ignoring the facts and just going with what I thought was best. Now, I know that to make an important choice, you have to consider the other people involved, not just yourself, and take all information and facts into account.” – CURE Student

Supporting Information

Bioethics 101 Lesson Overview.

https://doi.org/10.1371/journal.pone.0036791.s001

Case Study for Assessment.

https://doi.org/10.1371/journal.pone.0036791.s002

Grading Rubric for Pre- and Post-Test: Ashley’s Case.

https://doi.org/10.1371/journal.pone.0036791.s003

Acknowledgments

We thank Susan Adler, Jennifer M. Pang, Ph.D., Leena Pranikay, and Reitha Weeks, Ph.D., for their review of the manuscript, and Nichole Beddes for her assistance scoring student work. We also thank Carolyn Cohen of Cohen Research and Evaluation, former CURE Evaluation Consultant, who laid some of the groundwork for this study through her prior work with us. We also wish to thank the reviewers of our manuscript for their thoughtful feedback and suggestions.

Author Contributions

Conceived and designed the experiments: JTC LJC. Performed the experiments: LJC. Analyzed the data: LJC JTC DNK. Contributed reagents/materials/analysis tools: JCG. Wrote the paper: JTC LJC DNK JCG. Served as Principal Investigator on the CURE project: JTC. Provided overall program leadership: JTC. Led the curriculum and professional development efforts: JTC JCG. Raised funds for the CURE program: JTC.

  • 1. Bell P (2004) Promoting students’ argument construction and collaborative debate in the science classroom. Mahwah, NJ: Erlbaum.
  • 3. Toulmin S (1958) The Uses of Argument. Cambridge: Cambridge University Press.
  • 6. Zeidler DL, Sadler TD, Simmons ML, Howes EV (2005) Beyond STS: A research-based framework for socioscientific issues education. Wiley InterScience. pp. 357–377.
  • 8. AAAS (1990) Science for All Americans. New York: Oxford University Press.
  • 9. National Research Council (1996) National Science Education Standards. Washington, DC: National Academies Press.
  • 10. National Research Council (2011) A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Washington, DC: National Academies Press.
  • 11. National Board for Professional Teaching Standards (2007) Adolescence and Young Adulthood Science Standards. Arlington, VA.
  • 17. Beauchamp T, Childress JF (2001) Principles of biomedical ethics. New York: Oxford University Press.
  • 18. Chowning JT, Griswold JC (2010) Bioethics 101. Seattle, WA: NWABR.
  • 26. Klatt J, Taylor-Powell E (2005) Synthesis of literature relative to the retrospective pretest design. Presentation to the 2005 Joint CES/AEA Conference, Toronto.


How Can Critical Thinking Be Used to Assess the Credibility of Online Information?

Albie van Zyl

Marita Turpin

Machdel Matthee

Department of Informatics, University of Pretoria, Pretoria, 0001 South Africa

The prevalence of unverified information on the internet, and its potential adverse effects on society, has led to the development of a number of models and theories for assessing the credibility of online information. Existing research follows two distinct approaches: the first consists of checklist approaches or normative guidelines on how to assess information, whereas the second provides descriptive models and theories of how users actually go about assessing credibility. Both approaches consider aspects related to the presentation and content of the information; however, neither covers the quality of the reasoning in the content. Critical thinking is considered an increasingly important 21st century workplace skill. This paper investigates the potential value of using critical thinking in assessing the credibility of online information. The paper commences with an overview of existing approaches for assessing the credibility of online information. It then argues that the presence of a well-developed argument in online information is an indication of credibility. Critical thinking also helps to evaluate the credibility of evidence. These thinking skills can be developed through training: we show how a group of first-year Information Systems students were able to engage more critically with the content of online news after a course on critical thinking. This paper contributes to the literature on the assessment of the credibility of online information.

Introduction

The internet has become indispensable as a source of information and news. Given the vast amount of information available as well as the large number of information sites, it has become increasingly difficult to judge the credibility of online information [1]. Metzger argues that in the past, traditional publishing houses acted as gatekeepers of the information published: there was a cost barrier to printing, and the print process allowed for quality control. In the digital age, anyone can be an author of online content. Digital information and content can be published anonymously, and easily plagiarized and altered [1, 2]. Online news platforms are in a continual race against time to be the first to publish online, and in the process they sacrifice quality control. As a result, the gatekeeping function of evaluating the credibility of online information has shifted to individual users.

To date, scholars in information literacy have developed checklists to assist users in assessing the credibility of online information, as well as various theories and models to describe how users evaluate information in practice [ 2 – 5 ]. These models highlight aspects such as the influence of the subjectivity of the user in evaluating content, the process of evaluation as well as the cognitive heuristics that users typically apply during evaluation. The models also recognize that in the era of social computing and social media, evaluation has a strong social component [ 6 ].

In an overview of studies on the assessment of the credibility of online information, it was found that neither the established normative guidelines for evaluating credibility nor the descriptive models of credibility evaluation consider the quality of reasoning and argumentation contained in the information being evaluated. This is surprising, since critical thinking is generally regarded as an important information literacy skill, and is in addition viewed as an important 21st century workplace skill [7, 8].

In this paper, we present a case for the use of critical thinking as a means to assess the quality and credibility of online content. We suggest how critical thinking could be used to enhance current credibility assessment practices. Existing credibility assessment processes share with critical thinking a concern for the credibility of the evidence presented to substantiate the findings of an online article. Whereas credibility models mainly focus on presentation and content, critical thinking extends the evaluation of content by evaluating the quality of the argument presented. Admittedly, many fake (and other) news stories contain limited if any arguments to evaluate. While the absence of an argument is not enough to discredit an online article, its presence can be used as a quality indicator. The presence of a weak argument will reduce the perceived credibility of the claim or finding of an article, while a strong argument will enhance its credibility.

This paper commences with a short overview of existing guidelines and descriptive models for evaluating the credibility of online information. The common themes among these models are summarized. Next, the paper introduces the building blocks of critical thinking and proceeds to indicate how critical thinking is used for argument evaluation. A means to assess the credibility of online information is proposed that uses critical thinking in a way that recognizes and builds on previous work related to credibility assessment.

Existing Research on Assessing the Credibility of Online Information

Credibility refers to the believability of information [ 4 ]. Credibility is regarded to be subjective: it is not an objective attribute of an information source, but the subjective perception of believability by the information receiver [ 4 , 9 ]. As such, two different information receivers can have different assessments of the credibility of the same piece of information.

Research on assessing the credibility of online information can be categorized into research on normative guidelines (in other words, what should people be looking at when they assess credibility) and research related to descriptive models or theories (how people are assessing credibility in practice).

A Checklist for Information Credibility Assessment

The normative approach to the assessment of information credibility is promoted by the proponents of digital literacy, who aim to assist internet users in developing the skills required for evaluating online information. Their assumption is that online information can be evaluated in the same manner as information found elsewhere [ 1 ]. A checklist approach is usually followed, where the list covers the following five components: accuracy, authority, objectivity, currency, and coverage or scope [ 1 ]. Accuracy refers to the degree that the content is free from errors and whether the information can be verified elsewhere. It is an indication of the reliability of the information on the website. Authority refers to the author of the website, and whether the website provides contact details of the author and the organisation. It is also concerned with whether the website is recommended or endorsed by a trusted source. Objectivity considers whether the content is opinion or fact, and whether there is commercial interest, indicated for example by a sponsored link. Currency refers to the frequency of updates, and whether the date is visible. Coverage refers to the depth and comprehensiveness of the information [ 1 ]. In a checklist approach, a user is given a list of questions of things to look out for. For example, in terms of currency, the user has to look for evidence of when the page was last updated.
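To make the checklist concrete, its five components can be represented as a simple rubric; the sketch below is our own illustration (the questions are paraphrases of [1], and the equal-weight scoring is a hypothetical simplification):

```python
# Illustrative sketch of the five-component credibility checklist [1].
# The component questions are paraphrased; the equal-weight scoring is an
# assumption made for demonstration, not part of the cited literature.
CHECKLIST = {
    "accuracy":    "Is the content free of errors and verifiable elsewhere?",
    "authority":   "Are the author and organisation identified, with contact details?",
    "objectivity": "Is the content fact rather than opinion, free of commercial bias?",
    "currency":    "Is the page dated and updated regularly?",
    "coverage":    "Is the information deep and comprehensive?",
}

def credibility_score(answers: dict) -> float:
    """Fraction of checklist components the page satisfies (0.0 to 1.0)."""
    return sum(bool(answers.get(k)) for k in CHECKLIST) / len(CHECKLIST)

# A page that is accurate and current but anonymous, biased, and shallow:
print(credibility_score({"accuracy": True, "currency": True}))  # 0.4
```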

In a series of studies conducted by Metzger and her colleagues [1], it was found that even when supplied with a checklist, users rarely used it as intended. Currency, comprehensiveness, and objectivity were checked occasionally, whilst checking an author’s credentials was the least preferred activity. This correlates with findings by Eysenbach and Köhler [10], who indicate that the users in their study did not search for the sources behind their website information, or for how the information was compiled. This lack of thoroughness is ascribed to the users’ unwillingness to expend cognitive effort [6]. The apparent attempt by users to minimise cognitive effort has given rise to studies on how users apply cognitive heuristics as well as other means to assess credibility more quickly and with less effort. This research led to the development of a number of descriptive models and theories on how users assess credibility in practice.

Descriptive Models and Theories Related to Information Credibility Assessment

The Use of Cognitive Heuristics.

A number of studies indicate that internet users avoid laborious methods of information evaluation, and that they prefer to use more superficial cues, such as using the look and feel of a website as a proxy for credibility rather than analyzing the content [ 5 , 6 , 11 ]. When evaluating credibility, people tend to apply cognitive heuristics, which are mental short cuts or rules of thumb. Based on their previous experience people respond to cues and act on these subconsciously, without the need to spend mental effort [ 6 , 12 , 13 ]. Five heuristics are identified that users commonly apply to decide on the credibility of online content [ 6 ]. The reputation heuristic is applied when users recognize the source of the information as one they believe to be reputable, possibly because of brand familiarity or authority. The endorsement heuristic means that a source is believed to be credible if other people believe so too; either people they know or people that have given it a good rating. The consistency heuristic means that if similar information about something appears on multiple websites, the information is deemed to be credible. The expectancy violation heuristic is a strong negative heuristic. Information that is contrary to the user’s own beliefs is not deemed to be credible. Lastly, when using the persuasive intent heuristic , users assess whether there is an attempt to persuade them or sell something to them. In this case, the information is perceived to be not credible because there is a perceived ulterior motive or an attempt to manipulate the user.

The Prominence-Interpretation Theory.

The Prominence-Interpretation theory comprises two interlinked components that describe what happens when a user assesses the credibility of a website [14]. First, a user notices something (prominence), and then they interpret what they see (interpretation). If either of the two components is missing, there is no credibility assessment. A user will notice existing and new elements of a website and interpret the elements for credibility in an iterative fashion until satisfied that a credibility assessment can be made. Alternatively, the user may stop when they reach a constraint, such as running out of time [14]. A visual representation of the Prominence-Interpretation theory is provided in Fig. 1.

Fig. 1. Prominence-Interpretation theory [14]

Prominence refers to the likelihood that certain elements will be noticed or perceived by the user. The user must first notice an element in order to form a judgement of the credibility of the information; if the user does not notice the element, it plays no role. Five factors influence prominence: involvement, topic, task, experience, and individual differences. The most dominant influence is user involvement, referring to the user’s motivation and ability to engage with content. Topic refers to the type of website the user visits. The task is the reason why the user is visiting the website. Experience refers to the experience of the user in relation to the subject or topic of the website. Individual differences refer to the user’s learning style, literacy level, or need for cognition. When a user’s involvement is high and the user has expert-level experience, the user will cognitively notice more elements [14].

Interpretation refers to the user’s judgement of the element under review. For example, a broken link on a website will be interpreted as bad and lead to a lower credibility assessment of the website. Interpretation of elements is affected by a user’s assumptions, skills, knowledge and context.

Consolidation.

When comparing the research on the use of heuristics [ 6 ] with the Prominence-Interpretation theory [ 14 ], one can see that the use of heuristics fits well into the “interpretation” component of Prominence-Interpretation theory.

A Web Credibility Framework.

Fogg’s web credibility framework [15] contains the categories of operator, content, and design. Operator refers to the source of the website: the person who runs and maintains it. A user makes a credibility judgement based on the person or organisation operating the website. Content refers to what the site provides in terms of content and functionality. Of importance are the currency, accuracy, and relevance of the content, and endorsements by a respected outside organisation. Design refers to the structure and layout of the website. Design has four elements, namely information design (structure of the information), technical design (functioning of the site on a technical level, including search), aesthetic design (look, feel, and professionalism of the design), and interaction design (user experience, user interaction, and navigation) [15].

The web credibility framework was extended by Choi and Stvilia [ 3 ] who divided each of the three categories (operator, content and design) into the two dimensions of trustworthiness and expertise, thereby forming what is called the Measures of Web Credibility Assessment Framework.

When consolidating the web credibility framework [15] and its extension [3] with the work on credibility assessment presented in the prior sections, one can say that the web credibility frameworks contribute to both prominence and interpretation. The web design contributes to the prominence, or noticeability, of the information. Further, the level of professionalism of the design can be interpreted by means of a heuristic such as the reputation heuristic. The website operator and content, when noticed, are interpreted by means of evaluation heuristics. Hence, the work presented in Sects. 2.2.1–2.2.3 can be reconciled into different aspects of online information that, when noticed, are interpreted by means of heuristics.

Iterative Models on the Evaluation of Online Information.

According to the Prominence-Interpretation theory [ 14 ] the interpretation of information occurs in an iterative fashion until a credibility assessment can be made. Two other models also recognize the iterative nature of credibility assessment. These are the cognitive authority model [ 2 ] and Wathen and Burkell’s model [ 16 ].

With the cognitive authority model, the information seeker iteratively assesses the authority and credibility of online content by considering the author, document, institution and affiliations [ 2 ]. These are integrated into a credibility judgement. The model is similar to the checklist [ 1 ], but proposes that users employ the technology available to them to make the judgement. Like the checklist, the cognitive authority model is normative.

Wathen and Burkell [16] also propose an iterative way of assessment. According to their research, users first do a surface credibility check based on the appearance and presentation of the website. Secondly, the user looks for message credibility by assessing the source and the content of the message. Lastly, the content itself is evaluated. During this final stage, sense-making of the content occurs, depending on factors such as the user’s prior level of knowledge on the topic. If, at any stage, the user becomes aware of a reason to doubt the credibility of the information, the iterative process is stopped. Wathen and Burkell’s model [16] is normative but also incorporates descriptive research on information evaluation.

A Synthesised Summary of Existing Work on the Credibility Assessment Process

To synthesise the joint findings from previous work on credibility assessment of online information:

  • Credibility cues need to be noticed before they are processed [ 14 ].
  • The evaluation process is iterative and moves from surface level checks (such as look and feel of a website) through to engagement with the content [ 14 , 16 ].
  • From the onset of the evaluation process, cognitive or judgmental heuristics are applied to assess credibility. This is especially true during the interpretation phase, when a user evaluates the content itself [ 1 , 4 – 6 ]. Judgmental heuristics are used in order to reduce cognitive effort as the user is inundated with information.
  • The evaluation process takes place in a social context, and some of the evaluation cues are socially generated, such as the number of website visitors, user recommendations, or social rankings [6].

In the section that follows, the principles of critical thinking will be introduced. This is in order to assess how critical thinking might be used to evaluate online content in the light of what is already known about credibility evaluation.

Critical Thinking

The Foundation for Critical Thinking defines critical thinking as “that mode of thinking - about any subject, content, or problem - in which the thinker improves the quality of his or her thinking by skillfully analyzing, assessing, and reconstructing it” [17]. Some authors consider it an indispensable skill in problem solving. Halpern suggests a taxonomy of critical thinking covering a broad range of skills: (1) verbal reasoning skills, (2) argument analysis skills, (3) skills in thinking as hypothesis testing, (4) dealing with likelihood and uncertainty, and (5) decision-making and problem-solving skills [18]. The aspect of critical thinking of interest in this paper relates to the analysis of arguments. A useful definition of critical thinking is therefore the one suggested by Tiruneh and his co-authors [19]: critical thinking is the ability to analyse and evaluate arguments according to their soundness and credibility, respond to arguments, and reach conclusions through deduction from given information [19]. Booth et al. [20], basing their work on the ideas of Toulmin et al. [21], consider a basic argument to consist of a claim (or conclusion), backed by reasons, which are supported by evidence. An argument is stronger if it acknowledges and responds to other views and, if necessary, shows how a reason is relevant to a claim by drawing on a general principle (referred to as a warrant).

The following argument, adopted from [ 20 : 112] illustrates these components: “TV violence can have harmful psychological effects on children” (CLAIM), “because their constant exposure to violent images makes them unable to distinguish fantasy from reality” (REASON). “Smith (1997) found that children ages 5–7 who watched more than 3 h of violent television a day were 25% more likely to say that what they saw on television was ‘really happening’” (EVIDENCE). “Of course, some children who watch more violent entertainment might already be attracted to violence” (ACKNOWLEDGEMENT). “But Jones (1999) found that children with no predisposition to violence were as attracted to violent images as those with a violent history” (RESPONSE).

Booth and his co-authors [ 20 : 114] use the following argument to illustrate the use of a warrant in an argument: “We are facing significantly higher health care costs in Europe and North America (CLAIM) because global warming is moving the line of extended hard freezes steadily northward.” (REASON). In this case the relevance of the reason to the claim should be stated by a general principle: “When an area has fewer hard freezes, it must pay more to combat new diseases carried by subtropical insects no longer killed by those freezes” (WARRANT).

Of course, good arguments may need more than one reason in support of their conclusions, and complex arguments contain sub-arguments. However, the main components remain the same. Figure 2 summarizes the main components of a basic argument.

Fig. 2. The core components of an argument [20: 116]
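The components of this scheme lend themselves to a small data structure; the sketch below is our own illustration (the class design is not from [20]), populated with the TV-violence example quoted above:

```python
# Illustrative sketch: core components of a basic argument after Booth et
# al. [20]. The dataclass design is ours; the example text is abridged
# from the TV-violence argument quoted earlier in this section.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Argument:
    claim: str                              # the conclusion being argued for
    reasons: list                           # why the claim should be accepted
    evidence: list                          # support for the reasons
    acknowledgement: Optional[str] = None   # a competing view the argument concedes
    response: Optional[str] = None          # the rebuttal to that view
    warrant: Optional[str] = None           # general principle linking reason to claim

tv_violence = Argument(
    claim="TV violence can have harmful psychological effects on children",
    reasons=["constant exposure to violent images makes children unable "
             "to distinguish fantasy from reality"],
    evidence=["Smith (1997): children ages 5-7 watching >3 h/day were 25% "
              "more likely to say what they saw was 'really happening'"],
    acknowledgement="some children may already be attracted to violence",
    response="Jones (1999): children with no predisposition to violence "
             "were as attracted to violent images as those with one",
)
```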

Critical thinking entails the identification of the core components of an argument (analysis) in order to judge its credibility and quality and to formulate a response to it. According to Butterworth and Thwaites [7], a good quality argument is one where the reasons are true or justified and where the conclusion follows from the reasons. By using these criteria in the evaluation of arguments, classical fallacies such as the post hoc fallacy or circular reasoning can be identified. In addition, the evaluation of an argument entails asking questions and finding counterexamples; a good quality argument will pre-empt objections or counterexamples and respond to them. Butterworth and Thwaites [7] consider a credible argument to be one which is plausible or believable (acknowledging that some highly improbable claims can be true) and which has a trusted source. Credibility is enhanced if the claim is corroborated by different sources with different kinds of evidence.

The Use of Critical Thinking in the Context of Existing Credibility Assessment Models

It is suggested that critical thinking be included in the credibility assessment process as follows. With reference to the Prominence-Interpretation theory [14], critical thinking can be applied during the interpretation phase, where it can be used to assess the quality of evidence as well as to evaluate the argument itself. It would only be used during a later stage in the iterative process of credibility assessment, possibly in the third phase of Wathen and Burkell’s iterative model [16].

Discussion: Potential Challenges to Using Critical Thinking to Assess the Credibility of Online Information

When considering the use of critical thinking to evaluate the credibility of online information, some challenges are apparent.

First, as indicated earlier, users who are flooded with information apply judgmental heuristics as a coping mechanism to reduce cognitive effort. They therefore prefer cues that give them immediate reason to believe or disbelieve the information presented to them. Argument evaluation is an exercise that requires cognitive effort, especially when a complex claim is presented. Users will therefore not go to the effort of thoroughly evaluating an argument unless there is high motivation to do so, for example when university students are looking for material to support the arguments in their essays.

A second challenge to the use of critical thinking in this context is that online news or other online content does not always contain an argument. A piece of news on social media may just consist of evidence. In that case, critical thinking would require the evaluation of the credibility of the evidence.

A third possible challenge is that, in an effort to mislead, the author of fake news may present a credible-looking argument on the basis of fake evidence that cannot readily be verified. Hence, while good argumentation is often associated with good quality content, this may not always be the case. However, the cognitive effort of trying to second-guess the veracity of a well-presented argument is so high that this is not a feasible task in everyday credibility assessment situations.

Addressing the Challenges

The above mentioned challenges could be addressed as follows.

The challenge of the cognitive effort required for critical thinking may be mitigated by means of training. As motivated earlier in the paper, critical thinking forms part of information literacy and is an important 21st century user skill. Training and regular exercise in argument evaluation make it an easier habit to apply. A number of universities have compulsory first-year information literacy courses, and this is where critical thinking can be introduced. The authors are involved in the teaching of critical thinking and problem solving to first-year IS students. A study with pre- and post-assessment exercises to determine the effect of the course was done during the first half of 2019. A total of 154 students participated in the pre-assessment and 166 students in the post-assessment. The objective of the course was not to train students to identify fake news but to analyse and evaluate arguments and to cultivate a critical attitude towards reading and interpreting texts.

Findings from a Course on Critical Thinking.

Pre-assessment: During the pre-assessment, students were asked several questions to test their critical thinking skills and one question to determine the credibility of a piece of information found online. The information presented to them [22] was fake news and presented an argument against the use of prison inmates to provide laughter in CBS sitcoms. In the pre-assessment, only 16% of students could identify it as fake news. Students who identified it as fake news applied most of the cognitive heuristics listed in Sect. 2.2. For example, a few students knew that The Onion is a website known for its satire articles (reputation heuristic). A handful of students applied the expectancy violation heuristic (“It just doesn’t make sense to me honestly”; “In today’s day and age, such practice would never be accepted seeing as people get offended by even the most futile things”; “In today’s age, laughter can be produced on computers or a group of laughs taken once and then played back whenever the producers feel”). The consistency heuristic was also used (“This is my first time hearing about it”). Quite a number of students pointed out the lack of credible evidence.

Post-assessment:

In the post-assessment, questions were asked to assess critical thinking skills in general, and the last question focused on fake news. Two different pieces of information were provided, one fake news and the other not (see Table 1).

Table 1.

Article 1 and Article 2

Both articles contained far-fetched claims. Article 1 [23] is an argument containing unsubstantiated claims, sweeping statements, and emotional language. Article 2 [24] was sourced from a ‘strange but true’ Sky News site; it is a report based on claims backed by credible evidence. Students were asked to determine which one is fake news and to provide an argument for their choice. The results are given in Table 2.

Table 2.

Responses on question to identify articles as fake news

Students who correctly identified Article 1 as fake news (56% of students) typically mentioned the relative obscurity of the website and the absence of names of experts (“there is said that experts were used in the article but none of the so called ‘experts’ names or institutions were called to show the research”). In other words, they applied the reputation heuristic. The following student applied the expectancy violation heuristic to (incorrectly) identify Article 1 as real news: (“Article 1 can be seen as real news because the facts are not absurd”). What was clearly noticeable was that in their assessment, most students used the critical thinking skills taught during the semester: they pointed out that the claims are not supported by evidence (“they state that there are parents who burned?? the film but no numbers are provided it could be 2 out of 1000 but nothing is stated to prove this reason”). They further mentioned the subjective nature of the article (“The use of adjectives such as ‘arrogant’, ‘disrespectful’, ‘envious’ makes the article sound extremely biased”) as well as its harsh language (“The article is also very opinionated and the language used is quite harsh”). They also found the reasoning to be faulty (“And the argument is unstructured – the ‘reasoning’ doesn’t lead up to a suitable conclusion.”)

Students who incorrectly identified Article 2 as fake news (56% of students) in general used the expectancy violation heuristic. They could not imagine sheep being school pupils (“Although article 2 comes from a reliable source the facts are absurd. [However] Article 1 can be seen as real news because the facts are not absurd”).

Discussion.

Article 1 is an argument, whereas Article 2 is a report. This explains why students were able to use critical thinking skills to evaluate Article 1. In Article 2, where no clear argument was present, critical thinking could only be applied to evaluate the evidence; students found the evidence to be specific and traceable, which contributed to its credibility. Only 36% of students were able to classify both articles correctly. However, the fact that only 8% said that neither article was fake news was encouraging, compared to the pre-assessment result in which 84% of students were not able to recognize the supplied article as fake news. The post-assessment results indicate that most students had developed a critical attitude towards the supplied texts.

Recommendations on Combining Critical Thinking and Cognitive Heuristics.

The use of critical thinking skills in identifying fake news can be complemented by applying the consistency heuristic [ 6 ] to seek for other online sources that carry similar evidence.

Lastly, since the assessment of credibility of online information has been found to be a socially interactive activity [ 6 ], the endorsement heuristic could be used to inquire on a social platform whether information is credible. For example, a hoax website can be visited to see if the information has been exposed by other users as a hoax.

This paper considered the work that has been done to date on the assessment of the credibility of online information. A concise overview was presented of some of the major contributions in this domain, and these contributions were synthesized into a list of common attributes that represent the key characteristics of credibility assessment models. Following this, the elements of critical thinking were introduced, and suggestions were made as to how critical thinking could be used for credibility assessment. The challenges related to the use of critical thinking in practice were also considered, along with suggestions for overcoming them. The outcomes of teaching critical thinking skills on IS students’ ability to identify fake news were discussed. Preliminary findings show that where fake news is presented as an argument, students use their skills of analysis and evaluation of arguments to identify it; where fake news is a report, students look for quality evidence.

This paper contributes to the literature on the assessment of the credibility of online information. It argues that, and suggests how, the important 21st century skill of critical thinking can be applied to assess the credibility of online information. In doing so, this paper makes a contribution in terms of the responsible design, implementation and use of current day information and communications technology.


Exploring Students' Critical Thinking Skills Using the Engineering Design Process in a Physics Classroom

  • Regular Article
  • Published: 30 November 2021
  • Volume 32 , pages 141–149, ( 2023 )


  • Pramudya Dwi Aristya Putra   ORCID: orcid.org/0000-0002-1166-8220 1 ,
  • Nurul Fitriyah Sulaeman 2 ,
  • Supeno 1 &
  • Sri Wahyuni 1  

5510 Accesses

8 Citations

1 Altmetric


Critical thinking skills (CTS) have been applied in the learning environment to address the challenges students face in the twenty-first century. Specific approaches therefore need to be implemented in the learning environment to support students' CTS. This research explores students' CTS during the learning process through the engineering design process (EDP) in a physics classroom. The methodology relied on a case study in which students were situated for the first time in an EDP classroom. Data were analyzed for each of the EDP stages based on CTS criteria codes, and the accuracy of the analysis was tested through a peer review process to demonstrate its validity. The results showed that students exhibited specific CTS criteria in each EDP stage. Therefore, the EDP could be an alternative method to engage CTS. This result contributes empirical evidence that research on CTS should also consider students' performance while engaged in the EDP.



In the twenty-first century, critical thinking skills (CTS) have become one of the most crucial outcomes of learning activities (Fuad et al., 2017; Kavenuke et al., 2020). CTS support students in making decisions in a structured way during the learning process. When students face a given problem, CTS drive them to analyze the problem and evaluate possible solutions. In this approach, CTS also offer students an opportunity to provide a reasoned rationale for their thinking, reflecting on both the problem and the potential solution (Ennis, 1993).

Critical thinking (CT) has been defined as a cognitive process involving reasonable reflective thinking to develop a decision based on the problem a person faces; CTS include a person's ability for higher-order thinking, problem-solving, and metacognition (Ennis, 1989). Furthermore, CT is reasonable reflective thinking, in the cognitive domain, focused on a decision that students believe in or act on (Ennis, 1993). In contrast, Facione (1990) conceptualized CTS as relating to both the cognitive ability and the affective ability to be a good thinker. CT is a reflective thinking skill involving analyzing, evaluating, or synthesizing relevant information to form an argument in order to make a decision (Ennis, 1993; Ghanizadeh, 2017). Wechsler et al. (2018) proposed that assessing CTS requires both tests administered after the CTS process in the classroom and a review of student behavior during the learning process.

However, research on CTS has often been conducted using single tests to determine students' cognitive ability for CT. For example, in a study by Mutakinati et al. (2018), students in a junior high school were given a CTS post-test following a science lesson. The results showed that the students had sufficient thinking skills to critique their plan for systematic practice, including constructing a realistic critique of their power of thought. Additionally, a study by Fuad et al. (2017) trained students to study more by using exploratory questions and information about how to develop a hypothesis, assisting students in creating learning based on their needs; the researchers then gave a post-test to the students to evaluate their CTS.

It is crucial to utilize a learning approach that supports students' thinking in the learning process (Shaw et al., 2020). CTS can be developed during the learning process using teaching approaches that prompt students to face real-world problems. The teacher can select a teaching approach that pushes students to explore their CT through argumentation in order to make decisions (Ghanizadeh, 2017). One teaching approach that facilitates CTS is the engineering design process (EDP). In the EDP, reflective thinking is needed to produce a better decision in solving a problem given by the teacher. Yu et al. (2020) investigated the relationship between CTS and the EDP when students design a product; the results indicated that the EDP stages played an essential role in students' understanding of their own CTS. Through the EDP, students experience defining a problem, developing argumentation, and finally making a decision, in line with the EDP steps (Spector & Ma, 2019; Sulaeman et al., 2021). The implementation of the EDP, with its several stages of learning, allows students to define the problem before they make a decision (Arık & Topçu, 2020; Tank et al., 2018).

Research exploring how EDP stages engage CTS is scarce. Additionally, most research on CTS has been conducted quantitatively using statistical analyses (e.g., Kavenuke et al., 2020; Mutakinati et al., 2018; Yu et al., 2020). The performance dimension of CTS is also essential for highlighting students' behavior while they use their CT abilities (Ennis, 1993). This study explores students' CTS in an EDP project. One challenge in this study is the possibility of excessive subjectivity when analyzing CT performance; the authors used a peer review process to reduce this potential concern (Merriam & Tisdell, 2016). Thus, the research questions guiding this study are as follows:

To what extent could the EDP support CTS?

How does the EDP support a student's CTS in the physics classroom by defining the problem, using argumentation, and developing a solution?

Theoretical Framework

Critical Thinking Skills (CTS)

The urgency of CTS can be traced to Dewey's educational theory. Experiential learning theory explains practical learning in enquiry practices (Dewey, 1993). Dewey suggested that the essence of an enquiry is formulated in the experiential learning cycle, which is initiated with the perception of solving a problem and the exploration of relevant knowledge to construct a meaningful explanation of the solution (Garrison et al., 2001). This experiential learning cycle is a form of reflective thinking to produce better solutions (Garrison & Arbaugh, 2007). Reflection demands thinking critically to identify solutions to a problem (Antonieta et al., 2005). CT is reasonable thinking used to develop a decision based on a given problem (Ennis, 1989); it includes a person's skill in reflecting on a solution to that problem (Ennis, 1993). CTS need to be identified in both cognitive and affective terms (Facione, 1990; Shaw et al., 2020). Kavenuke et al. (2020) explained that CTS involve synthesizing, analyzing, and evaluating information to make a cognitive decision and transform it into affective domain performance.

Besides being related to the cognitive domain, CTS are also related to the affective domain. This domain engages students in communication to support their decisions through argumentation (Antonieta et al., 2005). Students have an opportunity to critique using scientific statements in a scientific environment when they communicate their ideas (Farmer & Wilkinson, 2018). CTS begin with a simple experience, such as observing a difference, encountering a problem, or questioning someone's statement, which leads to an enquiry; more complex experiences follow, such as interactions through communication in the application of higher-order thinking skills (Spector & Ma, 2019).

Measurement tools have been developed using criteria to describe a person's ability in CT. Ernst and Monroe (2004) analyzed criteria for measuring CTS, such as interpretation, analysis, evaluation, inference, explanation, and self-regulation. CT ability is also developed in detail through enquiry, argumentation, and self-regulation (Kabir, 2002; Spector & Ma, 2019). Using argumentation in CT, students can select evidence that supports their decision (Giri & Paily, 2020). Moreover, the measurement of CT has been developed on the basis of available research and models: CT can be assessed using an open-ended assessment model, a multiple-choice with written justification model, an essay test of critical thinking, or a performance assessment model (Ennis, 1993). In general, the measurement of CTS is conducted through experimental studies (e.g., Farmer & Wilkinson, 2018; Fuad et al., 2017; Yu et al., 2020); the CTS can then be described using the median of the statistically collected data to distinguish students' levels, such as low and high (Kim et al., 2013). However, studies that demonstrate the performance of CT in the learning process are lacking, requiring further investigation.

Research measuring CT through performance tests can be done by applying a learning approach that embodies reflective thinking (Sen et al., 2021; Yu et al., 2020). Such a learning approach follows a learning cycle that includes defining a problem, developing a design solution through scientific argumentation, and deciding. One learning approach that facilitates this cycle in the classroom is the implementation of the EDP.

Engineering Design Process (EDP)

Engineering is a discipline that solves problems within constraints, using a body of knowledge implemented through science, mathematics, and technological tools (NGSS, 2013; NRC, 2012). Continual design is a critical aspect of engineering: it aims to solve a problem through iterative thinking, openness to the possibility of many solutions, and a meaningful understanding of the integration of science, mathematics, and technology concepts (Guzey et al., 2019; Moore et al., 2014). The design process in engineering also develops collaboration and social communication (Sulaeman et al., 2021; Yazici et al., 2020). Thus, the EDP's goal is to solve a real-world problem through engineer-designed activity.

Students come to recognize engineering practice during this cyclical process of solving a problem in several stages. These stages engage students in identifying a problem, understanding the engineering need, and having the opportunity to offer multiple possible solutions (Lottero-Perdue et al., 2015; Whitworth & Wheeler, 2017). Students develop a critical understanding of the potentially relevant issues within the problem statement, allowing them to generate the best solution in the engineering classroom (Arık & Topçu, 2020).

The EDP addresses students' ability to make decisions by defining a problem, developing argumentation, and identifying a solution to the problem (Guzey et al., 2016; Mathis et al., 2017). In this study, therefore, the EDP stages serve as a bridge to facilitate CTS. More specifically, the implementation of the EDP involves a cycle that starts with defining a problem and continues with learning a scientific concept, planning a solution, trying the solution, and deciding (Tank et al., 2018). Each EDP step can facilitate CTS in developing a solution in the engineering classroom, through investigation that involves defining a problem, developing argumentation, and making a decision (Ahern et al., 2012).

Methodology

Research Design

A single case study was utilized to explore students' experience of CTS in the EDP classroom (Yin, 2018). The single case study design was selected in order to explore the EDP in depth against the general CTS criteria of Ernst and Monroe (2004). This study was conducted during the pandemic (when COVID-19 affected the selected area), so it used two different modes of instruction: online and offline learning (i.e., blended learning). The authors developed an EDP worksheet that guided both individual and group activities. Due to the education regulations in force during the pandemic, the classroom allowed a maximum of 15 students.

Context of Study

The study was conducted in a physics classroom in a high school located in one district of Indonesia. The students had not followed a similar program before the EDP project, particularly in physics. Through the EDP, the students had to define a problem, learn the physics concepts and related subjects, develop a solution plan, and make a decision about their solution (Tank et al., 2018). The EDP worksheet developed by the authors served as an instructional sheet helping students understand the EDP stages in this project (Sulaeman et al., 2021).

The team project addressed one challenge for which students could build a solution to a given problem. The problem in the worksheet asked students to address the irrigation of rice fields: the situation involved a lack of water during the dry season and an excess of water during the rainy season. Figure 1 shows the EDP activities in the classroom, totalling 315 min. Students worked on the project both individually and in groups, allowing the consistency of improvement in their CTS to be assessed.

Figure 1. The EDP steps during implementation in the physics classroom

Participants

Twelve tenth-grade students participated in this study at the time of data collection. They volunteered to join the EDP project in both the online and offline classroom, agreed to follow government regulations regarding health protocols, and received permission from their parents to participate. The full-time physics teacher identified the students' levels of physics achievement. The demographics of the students are described in Table 1. Achievement was divided into three categories: high (a score above 75), medium (above 65 but below 75), and low (below 65). A score of 75 was the standard for grading students' mastery of physics concepts in this school.
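For concreteness, this three-way classification rule can be sketched in a few lines of code. The sketch below is illustrative only: the paper does not state how boundary scores of exactly 65 or 75 were assigned, so the cut-off handling here is an assumption.

```python
def achievement_level(score: float) -> str:
    """Classify a physics achievement score into the study's three levels.

    The paper defines high as above 75, medium as above 65 but below 75,
    and low as below 65. It does not say how scores of exactly 65 or 75
    were treated, so the boundary handling below is an assumption.
    """
    if score > 75:
        return "high"
    elif score > 65:
        return "medium"
    else:
        return "low"

# Example usage with illustrative scores (not data from the study):
for s in (80, 70, 60):
    print(s, achievement_level(s))  # 80 high, 70 medium, 60 low
```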

Data Collection

There were three data sources: written text from the EDP worksheets, recordings of the students' group discussions, and recordings of student interviews. First, text was collected from the students' answers on the EDP worksheet. The worksheet presented students with activities to work on individually, enabling the collection of data on students' problem definitions and learning. Students developed an opinion based on the questions asked: for example, who has a problem? What is the problem? And who is the user? The students' writing from this individual work was collected. Second, when students worked in their groups, the discussion process was recorded, and all of the students' communication was transcribed for analysis; this data collection focused on the plan, try, test, and decide stages. Third, students were also interviewed to acquire the necessary supporting data on the changes in their CT abilities (see Online Appendix A).

Data Analysis

The three types of data were analyzed using triangulation strategies to confirm the accuracy of the data (Creswell & Poth, 2016). The authors identified the stage of the EDP (see Fig. 1) and matched it with the CT codes. All the collected data were transcribed into text and coded as shown in Table 2. The CT codes were developed from the criteria proposed by Ernst and Monroe (2004). All student statements at each EDP stage were read carefully and assigned a justification based on the CTS criteria. The number of CTS criteria was tallied for each EDP stage and presented in Table 3. Those criteria were divided into two levels to express the students' CT abilities, following the inductive development of codes on site (Saldana, 2016). Each author coded the data independently (see the example in Online Appendix B) as a form of peer examination (Merriam & Tisdell, 2016). When the authors' codes differed, they met to negotiate a consensus interpretation of the student's statement.
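To illustrate the tallying step described above, the following minimal Python sketch counts how often each CTS criterion (after Ernst & Monroe, 2004) appears at each EDP stage, analogous to the frequency counts reported in Table 3. The coded entries are hypothetical placeholders, not the study's actual data.

```python
from collections import Counter

# Hypothetical coded statements: (EDP stage, CTS criterion) pairs, one per
# student statement. These example entries are placeholders, not study data.
coded_statements = [
    ("define", "interpretation"),
    ("define", "interpretation"),
    ("plan", "analysis"),
    ("try", "evaluation"),
    ("test", "inference"),
    ("decide", "self-regulation"),
    ("decide", "self-regulation"),
]

# Tally the frequency of each CTS criterion within each EDP stage.
frequencies = Counter(coded_statements)
for (stage, criterion), count in sorted(frequencies.items()):
    print(f"{stage:>6} | {criterion:<15} | {count}")
```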

Findings

This research aimed to explore students' CTS through the EDP in a physics classroom. Our findings present the matrix of correspondence between CTS criteria and each EDP stage. In addition, the results are organized by the students' ability to define a problem, provide scientific argumentation, and generate a solution.

Matrix of Suitability of CTS Criteria and EDP Stages

From our analysis of the students' worksheets and discussions in the physics classroom, the results indicate which CTS criteria were evidenced at each EDP stage. Table 3 provides code frequency counts for the students' statements in the EDP project.

Defining a Problem

In the EDP, defining a problem is the first step of the problem-solving process. Students experienced solving a problem while focusing on the necessary parts of the situation and the constraints given by the teacher. In the "Define" step, students generally started by interpreting and identifying the problem given by the teacher, highlighting the problem statement and the needs of the problem-solving process. Some examples from the worksheets, responding to the given problem of difficulties in watering the farm fields, are shown below.

[A1]: Farmers in the western rice field of the village made small wells near their fields with the help of diesel pumps to supply water to their fields. The way for farmers to stop pumping water is by building a dam.

[A1]: The client wants the dam to last a long time with an estimated cost of $2000. It is useful for storing water during the rainy season and supplies water for the dry season so that farmers do not have to pump water anymore.

[A2]: The water supply is low in the irrigation area of the river. The river flow in the area is very small during the dry season, while other factors also influence the depth and width of the river. As a result, the branches of the river sometimes do not reach the rice fields, so the rice plants often lack water.

[A2]: The head of the village wants to make a dam that has a width in total of 3 metres: 2.5 metres for storing the water, and ½ metre for anticipating when the water overflows.

Students [A1] and [A2] categorized the two statements based on the given engineering problem. They stated the problem and clarified the constraints on solving it: they not only stated the problem of the lack of water in the rice fields but also mentioned the constraints for solving it. These students were categorized at the high level of the interpretation criterion of CTS, providing both the problem information and the constraints. In contrast, the two examples below show students low in CTS.

[U1]: During the dry season, the river flow in the area was very small, so that the water in branches of the river did not reach the rice fields [for watering].

[U2]: There was very little water supply in the irrigation area of the river and the [water] flow rate was very small during the dry season, so [the] availability of water in the river is little during the dry season.

Students [U1] and [U2] were less skilled in interpreting the problem given by the teacher. Although they could state the problem in the village, they could not adequately explain the constraints when asked to develop a solution.

Students' Argumentation

Students in the EDP classroom also provided arguments when they planned, tried, and tested possible solutions. In those steps, students worked in groups of four to discuss their solutions for the problem. The data in this section use vignettes to show each group's manner of discussion. The example in Vignette 1 shows students trying to develop a solution: students [A1], [S1], and [U1] discussed the possible application of physics concepts to build a dam. Student [U1] showed an increase in CTS level when joining the group discussion.

Vignette 1

[A1]: I think the materials used must be strong and durable to build the foundation of the dam. We also need the concept of physics using hydrostatic pressure. The lowest foundation is built wide and thicker, while the higher one is like the shape of a cone like this [while demonstrating by hand the shape of a cone].

[S1]: Then Pascal's Law also explains the water capacity. During the rainy season, the water does not flow out; in other words, water can be stored in both the rainy and dry seasons. The water can irrigate the rice fields continuously.

[U1]: Yes, I agree, so we also implement Pascal's Law, and we pay attention to the capacity during the rainy season so that the water doesn't overflow [water spill from the dam].

In Vignette 1, the students discussed how to plan the construction of the dam. They collected physics-related evidence to underpin the implementation of a solution to this real-world problem, learning the concept of hydrostatic pressure as a basis for solving it. Student [A1] demonstrated high CTS because he examined ideas by analyzing the physics concepts; his statement clearly expressed the analysis criterion of CTS. Student [S1] examined how alternative concepts could work in this situation by collecting evidence about the amount of water during the dry and rainy seasons; she demonstrated high CTS on the evaluation criterion. Student [U1] described and reinforced the solution by drawing a conclusion about the dam to be built; he demonstrated high CTS on the inference criterion.
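To make the physics behind Vignette 1 concrete, a brief worked example of the hydrostatic pressure relation follows. The depth of 2.5 m is the design's stated rainy-season storage depth; the water density and gravitational acceleration are standard textbook values, not figures from the students' worksheets.

```latex
P = \rho g h, \qquad
P = (1000\ \mathrm{kg\,m^{-3}})(9.8\ \mathrm{m\,s^{-2}})(2.5\ \mathrm{m})
  \approx 2.45 \times 10^{4}\ \mathrm{Pa} \approx 24.5\ \mathrm{kPa}.
```

Because the pressure grows linearly with depth, the load on the dam wall is greatest at its base, which is consistent with student [A1]'s proposal of a wide, thick foundation that tapers toward the top.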

In the "Test" step, students concluded their problem design and coordinated it with their understanding of the criteria needed. In Vignette 2, the discussion shows students offering further clarification about their design. Students evaluated the criteria of the design based on the problem given.

Vignette 2

[U1]: The dam has a width of 3 metres; it's almost the same as the constraint.

[S1]: Yes, 2.5 metres is added during the rainy season and ½ metre during the dry season.

[A1]: The dam must last a long time with a budget of $2000.

[S1]: Okay, this means, yes, the criteria requested by the client are a dam that has a width of 3 metres and a depth of 2.5 metres during the rainy season, and ½ metre during the dry season, and the dam must last a long time with a budget of $2000. (Writes down the criteria requested by the client.)

Students [U1] and [S1] verified the dam size against the client's request, and they also analyzed the total approved budget. The budget was used to guide the criteria of the dam, so here they were rethinking the criteria requested by the client (the head of the village). [S1] agreed with the proposal offered by her peer and interpreted it by clarifying the size and the budget.

Developing a Decision

The "Decision" step is the final step in the EDP. The students worked in groups, comparing their design to the other group's design. In this step, the majority of CTS were self-regulated (see Table 4 ). Students presented the results of their final design, at the front of the classroom, to show that their design could effectively solve the problem. Furthermore, the students compared their designs, drawing a conclusion to redesign when their design failed. Table 4 shows Group 1's design compared to that of Group 2.

Students used these data to develop an improved design. In this step, students also carried out a redesign to solve the given problem. This situation expressed high CTS on the self-regulation criterion; student [A1] stated the advantages of their group's dam design.

[A1]: The dam made by our group uses the concepts of Pascal's Law and hydrostatic pressure, so that the construction can be durable and sturdy. It also has another advantage, namely that the budget is not more than $2000. However, it still has a drawback, which is that it depends on the situation and conditions affecting the cost of the dam.

Discussion

This study showed that the EDP is beneficial in supporting students' CTS. Each stage of the EDP could be investigated by focusing on the CTS criteria that predominated in it. This result is in line with the findings of Yu et al. (2020): the EDP plays an important role in developing CTS. As students work through all the stages of the EDP, they also develop and meet the criteria of CTS. In particular, during individual work the students exhibited the cognitive domain of CTS: they identified the problem in the situation, highlighted the goal of the human need, and paid attention to the constraints on solving the problem (Ernst & Monroe, 2004). When students worked in groups to discuss their ideas, they exchanged them while planning a solution, trying their design, testing it, and deciding on it (Giri & Paily, 2020; Kabir, 2002). This reflected the affective domain, because students communicated through argumentation to reinforce their ideas (Antonieta et al., 2005).

The goal in the EDP classroom was for students to make decisions about effective solutions to the lack of water in the rice fields. Following the EDP steps from "Plan" to "Test," students clearly used argumentation to reach a decision. This implementation of the EDP embodied reflective thinking, because students also engaged in self-regulation to redesign their solutions. Self-regulation involves re-examining one's performance, which students experience as a recursive return to the goal they have set (Ghanizadeh, 2017). Furthermore, students moved back and forth between the given problem and the produced solution, following the learning cycle (Dewey, 1993; Garrison et al., 2001). During these activities, students designed a solution based on their understanding of the physics concepts they had learned.

This process highlighted that students can improve their CTS. Student [U1] showed low CTS in the individual activities but a high level of CTS after joining the group discussion. This phenomenon suggests that group interaction can improve a student's level of CTS through communication (Farmer & Wilkinson, 2018). The students' communication about the design of the solution exhibited argumentation, because students provided evidence to support their claims (Mathis et al., 2017). Students thus argued using well-founded reasons from various sources, including the discussion activities, which in turn require CTS (Yazici et al., 2020).

This study emphasizes that the EDP stages can reveal specific criteria of students' CTS. An additional contribution of this study is to show that measuring CTS requires observing students' performance in order to confirm their attainment of CTS. In particular, cyclical thinking and the use of self-regulation could improve solutions to the given problems. Using the EDP also gives students an opportunity to communicate with each other to build a better solution based on physics concepts. Furthermore, this research asked students to rethink their problem-solving strategies through communication in the group, which reflects CTS in the affective domain. This research differs from previous studies that investigated the link between the EDP and students' CTS by giving students post-tests of CTS (Mutakinati et al., 2018; Yu et al., 2020): here, students could be investigated in more detail as they used cyclical thinking to generate the best solution to the given problem under its constraints (Arık & Topçu, 2020; Lottero-Perdue et al., 2015; Tank et al., 2018).

Conclusion and Implications

This research provides, through a qualitative study, empirical evidence that the EDP supports students' CTS. In this case study, students utilized their CTS according to the dominant criteria that appeared in each step of the EDP. The EDP facilitated students' collaboration in groups, where they could share and explore their ideas. Students engaged in argumentation as they moved through the phases of planning, trying, and testing. After deciding on a design to solve the problem, they conducted self-examination, looking over their design and comparing the results with those of other groups. This demonstrated iterative thinking, which is one of the goals of effective CTS.

This research described the infusion of engineering as central to integrating science, technology, engineering, and mathematics (STEM) approaches, and it implies that CTS should be measured during the learning process. For teachers, the EDP approach could be used to teach integrated STEM in the future; through the EDP stages, multiple subjects can be integrated in the classroom to support students' 21st-century skills. Policymakers should support the implementation of the EDP in school curricula, because the students showed positive behaviors in CTS. Additionally, providing professional development (PD) in implementing engineering education is essential to running the EDP in the classroom; through such PD, the application of the EDP in STEM learning can improve in both quality and quantity.

Finally, this study was limited by its small sample of participants, although this allowed the exploration of CTS across the EDP stages to be described in detail. Analyses with larger samples are needed to establish the consistency of the results. Moreover, since the EDP infuses engineering into STEM education, future research could analyze students' CTS through integrated STEM learning in order to capture students' understanding comprehensively across science, technology, engineering, and mathematics rather than in siloed subjects.

References

Ahern, A., O’Connor, T., McRuairc, G., McNamara, M., & O’Donnell, D. (2012). Critical thinking in the university curriculum—The impact on engineering education. European Journal of Engineering Education, 37(2), 125–132. https://doi.org/10.1080/03043797.2012.666516


Antonieta, M., Celani, A., & Collins, H. (2005). Critical thinking in reflective sessions and in online interactions. AILA Review, 18 , 41.

Arık, M., & Topçu, M. S. (2020). Implementation of engineering design process in the K-12 science classrooms: Trends and issues. Research in Science Education . https://doi.org/10.1007/s11165-019-09912-x

Creswell, J. W., & Poth, C. N. (2016). Qualitative inquiry and research design: Choosing among five approaches. Sage Publications.


Dewey, J. (1993). How we think: A restatement of the relation of reflective thinking to the educative process. D. C. Heath.

Ennis, R. H. (1989). Critical thinking and subject specificity: Clarification and needed research. Educational Researcher, 18 (3), 4.

Ennis, R. H. (1993). Critical thinking assessment. Theory into Practice, 32 (3), 179–186. https://doi.org/10.1080/00405849309543594

Ernst, J., & Monroe, M. (2004). The effects of environment-based education on students’ critical thinking skills and disposition toward critical thinking. Environmental Education Research, 10 (4), 507–522. https://doi.org/10.1080/1350462042000291038

Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction . The California Academic Press.

Farmer, J. L., & Wilkinson, L. (2018). Engineering success: Using problem-based learning to develop critical thinking and communication skills in a Chemical Engineering classroom. Proceedings of the Canadian Engineering Education Association (CEEA) . https://doi.org/10.24908/pceea.v0i0.13057

Fuad, N. M., Zubaidah, S., Mahanal, S., & Suarsini, E. (2017). Improving junior high schools’ critical thinking skills based on test three different models of learning. International Journal of Instruction, 10 (1), 101–116. https://doi.org/10.12973/iji.2017.1017a

Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. Internet and Higher Education, 10 (3), 157–172. https://doi.org/10.1016/j.iheduc.2007.04.001

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7–23. https://doi.org/10.1080/08923640109527071

Ghanizadeh, A. (2017). The interplay between reflective thinking, critical thinking, self-monitoring, and academic achievement in higher education. Higher Education, 74 (1), 101–114. https://doi.org/10.1007/s10734-016-0031-y

Giri, V., & Paily, M. U. (2020). Effect of scientific argumentation on the development of critical thinking. Science and Education, 29 (3), 673–690. https://doi.org/10.1007/s11191-020-00120-y

Guzey, S. S., Moore, T. J., & Harwell, M. (2016). Building up stem: An analysis of teacher-developed engineering design-based stem integration curricular materials. Journal of Pre-College Engineering Education Research, 6 (1), 10–29. https://doi.org/10.7771/2157-9288.1129

Guzey, S. S., Ring-Whalen, E. A., Harwell, M., & Peralta, Y. (2019). Life STEM: A case study of life science learning through engineering design. International Journal of Science and Mathematics Education, 17 (1), 23–42. https://doi.org/10.1007/s10763-017-9860-0

Kabir, S. (2002). Literature review. Basic Guidelines for Research, 6(July), 33–37.

Kavenuke, P. S., Kinyota, M., & Kayombo, J. J. (2020). The critical thinking skills of prospective teachers: Investigating their systematicity, self-confidence and scepticism. Thinking Skills and Creativity, 37 (March 2019), 100677. https://doi.org/10.1016/j.tsc.2020.100677

Kim, K., Sharma, P., Land, S. M., & Furlong, K. P. (2013). Effects of active learning on enhancing student critical thinking in an undergraduate general science course. Innovative Higher Education, 38 (3), 223–235. https://doi.org/10.1007/s10755-012-9236-x

Lottero-Perdue, B. P., Bolotin, S., Benyameen, R., Brock, E., & Metzger, E. (2015). The engineering design process-5E. Science and Children, 3(1), 60–66.

Mathis, C. A., Siverling, E. A., Glancy, A. W., & Moore, T. J. (2017). Teachers’ incorporation of argumentation to support engineering learning in STEM integration curricula. Journal of Pre-College Engineering Education Research, 7 (1), 76–89. https://doi.org/10.7771/2157-9288.1163

Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and implementation (4th ed.). Jossey-Bass, a Wiley Brand.

Moore, T. J., Stohlmann, M. S., Wang, H. H., Tank, K. M., Glancy, A. W., & Roehrig, G. H. (2014). Implementation and integration of engineering in K-12 STEM education. In Engineering in pre-college settings: Synthesizing research, policy, and practices. Purdue University Press

Mutakinati, L., Anwari, I., & Yoshisuke, K. (2018). Analysis of students’ critical thinking skill of middle school through stem education project-based learning. Jurnal Pendidikan IPA Indonesia, 7 (1), 54–65. https://doi.org/10.15294/jpii.v7i1.10495

NGSS. (2013). Next generation science standards: For states, by states . National Academies Press.

NRC. (2012). STEM integration in K-12: Status, prospects, and an agenda for research engineering . The National Academic Press.

Saldana, J. (2016). The coding manual for qualitative researchers. SAGE Publications Ltd.

Sen, C., Sonay, Z., & Ahmet, K. S. (2021). Computational thinking skills of gifted and talented students in integrated STEM activities based on the engineering design process: The case of robotics and 3D robot modeling. Thinking Skills and Creativity . https://doi.org/10.1016/j.tsc.2021.100931

Shaw, A., Liu, O. L., Gu, L., Kardonova, E., Chirikov, I., Li, G., Hu, S., Yu, N., Ma, L., Guo, F., Su, Q., Shi, J., Shi, H., & Loyalka, P. (2020). Thinking critically about critical thinking: Validating the Russian HEIghten® critical thinking assessment. Studies in Higher Education, 45 (9), 1933–1948. https://doi.org/10.1080/03075079.2019.1672640

Spector, J. M., & Ma, S. (2019). Inquiry and critical thinking skills for the next generation: from artificial intelligence back to human intelligence. Smart Learning Environments . https://doi.org/10.1186/s40561-019-0088-z

Sulaeman, N. F., Putra, P. D. A., Mineta, I., Hakamada, H., Takahashi, M., Ide, Y., & Kumano, Y. (2021). Exploring Student Engagement in STEM Education through the Engineering Design Process. Jurnal Penelitian Dan Pembelajaran IPA, 7 (1), 1. https://doi.org/10.30870/jppi.v7i1.10455

Tank, K. M., Rynearson, A. M., & Moore, T. J. (2018). Examining student and teacher talk within engineering design in kindergarten. European Journal of STEM Education . https://doi.org/10.20897/ejsteme/3870

Wechsler, S. M., Saiz, C., Rivas, S. F., Vendramini, C. M. M., Almeida, L. S., Mundim, M. C., & Franco, A. (2018). Creative and critical thinking: Independent or overlapping components? Thinking Skills and Creativity, 27 , 114–122. https://doi.org/10.1016/j.tsc.2017.12.003

Whitworth, B., & Wheeler, L. (2017). Is it engineering or not? The Science Teacher, 84(5), 25–29. https://doi.org/10.2505/4/tst17_084_05_25

Yazici, H. J., Zidek, L. A., & St. Hill, H. (2020). A study of critical thinking and cross-disciplinary teamwork in engineering education. In A. E. Smith (Ed.), Women in industrial and systems engineering: Key advances and perspectives on emerging topics (pp. 185–196). Berlin: Springer. https://doi.org/10.1007/978-3-030-11866-2_8


Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). SAGE Publications.

Yu, K. C., Wu, P. H., & Fan, S. C. (2020). Structural relationships among high school students’ scientific knowledge, critical thinking, engineering design process, and design product. International Journal of Science and Mathematics Education, 18 (6), 1001–1022. https://doi.org/10.1007/s10763-019-10007-2


Author information

Authors and Affiliations

Department of Science Education, Faculty of Teacher Training and Education, University of Jember, Jember, Indonesia

Pramudya Dwi Aristya Putra,  Supeno & Sri Wahyuni

Department of Physics Education, Faculty of Teacher Training and Education, Mulawarman University, Samarinda, Indonesia

Nurul Fitriyah Sulaeman


Corresponding author

Correspondence to Pramudya Dwi Aristya Putra .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 17 kb)


About this article

Putra, P.D.A., Sulaeman, N.F., Supeno et al. Exploring Students' Critical Thinking Skills Using the Engineering Design Process in a Physics Classroom. Asia-Pacific Edu Res 32 , 141–149 (2023). https://doi.org/10.1007/s40299-021-00640-3


Accepted: 19 November 2021

Published: 30 November 2021

Issue Date: February 2023

DOI: https://doi.org/10.1007/s40299-021-00640-3


Keywords

  • Critical thinking skills
  • Engineering design process
  • Physics classroom
