Journal of Leadership Education

  • The Development of Problem-Solving Skills for Aspiring Educational Leaders

Jeremy D. Visone
DOI: 10.12806/V17/I4/R3

Introduction

Solving problems is a quintessential aspect of the role of an educational leader. In particular, building-level leaders, such as principals, assistant principals, and deans of students, are frequently beset by situations that are complex, unique, and open-ended. There are often many possible pathways to resolve these situations, and an astute educational leader must consider many factors and constituencies before determining a plan of action. The realm of problem solving might include student misconduct, personnel matters, parental complaints, school culture, and instructional leadership, among many other aspects of educational administration. Much consideration has been given to the development of problem-solving skills for educational leaders. This study was designed to answer the following research question: “How do aspiring educational leaders’ problem-solving skills, as well as perceptions of their problem-solving skills, develop during a year-long graduate course sequence focused on school-level leadership that includes the presentation of real-world scenarios?” This mixed-methods study extends research about the development of problem-solving skills conducted with practicing administrators (Leithwood & Steinbach, 1992, 1995).

The Nature of Problems

Before examining how educational leaders can process and solve problems effectively, it is worth considering the nature of problems. Allison (1996) posited simply that problems are situations that require thought and/or actions. Further, there are different types of problems presented to educational leaders. First, there are well-structured problems, which can be defined as those with clear goals and relatively prescribed resolution pathways, including an easy way of determining whether goals were met (Allison, 1996).

Conversely, ill-structured problems are those with more open-ended profiles, whereby the goals, resolution pathways, or evidence of success are not necessarily clear. These types of problems could also be considered unstructured (Leithwood & Steinbach, 1995) or open-design (Allison, 1996). Many of the problems presented to educational leaders are unstructured. For example, a principal must decide how to discipline children who misbehave, taking into consideration their disciplinary history, the rules and protocols of the school, and other contextual factors; determine how best to raise student achievement (Duke, 2014); and resolve personnel disputes among staff members. None of these problems points to a singular solution that can be identified as “right” or “wrong.” Surely some responses are less desirable than others (e.g., suspension or a recommendation for expulsion for minor infractions), but, with justification and context, many possible solutions exist.

Problem-Solving Perspectives and Models

Various authors have shared perspectives about effective problem solving. Marzano, Waters, and McNulty (2005) outlined the “21 Responsibilities of the School Leader.” These responsibilities are highly correlated with student achievement, based upon the authors’ meta-analysis of 69 studies about leadership’s effect on student achievement. The most highly correlated of the responsibilities was situational awareness, which refers to understanding the school deeply enough to anticipate what might go wrong from day to day, navigate the individuals and groups within the school, and recognize issues that might surface at a later time (Marzano et al., 2005). Though the authors discuss the utility of situational awareness for long-term, large-scale decision making, an educational leader must likewise have a sense of situational awareness to solve effectively the daily problems that come her way, lest she make seemingly smaller-scale decisions that lead to large-scale problems later.

Other authors have focused on problems that can be considered more aligned with the daily work of educational leaders. Considering the problem-type classification dichotomies of Allison (1996) and Leithwood and Steinbach (1995), problems that educational leaders face on a daily basis can be identified as either well-structured or unstructured. Various authors have developed problem-solving models focused on unstructured problems (Bolman & Deal, 2008; Leithwood & Steinbach, 1995; Simon, 1993), and these models will be explored next.

Simon (1993) outlined three phases of the decision-making process. The first is to find problems that need attention. Though many problems of educational leaders are presented directly to them via, for example, an adult referring a child for discipline, a parent registering a complaint about a staff member, or a staff member describing a grievance with a colleague, there is a corollary skill of identifying which problems—of the many that come across one’s desk—require immediate attention, or, ultimately, any attention at all. Second, Simon identified “designing possible courses of action” (p. 395). Finally, educational leaders must evaluate the quality of their decisions. From this point of having selected a viable and positively evaluated potential solution pathway, implementation takes place.

Bolman and Deal (2008) outlined a model of reframing problems using four different frames, through which problems of practice can be viewed. These frames provide leaders with a more complete set of perspectives than they would likely utilize on their own. The  structural frame  represents the procedural and systems-oriented aspects of an organization. Within this frame, a leader might ask whether there is a supervisory relationship involved in a problem, if a protocol exists to solve such a problem, or what efficiencies or logical processes can help steer a leader toward a resolution that meets organizational goals. The  human resource frame  refers to the needs of individuals within the organization. A leader might try to solve a problem of practice with the needs of constituents in mind, considering the development of employees and the balance between their satisfaction and intellectual stimulation and the organization’s needs. The  political frame  includes the often competing interests among individuals and groups within the organization, whereby alliances and negotiations are needed to navigate the potential minefield of many groups’ overlapping aims. From the political frame, a leader could consider what the interpersonal costs will be for the leader and organization among different constituent groups, based upon which alternatives are selected. Last, the  symbolic frame  includes elements of meaning within an organization, such as traditions, unspoken rules, and myths. A leader may need to consider this frame when proposing a solution that might interfere with a long-standing organizational tradition.

Bolman and Deal (2008) identified the political and symbolic frames as weaknesses in most leaders’ consideration of problems of practice, and the weakness in recognizing political aspects of decision making for educational leaders was corroborated by Johnson and Kruse (2009). An implication for leadership preparation is to instruct students in the considerations of these frames and promote their utility when examining problems.

Authors have noted that experts use different processes than novice problem solvers do (Simon, 1993; VanLehn, 1991). One application of this is Simon’s (1993) assertion that experts can rely on their extensive experience to remember solutions to many problems, without having to rely on an extensive analytical process. Further, they may not even consider a “problem” identified by a novice to be a problem at all. With respect to educational leaders, Leithwood and Steinbach (1992, 1995) outlined a set of competencies possessed by expert principals, when compared to their typical counterparts. Expert principals were better at identifying the nature of problems, possessing a sense of priority, difficulty, how to proceed, and connectedness to prior situations; setting meaningful goals for problem solving, such as goals that are student-centered and knowledge-focused; using guiding principles and long-term purposes when determining the best courses of action; seeing fewer obstacles and constraints when presented with problems; outlining detailed plans for action that include gathering extensive information to inform decisions along the plan’s pathway; and responding with confidence and calm to problem solving. Next, I will examine how problem-solving skills are developed.

Preparation for Educational Leadership Problem Solving

How can the preparation of leaders move candidates toward the competencies of expert principals? After all, leading a school has been shown to be a remarkably complex enterprise (Hallinger & McCary, 1990; Leithwood & Steinbach, 1992), especially if the school is one where student achievement is below expectations (Duke, 2014), and the framing of problems by educational leaders has been espoused as a critically important enterprise (Bolman & Deal, 2008; Dimmock, 1996; Johnson & Kruse, 2009; Leithwood & Steinbach, 1992, 1995; Myran & Sutherland, 2016). In other disciplines, such as business management, simulations and case studies are used to foster problem-solving skills for aspiring leaders (Rochford & Borchert, 2011; Salas, Wildman, & Piccolo, 2009), and attention to problem-solving skills has been identified as an essential curricular component in the training of journalism and mass communication students (Bronstein & Fitzpatrick, 2015). Could such real-world problem-solving methodologies be effective in the preparation of educational leaders? In a seminal study about problem solving for educational leaders, Leithwood and Steinbach (1992, 1995) sought to determine whether effective problem-solving expertise could be explicitly taught and, if so, whether teaching problem-processing expertise could help move novices toward expert competence. Over the course of four months and four separate learning sessions, participants in the treatment group were explicitly taught subskills within six problem-solving components: interpretation of the problem for priority, perceived difficulty, data needed for further action, and anecdotes of prior experience that can inform action; goals for solving the problem; large-scale principles that guide decision making; barriers or obstacles that need to be overcome; possible courses of action; and the confidence of the leader to solve the problem. The authors asserted that providing participants with conditions that included models of effective problem solving, feedback, increasingly complex problem-solving demands, frequent opportunities for practice, group problem solving, individual reflection, authentic problems, and help to stimulate metacognition and reflection would result in educational leaders improving their problem-solving skills.

The authors used two experts’ ratings of participants’ problem-solving for both process (their methods of attacking the problem) and product (their solutions) using a 0-3 scale in a pretest-posttest design. They found significant increases in some problem-solving skills (problem interpretation, goal setting, and identification of barriers or obstacles that need to be overcome) after explicit instruction (Leithwood & Steinbach, 1992, 1995). They recommended conducting more research on the preparation of educational leaders, with particular respect to approaches that would improve the aspiring leaders’ problem-solving skills.

Solving problems for practicing principals could be described as constructivist, since most principals do solve problems within a social context of other stakeholders, such as teachers, parents, and students (Leithwood & Steinbach, 1992). Thus, some authors have examined providing opportunities for novice or aspiring leaders to construct meaning from novel scenarios using the benefits of, for example, others’ point of view, expert modeling, simulations, and prior knowledge (Duke, 2014; Leithwood & Steinbach, 1992, 1995; Myran & Sutherland, 2016; Shapira-Lishchinsky, 2015). Such collaborative inquiry has been effective for teachers, as well (DeLuca, Bolden, & Chan, 2017). Such learning can be considered consistent with the ideas of other social constructivist theorists (Berger & Luckmann, 1966; Vygotsky, 1978) as well, since individuals are working together to construct meaning, and they are pushing into areas of uncertainty and lack of expertise.

Shapira-Lishchinsky (2015) added some intriguing findings and recommendations to those of Leithwood and Steinbach (1992, 1995). In this study, 50 teachers with various leadership roles in their schools were presented regularly with ethical dilemmas during their coursework. Participants either interacted with the dilemmas as members of a role play or observed those chosen to role play. When the role play was completed, the entire group debriefed and discussed the ethical dilemmas and the role-playing participants’ treatment of the issues. This method was shown, through qualitative analysis of participants’ discussions during the simulations, to produce rich dialogue and allow for a safe and controlled treatment of difficult issues. As such, the use of simulations was presented as a viable means through which to prepare aspiring educational leaders. Further, the author suggested additional studies with simulation-based learning that seek to gain information about aspiring leaders’ self-efficacy and psychological empowerment. A notable example of project-based scenarios in a virtual collaboration environment to prepare educational leaders is the work of Howard, McClannon, and Wallace (2014). Shapira-Lishchinsky (2015) also recommended similar research in other developed countries to examine the utility of simulation and social constructivist approaches with a wider and more diverse pool of aspiring administrators.

Further, in an extensive review of prior research studies on the subject, Hallinger and Bridges (2017) noted that Problem-Based Learning (PBL), though applied successfully in other professions and written about extensively (Hallinger & Bridges, 1993, 2017; Stentoft, 2017), was relatively unheralded in the preparation of educational leaders. According to the authors, characteristics of PBL include problems replacing theory as the organizer of course content, student-led group work, creation of simulated products by students, increased student ownership over learning, and feedback along the way from professors. Their review noted that PBL had positive aspects for participants, such as increased motivation, real-world connections, and positive pressure that resulted from working with a team. However, participants also expressed concerns about time constraints, lack of structure, and interpersonal dynamics within their teams. Positive effects of PBL on aspiring leaders’ problem-solving skill development have been found (Copland, 2000; Hallinger & Bridges, 2017). Though PBL is much more prescribed than the scenarios strategy described in the Methods section below, the applicability of real-world problems to the preparation of educational leaders is summarized well by Copland (2000):

[I]nstructional practices that activate prior knowledge and situate learning in contexts similar to those encountered in practice are associated with the development of students’ ability to understand and frame problems. Moreover, the incorporation of debriefing techniques that encourage students’ elaboration of knowledge and reflection on learning appear to help students solidify a way of thinking about problems. (p. 604)

Methods

This study involved a one-group pretest-posttest design. No control group was assigned, as the pedagogical strategy in question—the use of real-world scenarios to build problem-solving skill for aspiring educational leaders—is integral to the school’s leadership preparation curriculum, and it would therefore be unethical to deny it to student participants (Gay & Airasian, 2003). Thus, all participants were provided instruction with the use of real-world scenarios.

Participants. Graduate students at a regional, comprehensive public university in the Northeast, obtaining a sixth-year degree (equivalent to a second master’s degree) in educational leadership and preparing for certification as educational administrators, served as participants. Specifically, students in three sections of the same full-year, two-course sequence, entitled “School Leadership I and II,” were invited to participate. This course was selected from the degree sequence because it deals most directly with the problem-solving nature and daily work of school administrators. Key outcomes of the course include students using data to drive school improvement action plans, communicating effectively with a variety of stakeholders, creating a safe and caring school climate, creating and maintaining a strategic and viable school budget, articulating all the steps in a hiring process for teachers and administrators, and leading with cultural proficiency.

The three sections were taught by two different professors. The professors used real-world scenarios in at least half of their class meetings, or approximately 15 classes throughout the year. During these classes, students were presented with realistic situations that have occurred, or could occur, in actual public schools. Students worked with their classmates to determine potential solutions to the problems and then discussed their responses as a whole class under the direction of their professor, a master practitioner. Both professors were active school administrators, with more than 25 years of combined educational leadership experience in public schools. Scenario presentations and discussions took place only during class sessions; they were not assigned for homework or conducted in online forums.

Of the 44 students in these three sections, 37 volunteered to participate at some point in the data collection sequence, but not all students in the pretest session attended the posttest session months later and vice versa. As a result, only 20 students’ data were used for the matched pairs analysis. All 37 participants were certified professional educators in public schools in Connecticut. The participants’ professional roles varied and included classroom teachers, instructional coaches, related service personnel, unified arts teachers, and other non-administrative educational roles. Characteristics of participants in the overall and matched pairs groups can be found in Table 1.

Table 1 Participant Characteristics

Procedure. Participants’ data were compared between a fall 2016 baseline data collection period and a spring 2017 posttest data collection period. During the fall data collection period, participants were randomly assigned one of two versions of a Google Forms survey. After items about participant characteristics, the survey consisted of 11 items designed to elicit quantitative and qualitative data about participants’ perceptions of their problem-solving abilities, as well as their ability to address real-world problems faced by educational leaders. Participants were asked to rate their perceptions of their situational awareness, flexibility, and problem-solving ability on a 10-point (1-10) Likert scale, following operational definitions of the terms (Marzano, Waters, & McNulty, 2005; Winter, 1982). For each construct, they were asked to write open-ended responses to justify their numerical rating. They were then asked to write what they perceived they still needed in order to improve their problem-solving skills. The final four items included two real-world, unstructured, problem-based scenarios for which participants were asked to create plans of action. They were also asked to rate their problem-solving confidence with respect to their proposed action plans for each scenario on a 4-point (0-3) Likert scale.

During the spring data collection period, participants accessed the opposite version of the Google Forms survey from the one they completed in the fall. All items were identical on the two survey versions, except the scenarios, which were different on each survey version. The use of two versions was to ensure that any differences in perceived or actual difficulty among the four scenarios provided would not alter results based upon the timing of participant access (Leithwood & Steinbach, 1995). In order to link participants’ fall and spring data in a confidential manner, participants created a unique, six-digit alphanumeric code.

A focus group interview followed each spring data collection session. The interviews were recorded to allow for accurate transcription. The list of standard interview questions can be found in Table 2. This interview protocol was designed to elicit qualitative data with respect to aspiring educational leaders’ perceptions about their developing problem-solving abilities.

Table 2 Focus Group Interview Questions

Please describe the development of your problem-solving skills as an aspiring educational leader over the course of this school year. In what ways have you improved your skills? Be as specific as you can.

What has been helpful to you (i.e. coursework, readings, experiences, etc.) in this development of your problem-solving skills? Why?

What do you believe you still need for the development of your problem-solving skills as an aspiring educational leader?

Discuss your perception of your ability to problem solve as an aspiring educational leader. How has this changed from the beginning of this school year? Why?

Please add anything else you perceive is relevant to this conversation about the development of your problem-solving skills as an aspiring educational leader.


Data Analysis.

Quantitative data. Data were obtained from participants’ responses to Likert-scale items relating to their confidence levels with respect to aspects of problem solving, as well as from the rating of participants’ responses to the given scenarios against a rubric. The educational leadership problem-solving rubric chosen (Leithwood & Steinbach, 1995) was used with permission, and it reflects the authors’ work explicitly teaching practicing educational leaders the components of problem solving. The adapted rubric can be found in Figure 1. Through the use of this rubric, each individual response by a participant to a presented scenario was assigned a score from 0 to 15. Affect data (representing the final 3 possible points on the 18-point rubric) were obtained via participants’ self-reporting of their confidence with respect to their proposed plans of action. To align with the rubric, participants self-assessed their confidence on a 0-3 scale.

0 = No Use of the Subskill
1 = There is Some Indication of Use of the Subskill
2 = The Subskill is Present to Some Degree
3 = The Subskill is Present to a Marked Degree; This is a Fine Example of this Subskill

Figure 1.  Problem-solving model for unstructured problems. Adapted from “Expert Problem Solving: Evidence from School and District Leaders,” by K. Leithwood and R. Steinbach, pp. 284-285. Copyright 1995 by the State University of New York Press.
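To make the scoring arithmetic concrete, the short sketch below totals one hypothetical scenario response under this scheme: five rater-scored subskills at 0-3 points each (0-15), plus the self-reported confidence (affect) item at 0-3 points, for a possible 18. The subskill labels follow the Leithwood and Steinbach (1995) components discussed in the text, but the scores themselves are illustrative assumptions, not data from the study.

```python
# Rater-scored subskills (0-3 each), following the Leithwood & Steinbach (1995)
# components referenced in the text; the scores below are hypothetical.
subskill_scores = {
    "problem_interpretation": 2,
    "goals": 3,
    "principles_and_values": 2,
    "constraints": 1,
    "solution_processes": 3,
}

rubric_total = sum(subskill_scores.values())   # out of 15 rater-scored points

# Self-reported confidence in the proposed plan (affect), 0-3
affect = 2

scenario_score = rubric_total + affect         # out of 18 total points
print(scenario_score)                          # -> 13
```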

I compared Likert-scale items and rubric scores via descriptive statistics, and I also compared rubric scores via a paired-samples t-test and Cohen’s d, all using IBM SPSS. I did not compare the Likert-scale items about situational awareness, flexibility, and problem-solving ability with t-tests or Cohen’s d, since these items did not represent a validated instrument; they were single items based upon participants’ ratings against literature-based definitions. However, the comparison of means from fall to spring was triangulated with qualitative results to provide meaning. For example, before concluding that participants’ self-assessed ratings for perceived problem-solving abilities increased, I examined both the mean difference for items from fall to spring and what participants shared throughout the qualitative survey items and focus group interviews.
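For readers who want to reproduce this kind of analysis outside of SPSS, the sketch below runs a paired-samples t-test and one common paired effect-size calculation in Python. The tiny dataset and variable names are hypothetical, invented here for illustration rather than drawn from the study, and the d formula shown (mean difference over the standard deviation of the difference scores) is only one of several conventions.

```python
import numpy as np
from scipy import stats

# Hypothetical fall and spring rubric scores for the same participants
# (illustrative values only; not the study's data)
fall   = np.array([9, 8, 10, 11, 7, 9, 10, 8])
spring = np.array([9, 9, 10, 12, 8, 9, 11, 8])

# Paired-samples t-test, analogous to SPSS's Paired-Samples T Test
t_stat, p_value = stats.ttest_rel(spring, fall)

# Cohen's d for paired data: mean difference / SD of the difference scores
diff = spring - fall
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```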

Prior to scoring participants’ responses to the scenarios using the rubric, and in an effort to maximize the content validity of the rubric scores, I calibrated my use of the rubric with two experts from the field. Two accomplished principals, representing more than 45 combined years of experience in school-level administration, collaboratively and comparatively scored participant responses. Prior to scoring, the team worked collaboratively to construct appropriate and comprehensive exemplar responses to the four problem-solving scenarios. The team then blindly scored the fall pretest scenario responses using the Leithwood and Steinbach (1995) rubric, and upon comparing scores, the interrater reliability correlation coefficient was .941, indicating a high degree of agreement among the team.
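The text does not specify which agreement index underlies the .941 coefficient. Assuming it reflects correlations between raters' scenario totals, a minimal sketch of one simple approach (the average pairwise Pearson correlation across the three raters) appears below; the rater labels and scores are hypothetical, and an intraclass correlation would be another reasonable choice.

```python
from itertools import combinations
import numpy as np

# Hypothetical rubric totals assigned by three raters to the same six
# pretest scenario responses (illustrative only)
ratings = {
    "researcher": [9, 12, 7, 10, 14, 8],
    "principal_a": [9, 11, 7, 10, 13, 8],
    "principal_b": [10, 12, 6, 10, 14, 9],
}

# Average pairwise Pearson correlation as a simple index of agreement
pairwise_r = [np.corrcoef(a, b)[0, 1]
              for a, b in combinations(ratings.values(), 2)]
print(round(float(np.mean(pairwise_r)), 3))
```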

Qualitative data. These data were obtained from open-ended items on the survey, including participants’ responses to the given scenarios, as well as from the focus group interview transcripts. I analyzed qualitative data consistent with the grounded theory principles of Strauss and Corbin (1998) and the constant comparative methods of Glaser (1965), including a period of open coding of results, leading to axial coding to determine the codes’ dimensions and the relationships between categories and their subcategories, and selective coding to arrive at themes. Throughout the data analysis process, I repeatedly returned to the raw data to determine the applicability of emergent codes to previously analyzed data. Some categorical codes based upon the review of literature were included in the initial coding process. These codes were derived from the existing theoretical problem-solving models of Bolman and Deal (2008) and Leithwood and Steinbach (1995), and they included modeling, relationships, and best for kids. Open codes that emerged from the participants’ responses included experience, personality traits, current job/role, and team. Axial coding revealed, for example, that the current jobs or roles participants cited could provide either sufficient building-wide perspective and situational memory (e.g., for special education teachers and school counselors) or insufficient experience (e.g., for classroom teachers) to solve the given problems with confidence. From such understandings of the codes, categories, and their dimensions, themes were developed.

Results

Quantitative Results. First, participants’ overall, aggregate responses (not matched pairs) were compared descriptively from fall to spring. These findings are outlined in Table 3. As seen in the table, each item saw a modest increase over the course of the year. Participant perceptions of their problem-solving abilities across the three constructs presented (situational awareness, flexibility, and problem solving) increased, as did the average group score for the problem-solving scenarios. However, due to participant differences in the two data collection periods, these aggregate averages do not represent a matched-pairs dataset.

Table 3 Fall to Spring Comparison of Likert-Scale and Rubric-Scored Items

a These problem-solving dimensions from the literature were rated by participants on a scale from 1-10.
b Participants received a rubric score for each scenario between 0 and 18. Participants’ two scenario scores for each data collection period (fall, spring) were averaged to arrive at the scores represented here.

To determine the statistical significance of the increase in participants’ problem-solving rubric scores, a paired-samples t-test was applied to the fall (M = 9.15; SD = 2.1) and spring (M = 9.25; SD = 2.3) averages. Recall that 20 participants had valid surveys for both the fall and spring. The t-test (t = -.153; df = 19; p = .880) revealed no statistically significant change from fall to spring, despite the minor increase (0.10). The small sample size (n = 20) for the paired-samples t-test may have contributed to the lack of statistical significance; however, standard deviations were also relatively small, so the question of effect size was of particular importance. I applied Cohen’s d to calculate the effect size. Cohen’s d was 0.05, which is very small, indicating that little change—really no improvement, from a statistical standpoint—occurred in participants’ ability to create viable action plans to solve real-world problems throughout the year. However, the participants’ perceptions of their problem-solving abilities did increase, as evidenced by the increases in the paired-samples perception means shown in Table 3, though these data were only examined descriptively (from a quantitative perspective) because the questions were individual items that are not part of a validated instrument.
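As a back-of-envelope check on the reported effect size, a mean gain of 0.10 divided by the average of the two standard deviations gives roughly the reported d of 0.05. The sketch below assumes that convention for paired designs (the text does not state which formula the value came from) and also shows the difference-score standard deviation implied by the reported t statistic.

```python
import math

# Summary statistics reported in the text (n = 20 matched pairs)
n = 20
mean_fall, sd_fall = 9.15, 2.1
mean_spring, sd_spring = 9.25, 2.3
t_reported = -0.153

mean_diff = mean_spring - mean_fall                 # 0.10

# Cohen's d using the average of the two SDs (one convention for paired data)
d_av = mean_diff / ((sd_fall + sd_spring) / 2)
print(round(d_av, 2))                               # ~0.05, matching the report

# SD of the difference scores implied by the reported t statistic
sd_diff = abs(mean_diff) * math.sqrt(n) / abs(t_reported)
print(round(sd_diff, 1))                            # ~2.9
```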

Qualitative Results.   Participant responses to open-ended items on the questionnaire, responses to the scenarios, and oral responses to focus group interview questions served as sources of qualitative data. Since the responses to the scenarios were focused on participant competence with problem solving, as measured by the aforementioned rubric (Leithwood &  Steinbach, 1995), these data were examined separately from data collected from the other two sources.

Responses to scenarios. As noted, participants’ rubric ratings for the scenarios did not display a statistically significant increase from fall to spring. As such, this outline will not focus upon changes in responses from fall to spring. Rather, I examined the responses overall, through the lens of the Leithwood and Steinbach (1995) problem-solving framework indicators against which they were rated. Participants typically outlined reasonable, appropriate, and logical solution processes. For example, in a potential bullying case scenario, two different participants offered, “I would speak to the other [students] individually if they have said or done anything mean to other student [sic] and be clear that it is not tolerable and will result in major consequences” and “I would initiate an investigation into the situation beginning with [an] interview with the four girls.” These responses reflect actions that the consulted experts anticipated from participants and deemed logical and needed interventions. However, these two participants omitted other needed steps, such as addressing the bullied student’s mental health needs, based upon her mother’s report of suicidal ideations. Accordingly, participants earned points for reasonable and logical responses very consistently, yet few full-credit responses were observed.

Problem interpretation scores were much more varied. For this indicator, some participants were able to identify many, if not all, the major issues in the scenarios that needed attention. For example, for a scenario where two teachers were not interacting professionally toward each other, many participants correctly identified that this particular scenario could include elements of sexual harassment, professionalism, teaching competence, and personality conflict. However, many other participants missed at least two of these key elements of the problem, leaving their solution processes incomplete. The categories of (a) goals and (b) principles and values also displayed a similarly wide distribution of response ratings.

One category, constraints, presented consistent difficulty for the participants. Ratings were routinely 0 or 1. Participants could not consistently report what barriers or obstacles would need addressing prior to success with their proposed solutions. To be clear, it was not a matter of participants listing invalid or unrealistic barriers or obstacles; rather, participants typically omitted constraints altogether from their responses. For example, for a scenario involving staff members arriving late and unprepared to data team meetings, many participants did not identify that a school culture of not valuing data-driven decision making, or a lack of norms for data team work, could be constraints that the principal would likely face prior to reaching a successful resolution.

Responses to open-ended items. When asked for a rationale regarding their ratings for situational awareness, flexibility, and problem solving, participants provided open-ended responses. These responses revealed patterns worth considering. This discussion treats responses from the pre- and post- data collection periods in aggregate, due to the similarities in responses between the two periods. The most frequently observed code (112 incidences) was experience. Closely related was the code current job/role (50 incidences). Together, these codes typically represented a theme that participants were linking their confidence with respect to problem solving to their exposure (or lack thereof) in their professional work. For example, a participant reported, “As a school counselor, I have a lot of contact with many stakeholders in the school -admin [sic], parents, teachers, staff, etc. I feel that I have a pretty good handle on the systemic issues.” This example is one of many where individuals working in counseling, instructional coaching, special education, and other support roles expressed their advanced levels of perspective based upon their regular contact with many stakeholders, including administrators. Thus, they felt they had more prior knowledge and situational memory about problems in their schools.

However, this category of codes also included those, mostly classroom or unified arts teachers, who expressed that their relative lack of experiences outside their own classrooms limited their perspective for larger-scale problem solving. One teacher succinctly summarized this sentiment: “I have limited experience in being part of situations outside of my classroom.” Another focused on general problem-solving skill in her classroom not necessarily translating to confidence with problem solving at the school level: “I feel that I have a high situational awareness as a teacher in the classroom, but as I move through these leadership programs I find that I struggle to take the perspective of a leader.” These experiences were presented in opposition to their book learning or university training. There were a number of references (65 combined) to the value of readings, class discussions, group work, the scenarios presented, research, and coursework in the spring survey. When asked what they still needed, participants again often referenced experience. One participant summarized this concept: “I think that I, personally, need more experience in the day-to-day . . . setting.” Another specifically separated experience from scenario work: “[T]here is [sic] some things you can not [sic] learn from merely discussing a ‘what if’ scenario. A seasoned administrator learns problem solving skills on the job.”

Another frequently cited code was  personality traits  (63 incidences), which involved participants linking elements of their own personalities to their perceived abilities to process problems, almost exclusively from an assets perspective. Examples of traits identified by participants as potentially helpful in problem solving included: open-mindedness, affinity for working with others, not being judgmental, approachability, listening skills, and flexibility. One teacher exemplified this general approach by indicating, “I feel that I am a good listener in regards to inviting opinions. I enjoy learning through cooperation and am always willing to adapt my teaching to fit needs of the learners.” However, rare statements of personality traits interfering with problem solving included, “I find it hard to trust others [ sic ] abilities” and “my personal thoughts and biases.”

Another important category of the participant responses involved connections with others. First, there were many references to  relationships  (27 incidences), mostly from the perspective that building positive relationships leads to greater problem-solving ability, as the aspiring leader knows stakeholders better and can rely on them due to the history of positive interactions. One participant framed this idea from a deficit perspective, “Not knowing all the outlying relationships among staff members makes situational awareness difficult.” Another identified that established positive relationships are already helpful to an aspiring leader, “I have strong rapport with fellow staff members and administrators in my building.” In a related way, many instances of the code  team  were identified (29). These references overwhelmingly identified that solving problems within a team context is helpful. One participant stated, “I often team with people to discuss possible solutions,” while another elaborated,

I recognize that sometimes problems may arise for which I am not the most qualified or may not have the best answer. I realize that I may need to rely on others or seek out help/opinions to ensure that I make the appropriate decision.

Overall, participants recognized that problem-solving for leaders does not typically occur in a vacuum.

Responses to focus group interview questions.  As with the open-ended responses, patterns were evident in the interview responses, and many of these findings were supportive of the aforementioned themes. First, participants frequently referenced the power of group work to help build their understanding about problems and possible solutions. One participant stated, “hearing other people talk and realizing other concerns that you may not have thought of . . . even as a teacher sometimes, you look at it this way, and someone else says to see it this way.” Another added, “seeing it from a variety of persons [ sic ] point of views. How one person was looking at it, and how another person was looking at it was really helpful.” Also, the participants noted the quality of the discussion was a direct result of “professors who have had real-life experience” as practicing educational leaders, so they could add more realistic feedback and insight to the discussions.

Perhaps most notable in the participant responses during the focus groups was the emphasis on the value of real-world scenarios for the students. These were referenced, without prompting, in all three focus groups by many participants. Answers to the question about what has been most helpful in the development of their problem-solving skills included, “I think the real-world application we are doing,” “I think being presented with all the scenarios,” and “[the professor] brought a lot of real situations.”

With respect to what participants believed they still needed to become better and more confident problem solvers, two patterns emerged. First, students recognized that they have much more to learn, especially with respect to policy and law. It is noteworthy that, with few exceptions, these students had not taken the policy or law courses in the program, and they had not yet completed their administrative internships. Some students actually reported rating themselves as less capable problem solvers in the spring because they now understood more clearly what they lacked in knowledge. One student exemplified this sentiment, “I might have graded myself higher in the fall than I did now . . . [I now can] self identify areas I could improve in that I was not as aware of.” Less confidence in the spring was a minority opinion, however. In a more typical response, another participant stated, “I feel much more prepared for that than I did at the beginning of the year.”

Overall, the most frequently discussed future need identified was experience, either through the administrative internship or work as a formal school administrator. Several students summarized this idea, “That real-world experience to have to deal with it without being able to talk to 8 other people before having to deal with it . . . until you are the person . . . you don’t know” and “They tell you all they want. You don’t know it until you are in it.” Overall, most participants perceived themselves to have grown as problem solvers, but they overwhelmingly recognized that they needed more learning and experience to become confident and effective problem solvers.

Discussion

This study continues a research pathway about the development of problem-solving skills for administrators by focusing on their preparation. The participants did not see a significant increase in their problem-solving skills over the year-long course in educational leadership.

Although this finding is not consistent with the findings of others who focused on the development of problem-solving skills for school leaders (Leithwood & Steinbach, 1995; Shapira-Lishchinsky, 2015), nor with PBL research about the benefits of that approach for aspiring educational leaders (Copland, 2000; Hallinger & Bridges, 2017), it is important to note that the participants in this study were at a different point in their careers. First, they were aspirants, as opposed to practicing leaders. Also, the studied intervention (scenarios) was neither the same as nor nearly as comprehensive as the prescriptive PBL approach. Further, unlike the participants in either the practicing-leader or PBL studies, these individuals had not yet had their internship experiences, so they had no practical work as educational leaders. This theme of lacking practical experience was observed in both open-ended responses and focus group interviews, with participants pointing to their upcoming internship experiences, or even their eventual work as administrators, as a key missing piece of their preparation.

Despite the lack of real gains in participants’ problem-solving scores across the year of preparation, participants did, generally, report an increase in their confidence in problem solving, which they attributed to a number of factors. The first was the theme of real-world context. This finding was consistent with others who have advocated for teaching problem solving through real-world scenarios (Duke, 2014; Leithwood & Steinbach, 1992, 1995; Myran & Sutherland, 2016; Shapira-Lishchinsky, 2015). This study adds to that conversation not only a corroboration of the importance of this method (at least in aspiring leaders’ minds) but also the finding that participants specifically recognized their professors’ experiences as school administrators as important for providing examples, context, and credibility to the work in the classroom.

Beyond the scenario approach, the participants also recognized the importance of learning from one another. In addition to the experiences of their practitioner-professors, many participants espoused the value of hearing the diverse perspectives of other students. The use of peer discussion was also an element of instruction in the referenced studies (Leithwood & Steinbach, 1995; Shapira-Lishchinsky, 2015), corroborating the power of aspiring leaders learning from one another and supporting existing literature about the social nature of problem solving (Berger & Luckmann, 1966; Leithwood & Steinbach, 1992; Vygotsky, 1978).

Finally, the ultimate theme identified through this study is the need for real-world experience in the field as an administrator or intern. It is simply not enough to learn about problem solving or learn the background knowledge needed to solve problems, even when the problems presented are real-world in nature. Scenarios are not enough for aspiring leaders to perceive their problem-solving abilities to be adequate or for their actual problem-solving abilities to improve. They need to be, as some of the participants reasoned, in positions of actual responsibility, where the weight of their decisions will have tangible impacts on stakeholders, including students.

Participants’ responses to the scenarios also connected to the Four Frames model of Bolman and Deal (2008). The element for which participants received the consistently highest scores was identifying solution processes. This area might most logically be connected to the structural and human resource frames, as solutions typically involve working to meet individuals’ needs, as is necessary in the human resource frame, and attending to protocols and procedures, which is the essence of the structural frame. As identified above, the political and symbolic frames have been cited by Bolman and Deal (2008) as the most underdeveloped among educational leaders, and this assertion is corroborated by the finding in this study that participants struggled the most with identifying constraints, which can sometimes arise from an understanding of the competing personal interests in an organization (political frame) and the underlying meaning behind aspects of an organization (symbolic frame), such as unspoken rules and traditions. The lack of success identifying constraints is also consistent with participants’ statements that they needed actual experiences in leadership roles, during which they would likely encounter, firsthand, the types of constraints they were unable to articulate for the given scenarios. Simply, they had not yet “lived” these types of obstacles.

The study includes several notable limitations. First, the sample size is limited, particularly with only 20 participants’ data available for the matched pairs analysis. Further, this study was conducted at one university, within one particular certification program, and over three sections of one course, which represented about one-half of the time students spend in the program. It is likely that more gains in problem-solving ability and confidence would have been observed if this study had continued through the internship year. Also, the study did not include a control group. The lack of an experimental design limits the power of conclusions about causality. However, this limitation is mitigated by two factors. First, the results did not indicate a statistically significant improvement, so there is no need to attribute a gain score to a particular variable (e.g., the use of scenarios); second, the qualitative results did reveal the perceived value for participants in the use of scenarios, without any prompting from the researcher. Finally, the participant pool was not particularly diverse, though this is not unusual for the selected university. It reflects a contemporary challenge the university’s state is facing: educating an increasingly diverse student population with a teaching and administrative workforce that is predominantly White.

The findings in this study invite further research. To address some of the limitations identified here, expanding this study to include aspiring administrators at other institutions, representing different areas of the United States and other developed countries, would provide a more generalizable set of results. Further, studying the development of problem-solving skills during the administrative internship experience would add to the work outlined here by incorporating the practical experience of participants.

In short, this study illustrates for those who prepare educational leaders the value of using scenarios to increase aspiring leaders’ confidence and knowledge. However, scenarios alone are not enough to engender significant change in their actual problem-solving abilities. While real-world context is important to the development of aspiring educational leaders’ problem-solving skills, the best context is likely to be the real work of administration.

References

Allison, D. J. (1996). Problem finding, classification and interpretation: In search of a theory of administrative problem processing. In K. Leithwood, J. Chapman, D. Corson, P. Hallinger, & A. Hart (Eds.), International handbook of educational leadership and administration (pp. 477–549). Norwell, MA: Kluwer Academic.

Berger, P. L., & Luckmann, T. (1966). The social construction of reality. Garden City, NY: Doubleday.

Bolman, L. G., & Deal, T. E. (2008). Reframing organizations: Artistry, choice, and leadership (4th ed.). San Francisco, CA: Jossey-Bass.

Bronstein, C., & Fitzpatrick, K. R. (2015). Preparing tomorrow’s leaders: Integrating leadership development in journalism and mass communication education.  Journalism & Mass Communication Educator, 70 (1), 75–88. https://doi.org/10.1177/1077695814566199

Copland, M. A. (2000). Problem-based learning and prospective principals’ problem-framing ability.  Educational Administration Quarterly ,  36 , 585–607.

DeLuca, C., Bolden, B., & Chan, J. (2017). Systemic professional learning through collaborative inquiry: Examining teachers’ perspectives. Teaching and Teacher Education, 67, 67–78. https://doi.org/10.1016/j.tate.2017.05.014

Dimmock, C. (1996). Dilemmas for school leaders and administrators in restructuring. In K. Leithwood, J. Chapman, D. Corson, P. Hallinger, & A. Hart (Eds.), International handbook of educational leadership and administration (pp. 135–170). Norwell, MA: Kluwer Academic.

Duke, D. L. (2014). A bold approach to developing leaders for low-performing schools. Management in Education, 28 (3), 80–85. https://doi.org/10.1177/0892020614537665

Gay, L. R., & Airasian, P. (2003). Educational research: Competencies for analysis and applications (7th ed.). Upper Saddle River, NJ: Pearson Education.

Glaser, B. G. (1965). The constant comparative method of qualitative analysis.  Social Problems, 12 (4), 436-445.

Hallinger, P., & Bridges, E. (1993). Problem-based learning in medical and managerial education. In P. Hallinger, K. Leithwood, & J. Murphy (Eds.),  Cognitive perspectives on educational leadership  (pp. 253–267). New York: Teachers’ College Press.

Hallinger, P., & Bridges, E. M. (2017). A systematic review of research on the use of problem-based learning in the preparation and development of school leaders. Educational Administration Quarterly, 53(2), 255–288. https://doi.org/10.1177/0013161X16659347

Hallinger, P., & McCary, C. E. (1990). Developing the strategic thinking of instructional leaders. Elementary School Journal ,  91 (2), 89–108.

Howard, B. B., McClannon, T. W., & Wallace, P. R. (2014). Collaboration through role play among graduate students in educational leadership in distance learning.  American Journal of Distance Education ,  28 (1), 51–61. https://doi.org/10.1080/08923647.2014.868665

Johnson, B. L., & Kruse, S. D. (2009).  Decision making for educational leaders: Underexamined dimensions and issues . Albany, NY: State University of New York Press.

Leithwood, K., & Steinbach, R. (1992). Improving the problem-solving expertise of school administrators: Theory and practice. Education and Urban Society, 24(3), 317–345. https://doi.org/10.1177/0013124592024003003

Leithwood, K., & Steinbach, R. (1995).  Expert problem solving: Evidence from school and district leaders . Albany, NY: State University of New York Press.

Marzano, R. J., Waters, T., & McNulty, B. A. (2005).  School leadership that works: From research to results . Denver, CO: ASCD.

Myran, S., & Sutherland, I. (2016). Problem posing in leadership education: Using case study to foster more effective problem solving.  Journal of Cases in Educational Leadership ,  19 (4), 57–71. https://doi.org/10.1177/1555458916664763

Rochford, L., & Borchert, P. S. (2011). Assessing higher level learning: Developing rubrics for case analysis.  Journal of Education for Business ,  86 , 258–265. https://doi.org/10.1080/08832323.2010.512319

Salas, E., Wildman, J. L., & Piccolo, R. F. (2009). Using simulation based training to enhance management education.  Academy of Management Learning & Education ,  8 (4), 559–573. https://doi.org/10.5465/AMLE.2009.47785474

Shapira-Lishchinsky, O. (2015). Simulation-based constructivist approach for education leaders. Educational Management Administration & Leadership ,  43 (6), 972–988. https://doi.org/10.1177/1741143214543203

Simon, H. A. (1993). Decision making: Rational, nonrational, and irrational.  Educational Administration Quarterly ,  29 (3), 392–411. https://doi.org/10.1177/0013161X93029003009

Stentoft, D. (2017). From saying to doing interdisciplinary learning: Is problem-based learning the answer?  Active Learning in Higher Education ,  18 (1), 51–61. https://doi.org/10.1177/1469787417693510

Strauss, A., & Corbin, J. (1998).  Basics of qualitative research  (2nd ed.). Thousand Oaks, CA: Sage.

VanLehn, K. (1991). Rule acquisition events in the discovery of problem-solving strategies. Cognitive Science ,  15 (1), 1–47. https://doi.org/10.1016/0364-0213(91)80012-T

Vygotsky, L. S. (1978).  Mind in society . Cambridge, MA: Harvard University Press.

Winter, R. (1982). Dilemma analysis: A contribution to methodology for action research. Cambridge Journal of Education, 12 (3), 166-173.

Author Biography

Dr. Jeremy Visone is an Assistant Professor of Educational Leadership, Policy, & Instructional Technology. Until 2016, he worked as an administrator at both the elementary and secondary levels, most recently at Anna Reynolds Elementary School, a National Blue Ribbon School in 2016. Dr. Visone can be reached at  [email protected] .

Review Article | Open access | Published: 11 January 2023

The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature

  • Enwei Xu (ORCID: orcid.org/0000-0001-6424-8169)
  • Wei Wang
  • Qingxia Wang

Humanities and Social Sciences Communications, volume 10, Article number: 16 (2023)


Abstract

Collaborative problem-solving has been widely embraced in the classroom instruction of critical thinking, which is regarded as the core of curriculum reform based on key competencies in the field of education as well as a key competence for learners in the 21st century. However, the effectiveness of collaborative problem-solving in promoting students’ critical thinking remains uncertain. This current research presents the major findings of a meta-analysis of 36 pieces of the literature revealed in worldwide educational periodicals during the 21st century to identify the effectiveness of collaborative problem-solving in promoting students’ critical thinking and to determine, based on evidence, whether and to what extent collaborative problem solving can result in a rise or decrease in critical thinking. The findings show that (1) collaborative problem solving is an effective teaching approach to foster students’ critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]); (2) in respect to the dimensions of critical thinking, collaborative problem solving can significantly and successfully enhance students’ attitudinal tendencies (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]); nevertheless, it falls short in terms of improving students’ cognitive skills, having only an upper-middle impact (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]); and (3) the teaching type (χ2 = 7.20, P < 0.05), intervention duration (χ2 = 12.18, P < 0.01), subject area (χ2 = 13.36, P < 0.05), group size (χ2 = 8.77, P < 0.05), and learning scaffold (χ2 = 9.03, P < 0.01) all have an impact on critical thinking, and they can be viewed as important moderating factors that affect how critical thinking develops. On the basis of these results, recommendations are made for further study and instruction to better support students’ critical thinking in the context of collaborative problem-solving.


Introduction

Although critical thinking has a long history in research, the concept of critical thinking, which is regarded as an essential competence for learners in the 21st century, has recently attracted more attention from researchers and teaching practitioners (National Research Council, 2012 ). Critical thinking should be the core of curriculum reform based on key competencies in the field of education (Peng and Deng, 2017 ) because students with critical thinking can not only understand the meaning of knowledge but also effectively solve practical problems in real life even after knowledge is forgotten (Kek and Huijser, 2011 ). The definition of critical thinking is not universal (Ennis, 1989 ; Castle, 2009 ; Niu et al., 2013 ). In general, the definition of critical thinking is a self-aware and self-regulated thought process (Facione, 1990 ; Niu et al., 2013 ). It refers to the cognitive skills needed to interpret, analyze, synthesize, reason, and evaluate information as well as the attitudinal tendency to apply these abilities (Halpern, 2001 ). The view that critical thinking can be taught and learned through curriculum teaching has been widely supported by many researchers (e.g., Kuncel, 2011 ; Leng and Lu, 2020 ), leading to educators’ efforts to foster it among students. In the field of teaching practice, there are three types of courses for teaching critical thinking (Ennis, 1989 ). The first is an independent curriculum in which critical thinking is taught and cultivated without involving the knowledge of specific disciplines; the second is an integrated curriculum in which critical thinking is integrated into the teaching of other disciplines as a clear teaching goal; and the third is a mixed curriculum in which critical thinking is taught in parallel to the teaching of other disciplines for mixed teaching training. Furthermore, numerous measuring tools have been developed by researchers and educators to measure critical thinking in the context of teaching practice. These include standardized measurement tools, such as WGCTA, CCTST, CCTT, and CCTDI, which have been verified by repeated experiments and are considered effective and reliable by international scholars (Facione and Facione, 1992 ). In short, descriptions of critical thinking, including its two dimensions of attitudinal tendency and cognitive skills, different types of teaching courses, and standardized measurement tools provide a complex normative framework for understanding, teaching, and evaluating critical thinking.

Cultivating critical thinking in curriculum teaching can start with a problem, and one of the most popular critical thinking instructional approaches is problem-based learning (Liu et al., 2020 ). Duch et al. ( 2001 ) noted that problem-based learning in group collaboration is progressive active learning, which can improve students’ critical thinking and problem-solving skills. Collaborative problem-solving is the organic integration of collaborative learning and problem-based learning, which takes learners as the center of the learning process and uses problems with poor structure in real-world situations as the starting point for the learning process (Liang et al., 2017 ). Students learn the knowledge needed to solve problems in a collaborative group, reach a consensus on problems in the field, and form solutions through social cooperation methods, such as dialogue, interpretation, questioning, debate, negotiation, and reflection, thus promoting the development of learners’ domain knowledge and critical thinking (Cindy, 2004 ; Liang et al., 2017 ).

Collaborative problem-solving has been widely used in the teaching practice of critical thinking, and several studies have attempted to conduct a systematic review and meta-analysis of the empirical literature on critical thinking from various perspectives. However, little attention has been paid to the impact of collaborative problem-solving on critical thinking. Determining how best to develop and enhance critical thinking through collaborative problem-solving therefore requires examining how critical thinking instruction is implemented; this issue remains largely unexplored, which leaves many teachers poorly equipped to teach critical thinking effectively (Leng and Lu, 2020; Niu et al., 2013). For example, Huber (2016) provided meta-analysis findings from 71 publications on gains in critical thinking over various time frames in college, with the aim of determining whether critical thinking was truly teachable. That study found that learners significantly improve their critical thinking while in college and that critical thinking differs with factors such as teaching strategies, intervention duration, subject area, and teaching type. The usefulness of collaborative problem-solving in fostering students’ critical thinking, however, was not determined by this study, nor did it reveal whether there were significant variations among the different elements. A meta-analysis of 31 pieces of educational literature was conducted by Liu et al. (2020) to assess the impact of problem-solving on college students’ critical thinking. These authors found that problem-solving could promote the development of critical thinking among college students and proposed establishing a reasonable group structure for problem-solving in a follow-up study to improve students’ critical thinking. Additionally, previous empirical studies have reached inconclusive and even contradictory conclusions about whether and to what extent collaborative problem-solving increases or decreases critical thinking levels. As an illustration, Yang et al. (2008) carried out an experiment on the integrated curriculum teaching of college students based on a web bulletin board with the goal of fostering participants’ critical thinking in the context of collaborative problem-solving. Their research revealed that, through sharing, debating, examining, and reflecting on various experiences and ideas, collaborative problem-solving can considerably enhance students’ critical thinking in real-life problem situations. In contrast, collaborative problem-solving had a positive impact on learners’ interaction and could improve learning interest and motivation but could not significantly improve students’ critical thinking when compared to traditional classroom teaching, according to research by Naber and Wyatt (2014) and Sendag and Odabasi (2009) on undergraduate and high school students, respectively.

The above studies show that there is inconsistency regarding the effectiveness of collaborative problem-solving in promoting students’ critical thinking. Therefore, it is essential to conduct a thorough and trustworthy review to detect and decide whether and to what degree collaborative problem-solving can result in a rise or decrease in critical thinking. Meta-analysis is a quantitative analysis approach that is utilized to examine quantitative data from various separate studies that are all focused on the same research topic. This approach characterizes the size of an effect by averaging the effect sizes of numerous individual studies in an effort to reduce the uncertainty brought on by independent research and produce more conclusive findings (Lipsey and Wilson, 2001).

This paper used a meta-analytic approach and carried out a meta-analysis to examine the effectiveness of collaborative problem-solving in promoting students’ critical thinking in order to make a contribution to both research and practice. The following research questions were addressed by this meta-analysis:

What is the overall effect size of collaborative problem-solving in promoting students’ critical thinking and its impact on the two dimensions of critical thinking (i.e., attitudinal tendency and cognitive skills)?

How are the disparities between the study conclusions impacted by various moderating variables if the impacts of various experimental designs in the included studies are heterogeneous?

This research followed the strict procedures (e.g., database searching, identification, screening, eligibility, merging, duplicate removal, and analysis of included studies) of Cooper’s ( 2010 ) proposed meta-analysis approach for examining quantitative data from various separate studies that are all focused on the same research topic. The relevant empirical research that appeared in worldwide educational periodicals within the 21st century was subjected to this meta-analysis using Rev-Man 5.4. The consistency of the data extracted separately by two researchers was tested using Cohen’s kappa coefficient, and a publication bias test and a heterogeneity test were run on the sample data to ascertain the quality of this meta-analysis.

Data sources and search strategies

There were three stages to the data collection process for this meta-analysis, as shown in Fig. 1 , which shows the number of articles included and eliminated during the selection process based on the statement and study eligibility criteria.

Figure 1: Flowchart of the number of records identified, included, and excluded during article selection.

First, the databases used to systematically search for relevant articles were the journal papers of the Web of Science Core Collection and the Chinese Core source journal, as well as the Chinese Social Science Citation Index (CSSCI) source journal papers included in CNKI. These databases were selected because they are credible platforms that are sources of scholarly and peer-reviewed information with advanced search tools and contain literature relevant to the subject of our topic from reliable researchers and experts. The search string with the Boolean operator used in the Web of Science was “TS = (((“critical thinking” or “ct” and “pretest” or “posttest”) or (“critical thinking” or “ct” and “control group” or “quasi experiment” or “experiment”)) and (“collaboration” or “collaborative learning” or “CSCL”) and (“problem solving” or “problem-based learning” or “PBL”))”. The research area was “Education Educational Research”, and the search period was “January 1, 2000, to December 30, 2021”. A total of 412 papers were obtained. The search string with the Boolean operator used in the CNKI was “SU = (‘critical thinking’*‘collaboration’ + ‘critical thinking’*‘collaborative learning’ + ‘critical thinking’*‘CSCL’ + ‘critical thinking’*‘problem solving’ + ‘critical thinking’*‘problem-based learning’ + ‘critical thinking’*‘PBL’ + ‘critical thinking’*‘problem oriented’) AND FT = (‘experiment’ + ‘quasi experiment’ + ‘pretest’ + ‘posttest’ + ‘empirical study’)” (translated into Chinese when searching). A total of 56 studies were found throughout the search period of “January 2000 to December 2021”. From the databases, all duplicates and retractions were eliminated before exporting the references into Endnote, a program for managing bibliographic references. In all, 466 studies were found.

Second, the studies that matched the inclusion and exclusion criteria for the meta-analysis were chosen by two researchers after they had reviewed the abstracts and titles of the gathered articles, yielding a total of 126 studies.

Third, two researchers thoroughly reviewed each included article’s whole text in accordance with the inclusion and exclusion criteria. Meanwhile, a snowball search was performed using the references and citations of the included articles to ensure complete coverage of the articles. Ultimately, 36 articles were kept.

Two researchers worked together to carry out this entire process, and a consensus rate of almost 94.7% was reached after discussion and negotiation to clarify any emerging differences.

Eligibility criteria

Since not all the retrieved studies matched the criteria for this meta-analysis, eligibility criteria for both inclusion and exclusion were developed as follows:

The publication language of the included studies was limited to English and Chinese, and the full text had to be obtainable. Articles not written in these languages, and articles not published between 2000 and 2021, were excluded.

The research design of the included studies must be empirical and quantitative studies that can assess the effect of collaborative problem-solving on the development of critical thinking. Articles that could not identify the causal mechanisms by which collaborative problem-solving affects critical thinking, such as review articles and theoretical articles, were excluded.

The research method of the included studies must be a randomized controlled experiment, a quasi-experiment, or a natural experiment, all of which have a higher degree of internal validity owing to strong experimental designs and can plausibly provide evidence that critical thinking and collaborative problem-solving are causally related. Articles using non-experimental research methods, such as purely correlational or observational studies, were excluded.

The participants of the included studies were only students in school, including K-12 students and college students. Articles in which the participants were non-school students, such as social workers or adult learners, were excluded.

The research results of the included studies must report the quantitative indicators needed to gauge critical thinking’s impact (e.g., sample size, mean, and standard deviation). Articles that lacked specific measurement indicators for critical thinking, and for which the effect size therefore could not be calculated, were excluded.

Data coding design

In order to perform a meta-analysis, it is necessary to collect the most important information from the articles, codify that information’s properties, and convert descriptive data into quantitative data. Therefore, this study designed a data coding template (see Table 1 ). Ultimately, 16 coding fields were retained.

The designed data-coding template consisted of three pieces of information. Basic information about the papers was included in the descriptive information: the publishing year, author, serial number, and title of the paper.

The variable information for the experimental design had three variables: the independent variable (instruction method), the dependent variable (critical thinking), and the moderating variable (learning stage, teaching type, intervention duration, learning scaffold, group size, measuring tool, and subject area). Depending on the topic of this study, the intervention strategy, as the independent variable, was coded into collaborative and non-collaborative problem-solving. The dependent variable, critical thinking, was coded as a cognitive skill and an attitudinal tendency. And seven moderating variables were created by grouping and combining the experimental design variables discovered within the 36 studies (see Table 1 ), where learning stages were encoded as higher education, high school, middle school, and primary school or lower; teaching types were encoded as mixed courses, integrated courses, and independent courses; intervention durations were encoded as 0–1 weeks, 1–4 weeks, 4–12 weeks, and more than 12 weeks; group sizes were encoded as 2–3 persons, 4–6 persons, 7–10 persons, and more than 10 persons; learning scaffolds were encoded as teacher-supported learning scaffold, technique-supported learning scaffold, and resource-supported learning scaffold; measuring tools were encoded as standardized measurement tools (e.g., WGCTA, CCTT, CCTST, and CCTDI) and self-adapting measurement tools (e.g., modified or made by researchers); and subject areas were encoded according to the specific subjects used in the 36 included studies.

The data information contained three metrics for measuring critical thinking: sample size, average value, and standard deviation. It is vital to remember that studies with various experimental designs frequently adopt different formulas to determine the effect size; this paper used the standardized mean difference (SMD) formula proposed by Morris (2008, p. 369; see Supplementary Table S3).

Procedure for extracting and coding data

According to the data coding template (see Table 1 ), the 36 papers’ information was retrieved by two researchers, who then entered them into Excel (see Supplementary Table S1 ). The results of each study were extracted separately in the data extraction procedure if an article contained numerous studies on critical thinking, or if a study assessed different critical thinking dimensions. For instance, Tiwari et al. ( 2010 ) used four time points, which were viewed as numerous different studies, to examine the outcomes of critical thinking, and Chen ( 2013 ) included the two outcome variables of attitudinal tendency and cognitive skills, which were regarded as two studies. After discussion and negotiation during data extraction, the two researchers’ consistency test coefficients were roughly 93.27%. Supplementary Table S2 details the key characteristics of the 36 included articles with 79 effect quantities, including descriptive information (e.g., the publishing year, author, serial number, and title of the paper), variable information (e.g., independent variables, dependent variables, and moderating variables), and data information (e.g., mean values, standard deviations, and sample size). Following that, testing for publication bias and heterogeneity was done on the sample data using the Rev-Man 5.4 software, and then the test results were used to conduct a meta-analysis.

Publication bias test

Publication bias arises when the sample of studies included in a meta-analysis does not accurately reflect the general status of research on the relevant subject, and it may affect the reliability and accuracy of the meta-analysis. For this reason, the sample data need to be checked for publication bias (Stewart et al., 2006). A popular check is the funnel plot: publication bias is unlikely when the data are dispersed evenly on either side of the average effect size and concentrated toward the upper (more precise) region of the plot. The data in this analysis are evenly dispersed within the upper portion of the funnel (see Fig. 2), indicating that publication bias is unlikely in this situation.

Figure 2: Funnel plot of the publication bias test for the 79 effect quantities across the 36 studies.

Heterogeneity test

To select the appropriate effect model for the meta-analysis, one can use the results of a heterogeneity test on the effect sizes. In a meta-analysis, it is common practice to gauge the degree of data heterogeneity using the I² value; I² ≥ 50% is typically understood to denote medium-to-high heterogeneity, which calls for the adoption of a random-effects model, and otherwise a fixed-effects model ought to be applied (Lipsey and Wilson, 2001). The findings of the heterogeneity test in this paper (see Table 2) revealed that I² was 86% and displayed significant heterogeneity (P < 0.01). To ensure accuracy and reliability, the overall effect size ought to be calculated using the random-effects model.

The analysis of the overall effect size

This meta-analysis utilized a random-effects model to examine the 79 effect quantities from the 36 studies after accounting for heterogeneity. In accordance with Cohen’s criterion (Cohen, 1992), the analysis results, shown in the forest plot of the overall effect (see Fig. 3), make clear that the cumulative effect size of collaborative problem-solving is 0.82, which is statistically significant (z = 12.78, P < 0.01, 95% CI [0.69, 0.95]) and indicates that it can encourage learners to practice critical thinking.

Figure 3: Forest plot of the overall effect size across the 36 studies.

In addition, this study examined the two distinct dimensions of critical thinking to better understand the precise contributions that collaborative problem-solving makes to the growth of critical thinking. The findings (see Table 3) indicate that collaborative problem-solving improves cognitive skills (ES = 0.70) and attitudinal tendency (ES = 1.17), with significant intergroup differences (χ² = 7.95, P < 0.01). Although collaborative problem-solving improves both dimensions of critical thinking, it is essential to point out that the improvement in students’ attitudinal tendency is much more pronounced, with a significant comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), whereas the gain in learners’ cognitive skills is more modest and only just above average (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).

The analysis of moderator effect size

The 79 effect quantities in the full forest plot underwent a two-tailed test, which revealed significant heterogeneity (I² = 86%, z = 12.78, P < 0.01), indicating differences between effect sizes that may have been influenced by moderating factors other than sampling error. Therefore, subgroup analysis was used to explore the moderating factors that might produce this considerable heterogeneity, namely the learning stage, learning scaffold, teaching type, group size, intervention duration, measuring tool, and subject area covered by the 36 experimental designs, in order to further identify the key factors that influence critical thinking. The findings (see Table 4) indicate that the various moderating factors have advantageous effects on critical thinking. The subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), learning scaffold (χ² = 9.03, P < 0.01), and teaching type (χ² = 7.20, P < 0.05) are all significant moderators that can be applied to support the cultivation of critical thinking. However, since the learning stage and the measuring tool did not differ significantly between groups (χ² = 3.15, P = 0.21 > 0.05, and χ² = 0.08, P = 0.78 > 0.05), we cannot conclude that these two factors play a decisive role in supporting the cultivation of critical thinking in the context of collaborative problem-solving. The precise outcomes are as follows:

Various learning stages influenced critical thinking positively, without significant intergroup differences (χ² = 3.15, P = 0.21 > 0.05). High school had the largest effect size (ES = 1.36, P < 0.01), followed by higher education (ES = 0.78, P < 0.01) and middle school (ES = 0.73, P < 0.01). These results show that, despite the learning stage’s beneficial influence on cultivating learners’ critical thinking, we cannot conclude that it plays a decisive role in cultivating critical thinking in the context of collaborative problem-solving.

Different teaching types had varying degrees of positive impact on critical thinking, with significant intergroup differences (χ² = 7.20, P < 0.05). The effect sizes were ranked as follows: mixed courses (ES = 1.34, P < 0.01), integrated courses (ES = 0.81, P < 0.01), and independent courses (ES = 0.27, P < 0.01). These results indicate that the most effective approach to cultivating critical thinking through collaborative problem-solving is the teaching type of mixed courses.

Various intervention durations significantly improved critical thinking, and there were significant intergroup differences (χ² = 12.18, P < 0.01). The effect sizes tended to increase with longer intervention durations, and the improvement in critical thinking reached a significant level (ES = 0.85, P < 0.01) after more than 12 weeks of training. These findings indicate that intervention duration and the impact on critical thinking are positively correlated, with longer intervention durations having a greater effect.

Different learning scaffolds influenced critical thinking positively, with significant intergroup differences (χ² = 9.03, P < 0.01). The teacher-supported learning scaffold (ES = 0.92, P < 0.01) displayed a high level of significant impact, while the resource-supported learning scaffold (ES = 0.69, P < 0.01) and the technique-supported learning scaffold (ES = 0.63, P < 0.01) each attained a medium-to-high level of impact. These results show that the teacher-supported learning scaffold has the greatest impact on cultivating critical thinking.

Various group sizes influenced critical thinking positively, and the intergroup differences were statistically significant (χ² = 8.77, P < 0.05). Critical thinking showed a general declining trend with increasing group size. The overall effect size for groups of 2–3 people was the largest (ES = 0.99, P < 0.01), and when the group size was greater than 7 people, the improvement in critical thinking was only at the lower-middle level (ES < 0.5, P < 0.01). These results show that the impact on critical thinking is negatively associated with group size: as group size grows, the overall impact declines.

Various measuring tools influenced critical thinking positively, without significant intergroup differences (χ² = 0.08, P = 0.78 > 0.05). The self-adapting measurement tools obtained an upper-medium level of effect (ES = 0.78), whereas the overall effect size of the standardized measurement tools was the largest, achieving a significant level of effect (ES = 0.84, P < 0.01). These results show that, despite the beneficial influence of the measuring tool on cultivating critical thinking, we cannot conclude that it plays a decisive role in fostering the growth of critical thinking through collaborative problem-solving.

Different subject areas had varying degrees of impact on critical thinking, and the intergroup differences were statistically significant (χ² = 13.36, P < 0.05). Mathematics had the greatest overall impact, achieving a significant level of effect (ES = 1.68, P < 0.01), followed by science (ES = 1.25, P < 0.01) and medical science (ES = 0.87, P < 0.01), both of which also achieved a significant level of effect. Programming technology was the least effective (ES = 0.39, P < 0.01), with only a medium-low degree of effect compared to education (ES = 0.72, P < 0.01) and other fields such as language, art, and the social sciences (ES = 0.58, P < 0.01). These results suggest that scientific fields (e.g., mathematics, science) may be the most effective subject areas for cultivating critical thinking through collaborative problem-solving.

The effectiveness of collaborative problem solving with regard to teaching critical thinking

According to this meta-analysis, using collaborative problem-solving as an intervention strategy in critical thinking teaching has a considerable impact on cultivating learners’ critical thinking as a whole and has a favorable promotional effect on both dimensions of critical thinking. Several studies have found that collaborative problem-solving, the most frequently used critical thinking teaching strategy in curriculum instruction, can considerably enhance students’ critical thinking (e.g., Liang et al., 2017; Liu et al., 2020; Cindy, 2004). This meta-analysis provides convergent data support for those views. Thus, the findings not only effectively address the first research question regarding the overall effect of collaborative problem-solving on cultivating critical thinking and its impact on the two dimensions of critical thinking (i.e., attitudinal tendency and cognitive skills), but also strengthen our confidence in using a collaborative problem-solving intervention approach to cultivate critical thinking in the context of classroom teaching.

Furthermore, the associated improvements in attitudinal tendency are much stronger, but the corresponding improvements in cognitive skill are only marginally better. According to certain studies, cognitive skill differs from the attitudinal tendency in classroom instruction; the cultivation and development of the former as a key ability is a process of gradual accumulation, while the latter as an attitude is affected by the context of the teaching situation (e.g., a novel and exciting teaching approach, challenging and rewarding tasks) (Halpern, 2001 ; Wei and Hong, 2022 ). Collaborative problem-solving as a teaching approach is exciting and interesting, as well as rewarding and challenging; because it takes the learners as the focus and examines problems with poor structure in real situations, and it can inspire students to fully realize their potential for problem-solving, which will significantly improve their attitudinal tendency toward solving problems (Liu et al., 2020 ). Similar to how collaborative problem-solving influences attitudinal tendency, attitudinal tendency impacts cognitive skill when attempting to solve a problem (Liu et al., 2020 ; Zhang et al., 2022 ), and stronger attitudinal tendencies are associated with improved learning achievement and cognitive ability in students (Sison, 2008 ; Zhang et al., 2022 ). It can be seen that the two specific dimensions of critical thinking as well as critical thinking as a whole are affected by collaborative problem-solving, and this study illuminates the nuanced links between cognitive skills and attitudinal tendencies with regard to these two dimensions of critical thinking. To fully develop students’ capacity for critical thinking, future empirical research should pay closer attention to cognitive skills.

The moderating effects of collaborative problem solving with regard to teaching critical thinking

In order to further explore the key factors that influence critical thinking, subgroup analysis was used to explore the moderating effects that might produce considerable heterogeneity. The findings show that the moderating factors, namely the teaching type, learning stage, group size, learning scaffold, intervention duration, measuring tool, and subject area covered by the 36 experimental designs, could all support the cultivation of critical thinking through collaborative problem-solving. Among them, the effect size differences for the learning stage and the measuring tool are not significant, so we cannot conclude that these two factors play a decisive role in supporting the cultivation of critical thinking through collaborative problem-solving.

In terms of the learning stage, various learning stages influenced critical thinking positively but without significant intergroup differences, so we cannot conclude that the learning stage plays a decisive role in fostering the growth of critical thinking.

Although higher education accounts for 70.89% of all the included empirical studies, high school may be the most suitable learning stage for fostering students’ critical thinking through collaborative problem-solving, since it has the largest overall effect size. This phenomenon may be related to students’ cognitive development, which needs to be further studied in follow-up research.

With regard to teaching type, mixed course teaching may be the best teaching method for cultivating students’ critical thinking. Relevant studies have shown that, in the actual teaching process, if students are trained in thinking methods alone, the methods they learn are isolated and divorced from subject knowledge, which is not conducive to the transfer of those thinking methods; conversely, if students’ thinking is trained only within subject teaching, without systematic method training, it is challenging to apply it to real-world circumstances (Ruggiero, 2012; Hu and Liu, 2015). Teaching critical thinking as a mixed course in parallel to other subject teaching can achieve the best effect on learners’ critical thinking, and explicit critical thinking instruction is more effective than less explicit critical thinking instruction (Bensley and Spero, 2014).

In terms of intervention duration, the overall effect size shows an upward tendency with longer intervention times; thus, intervention duration and the impact on critical thinking are positively correlated. Critical thinking, as a key competency for students in the 21st century, is difficult to improve meaningfully within a brief intervention. Instead, it can be developed over a lengthy period of time through consistent teaching and the progressive accumulation of knowledge (Halpern, 2001; Hu and Liu, 2015). Therefore, future empirical studies ought to take these constraints into account by using longer periods of critical thinking instruction.

With regard to group size, a group size of 2–3 persons has the highest effect size, and the comprehensive effect size decreases with increasing group size in general. This outcome is in line with some research findings; for example, a group composed of two to four members is most appropriate for collaborative learning (Schellens and Valcke, 2006). However, the meta-analysis results also indicate that once the group size exceeds 7 people, small groups no longer produce better interaction and performance than large groups. This may be because the learning scaffolds of technique support, resource support, and teacher support improve the frequency and effectiveness of interaction among group members, and a collaborative group with more members may increase the diversity of views, which is helpful for cultivating critical thinking through collaborative problem-solving.

With regard to the learning scaffold, the three different kinds of learning scaffolds can all enhance critical thinking. Among them, the teacher-supported learning scaffold has the largest overall effect size, demonstrating the interdependence of effective learning scaffolds and collaborative problem-solving. This outcome is in line with some research findings; as an example, a successful strategy is to encourage learners to collaborate, come up with solutions, and develop critical thinking skills by using learning scaffolds (Reiser, 2004 ; Xu et al., 2022 ); learning scaffolds can lower task complexity and unpleasant feelings while also enticing students to engage in learning activities (Wood et al., 2006 ); learning scaffolds are designed to assist students in using learning approaches more successfully to adapt the collaborative problem-solving process, and the teacher-supported learning scaffolds have the greatest influence on critical thinking in this process because they are more targeted, informative, and timely (Xu et al., 2022 ).

With respect to the measuring tool, although standardized measurement tools (such as the WGCTA, CCTT, and CCTST) have been acknowledged as trustworthy and effective by worldwide experts, only 54.43% of the studies included in this meta-analysis adopted them for assessment, and the results indicated no intergroup differences. These results suggest that not all teaching circumstances are appropriate for measuring critical thinking with standardized measurement tools. According to Simpson and Courtney (2002, p. 91), “The measuring tools for measuring thinking ability have limits in assessing learners in educational situations and should be adapted appropriately to accurately assess the changes in learners’ critical thinking.” As a result, in order to more fully and precisely gauge how learners’ critical thinking has evolved, standardized measuring tools must be properly modified to suit collaborative problem-solving learning contexts.

With regard to the subject area, the comprehensive effect size of science departments (e.g., mathematics, science, medical science) is larger than that of language arts and social sciences. Some recent international education reforms have noted that critical thinking is a basic part of scientific literacy. Students with scientific literacy can prove the rationality of their judgment according to accurate evidence and reasonable standards when they face challenges or poorly structured problems (Kyndt et al., 2013 ), which makes critical thinking crucial for developing scientific understanding and applying this understanding to practical problem solving for problems related to science, technology, and society (Yore et al., 2007 ).

Suggestions for critical thinking teaching

Other than those stated in the discussion above, the following suggestions are offered for critical thinking instruction utilizing the approach of collaborative problem-solving.

First, teachers should put a special emphasis on the two core elements, which are collaboration and problem-solving, to design real problems based on collaborative situations. This meta-analysis provides evidence to support the view that collaborative problem-solving has a strong synergistic effect on promoting students’ critical thinking. Asking questions about real situations and allowing learners to take part in critical discussions on real problems during class instruction are key ways to teach critical thinking rather than simply reading speculative articles without practice (Mulnix, 2012 ). Furthermore, the improvement of students’ critical thinking is realized through cognitive conflict with other learners in the problem situation (Yang et al., 2008 ). Consequently, it is essential for teachers to put a special emphasis on the two core elements, which are collaboration and problem-solving, and design real problems and encourage students to discuss, negotiate, and argue based on collaborative problem-solving situations.

Second, teachers should design and implement mixed courses to cultivate learners’ critical thinking, utilizing the approach of collaborative problem-solving. Critical thinking can be taught through curriculum instruction (Kuncel, 2011 ; Leng and Lu, 2020 ), with the goal of cultivating learners’ critical thinking for flexible transfer and application in real problem-solving situations. This meta-analysis shows that mixed course teaching has a highly substantial impact on the cultivation and promotion of learners’ critical thinking. Therefore, teachers should design and implement mixed course teaching with real collaborative problem-solving situations in combination with the knowledge content of specific disciplines in conventional teaching, teach methods and strategies of critical thinking based on poorly structured problems to help students master critical thinking, and provide practical activities in which students can interact with each other to develop knowledge construction and critical thinking utilizing the approach of collaborative problem-solving.

Third, teachers should be more trained in critical thinking, particularly preservice teachers, and they also should be conscious of the ways in which teachers’ support for learning scaffolds can promote critical thinking. The learning scaffold supported by teachers had the greatest impact on learners’ critical thinking, in addition to being more directive, targeted, and timely (Wood et al., 2006 ). Critical thinking can only be effectively taught when teachers recognize the significance of critical thinking for students’ growth and use the proper approaches while designing instructional activities (Forawi, 2016 ). Therefore, with the intention of enabling teachers to create learning scaffolds to cultivate learners’ critical thinking utilizing the approach of collaborative problem solving, it is essential to concentrate on the teacher-supported learning scaffolds and enhance the instruction for teaching critical thinking to teachers, especially preservice teachers.

Implications and limitations

There are certain limitations in this meta-analysis, but future research can correct them. First, the search languages were restricted to English and Chinese, so it is possible that pertinent studies written in other languages were overlooked, resulting in an inadequate number of articles for review. Second, some data were missing from the included studies, such as whether teachers were trained in the theory and practice of critical thinking, the average age and gender of learners, and differences in critical thinking among learners of various ages and genders. Third, as is typical for review articles, more studies were released while this meta-analysis was being done; therefore, it had a time limit. With the development of relevant research, future studies focusing on these issues are highly relevant and needed.

Conclusions

This study addressed the question of the effectiveness of collaborative problem-solving in promoting students’ critical thinking, a topic that had received scant attention in earlier research. The following conclusions can be made:

Regarding the results obtained, collaborative problem-solving is an effective teaching approach to foster learners’ critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]). With respect to the dimensions of critical thinking, collaborative problem-solving can significantly and effectively improve students’ attitudinal tendency, and the comprehensive effect is significant (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]); nevertheless, it falls short in terms of improving students’ cognitive skills, having only an upper-middle impact (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).

As demonstrated by both the results and the discussion, all seven moderating factors identified across the 36 studies have varying degrees of beneficial effect on students’ critical thinking. The teaching type (χ² = 7.20, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), and learning scaffold (χ² = 9.03, P < 0.01) all have a positive impact on critical thinking and can be viewed as important moderating factors that affect how critical thinking develops. Since the learning stage (χ² = 3.15, P = 0.21 > 0.05) and measuring tool (χ² = 0.08, P = 0.78 > 0.05) did not demonstrate any significant intergroup differences, we cannot conclude that these two factors play a decisive role in supporting the cultivation of critical thinking in the context of collaborative problem-solving.

Data availability

All data generated or analyzed during this study are included within the article and its supplementary information files, and the supplementary information files are available in the Dataverse repository: https://doi.org/10.7910/DVN/IPFJO6 .

References

Bensley DA, Spero RA (2014) Improving critical thinking skills and meta-cognitive monitoring through direct infusion. Think Skills Creat 12:55–68. https://doi.org/10.1016/j.tsc.2014.02.001


Castle A (2009) Defining and assessing critical thinking skills for student radiographers. Radiography 15(1):70–76. https://doi.org/10.1016/j.radi.2007.10.007

Chen XD (2013) An empirical study on the influence of PBL teaching model on critical thinking ability of non-English majors. J PLA Foreign Lang College 36 (04):68–72


Cohen A (1992) Antecedents of organizational commitment across occupational groups: a meta-analysis. J Organ Behav. https://doi.org/10.1002/job.4030130602

Cooper H (2010) Research synthesis and meta-analysis: a step-by-step approach, 4th edn. Sage, London, England

Cindy HS (2004) Problem-based learning: what and how do students learn? Educ Psychol Rev 51(1):31–39

Duch BJ, Gron SD, Allen DE (2001) The power of problem-based learning: a practical “how to” for teaching undergraduate courses in any discipline. Stylus Educ Sci 2:190–198

Ennis RH (1989) Critical thinking and subject specificity: clarification and needed research. Educ Res 18(3):4–10. https://doi.org/10.3102/0013189x018003004

Facione PA (1990) Critical thinking: a statement of expert consensus for purposes of educational assessment and instruction. Research findings and recommendations. Eric document reproduction service. https://eric.ed.gov/?id=ed315423

Facione PA, Facione NC (1992) The California Critical Thinking Dispositions Inventory (CCTDI) and the CCTDI test manual. California Academic Press, Millbrae, CA

Forawi SA (2016) Standard-based science education and critical thinking. Think Skills Creat 20:52–62. https://doi.org/10.1016/j.tsc.2016.02.005

Halpern DF (2001) Assessing the effectiveness of critical thinking instruction. J Gen Educ 50(4):270–286. https://doi.org/10.2307/27797889

Hu WP, Liu J (2015) Cultivation of pupils’ thinking ability: a five-year follow-up study. Psychol Behav Res 13(05):648–654. https://doi.org/10.3969/j.issn.1672-0628.2015.05.010

Huber K (2016) Does college teach critical thinking? A meta-analysis. Rev Educ Res 86(2):431–468. https://doi.org/10.3102/0034654315605917

Kek MYCA, Huijser H (2011) The power of problem-based learning in developing critical thinking skills: preparing students for tomorrow’s digital futures in today’s classrooms. High Educ Res Dev 30(3):329–341. https://doi.org/10.1080/07294360.2010.501074

Kuncel NR (2011) Measurement and meaning of critical thinking (Research report for the NRC 21st Century Skills Workshop). National Research Council, Washington, DC

Kyndt E, Raes E, Lismont B, Timmers F, Cascallar E, Dochy F (2013) A meta-analysis of the effects of face-to-face cooperative learning. Do recent studies falsify or verify earlier findings? Educ Res Rev 10(2):133–149. https://doi.org/10.1016/j.edurev.2013.02.002

Leng J, Lu XX (2020) Is critical thinking really teachable?—A meta-analysis based on 79 experimental or quasi experimental studies. Open Educ Res 26(06):110–118. https://doi.org/10.13966/j.cnki.kfjyyj.2020.06.011

Liang YZ, Zhu K, Zhao CL (2017) An empirical study on the depth of interaction promoted by collaborative problem solving learning activities. J E-educ Res 38(10):87–92. https://doi.org/10.13811/j.cnki.eer.2017.10.014

Lipsey M, Wilson D (2001) Practical meta-analysis. International Educational and Professional, London, pp. 92–160

Liu Z, Wu W, Jiang Q (2020) A study on the influence of problem based learning on college students’ critical thinking-based on a meta-analysis of 31 studies. Explor High Educ 03:43–49

Morris SB (2008) Estimating effect sizes from pretest-posttest-control group designs. Organ Res Methods 11(2):364–386. https://doi.org/10.1177/1094428106291059


Mulnix JW (2012) Thinking critically about critical thinking. Educ Philos Theory 44(5):464–479. https://doi.org/10.1111/j.1469-5812.2010.00673.x

Naber J, Wyatt TH (2014) The effect of reflective writing interventions on the critical thinking skills and dispositions of baccalaureate nursing students. Nurse Educ Today 34(1):67–72. https://doi.org/10.1016/j.nedt.2013.04.002

National Research Council (2012) Education for life and work: developing transferable knowledge and skills in the 21st century. The National Academies Press, Washington, DC

Niu L, Behar HLS, Garvan CW (2013) Do instructional interventions influence college students’ critical thinking skills? A meta-analysis. Educ Res Rev 9(12):114–128. https://doi.org/10.1016/j.edurev.2012.12.002

Peng ZM, Deng L (2017) Towards the core of education reform: cultivating critical thinking skills as the core of skills in the 21st century. Res Educ Dev 24:57–63. https://doi.org/10.14121/j.cnki.1008-3855.2017.24.011

Reiser BJ (2004) Scaffolding complex learning: the mechanisms of structuring and problematizing student work. J Learn Sci 13(3):273–304. https://doi.org/10.1207/s15327809jls1303_2

Ruggiero VR (2012) The art of thinking: a guide to critical and creative thought, 4th edn. Harper Collins College Publishers, New York

Schellens T, Valcke M (2006) Fostering knowledge construction in university students through asynchronous discussion groups. Comput Educ 46(4):349–370. https://doi.org/10.1016/j.compedu.2004.07.010

Sendag S, Odabasi HF (2009) Effects of an online problem based learning course on content knowledge acquisition and critical thinking skills. Comput Educ 53(1):132–141. https://doi.org/10.1016/j.compedu.2009.01.008

Sison R (2008) Investigating Pair Programming in a Software Engineering Course in an Asian Setting. 2008 15th Asia-Pacific Software Engineering Conference, pp. 325–331. https://doi.org/10.1109/APSEC.2008.61

Simpson E, Courtney M (2002) Critical thinking in nursing education: literature review. Int J Nurs Pract 8(2):89–98

Stewart L, Tierney J, Burdett S (2006) Do systematic reviews based on individual patient data offer a means of circumventing biases associated with trial publications? Publication bias in meta-analysis. John Wiley and Sons Inc, New York, pp. 261–286

Tiwari A, Lai P, So M, Yuen K (2010) A comparison of the effects of problem-based learning and lecturing on the development of students’ critical thinking. Med Educ 40(6):547–554. https://doi.org/10.1111/j.1365-2929.2006.02481.x

Wood D, Bruner JS, Ross G (2006) The role of tutoring in problem solving. J Child Psychol Psychiatry 17(2):89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x

Wei T, Hong S (2022) The meaning and realization of teachable critical thinking. Educ Theory Practice 10:51–57

Xu EW, Wang W, Wang QX (2022) A meta-analysis of the effectiveness of programming teaching in promoting K-12 students’ computational thinking. Educ Inf Technol. https://doi.org/10.1007/s10639-022-11445-2

Yang YC, Newby T, Bill R (2008) Facilitating interactions through structured web-based bulletin boards: a quasi-experimental study on promoting learners’ critical thinking skills. Comput Educ 50(4):1572–1585. https://doi.org/10.1016/j.compedu.2007.04.006

Yore LD, Pimm D, Tuan HL (2007) The literacy component of mathematical and scientific literacy. Int J Sci Math Educ 5(4):559–589. https://doi.org/10.1007/s10763-007-9089-4

Zhang T, Zhang S, Gao QQ, Wang JH (2022) Research on the development of learners’ critical thinking in online peer review. Audio Visual Educ Res 6:53–60. https://doi.org/10.13811/j.cnki.eer.2022.06.08


Acknowledgements

This research was supported by the graduate scientific research and innovation project of Xinjiang Uygur Autonomous Region named “Research on in-depth learning of high school information technology courses for the cultivation of computing thinking” (No. XJ2022G190) and the independent innovation fund project for doctoral students of the College of Educational Science of Xinjiang Normal University named “Research on project-based teaching of high school information technology courses from the perspective of discipline core literacy” (No. XJNUJKYA2003).

Author information

Authors and Affiliations

College of Educational Science, Xinjiang Normal University, 830017, Urumqi, Xinjiang, China

Enwei Xu, Wei Wang & Qingxia Wang


Corresponding authors

Correspondence to Enwei Xu or Wei Wang .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary tables

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Xu, E., Wang, W. & Wang, Q. The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature. Humanit Soc Sci Commun 10 , 16 (2023). https://doi.org/10.1057/s41599-023-01508-1


Received : 07 August 2022

Accepted : 04 January 2023

Published : 11 January 2023

DOI : https://doi.org/10.1057/s41599-023-01508-1






Educational leaders’ problem-solving for educational improvement: Belief validity testing in conversations

  • Open access
  • Published: 01 October 2021
  • Volume 24, pages 133–181 (2023)




  • Claire Sinnema   ORCID: orcid.org/0000-0002-6707-6726 1 ,
  • Frauke Meyer 1 ,
  • Deidre Le Fevre 1 ,
  • Hamish Chalmers 1 &
  • Viviane Robinson 1  


Educational leaders’ effectiveness in solving problems is vital to school and system-level efforts to address macrosystem problems of educational inequity and social injustice. Leaders’ problem-solving conversation attempts are typically influenced by three types of beliefs—beliefs about the nature of the problem, about what causes it, and about how to solve it. Effective problem solving demands testing the validity of these beliefs—the focus of our investigation. We analyzed 43 conversations between leaders and staff about equity related problems including teaching effectiveness. We first determined the types of beliefs held and the validity testing behaviors employed drawing on fine-grained coding frameworks. The quantification of these allowed us to use cross tabs and chi-square tests of independence to explore the relationship between leaders’ use of validity testing behaviors (those identified as more routine or more robust, and those relating to both advocacy and inquiry) and belief type. Leaders tended to avoid discussion of problem causes, advocate more than inquire, bypass disagreements, and rarely explore logic between solutions and problem causes. There was a significant relationship between belief type and the likelihood that leaders will test the validity of those beliefs—beliefs about problem causes were the least likely to be tested. The patterns found here are likely to impact whether micro and mesosystem problems, and ultimately exo and macrosystem problems, are solved. Capability building in belief validity testing is vital for leadership professional learning to ensure curriculum, social justice and equity policy aspirations are realized in practice.


This study examines the extent to which leaders, in their conversations with others, test rather than assume the validity of their own and others’ beliefs about the nature, causes of, and solutions to problems of teaching and learning that arise in their sphere of responsibility. We define a problem as a gap between the current and desired state, plus the demand that the gap be reduced (Robinson, 1993 ). We position this focus within the broader context of educational change, and educational improvement in particular, since effective discussion of such problems is central to improvement and vital for addressing issues of educational equity and social justice.

Educational improvement and leaders’ role in problem solving

Educational leaders work in a discretionary problem-solving space. Ball ( 2018 ) describes discretionary spaces as the micro level practices of the teacher. It is imperative to attend to what happens in these spaces because the specific talk and actions that occur in particular moments (for example, what the teacher says or does when one student responds in a particular way to his or her question) impact all participants in the classroom and shape macro level educational issues including legacies of racism, oppression, and marginalization of particular groups of students. A parallel exists, we argue, for leaders’ problem solving—how capable leaders are at dealing with micro-level problems in the conversational moment impacts whether a school or network achieves its improvement goals. For example, how a leader deals with problems with a particular teacher or with a particular student or group of students is subtly but strongly related to the solving of equity problems at the exo and macro levels. Problem solving effectiveness is also related to challenges in the realization of curriculum reform aspirations, including curriculum reform depth, spread, reach, and pace (Sinnema & Stoll, 2020b ).

The conversations leaders have with others in their schools in their efforts to solve educational problems are situated in a broader environment which they both influence and are influenced by. We draw here on Bronfenbrenner’s (1992) ecological systems theory to construct a nested model of educational problem solving (see Fig. 1). Bronfenbrenner focused on the environment around children, and set out five interrelated systems that he professed influence a child’s development. We propose that these systems can also be used to understand another type of learner—educators, including leaders and teachers—in the context of educational problem solving.

figure 1

Nested model of educational problem solving

Bronfenbrenner’s ( 1977 ) microsystem sets out the immediate environment, parents, siblings, teachers, and peers as influencers of and influenced by children. We propose the micro system for educators to include those they have direct contact with including their students, other teachers in their classroom and school, the school board, and the parent community. Bronfenbrenner’s meso system referred to the interactions between a child’s microsystems. In the same way, when foregrounding the ecological system around educators, we suggest attention to the problems that occur in the interactions between students, teachers, school leaders, their boards, and communities. In the exo system, Bronfenbrenner directs attention to other social structures (formal and informal), which do not themselves contain the child, but indirectly influence them as they affect one of the microsystems. In the same way, we suggest educational ministries, departments and agencies function to influence educators. The macro system as theorized by Bronfenbrenner focuses on how child development is influenced by cultural elements established in society, including prevalent beliefs, attitudes, and perceptions. In our model, we recognise how such cultural elements of Bronfenbrenner’s macro system also relate to educators in that dominant and pervasive beliefs, attitudes and perceptions create and perpetuate educational problems, including those relating to educational inequity, bias, racism, social injustice, and underachievement. The chronosystem, as Bronfenbrenner describes, shows the role of environmental changes across a lifetime, which influences development. In a similar way, educators′ professional transitions and professional milestones influence and are influenced by other system levels, and in the context of our work, their problem solving approaches.

Leaders’ effectiveness in discussions about problems related to the micro and mesosystem contributes greatly to the success of exosystem reform efforts, and those efforts, in turn, influence the beliefs, attitudes, and ideologies of the macrosystem. As Fig. 1 shows, improvement goals (indicated by the arrows moving from the current to a desired state) in the exo or macrosystem are unlikely to be achieved without associated improvement in the micro and mesosystem involving students, teachers and groups of teachers, schools and their boards, and parent communities. Similarly, the level of improvement in the macro and exosystems is limited by the extent to which improvement goals at the micro and mesosystem are achieved through solving problems relating to students’ experience and school and classroom practices, including curriculum, teaching, and assessment. As well as drawing on Bronfenbrenner’s ecological systems theory, our nested model of problem solving draws on problem solving theory to highlight how gaps between current and desired states at each of the system levels also influence each other (Newell & Simon, 1972). Efforts to solve problems in any one system (to move from the current state toward a more desired state) are supported by similar moves in other interrelated systems. For example, the success of a teacher seeking to solve a curriculum problem (demand from parents to focus on core knowledge in traditional learning domains, for example)—a problem related to the microsystem and mesosystem—will be influenced by how similar problems are recognised, attended to, and solved by those in the ministries, departments and agencies in the exosystem.

In considering the role of educational leaders in this nested model of problem solving, we take a capability perspective (Mumford et al., 2000 ) rather than a leadership style perspective (Bedell-Avers et al., 2008 ). School leaders (including those with formal and informal leadership positions) require particular capabilities if they are to enact ambitious policies and solve complex problems related to enhancing equity for marginalized and disadvantaged groups of students (Mavrogordato & White, 2020 ). Too often, micro and mesosystem problems remain unsolved which is problematic not only for those directly involved, but also for the resolution of the related exo and macrosystem problems. The ill-structured nature of the problems school leaders face, and the social nature of the problem-solving process, contribute to the ineffectiveness of leaders’ problem-solving efforts and the persistence of important microsystem and mesosystem problems in schools.

Ill-structured problems

The problems that leaders need to solve are typically ill-structured rather than clearly defined, complex rather than straightforward, and adaptive rather than routine challenges (Bedell-Avers et al., 2008; Heifetz et al., 2009; Leithwood & Stager, 1989; Leithwood & Steinbach, 1992, 1995; Mumford & Connelly, 1991; Mumford et al., 2000; Zaccaro et al., 2000). As Mumford and Connelly explain, "even if their problems are not totally unprecedented, leaders are, […] likely to be grappling with unique problems for which there is no clear-cut predefined solution" (Mumford & Connelly, 1991, p. 294). Most such problems are difficult to solve because they can be construed in various ways and lack clear criteria for what counts as a good solution. Mumford et al. (2000) highlight the particular difficulties in solving ill-structured problems with regard to accessing, evaluating and using relevant information:

Not only is it difficult in many organizational settings for leaders to say exactly what the problem is, it may not be clear exactly what information should be brought to bear on the problem. There is a plethora of available information in complex organizational systems, only some of which is relevant to the problem. Further, it may be difficult to obtain accurate, timely information and identify key diagnostic information. As a result, leaders must actively seek and carefully evaluate information bearing on potential problems and goal attainment. (p. 14)

Problems in schools are complex. Each single problem can comprise multiple educational dimensions (learners, learning, curriculum, teaching, assessment) as well as relational, organizational, psychological, social, cultural, and political dimensions. In response to a teaching problem, for example, a single right or wrong answer is almost never at play; there are typically countless possible ‘responses’ to the problem of how to teach effectively in any given situation.

Problem solving as socially situated

Educational leaders’ problem solving is typically social because multiple people are usually involved in defining, explaining, and solving any given problem (Mumford et al., 2000 ). When there are multiple parties invested in addressing a problem, they typically hold diverse perspectives on how to describe (frame, perceive, and communicate about problems), explain (identify causes which lead to the problem), and solve the problem. Argyris and Schön ( 1974 ) argue that effective leaders must manage the complexity of integrating multiple and diverse perspectives, not only because all parties need to be internally committed to solutions, but also because quality solutions rely on a wide range of perspectives and evidence. Somewhat paradoxically, while the multiple perspectives involved in social problem solving add to their inherent complexity, these perspectives are a resource for educational change, and for the development of more effective solutions (Argyris & Schön, 1974 ). The social nature of problem solving requires high trust so participants can provide relevant, accurate, and timely information (rather than distort or withhold it), recognize their interdependence, and avoid controlling others. In high trust relationships, as Zand’s early work in this field established, “there is less socially generated uncertainty and problems are solved more effectively” (Zand, 1972 , p. 238).

Leaders’ capabilities in problem solving

Leadership research has established the centrality of capability in problem solving to leadership effectiveness generally (Marcy & Mumford, 2010; Mumford et al., 2000, 2007) and to educational leadership in particular. Leithwood and Stager (1989), for example, consider "administrator’s problem-solving processes as crucial to an understanding of why principals act as they do and why some principals are more effective than others" (p. 127). Similarly, Robinson (1995, 2001, 2010) positions the ability to solve complex problems as central to all other dimensions of effective educational leadership. Unsurprisingly, problem solving is often prominent in standards for school leaders and leadership and is included in tools for the assessment of school leadership (Goldring et al., 2009). Furthermore, its importance is heightened by the increasing demand and complexity in standards for teaching (Sinnema, Meyer & Aitken, 2016), by the trend toward leadership across networks of schools (Sinnema, Daly, Liou, & Rodway, 2020a), and by the added complexity of such problem solving where a system perspective is necessary.

Empirical research on leaders’ practice has revealed that there is a need for capability building in problem solving (Le Fevre et al., 2015; Robinson et al., 2020; Sinnema et al., 2013; Sinnema et al., 2016; Smith, 1997; Spillane et al., 2009; Timperley & Robinson, 1998; Zaccaro et al., 2000). Some studies have compared the capability of leaders with varying experience. For example, Leithwood and Stager (1989) noted differences in problem solving approaches between novice and expert principals when responding to problem scenarios, particularly when the scenarios described ill-structured problems. Principals classified as ‘experts’ were more likely to collect information rather than make assumptions, and perceived unstructured problems to be manageable, whereas typical principals found these problems stressful. Expert principals also consulted extensively to get relevant information and find ways to deal with constraints. In contrast, novice principals consulted less frequently and tended to see constraints as obstacles (Leithwood & Stager, 1989). Allison and Allison (1993) reported that while experienced principals were better than novices at developing abstract problem-solving goals, they were less interested in the detail of how they would pursue these goals. Similar differences were found in Spillane et al.’s (2009) work, in which expert principals were better at interpreting problems and reflecting on their own actions than aspiring principals were. More recent work (Sinnema et al., 2021) highlights that educators’ perceptions of discussion quality are positively associated with both new learning for the educator (learning that influences their practice) and improved practice (practices that reach students)—the more robust and helpful educators report their professional discussion to be, the more likely they are to report improvement in their practice. This supports the demand for quality conversation in educational teams.

Solving problems related to teaching and learning that occur in the micro or mesosystem usually requires conversations that demand high levels of interpersonal skill. Skill development is important because leaders tend to have difficulty inquiring deeply into the viewpoints of others (Le Fevre & Robinson, 2015 ; Le Fevre et al., 2015 ; Robinson & Le Fevre, 2011 ). In a close analysis of 43 conversation transcripts, Le Fevre et al. ( 2015 ) showed that when leaders anticipated or encountered diverse views, they tended to ask leading or loaded rather than genuine questions. This pattern was explained by their judgmental thinking, and their desire to avoid negative emotion and stay in control of the conversation. In a related study of leaders’ conversations, a considerable difference was found between the way educational leaders described their problem before and during the conversation with those involved (Sinnema et al., 2013 ). Prior to the conversation, privately, they tended to describe their problem as more serious and more urgent than they did in the conversation they held later with the person concerned.

One of the reasons for the mismatch between their private descriptions and public disclosures was the judgmental framing of their beliefs about the other party’s intentions, attitudes, and/or motivations (Peeters & Robinson, 2015 ). If leaders are not willing or able to reframe such privately-held beliefs in a more respectful manner, they will avoid addressing problems through fear of provoking negative emotion, and neither party will be able to critique the reasoning that leads to the belief in question (Robinson et al., 2020 ). When that happens, beliefs based on faulty reasoning may prevail, problem solutions may be based only on that which is discussable, and the problem may persist.

A model of effective problem-solving conversations

We present below a normative model of effective problem-solving conversations (Fig.  2 ) in which testing the validity of relevant beliefs plays a central role. Leaders test their beliefs about a problem when they draw on a set of validity testing behaviors and enact those behaviors, through their inquiry and advocacy, in ways that are consistent with the three interpersonal values included in the model. The model proposes that these processes increase the effectiveness of social problem solving, with effectiveness understood as progressing the task of solving the problem while maintaining or improving the leader’s relationship with those involved. In formulating this model, we drew on the previously discussed research on problem solving and theories of interpersonal and organisational effectiveness.

figure 2

Model of effective problem-solving conversations

The role of beliefs in problem solving

Beliefs are important in the context of problem solving because they shape decisions about what constitutes a problem and how it can be explained and resolved. Beliefs link the object of the belief (e.g., a teacher’s planning) to some attribute (e.g., copied from the internet). In the context of school problems these attributes are usually tightly linked to a negative evaluation of the object of the belief (Fishbein & Ajzen, 1975 ). Problem solving, therefore, requires explicit attention by leaders to the validity of the information on which their own and others’ beliefs are based. The model draws on the work of Mumford et al. ( 2000 ) by highlighting three types of beliefs that are central to how people solve problems—beliefs about whether and why a situation is problematic (we refer to these as problem description beliefs); beliefs about the precursors of the problem situation (we refer to these as problem explanation beliefs); and beliefs about strategies which could, would, or should improve the situation (we refer to these as problem solution beliefs). With regard to problem explanation beliefs, it is important that attention is not limited to surface level factors, but also encompasses consideration of deeper related issues in the broader social context and how they contribute to any given problem.

The role of values in problem-solving conversations

Figure  2 proposes that problem solving effectiveness is increased when leaders’ validity testing behaviors are consistent with three values—respecting the views of others, seeking to maximize validity of their own and others’ beliefs, and building internal commitment to decisions reached. The inclusion of these three values in the model means that our validity testing behaviors must be conceptualized and measured in ways that capture their interpersonal (respect and internal commitment) and epistemic (valid information) underpinnings. Without this conceptual underpinning, it is likely to be difficult to identify the validity testing behaviors that are associated with effectiveness. For example, the act of seeking agreement can be done in a coercive or a respectful manner, so it is important to define and measure this behavior in ways that distinguish between the two. How this and similar distinctions were accomplished is described in the subsequent section on the five validity testing behaviors.

The three values in Fig.  2 are based on the theories and practice of interpersonal and organizational effectiveness developed by Argyris and Schön ( 1974 , 1978 , 1996 ) and applied more recently in a range of educational leadership research contexts (Hannah et al., 2018 ; Patuawa et al., 2021 ; Sinnema et al., 2021a ). We have drawn on the work of Argyris and Schön because their theories explain the dilemma many leaders experience between the two components of problem solving effectiveness and indicate how that dilemma can be avoided or resolved.

Seeking to maximize the validity of information is important because leaders’ beliefs have powerful consequences for the lives and learning of teachers and students and can limit or support educational change efforts. Leaders who behave consistently with the valid information value are truth seekers rather than truth claimers in that they are open-minded and thus more attentive to the information that disconfirms rather than confirms their beliefs. Rather than assuming the validity of their beliefs and trying to impose them on others, their stance is one of seeking to detect and correct errors in their own and others’ thinking (Robinson, 2017).

The value of respect is closely linked to the value of maximizing the validity of information. Leaders increase validity by listening carefully to the views of others, especially if those views differ from their own. Listening carefully requires the accordance of worth and respect, rather than private or public dismissal of views that diverge from or challenge one’s own. If leaders’ conversations are guided by the two values of valid information and respect, then the third value of fostering internal commitment is also likely to be present. Teachers become internally committed to courses of action when their concerns have been listened to and directly addressed as part of the problem-solving process.

The role of validity testing behaviors in problem solving

Figure  2 includes five behaviors designed to test the validity of the three types of belief involved in problem solving. They are: 1) disclosing beliefs; 2) providing grounds; 3) exploring difference; 4) examining logic; and 5) seeking agreement. These behaviors enable leaders to check the validity of their beliefs by engaging in open minded disclosure and discussion of their thinking. While these behaviors are most closely linked to the value of maximizing valid information, the values of respect and internal commitment are also involved in these behaviors. For example, it is respectful to honestly and clearly disclose one’s beliefs about a problem to the other person concerned (advocacy), and to do so in ways that make the grounds for the belief testable and open to revision. It is also respectful to combine advocacy of one’s own beliefs with inquiry into others’ reactions to those beliefs and with inquiry into their own beliefs. When leaders encounter doubts and disagreements, they build internal rather than external commitment by being open minded and genuinely interested in understanding the grounds for them (Spiegel, 2012 ). By listening to and responding directly to others’ concerns, they build internal commitment to the process and outcomes of the problem solving.

Advocacy and inquiry dimensions

Each of the five validity testing behaviors can take the form of a statement (advocacy) or a question (inquiry). A leader’s advocacy contributes to problem solving effectiveness when it communicates his or her beliefs and the grounds for them in a manner that is consistent with the three values. Such disclosure enables others to understand and critically evaluate the leader’s thinking (Tompkins, 2013). Respectful inquiry is equally important, as it invites the other person into the conversation, builds the trust they need for frank disclosure of their views, and signals that diverse views are welcomed. Explicit inquiry into others’ views is particularly important when there is a power imbalance between the parties, and when silence suggests that some are reluctant to disclose their views. Across their careers, leaders tend to rely more heavily on advocating their own views than on genuinely inquiring into the views of others (Robinson & Le Fevre, 2011). It is the combination of advocacy and inquiry behaviors that enables all parties to collaborate in formulating a more valid understanding of the nature of the problem and of how it may be solved.

The five validity testing behaviors

Disclosing beliefs is the first and most essential validity testing behavior because beliefs cannot be publicly tested, using the subsequent four behaviors, if they are not disclosed. This behavior includes leaders’ advocacy of their own beliefs and their inquiry into others’ beliefs, including reactions to their own beliefs (Peeters & Robinson, 2015 ; Robinson & Le Fevre, 2011 ).

Honest and respectful disclosure ensures that all the information that is believed to be relevant to the problem, including that which might trigger an emotional reaction, is shared and available for validity testing (Robinson & Le Fevre, 2011 ; Robinson et al., 2020 ; Tjosvold et al., 2005 ). Respectful disclosure has been linked with follower trust. The empirical work of Norman et al. ( 2010 ), for example, showed that leaders who disclose more, and are more transparent in their communication, instill higher levels of trust in those they work with.

Providing grounds, the second validity testing behavior, is concerned with leaders expressing their beliefs in a way that makes the reasoning that led to them testable (advocacy) and invites others to do the same (inquiry). When leaders clearly explain the grounds for their beliefs and invite the other party to critique their relevance or accuracy, the validity or otherwise of the belief becomes more apparent. Both advocacy and inquiry about the grounds for beliefs can lead to a strengthening, revision, or abandonment of the beliefs for either or both parties (Myran & Sutherland, 2016; Robinson & Le Fevre, 2011; Robinson et al., 2020).

Exploring difference is the third validity testing behavior. It is essential because two parties simply disclosing beliefs and the grounds for them is insufficient for arriving at a joint solution, particularly when such disclosure reveals that there are differences in beliefs about the accuracy and implications of the evidence or differences about the soundness of arguments. Exploring difference through advocacy is seen in such behaviors as identifying and signaling differing beliefs and evaluating contrary evidence that underpins those differing beliefs. An inquiry approach to exploring difference (Timperley & Parr, 2005 ) occurs when a leader inquires into the other party’s beliefs about difference, or their response to the leaders’ beliefs about difference.

Exploring differences in beliefs is key to increasing validity in problem solving efforts (Mumford et al., 2007 ; Robinson & Le Fevre, 2011 ; Tjosvold et al., 2005 ) because it can lead to more integrative solutions and enhance the commitment from both parties to work with each other in the future (Tjosvold et al., 2005 ). Leaders who are able to engage with diverse beliefs are more likely to detect and challenge any faulty reasoning and consequently improve solution development (Le Fevre & Robinson, 2015 ). In contrast, when leaders do not engage with different beliefs, either by not recognizing or by intentionally ignoring them, validity testing is more limited. Such disengagement may be the result of negative attributions about the other person, such as that they are resistant, stubborn, or lazy. Such attributions reduce opportunities for the rigorous public testing that is afforded by the exchange and critical examination of competing views.

Examining logic, the fourth validity testing behavior, highlights the importance of devising a solution that adequately addresses the nature of the problem at hand and its causes. To develop an effective solution, both parties must be able to evaluate the logic that links problems to their assumed causes and solutions. This behavior is present when the leader suggests or critiques the relationship between possible causes of and solutions to the identified problem. In its inquiry form, the leader seeks such information from the other party. As Zaccaro et al. (2000) explain, good problem solvers have skills and expertise in selecting the information to attend to in their effort to "understand the parameters of problems and therefore the dimensions and characteristics of a likely solution" (p. 44–45). These characteristics may include solution timeframes, resource capacities, an emphasis on organizational versus personal goals, and navigation of the degree of risk allowed by the problem approach. Explicitly exploring beliefs is key to ensuring that the logic linking problem causes and any proposed solution is sound. Taking account of a potentially complex set of contributing factors when crafting logical solutions, and testing the validity of beliefs about them, is likely to support effective problem solving. This requires what Copland (2010) describes as a creative process with similarities to clinical reasoning in medicine, in which "the initial framing of the problem is fundamental to the development of a useful solution" (p. 587).

Seeking agreement, the fifth validity testing behavior, signals the importance of warranted agreement about problem beliefs. We use the term ‘warranted’ to make clear that the goal is not merely getting the other party to agree (either that something is a problem, that a particular cause is involved, or that particular actions should be carried out to solve it)—mere agreement is insufficient. Rather, the goal is for warranted agreement whereby both parties have explored and critiqued the beliefs (and their grounds) of the other party in ways that provide a strong basis for the agreement. Both parties must come to some form of agreement on beliefs because successful solution implementation occurs in a social context, in that it relies on the commitment of all parties to carry it out (Mumford et al., 2000; Robinson & Le Fevre, 2011; Tjosvold et al., 2005). Where full agreement does not occur, the parties must at least be clear about where agreement/disagreement lies and why.

Testing the validity of beliefs using these five behaviors, underpinned by the values described earlier, is, we argue, necessary if conversations are to lead to two types of improvement—progress on the task (i.e., solving the problem) and improving the relationship between those involved in the conversation (i.e., ensuring the relationship between the problem-solvers remains intact and is enhanced through the process). We draw attention here to those improvement purposes as distinct from those underpinning work in the educational leadership field that takes a neo-managerialist perspective. The rise of neo-managerialism is argued to redefine school management and leadership along managerial lines and hence contribute to schools that are inequitable, reductionist, and inauthentic (Thrupp & Willmott, 2003). School leaders, when impacted by neo-managerialism, need to be (and are seen as) "self-interested, opportunistic innovators and risk-takers who exploit information and situations to produce radical change." In contrast, the model we propose rejects self-interest. Our model emphasizes deep respect for the views of others and the relentless pursuit of genuine shared commitment to understanding and solving problems that impact on children and young people through collaborative engagement in joint problem solving. Rather than permitting leaders to exploit others, our model requires leaders to be adept at using both inquiry and advocacy, together with listening, to progress the task (solving problems) and simultaneously enhance the relationship between those involved. We position this model of social problem solving effectiveness as a tool for addressing social justice concerns—it intentionally dismisses problem solving approaches that privilege organizational efficiency indicators and ignore the wellbeing of learners and issues of inequity, racism, bias, and social injustice within and beyond educational contexts.

Methodology

The following section outlines the purpose of the study, the participants, and the mixed methods approach to data collection and analysis.

Research purpose

Our prior qualitative research (Robinson et al., 2020), involving in-depth case studies of three educational leaders, revealed problematic patterns in leaders’ approach to problem-solving conversations: little disclosure of causal beliefs, little public testing of beliefs that might trigger negative emotions, and agreement on solutions that were misaligned with causal beliefs. The present investigation sought to understand whether a quantitative methodological approach would reveal similar patterns, and to examine the relationship between belief types and leaders’ use of validity testing behaviors. Thus, our overarching research question was: to what extent do leaders test the validity of their beliefs in conversations with those directly involved in the analysis and resolution of the problem? Our argument is that while new experiences might motivate change in beliefs (Bonner et al., 2020), new insights gained through testing the validity of beliefs are also imperative to change. The sub-questions were:

What is the relative frequency of the types of beliefs leaders hold about problems involving others?

To what extent do leaders employ validity testing behaviors in conversations about those problems?

Are there differential patterns in leaders’ validity testing of the different belief types?

Participants

The participants were 43 students in a graduate course on educational leadership in New Zealand, each of whom identified an important on-the-job problem that they intended to discuss with the person directly involved.

The mixed methods approach

The study took a mixed methods approach using a partially mixed sequential equal status design (QUAL → QUAN; Leech & Onwuegbuzie, 2009). The five stages of sourcing and analyzing data and making interpretations are summarised in Fig. 3 below and outlined in more detail in the following sections (with reference in brackets to the numbered phases in the figure). We describe the study as partially mixed because, as Leech and Onwuegbuzie (2009) explain, in partially mixed methods "both the quantitative and qualitative elements are conducted either concurrently or sequentially in their entirety before being mixed at the data interpretation stage" (p. 267).

figure 3

Overview of mixed methods approach

Stage 1: Qualitative data collection

Three data sources were used to reveal participants’ beliefs about the problem they were seeking to address. The first source was their response to nine open-ended items in a questionnaire focused on a real problem the participant had attempted to address but that still required attention (1a). The items were about: the nature and history of the problem; its importance; their own and others’ contribution to it; the causes of the problem; and the approach to and effectiveness of prior attempts to resolve it.

The second source (1b) was the transcript of a real conversation (typically between 5 and 10 minutes duration) the leaders held with the other person involved in the problem, and the third was the leaders’ own annotations of their unspoken thoughts and feelings during the course of the conversation (1c). The transcription was placed in the right-hand column (RHC) of a split page with the annotations recorded at the appropriate place in the left-hand column (LHC). The LHC method was originally developed by Argyris and Schön ( 1974 ) as a way of examining discrepancies between people’s espoused and enacted interpersonal values. Referring to data about each leader’s behavior (as recorded in the transcript of the conversation) and their thoughts (as indicated in the LHC) was important since the model specifies validity testing behaviors that are motivated by the values of respect, valid information, and internal commitment. Since motives cannot be revealed by speech alone, we also needed access to the thoughts that drove their behavior, hence our use of the LHC data collection technique. This approach allowed us to respond to Leithwood and Stager’s ( 1989 ) criticism that much research on effective problem solving gives results that “reveal little or nothing about how actions were selected or created and treat the administrator’s mind as a ‘black box’” (p. 127).

Stage 2: Qualitative analysis

The three stages of qualitative analysis focused on identifying discrete beliefs in the three qualitative data sources, distilling those discrete beliefs into key beliefs, and identifying leaders’ use of validity testing behaviors.

Stage 2a: Analyzing types of beliefs about problems

For this stage, we developed and applied coding rules (see Table 1 ) for the identification of the three types of beliefs in the three sources described earlier—leaders’ questionnaire responses, conversation transcript (RHC), and unexpressed thoughts (LHC). We identified 903 discrete beliefs (utterances or thoughts) from the 43 transcripts, annotations, and questionnaires and recorded these on a spreadsheet (2a). While our model proposes that leaders’ inquiry will surface and test the beliefs of others, we quantify in this study only the leaders’ beliefs.

Stage 2b: Distilling discrete beliefs into key beliefs

Next, we distilled the 903 discrete beliefs into key beliefs (KBs) (2b). This was a complex process and involved multiple iterations across the research team to determine, check, and test the coding rules. The final set of rules for distilling key beliefs was:

Beliefs should be made more succinct in the key belief statement, and key words should be retained as much as possible

Judgment quality (i.e., negative or positive) of the belief needs to be retained in the key belief

Key beliefs should use overarching terms where possible

The meaning and the object of the belief need to stay constant in the key belief

When reducing overlap, the key ideas of both beliefs need to be captured in the key beliefs

Distinctive beliefs need to be summarized on their own and not combined with other beliefs

The subject of the belief must be retained in the key belief—own belief versus restated belief of other

All belief statements must be accounted for in key beliefs

These rules were applied to the process of distilling multiple related beliefs into statements of key beliefs as illustrated by the example in the table below (Table 2 ).

Further examples of how the rules were applied are outlined in ‘Appendix A’. The number of discrete beliefs for each leader ranged from 7 to 35, with an average of 21, and the number of key beliefs for each leader ranged from 4 to 14, with an average of 8. Frequency counts were used to identify any patterns in the types of key beliefs which were held privately (not revealed in the conversation but signalled in the left-hand column or questionnaire) or conveyed publicly (in conversation with the other party).

Stage 2c: Analyzing leaders’ use of validity testing behaviors

We then developed and applied coding rules for the five validity testing behaviors (VTB) outlined in our model (disclosing beliefs, providing grounds, exploring difference, examining logic, and seeking agreement). Separate rules were established for the inquiry and advocacy aspects of each VTB, generating ten coding rules in all (Table 3 ).

These rules, summarised in the table below and outlined more fully in ‘Appendix A’, encompassed inclusion and exclusion criteria for the advocacy and inquiry dimensions of each validity testing behavior. For example, the inclusion rule for the VTB of ‘Disclosing Beliefs’ required leaders to disclose their beliefs about the nature of, and/or causes of, and/or possible solutions to the problem, in ways that were consistent with the three values included in the model. The associated exclusion rule signalled that this criterion was not met if, for example, the leader asked a question in order to steer the other person toward their own views without having ever disclosed their own views, or if they distorted the urgency or seriousness of the problem relative to what they had expressed privately. The exclusion rules also noted how thoughts expressed in the left-hand column would exclude the verbal utterance from being treated as disclosure—for example, if there were contradictions between the right-hand (spoken) and left-hand (thoughts) columns, or if the thoughts indicated that the disclosure had been distorted in order to minimise negative emotion.

The coding rules reflected the values of respect and internal commitment in addition to the valid information value that was foregrounded in the analysis. The emphasis on inquiry, for example (into others’ beliefs and/or responses to the beliefs already expressed by the leader), recognised that internal commitment would be impossible if the other party held contrary views that had not been disclosed and discussed. Similarly, the focus on leaders advocating their beliefs, the grounds for those beliefs, and their views about the logic linking solutions to problem causes recognises that it is respectful to make those transparent to another party rather than impose a solution in the absence of such disclosure.

The coding rules were applied to all 43 transcripts and the qualitative analysis was carried out using NVivo 10. A random sample of 10% of the utterances coded to a VTB category was checked independently by two members of the research team following the initial analysis by a third member. Any discrepancies in the coding were resolved, and data were recoded if needed. Descriptive analyses then enabled us to compare the frequency of leaders’ use of the five validity testing behaviors.
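As a concrete illustration of the reliability check described above, the following is a minimal sketch of drawing a reproducible 10% sample of coded utterances for independent re-coding. It assumes the NVivo coding has been exported to a flat file; the file and column names are hypothetical, not those used in the study.

```python
# Minimal sketch (hypothetical file and column names) of selecting a 10%
# reliability-check sample of VTB-coded utterances for independent re-coding.
import pandas as pd

coded = pd.read_csv("vtb_coded_utterances.csv")  # one row per coded utterance

# Draw a reproducible 10% random sample for checking by two other coders.
check_sample = coded.sample(frac=0.10, random_state=42)
check_sample.to_csv("reliability_check_sample.csv", index=False)

print(f"{len(check_sample)} of {len(coded)} coded utterances selected for checking")
```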

Stage 3: Data transformation: From qualitative to quantitative data

We carried out transformation of our data set (Burke et al., 2004 ), from qualitative to quantitative, to allow us to carry out statistical analysis to answer our research questions. The databases that resulted from our data transformation, with text from the qualitative coding along with numeric codes, are detailed next. In database 1, key beliefs were all entered as cases with indications in adjacent columns as to the belief type category they related to, and the source/s of the belief (questionnaire, transcript or unspoken thoughts/feelings). A unique identifier was created for each key belief.

In database 2, each utterance identified as meeting the VTB coding rules was entered in column 1. The broader context of the utterance from the original transcript was then examined to establish the type of belief (description, explanation, or solution) the VTB was being applied to, with this recorded numerically alongside the VTB utterance itself. For example, the following utterance had been coded to indicate that it met the ‘providing grounds’ coding rule, and in this phase it was also coded to indicate that it was in relation to a ‘problem description’ belief type:

“I noticed on the feedback form that a number of students, if I’ve got the numbers right here, um, seven out of ten students in your class said that you don’t normally start the lesson with a ‘Do Now’ or a starter activity.” (case 21)

A third database listed all of the unique identifiers for each leader’s key beliefs (KBs) in the first column. Subsequent columns were set up for each of the 10 validity testing codes (the five validity testing behaviors for both inquiry and advocacy). The NVivo coding for the VTBs was then examined, one piece of coding at a time, to identify which key belief the utterance was associated with. The cell at the intersection of the appropriate key belief and VTB was incremented by one each time a VTB utterance was associated with that key belief. Our database included variables for both the frequency of each VTB (the number of instances the behavior was used) and a parallel version with just a dichotomous variable indicating the presence or absence of each VTB. The dichotomous variable was used for our subsequent analysis because multiple utterances indicating a certain validity testing behavior were not deemed to necessarily constitute better-quality belief validity testing than one utterance.
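To make the structure of this third database and the frequency-to-dichotomous transformation concrete, here is a minimal sketch. It assumes a key-belief-by-VTB matrix of counts with hypothetical column names (kb_id plus ten columns such as vtb1_adv … vtb5_inq); it illustrates the transformation described above rather than reproducing the study’s actual code.

```python
# Sketch of the frequency-to-dichotomous transformation for database 3.
# Column names are hypothetical: kb_id plus ten VTB count columns
# (vtb1_adv, vtb1_inq, ..., vtb5_adv, vtb5_inq).
import pandas as pd

db3 = pd.read_csv("database3_kb_by_vtb.csv")          # one row per key belief
vtb_cols = [c for c in db3.columns if c.startswith("vtb")]

# Frequency version: how many utterances applied each VTB to each key belief.
frequencies = db3[["kb_id"] + vtb_cols]

# Dichotomous version: 1 if the behavior was used at least once for that
# key belief, 0 otherwise (used for the subsequent statistical analysis).
dichotomous = frequencies.copy()
dichotomous[vtb_cols] = (dichotomous[vtb_cols] > 0).astype(int)
dichotomous.to_csv("dichotomous_kb_by_vtb.csv", index=False)
```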

Stage 4: Quantitative analysis

The first phase of quantitative analysis involved the calculation of frequency counts for the three belief types (4a). Next, frequencies were calculated for the five validity testing behaviors, and for those behaviors in relation to each belief type (4b).

The final and most complex part of the quantitative analysis (stages 4c through 4f) involved looking for patterns across the two sets of data created through the prior analyses (belief type and validity testing behaviors) to investigate whether leaders might be more inclined to use certain validity testing behaviors in conjunction with a particular belief type.

Stage 4a: Analyzing for relationships between belief type and VTB

We investigated the relationship between belief type and VTB, first, for all key beliefs. Given initial findings about variability in the frequency of the VTBs, we chose not to use all five VTBs separately in our analysis, but rather the three categories of: 1) None (key beliefs that had no VTB applied to them); 2) VTB—Routine (the sum of VTBs 1 and 2; given those were much more prevalent than others in the case of both advocacy and inquiry); and 3) VTB—Robust (the sum of the VTBs 3, 4 and 5 given these were all much less prevalent than VTBs 1 and 2, again including both advocacy and/or inquiry). Cross tabs were prepared and a chi-square test of independence was performed on the data from all 331 key beliefs.
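A minimal sketch of this stage is given below, assuming the dichotomous key-belief-by-VTB database sketched earlier with belief type merged in from database 1. All file and column names are hypothetical, and the categorisation rule (a key belief counts as Robust if any of VTBs 3–5 was used, Routine if only VTBs 1–2 were used) is one plausible reading of the text rather than the study’s exact rule.

```python
# Sketch of stage 4a: derive the three-way validity testing category for each
# key belief and test its independence from belief type. File, column names,
# and the categorisation rule are illustrative only.
import pandas as pd
from scipy.stats import chi2_contingency

kb = pd.read_csv("dichotomous_kb_by_vtb.csv")  # one row per key belief; 331 in the study
routine_cols = [c for c in kb.columns if c.startswith(("vtb1", "vtb2"))]
robust_cols = [c for c in kb.columns if c.startswith(("vtb3", "vtb4", "vtb5"))]

def vtb_category(row):
    # Robust if any of VTBs 3-5 used; Routine if only VTBs 1-2; otherwise None.
    if row[robust_cols].sum() > 0:
        return "VTB - Robust"
    if row[routine_cols].sum() > 0:
        return "VTB - Routine"
    return "None"

kb["vtb_category"] = kb.apply(vtb_category, axis=1)

# belief_type (description / explanation / solution) is assumed to have been
# merged in from database 1 via the key belief identifier.
table = pd.crosstab(kb["belief_type"], kb["vtb_category"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```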

Stage 4b: Analyzing for relationships between belief type and VTB for disclosed key beliefs

Next, because more than half (54.7%, 181) of the 331 key beliefs were not tested by leaders using any one of the VTBs, we analyzed a sub-set of the database, selecting only those key beliefs where leaders had disclosed the belief (using advocacy and/or inquiry). The reason for this was to ensure that any relationships established statistically were not unduly influenced by the data collection procedure which limited the time for the conversation to 10 minutes, during which it would not be feasible to fully disclose and address all key beliefs held by the leader. For this subset we prepared cross tabs and carried out chi-square tests of independence for the 145 key beliefs that leaders had disclosed. We again investigated the relationship between key belief type and VTBs, this time using a VTB variable with two categories: 1) More routine only and 2) More routine and robust.
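The subset analysis can be sketched in the same hypothetical terms: keep only the key beliefs the leader disclosed (VTB1 via advocacy and/or inquiry), recode validity testing into the two categories named above, and repeat the cross tabs and chi-square test. Again, the file and column names are assumptions carried over from the earlier sketches.

```python
# Sketch of stage 4b: restrict to disclosed key beliefs and recode validity
# testing into two categories. File and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

kb = pd.read_csv("dichotomous_kb_by_vtb.csv")
disclose_cols = [c for c in kb.columns if c.startswith("vtb1")]
robust_cols = [c for c in kb.columns if c.startswith(("vtb3", "vtb4", "vtb5"))]

# Keep only key beliefs that were disclosed (145 of the 331 in the study).
disclosed = kb[kb[disclose_cols].sum(axis=1) > 0].copy()

# Two categories: routine VTBs only vs. routine and robust VTBs.
disclosed["vtb_two_cat"] = np.where(
    disclosed[robust_cols].sum(axis=1) > 0,
    "More routine and robust",
    "More routine only",
)

table = pd.crosstab(disclosed["belief_type"], disclosed["vtb_two_cat"])
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```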

Stage 4c: Analyzing for relationships between belief type and advocacy/inquiry dimensions of validity testing

Next, we investigated the relationship between key belief type and the advocacy and inquiry dimensions of validity testing. This analysis was to provide insight into whether leaders might be more or less inclined to use certain VTBs for certain types of belief. Specifically, we compared the frequency of utterances about beliefs of all three types for the categories of 1) No advocacy or inquiry, 2) Advocacy only, 3) Inquiry only, and 4) Advocacy and inquiry (4e). Cross tabs were prepared, and a chi-square test of independence was performed on the data from all 331 key beliefs. Finally, we again worked with the subset of 145 key beliefs that had been disclosed, comparing the frequency of utterances coded to 1) Advocacy or inquiry only, or 2) Both advocacy and inquiry (4f).
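A sketch of this classification, using the same hypothetical dichotomous database, is shown below: each key belief is labelled by whether it received advocacy only, inquiry only, both, or neither, and the result is cross-tabulated against belief type.

```python
# Sketch of stage 4c: classify each key belief by advocacy/inquiry use and
# cross-tabulate against belief type. Column names are hypothetical and
# follow the earlier sketches (e.g. vtb1_adv ... vtb5_inq).
import pandas as pd
from scipy.stats import chi2_contingency

kb = pd.read_csv("dichotomous_kb_by_vtb.csv")
adv_cols = [c for c in kb.columns if c.endswith("_adv")]
inq_cols = [c for c in kb.columns if c.endswith("_inq")]

def adv_inq_category(row):
    adv = row[adv_cols].sum() > 0
    inq = row[inq_cols].sum() > 0
    if adv and inq:
        return "Advocacy and inquiry"
    if adv:
        return "Advocacy only"
    if inq:
        return "Inquiry only"
    return "No advocacy or inquiry"

kb["adv_inq"] = kb.apply(adv_inq_category, axis=1)
table = pd.crosstab(kb["belief_type"], kb["adv_inq"])
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```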

Below, we highlight findings in relation to the research questions guiding our analysis about: the relative frequency of the types of beliefs leaders hold about problems involving others; the extent to which leaders employ validity testing behaviors in conversations about those problems; and differential patterns in leaders’ validity testing of the different belief types. We make our interpretations based on the statistical analysis and draw on insights from the qualitative analysis to illustrate those results.

Belief types

Leaders’ key beliefs about the problem were evenly distributed between the three belief types, suggesting that when they think about a problem, leaders think, though not necessarily in a systematic way, about the nature of, explanation for, and solutions to their problem (see Table 4 ). These numbers include beliefs that were communicated and also those recorded privately in the questionnaire or in writing on the conversation transcripts.

Patterns in validity testing

The majority of the 331 key beliefs (54.7%, 181) were not tested by leaders using any one of the VTBs, not even the behavior of disclosing the belief. Our analysis of the VTBs that leaders did use (see Table 5 ) shows the wide variation in frequency of use with some, arguably the more robust ones, hardly used at all.

The first pattern was more frequent disclosure of key beliefs than provision of the grounds for them. The lower level of providing grounds is concerning because it has implications for the likelihood of those in the conversation subsequently reaching agreement and being able to develop solutions logically aligned to the problem (VTB4). If, as Leader 20 believed, it is the time that guided reading takes that is preventing a teacher from doing ‘shared book reading’, the logical solution is quite different from the solution that follows if the reason is in fact something else, for example uncertainty about how to go about ‘shared book reading’, a lack of shared book resources, or a misunderstanding that school policy requires greater time on shared reading.

The second pattern was a tendency for leaders to advocate much more than they inquire—the proportion of advocacy was more than double that of inquiry overall, and for some behaviors advocacy was up to seven times more frequent than inquiry. This suggests that leaders were more comfortable disclosing their own beliefs, providing the grounds for their own beliefs, and expressing their own assumptions about agreement, and less comfortable inquiring in ways that created space and invited the other person in the conversation to reveal their beliefs.

A third pattern revealed in this analysis was the difference in the ratio of inquiry to advocacy between VTB1 (disclosing beliefs), where the ratio was close to 1:2, and VTB2 (providing grounds), where it was close to 1:7. Leaders are more likely to seek others’ reactions when they disclose their beliefs than when they give their grounds for those beliefs. This might suggest that leaders assume the validity of their own beliefs (and therefore do not see the need to inquire into grounds) or that they do not have the skills to share the grounds associated with the beliefs they hold.

Fourthly, there was an absence of attention to three of the VTBs outlined in our model: of the 329 validity testing utterances the 43 leaders used, only very few involved exploring difference (11 instances), examining logic (4 instances), or seeking agreement (22 instances). In Case 22, for example, the leader claimed that learning intentions should be displayed and understood by children and expressed concern that the teacher was not displaying them, and that her students thus did not understand the purpose of the activities they were doing. While the teacher signaled her disagreement with both of those claims—"I do learning intentions, it’s all in my modelling books I can show them to you if you want" and "I think the children know why they are learning what they are learning"—the fact that there were differences in their beliefs was not explicitly signaled, and the differences were not explored. The conversation went on, with each continuing to assume the accuracy of their own beliefs. They were unable to reach agreement on a solution to the problem because they had not established and explored the lack of agreement about the nature of the problem itself. We presume from these findings, and from our prior qualitative work in this field, that those VTBs are much more difficult, and therefore much less likely to be used, than the behaviors of disclosing beliefs and providing grounds.

The relationship between belief type and validity testing behaviors

The relationship between belief type and category of validity testing behavior was significant (χ²(4) = 61.96, p < 0.001). It was notable that problem explanation beliefs (PEBs) were far less likely than problem description or problem solution beliefs to be subject to any validity testing (the validity of more than 80% of PEBs was not tested) and, when they were tested, it was typically with the more routine rather than robust VTBs (see Table 6).

Problem explanation beliefs were thus the least likely to be tested at all; more than 80% of them were not the focus of any validity testing. Further, problem description beliefs (PDBs) were less likely than problem solution beliefs (PSBs) to be the target of both routine and robust validity testing behaviors: 12% of PDBs and 18% of PSBs were tested using both routine and robust VTBs.

Two important assumptions underpin the study reported here. The first is that problems of equity must be solved, not only in the macrosystem and exosystem, but also as they occur in the day to day practices of leaders and teachers in micro and mesosystems. The second is that conversations are the key practice in which problem solving occurs in the micro and mesosystems, and that is why we focused on conversation quality. We focused on validity testing as an indicator of quality by closely analyzing transcripts of conversations between 43 individual leaders and a teacher they were discussing problems with.

Our findings suggest a considerable gap between our normative model of effective problem-solving conversations and the practices of our sample of leaders. While beliefs about what problems are, and proposed solutions to them, are shared relatively often, attention is rarely given to beliefs about the causes of problems. Further, while leaders do seem to be able to disclose and provide grounds for their beliefs about problems, they do so less often for beliefs about problem cause than for other belief types. In addition, the critical validity testing behaviors of exploring difference, examining logic, and seeking agreement are very rare. Learning how to test the validity of beliefs is, therefore, a relevant focus for educational leaders’ goals (Bendikson et al., 2020; Meyer et al., 2019; Sinnema & Robinson, 2012) as well as a means for achieving other goals.

The patterns we found are problematic from the point of view of problem solving in schools generally, but are particularly problematic from the point of view of macrosystem problems relating to equity. In New Zealand, for example, the underachievement and attendance issues of Pasifika students are a macrosystem problem that a range of policies and initiatives have attempted to address. Those efforts include a Pasifika Education Plan (Ministry of Education, 2013) and a cultural competencies framework for teachers of Pasifika learners—‘Tapasa’ (Ministry of Education, 2018). At the level of the mesosystem, many schools have strategic plans and school-wide programmes seeking to address those issues.

Resolving such equity issues demands that macro and exosystem initiatives are also reflected in the interactions of educators—hence our investigation of leaders’ problem-solving conversations and our attention to whether leaders have the skills required to solve problems in conversations that contribute to aspirations in the exo and macrosystem, including those of excellence and equity in new and demanding national curricula (Sinnema et al., 2020a; Sinnema & Stoll, 2020a). An example of an exosystem framework—the competencies framework for teachers of Pacific students in New Zealand—is useful here. It requires that teachers "establish and maintain collaborative and respectful relationships and professional behaviors that enhance learning and wellbeing for Pasifika learners" (Ministry of Education, 2018, p. 12). The success of this national framework is influenced by, and also influences, the success that leaders in school settings have at solving problems in the conversations they have about related micro and mesosystem problems.

To illustrate this point, we draw here on the example of one case from our sample that showed how problem-solving conversation capability is related to the success or otherwise of system-level aspirations of this type. In the case of Leader 36, under-developed skill in problem solving talk likely stymied the success of the equity-focused system initiatives. Leader 36 had been alerted by the parents of a Pasifika student that their daughter "feels that she is being unfairly treated, picked on and being made to feel very uncomfortable in the teacher’s class." In the conversation with Leader 36, the teacher described having established a good relationship with the student, but also having had a range of issues with her, including that she was too talkative, which led the teacher to treat her in ways the teacher acknowledged could have made her feel picked on and consequently reluctant to come to school.

The teacher also told the leader that there were issues with uniform irregularities (which the teacher picked on) and general non-conformity: "No, she doesn't [conform]. She often comes with improper footwear, incorrect jacket, comes late to school, she puts make up on, there are quite a few things that aren't going on correctly….". The teacher suggested that the student was "drawing the wrong type of attention from me as a teacher, which has had a negative effect on her." The teacher described to the leader a recent incident:

[The student] had come to class with her hair looking quite shabby so I quietly asked [the student] “Did you wake up late this morning?” and then she but I can’t remember, I made a comment like “it looks like you didn’t take too much interest in yourself.” To me, I thought there was nothing wrong with the comment as it did not happen publicly; it happened in class and I had walked up to her. Following that, [her] Mum sends another email about girls and image and [says] that I am picking on her again. I’m quite baffled as to what is happening here. (case 36)

This troubling example represented a critical discretionary moment. The pattern of belief validity testing identified through our analysis of this case (see Table 7), however, mirrors some of the patterns evident in the wider sample.

The leader, like the student's parents, believed that the teacher had been offensive in her communication with the student, and also that the relationship between the teacher and the student would be negatively impacted as a result. These two problem description beliefs were disclosed by the leader during her conversation with the teacher. However, while her disclosure of her belief about the problem description involved both advocating the belief and inquiring into the other's perception of it, the provision of grounds for the belief involved advocacy only. She reported the basis of the concern (the email from the student's parents about their daughter feeling unfairly treated, picked on, and uncomfortable in class) but did not explicitly inquire into the grounds. This may be explained, in this case, by the teacher offering her own account of the situation, which matched the parents' report. Leader 36 also disclosed, in her conversation with the teacher, her problem solution key belief that they should hold a restorative meeting between the teacher, the student, and herself.

What Leader 36 did not disclose was her belief about the explanation for the problem: that the teacher did not adequately understand the student personally, or her culture. The problem explanation belief (KB4) that she did inquire into was one the teacher raised (suggesting that the student had "compliance issues" that led the teacher to respond negatively to the student's communication style) and that the teacher agreed with. The leader did not use any of the more robust but important validity testing behaviors for any of the key beliefs she held, whether about problem description, explanation, or solutions. Most importantly, this conversation highlights how policies and initiatives developed by those in the macrosystem, aimed at addressing equity issues, can be thwarted by the well-intentioned but ultimately unsuccessful efforts of educators as they operate in the micro- and mesosystem, in what we referred to earlier as a discretionary problem-solving space. The teacher's treatment of the Pasifika student in our example was in stark contrast to the respectful and strong relationships demanded by the exosystem policy, the framework for teachers of Pasifika students. Furthermore, while the leader recognized the problem, issues of culture were avoided; she was not skilled enough in disclosing and testing her beliefs in the course of the conversation to contribute to broader equity concerns. This skill gap resonates with the findings of much prior work in this field (Le Fevre et al., 2015; Robinson et al., 2020; Sinnema et al., 2013; Smith, 1997; Spillane et al., 2009; Timperley & Robinson, 1998; Zaccaro et al., 2000), and highlights the importance of leaders, and those working with them in leadership development efforts, recognizing the interactions between the ecosystems outlined in the nested model of problem solving detailed in Fig. 1.

The reluctance of Leader 36 to disclose and discuss her belief that the teacher misunderstood the student and her culture is important given the wider research evidence about the nature of the beliefs teachers may hold about indigenous and minority learners. The expectations teachers hold for these groups are typically lower and more negative than those they hold for white students (Gay, 2005; Meissel et al., 2017). In evidence from the New Zealand context, Turner et al. (2015), for example, found expectations to differ according to ethnicity, with higher expectations for Asian and European students than for Māori and Pasifika students, even when controlling for achievement, owing to troubling teacher beliefs about students' home backgrounds, motivations, and aspirations. These are just the kinds of beliefs that leaders must be able to confront in conversations with their teachers.

We use this example to illustrate both the interrelatedness of problems across the ecosystem and the urgency of leadership development intervention in this area. Our normative model of effective problem-solving conversations (Fig. 2), we suggest, provides a useful framework for the design of educational leadership interventions in this area. It shows how validity testing behaviors should embody both advocacy and inquiry, and should be used to explore not only perceptions of problem descriptions and solutions but also problem causes. In this way, we hope to offer insights into how the dilemma between trust and accountability (Ehren et al., 2020) might be resolved through increased interpersonal effectiveness. The combination of inquiry with advocacy also marks this approach out from neo-liberal approaches that emphasize leaders staying in control and that predominantly advocate authoritarian perspectives of educational leadership. The interpersonal effectiveness theory that we draw on (Argyris & Schön, 1974) positions such unilateral control as ineffective, arguing for a mutual learning alternative. The work of problem solving is, we argue, joint work, requiring shared commitment and control.

Our findings also call for more research explicitly designed to investigate linkages between the systems. Case studies are needed of macro- and exosystem inequity problems, backward-mapped to the initiatives and interactions that occur in schools in relation to those problems and initiatives. Such research could capture the complex ways in which power plays out "in relation to structural inequalities (of class, disability, ethnicity, gender, nationality, race, sexuality, and so forth)" and in relation to "more shifting and fluid inequalities that play out at the symbolic and cultural levels (for example, in ways that construct who "has" potential)" (Burke & Whitty, 2018, p. 274).

Leadership development in problem solving should be approached in ways that surface and test the validity of leaders’ beliefs, so that they similarly learn to surface and test others’ beliefs in their leadership work. That is important not only from a workforce development point of view, but also from a social justice point of view since leaders’ capabilities in this area are inextricably linked to the success of educational systems in tackling urgent equity concerns.

Allison, D., & Allison, P. (1993). Both ends of a telescope: Experience and expertise in principal problem solving. Educational Administration Quarterly, 29 (3), 302–322. https://doi.org/10.1177/0013161x93029003005


Argyris, C., & Schön, D. (1974). Theory in practice: Increasing professional effectiveness. Jossey-Bass.

Argyris, C., & Schön, D. (1978). Organizational learning: A theory of action perspective. Addison-Wesley.

Argyris, C., & Schön, D. (1996). Organizational learning II: Theory, method and practice . Addison-Wesley.


Ball, D. L. (2018). Just dreams and imperatives: The power of teaching in the struggle for public education. New York, NY: Annual Meeting of the American Educational Research Association.

Bedell-Avers, E., Hunter, S., & Mumford, M. (2008). Conditions of problem-solving and the performance of charismatic, ideological, and pragmatic leaders: A comparative experimental study. The Leadership Quarterly, 19 , 89–106.

Bendikson, L., Broadwith, M., Zhu, T., & Meyer, F. (2020). Goal pursuit practices in high schools: Hitting the target? Journal of Educational Administration, 56(6), 713–728. https://doi.org/10.1108/JEA-01-2020-0020

Bonner, S. M., Diehl, K., & Trachtman, R. (2020). Teacher belief and agency development in bringing change to scale. Journal of Educational Change, 21 (2), 363–384. https://doi.org/10.1007/s10833-019-09360-4

Bronfenbrenner, U. (1992). Ecological systems theory. In R. Vasta (Ed.), Six theories of child development: Revised formulations and current issues. Jessica Kingsley Publishers.

Bronfenbrenner, U. (1977). Toward an experimental ecology of human development. American Psychologist, 32 (7), 513–531.

Burke Johnson, R., & Onwuegbuzie, A. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26.

Burke, P. J., & Whitty, G. (2018). Equity issues in teaching and teacher education. Peabody Journal of Education, 93 (3), 272–284. https://doi.org/10.1080/0161956X.2018.1449800

Copland, F. (2010). Causes of tension in post-observation feedback in pre-service teacher training: An alternative view. Teaching and Teacher Education, 26 (3), 466–472. https://doi.org/10.1016/j.tate.2009.06.001

Ehren, M., Paterson, A., & Baxter, J. (2020). Accountability and Trust: Two sides of the same coin? Journal of Educational Change, 21 (1), 183–213. https://doi.org/10.1007/s10833-019-09352-4

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Addison-Wesley.

Gay, G. (2005). Politics of multicultural teacher education. Journal of Teacher Education, 56 (3), 221–228.

Goldring, E., Cravens, X., Murphy, J., Porter, A., Elliott, S., & Carson, B. (2009). The evaluation of principals: What and how do states and urban districts assess leadership? The Elementary School Journal, 110(1), 19–36.

Hannah, D., Sinnema, C., & Robinson, V. (2018). Theory of action accounts of problem-solving: How a Japanese school communicates student incidents to parents. Management in Education, 33(2), 62–69. https://doi.org/10.1177/0892020618783809

Heifetz, R., Grashow, A., & Linsky, M. (2009). The practice of adaptive leadership: Tools and tactics for changing your organization and the world . Harvard Business Press.

Le Fevre, D., & Robinson, V. M. J. (2015). The interpersonal challenges of instructional leadership: Principals’ effectiveness in conversations about performance issues. Educational Administration Quarterly, 51 (1), 58–95.

Le Fevre, D., Robinson, V. M. J., & Sinnema, C. (2015). Genuine inquiry: Widely espoused yet rarely enacted. Educational Management Administration & Leadership, 43 (6), 883–899.

Leech, N. L., & Onwuegbuzie, A. J. (2009). A typology of mixed methods research designs. Quality & Quantity, 43 (2), 265–275. https://doi.org/10.1007/s11135-007-9105-3

Leithwood, K., & Steinbach, R. (1995). Expert problem solving: Evidence from school and district leaders . State University of New York Press.

Leithwood, K., & Stager, M. (1989). Expertise in Principals’ Problem Solving. Educational Administration Quarterly, 25 (2), 126–161. https://doi.org/10.1177/0013161x89025002003

Leithwood, K., & Steinbach, R. (1992). Improving the problem solving expertise of school administrators. Education and Urban Society, 24 (3), 317–345.

Marcy, R., & Mumford, M. (2010). Leader cognition: Improving leader performance through causal analysis. The Leadership Quarterly, 21 (1), 1–19.

Mavrogordato, M., & White, R. (2020). Leveraging policy implementation for social justice: How school leaders shape educational opportunity when implementing policy for English learners. Educational Administration Quarterly, 56 (1), 3–45. https://doi.org/10.1177/0013161X18821364

Meissel, K., Meyer, F., Yao, E. S., & Rubie-Davies, C. (2017). Subjectivity of Teacher Judgments: Exploring student characteristics that influence teacher judgments of student ability. Teaching and Teacher Education, 65 , 48–60. https://doi.org/10.1016/j.tate.2017.02.021

Meyer, F., Sinnema, C., & Patuawa, J. (2019). Novice principals setting goals for school improvement in New Zealand. School Leadership & Management , 39 (2), 198−221. https://doi.org/10.1080/13632434.2018.1473358

Ministry of Education. (2013). Pasifika education plan 2013–2017. Retrieved 9 July from https://www.education.govt.nz/assets/Documents/Ministry/Strategies-and-policies/PEPImplementationPlan20132017V2.pdf

Ministry of Education. (2018). Tapasā cultural competencies framework for teachers of Pacific learners . Ministry of Education.

Mumford, M., & Connelly, M. (1991). Leaders as creators: Leaders performance and problem solving in ill-defined domains. Leadership Quarterly, 2 (4), 289–315.

Mumford, M., Friedrich, T., Caughron, J., & Byrne, C. (2007). Leader cognition in real-world settings: How do leaders think about crises? The Leadership Quarterly, 18 , 515–543. https://doi.org/10.1016/j.leaqua.2007.09.002

Mumford, M., Zaccaro, S., Harding, F., Jacobs, T., & Fleishman, E. (2000). Leadership skills for a changing world: Solving complex social problems. Leadership Quarterly, 11 (1), 11–35.

Myran, S., & Sutherland, I. (2016). Problem posing in leadership education: using case study to foster more effective problem solving. Journal of Cases in Educational Leadership, 19 (4), 57–71. https://doi.org/10.1177/1555458916664763

Newell, A., & Simon, H. A. (1972). Human problem solving. Prentice-Hall.

Norman, S., Avolio, B., & Luthans, B. (2010). The impact of positivity and transparency on trust in leaders and their perceived effectiveness. The Leadership Quarterly, 21 , 350–364.

Patuawa, J., Robinson, V., Sinnema, C., & Zhu, T. (2021). Addressing inequity and underachievement: Middle leaders’ effectiveness in problem solving. Leading and Managing , 27 (1), 51–78. https://doi.org/10.3316/informit.925220205986712

Peeters, A., & Robinson, V. M. J. (2015). A teacher educator learns how to learn from mistakes: Single and double-loop learning for facilitators of in-service teacher education. Studying Teacher Education, 11 (3), 213–227.

Robinson, V. M. J., Meyer, F., Le Fevre, D., & Sinnema, C. (2020). The quality of leaders' problem-solving conversations: Truth-seeking or truth-claiming? Leadership and Policy in Schools, 1–22.

Robinson, V. M. J. (1993). Problem-based methodology: Research for the improvement of practice . Pergamon Press.

Robinson, V. M. J. (1995). Organisational learning as organisational problem-solving. Leading & Managing, 1 (1), 63–78.

Robinson, V. M. J. (2001). Organizational learning, organizational problem solving and models of mind. In K. Leithwood & P. Hallinger (Eds.), Second international handbook of educational leadership and administration. Kluwer Academic.

Robinson, V. M. J. (2010). From instructional leadership to leadership capabilities: Empirical findings and methodological challenges. Leadership and Policy in Schools, 9 (1), 1–26.

Robinson, V. M. J. (2017). Reduce change to increase improvement . Corwin Press.

Robinson, V. M. J., & Le Fevre, D. (2011). Principals’ capability in challenging conversations: The case of parental complaints. Journal of Educational Administration, 49 (3), 227–255. https://doi.org/10.1108/09578231111129046

Sinnema, C., & Robinson, V. (2012). Goal setting in principal evaluation: Goal quality and predictors of achievement. Leadership and Policy in Schools, 11(2), 135–167. https://doi.org/10.1080/15700763.2011.629767

Sinnema, C., Le Fevre, D., Robinson, V. M. J., & Pope, D. (2013). When others’ performance just isn’t good enough: Educational leaders’ framing of concerns in private and public. Leadership and Policy in Schools, 12 (4), 301–336. https://doi.org/10.1080/15700763.2013.857419

Sinnema, C., Ludlow, L. H., & Robinson, V. M. J. (2016a). Educational leadership effectiveness: A rasch analysis. Journal of Educational Administration , 54 (3), 305–339. https://doi.org/10.1108/JEA-12-2014-0140

Sinnema, C., Meyer, F., & Aitken, G. (2016b). Capturing the complex, situated, and active nature of teaching through inquiry-oriented standards for teaching. Journal of Teacher Education, 68(1), 9–27. https://doi.org/10.1177/0022487116668017

Sinnema, C., Daly, A. J., Liou, Y.-H., & Rodway, J. (2020a). Exploring the communities of learning policy in New Zealand using social network analysis: A case study of leadership, expertise, and networks. International Journal of Educational Research, 99, 101492. https://doi.org/10.1016/j.ijer.2019.10.002

Sinnema, C., & Stoll, L. (2020b). Learning for and realising curriculum aspirations through schools as learning organisations. European Journal of Education, 55 , 9–23. https://doi.org/10.1111/ejed.12381

Sinnema, C., Nieveen, N., & Priestley, M. (2020c). Successful futures, successful curriculum: What can Wales learn from international curriculum reforms? The Curriculum Journal . https://doi.org/10.1002/curj.17

Sinnema, C., Hannah, D., Finnerty, A., & Daly, A. J. (2021a). A theory of action account of within and across school collaboration: The role of relational trust in collaboration actions and impacts. Journal of Educational Change .

Sinnema, C., Hannah, D., Finnerty, A. et al. (2021b). A theory of action account of an across-school collaboration policy in practice. Journal of Educational Change . https://doi.org/10.1007/s10833-020-09408-w

Sinnema, C., Liou, Y.-H., Daly, A., Cann, R., & Rodway, J. (2021c). When seekers reap rewards and providers pay a price: The role of relationships and discussion in improving practice in a community of learning. Teaching and Teacher Education, 107 , 103474. https://doi.org/10.1016/j.tate.2021.103474

Smith, G. (1997). Managerial problem solving: A problem-centered approach. In C. E. Zsambok & G. Klein (Eds.), Naturalistic Decision Making (pp. 371–380). Lawrence Erlbaum Associates.

Spiegel, J. (2012). Open-mindedness and intellectual humility. School Field, 10 (1), 27–38. https://doi.org/10.1177/1477878512437472

Spillane, J., Weitz White, K., & Stephan, J. (2009). School principal expertise: Putting expert aspiring principal differences in problem solving to the test. Leadership and Policy in Schools, 8 , 128–151.

Teddlie, C., & Tashakkori, A. (2006). A general typology of research designs featuring mixed methods. Research in the Schools, 13 , 12–28.

Thrupp, M., & Willmott, R. (2003). Education management in managerialist times: Beyond the textual apologists . Open University Press.

Timperley, H., & Parr, J. M. (2005). Theory competition and the process of change. Journal of Educational Change, 6 (3), 227–251. https://doi.org/10.1007/s10833-005-5065-3

Timperley, H. S., & Robinson, V. M. J. (1998). Collegiality in schools: Its nature and implications for problem solving. Educational Administration Quarterly, 34 (1), 608–629. https://doi.org/10.1177/0013161X980341003

Tjosvold, D., Sun, H., & Wan, P. (2005). Effects of openness, problem solving, and blaming on learning: An experiment in China. The Journal of Social Psychology, 145 (6), 629–644. https://doi.org/10.3200/SOCP.145.6.629-644

Tompkins, T. (2013). Groupthink and the ladder of inference: Increasing effective decision making. The Journal of Human Resource and Adult Learning, 8(2), 84–90.

Turner, H., Rubie-Davies, C. M., & Webber, M. (2015). Teacher expectations, ethnicity and the achievement gap. New Zealand Journal of Educational Studies, 50 (1), 55–69. https://doi.org/10.1007/s40841-015-0004-1

Zaccaro, S., Mumford, M., Connelly, M., Marks, M., & Gilbert, J. (2000). Assessment of leader problem-solving capabilities. Leadership Quarterly, 11 (1), 37–64.

Zand, D. (1972). Trust and managerial problem solving. Administrative Science Quarterly, 17 , 229–239.


Author information

Authors and Affiliations

The Faculty of Education and Social Work, The University of Auckland, Auckland, New Zealand

Claire Sinnema, Frauke Meyer, Deidre Le Fevre, Hamish Chalmers & Viviane Robinson


Corresponding author

Correspondence to Claire Sinnema.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Sinnema, C., Meyer, F., Le Fevre, D. et al. Educational leaders' problem-solving for educational improvement: Belief validity testing in conversations. J Educ Change 24, 133–181 (2023). https://doi.org/10.1007/s10833-021-09437-z


Accepted: 29 August 2021

Published: 01 October 2021

Issue Date: June 2023

DOI: https://doi.org/10.1007/s10833-021-09437-z

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Educational Change
  • Educational improvement
  • Problem solving
  • Problem-solving conversations
  • Educational leadership
  • Validity testing

Problem Solving in Education: A Global Imperative




He has conducted in-depth studies about school innovations in England, Germany, Canada, and South Korea. Shirley has been a visiting professor at Harvard University in the United States, Venice International University in Italy, the National Institute of Education in Singapore, the University of Barcelona in Spain, and the University of Stavanger in Norway. He is a fellow of the Royal Society of Arts. Shirley’s previous book is The New Imperatives of Educational Change: Achievement with Integrity .


Pak Tee Ng is Associate Dean, Leadership Learning at the National Institute of Education of Nanyang Technological University in Singapore and the author of Learning from Singapore: The Power of Paradoxes (Routledge, 2017).


Explained: Importance of critical thinking, problem-solving skills in curriculum

Future careers are no longer about domain expertise or technical skills alone. Rather, critical thinking and problem-solving skills in employees are on the wish list of every big organization today. Curricula and pedagogies across the globe, and within India, are increasingly designed to produce workers who can think critically and analytically.

The reason for this shift in perspective is very simple.

These skills provide a staunch foundation for comprehensive learning that extends beyond books or the four walls of the classroom. In a nutshell, critical thinking and problem-solving skills are a part of '21st Century Skills' that can help unlock valuable learning for life.

Over the years, the education system has been moving away from rote learning and other conventional teaching and learning approaches.

Schools and institutions are aligning their curricula to a changing scenario that is becoming more tech-driven and that demands a fusion of critical skills, life skills, values, and domain expertise. There's no set formula for success.

Rather, there is a clear need for people to be more creative, innovative, adaptive, agile, and willing to take risks, and to have a problem-solving mindset.

In today's scenario, critical thinking and problem-solving skills have become more important because they open the human mind to multiple possibilities, solutions, and a mindset that is interdisciplinary in nature.

Therefore, many schools and educational institutions are deploying AI and immersive learning experiences via gaming and AR-VR technologies to give students a more realistic, hands-on learning experience that hones these abilities and helps them overcome doubt or fear.

ADVANTAGES OF CRITICAL THINKING AND PROBLEM-SOLVING IN CURRICULUM

Ability to relate to the real world: Rather than imparting only theoretical knowledge, critical thinking and problem-solving skills encourage students to look at their immediate and extended environment with a spirit of questioning, curiosity, and learning. When the curriculum presents students with real-world problems, the learning is immense.

Confidence, agility & collaboration: Critical thinking and problem-solving skills boost self-belief and confidence as students examine, re-examine, and sometimes fail or succeed while attempting to do something.

They are able to understand where they may have gone wrong, attempt new approaches, ask their peers for feedback and even seek their opinion, work together as a team, and learn to face any challenge by responding to it.

Willingness to try new things: When problem-solving skills and critical thinking are encouraged by teachers, they set a robust foundation for young learners to experiment, think out of the box, and be more innovative and creative besides looking for new ways to upskill.

It's important to understand that merely introducing these skills into the curriculum is not enough. Schools and educational institutions must run upskilling workshops and special training for teachers to ensure they are skilled in, and familiar with, new teaching and learning techniques and new-age concepts that can be used in the classroom via assignments and projects.

Critical thinking and problem-solving skills are two of the most sought-after skills. Hence, schools should emphasise the upskilling of students as a part of the academic curriculum.

The article is authored by Dr Tassos Anastasiades, Principal- IB, Genesis Global School, Noida. 


AI Copilots Are Changing How Coding Is Taught

Professors are shifting away from syntax and emphasizing higher-level skills


Generative AI is transforming the software development industry. AI-powered coding tools are assisting programmers in their workflows, while jobs in AI continue to increase. But the shift is also evident in academia—one of the major avenues through which the next generation of software engineers learn how to code.

Computer science students are embracing the technology, using generative AI to help them understand complex concepts, summarize complicated research papers, brainstorm ways to solve a problem, come up with new research directions, and, of course, learn how to code.

“Students are early adopters and have been actively testing these tools,” says Johnny Chang , a teaching assistant at Stanford University pursuing a master’s degree in computer science. He also founded the AI x Education conference in 2023, a virtual gathering of students and educators to discuss the impact of AI on education.

So as not to be left behind, educators are also experimenting with generative AI. But they’re grappling with techniques to adopt the technology while still ensuring students learn the foundations of computer science.

“It’s a difficult balancing act,” says Ooi Wei Tsang , an associate professor in the School of Computing at the National University of Singapore . “Given that large language models are evolving rapidly, we are still learning how to do this.”

Less Emphasis on Syntax, More on Problem Solving

The fundamentals and skills themselves are evolving. Most introductory computer science courses focus on code syntax and getting programs to run, and while knowing how to read and write code is still essential, testing and debugging—which aren’t commonly part of the syllabus—now need to be taught more explicitly.

“We’re seeing a little upping of that skill, where students are getting code snippets from generative AI that they need to test for correctness,” says Jeanna Matthews , a professor of computer science at Clarkson University in Potsdam, N.Y.
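To make that point concrete, here is a minimal sketch of the kind of check a student might write; it is our illustration, not from the article, and the function and test names are hypothetical. The body of median() stands in for a snippet obtained from a code assistant, and a small test exposes an edge case the assistant may have missed.

    # Hypothetical illustration: verifying an AI-generated snippet rather than
    # trusting it. The body of median() stands in for code a student got from a
    # coding assistant; the test probes an edge case the assistant may have missed.

    def median(values):
        ordered = sorted(values)
        return ordered[len(ordered) // 2]   # subtly wrong for even-length lists

    def test_median():
        assert median([3, 1, 2]) == 2        # odd length: passes
        assert median([4, 1, 3, 2]) == 2.5   # even length: fails, exposing the bug

    if __name__ == "__main__":
        test_median()   # raises AssertionError until the snippet is corrected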

Another vital expertise is problem decomposition. “This is a skill to know early on because you need to break a large problem into smaller pieces that an LLM can solve,” says Leo Porter , an associate teaching professor of computer science at the University of California, San Diego . “It’s hard to find where in the curriculum that’s taught—maybe in an algorithms or software engineering class, but those are advanced classes. Now, it becomes a priority in introductory classes.”
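As a rough illustration of the decomposition Porter describes (again ours, with hypothetical names), a student might split a "summarize the class's exam results" task into functions small enough to hand to an LLM one at a time, then compose and test them:

    # Hypothetical illustration of problem decomposition: each small function is
    # a piece an LLM could plausibly draft on its own; the student composes them.

    from statistics import mean

    def load_scores(path):
        """Read one integer score per line from a text file."""
        with open(path) as f:
            return [int(line) for line in f if line.strip()]

    def passing_rate(scores, cutoff=50):
        """Fraction of scores at or above the cutoff."""
        return sum(s >= cutoff for s in scores) / len(scores)

    def summarize(scores):
        """Combine the pieces into a one-line report."""
        return f"mean={mean(scores):.1f}, passing={passing_rate(scores):.0%}"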

“Given that large language models are evolving rapidly, we are still learning how to do this.” —Ooi Wei Tsang, National University of Singapore

As a result, educators are modifying their teaching strategies. “I used to have this singular focus on students writing code that they submit, and then I run test cases on the code to determine what their grade is,” says Daniel Zingaro , an associate professor of computer science at the University of Toronto Mississauga . “This is such a narrow view of what it means to be a software engineer, and I just felt that with generative AI, I’ve managed to overcome that restrictive view.”

Zingaro, who coauthored a book on AI-assisted Python programming with Porter, now has his students work in groups and submit a video explaining how their code works. Through these walk-throughs, he gets a sense of how students use AI to generate code, what they struggle with, and how they approach design, testing, and teamwork.

“It’s an opportunity for me to assess their learning process of the whole software development [life cycle]—not just code,” Zingaro says. “And I feel like my courses have opened up more and they’re much broader than they used to be. I can make students work on larger and more advanced projects.”

Ooi echoes that sentiment, noting that generative AI tools “will free up time for us to teach higher-level thinking—for example, how to design software, what is the right problem to solve, and what are the solutions. Students can spend more time on optimization, ethical issues, and the user-friendliness of a system rather than focusing on the syntax of the code.”

Avoiding AI’s Coding Pitfalls

But educators are cautious given an LLM’s tendency to hallucinate . “We need to be teaching students to be skeptical of the results and take ownership of verifying and validating them,” says Matthews.

Matthews adds that generative AI “can short-circuit the learning process of students relying on it too much.” Chang agrees that this overreliance can be a pitfall and advises his fellow students to explore possible solutions to problems by themselves so they don’t lose out on that critical thinking or effective learning process. “We should be making AI a copilot—not the autopilot—for learning,” he says.

“We should be making AI a copilot—not the autopilot—for learning.” —Johnny Chang, Stanford University

Other drawbacks include copyright and bias. “I teach my students about the ethical constraints—that this is a model built off other people’s code and we’d recognize the ownership of that,” Porter says. “We also have to recognize that models are going to represent the bias that’s already in society.”

Adapting to the rise of generative AI involves students and educators working together and learning from each other. For her colleagues, Matthews’s advice is to “try to foster an environment where you encourage students to tell you when and how they’re using these tools. Ultimately, we are preparing our students for the real world, and the real world is shifting, so sticking with what you’ve always done may not be the recipe that best serves students in this transition.”

Porter is optimistic that the changes they’re applying now will serve students well in the future. “There’s this long history of a gap between what we teach in academia and what’s actually needed as skills when students arrive in the industry,” he says. “There’s hope on my part that we might help close the gap if we embrace LLMs.”

  • How Coders Can Survive—and Thrive—in a ChatGPT World ›
  • AI Coding Is Going From Copilot to Autopilot ›
  • OpenAI Codex ›

Rina Diane Caballar is a writer covering tech and its intersections with science, society, and the environment. An IEEE Spectrum Contributing Editor, she's a former software engineer based in Wellington, New Zealand.

Bruce Benson

Yes! Great summary of how things are evolving with AI. I'm a retired coder (BS comp sci) and understand the fundamentals of developing systems. Learning the latest systems is now the greatest challenge. I was intrigued by Ansible to help me manage my homelab cluster, but who wants to learn one more scripting language? Turns out ChatGPT4 knows the syntax, semantics, and workflow of Ansible, and all I do is tell it to "install log2ram on all my proxmox servers" and I get a playbook that does just that. The same with Docker Compose scripts. Wow.

