
Improving Your Test Questions

  • Choosing Between Objective and Subjective Test Items
  • Suggestions for Using and Writing Test Items: multiple-choice, true-false, matching, completion, essay, problem solving, and performance test items
  • Two Methods for Assessing Test Item Quality
  • Assistance Offered by the Center for Innovation in Teaching and Learning (CITL)
  • References for Further Reading

I. Choosing Between Objective and Subjective Test Items

There are two general categories of test items: (1) objective items which require students to select the correct response from several alternatives or to supply a word or short phrase to answer a question or complete a statement; and (2) subjective or essay items which permit the student to organize and present an original answer. Objective items include multiple-choice, true-false, matching and completion, while subjective items include short-answer essay, extended-response essay, problem solving and performance test items. For some instructional purposes one or the other item type may prove more efficient and appropriate. To begin our discussion of the relative merits of each type of test item, test your knowledge of these two item types by answering the following questions.

Quiz Answers

1 Sax, G., & Collet, L. S. (1968). An empirical comparison of the effects of recall and multiple-choice tests on student achievement. J ournal of Educational Measurement, 5 (2), 169–173. doi:10.1111/j.1745-3984.1968.tb00622.x

Paterson, D. G. (1926). Do new and old type examinations measure different mental functions? School and Society, 24 , 246–248.

When to Use Essay or Objective Tests

Essay tests are especially appropriate when:

  • the group to be tested is small and the test is not to be reused.
  • you wish to encourage and reward the development of student skill in writing.
  • you are more interested in exploring the student's attitudes than in measuring his/her achievement.
  • you are more confident of your ability as a critical and fair reader than as an imaginative writer of good objective test items.

Objective tests are especially appropriate when:

  • the group to be tested is large and the test may be reused.
  • highly reliable test scores must be obtained as efficiently as possible.
  • impartiality of evaluation, absolute fairness, and freedom from possible test scoring influences (e.g., fatigue, lack of anonymity) are essential.
  • you are more confident of your ability to express objective test items clearly than of your ability to judge essay test answers correctly.
  • there is more pressure for speedy reporting of scores than for speedy test preparation.

Either essay or objective tests can be used to:

  • measure almost any important educational achievement a written test can measure.
  • test understanding and ability to apply principles.
  • test ability to think critically.
  • test ability to solve problems.
  • test ability to select relevant facts and principles and to integrate them toward the solution of complex problems. 

In addition to the preceding suggestions, it is important to realize that certain item types are better suited than others for measuring particular learning objectives. For example, learning objectives requiring the student to demonstrate or to show may be better measured by performance test items, whereas objectives requiring the student to explain or to describe may be better measured by essay test items. The matching of learning objective expectations with certain item types can help you select an appropriate kind of test item for your classroom exam as well as provide a higher degree of test validity (i.e., testing what is supposed to be tested). To further illustrate, several sample learning objectives and appropriate test items are provided on the following page.

After you have decided to use an objective exam, an essay exam, or both, the next step is to select the kind(s) of objective or essay item that you wish to include on the exam. To help you make such a choice, the different kinds of objective and essay items are presented in the following section. The various kinds of items are briefly described and compared to one another in terms of their advantages and limitations for use. Also presented is a set of general suggestions for the construction of each item variation.

II. Suggestions for Using and Writing Test Items

The multiple-choice item consists of two parts: (a) the stem, which identifies the question or problem and (b) the response alternatives. Students are asked to select the one alternative that best completes the statement or answers the question. For example:

Sample Multiple-Choice Item


Advantages in Using Multiple-Choice Items

Multiple-choice items can provide...

  • versatility in measuring all levels of cognitive ability.
  • highly reliable test scores.
  • scoring efficiency and accuracy.
  • objective measurement of student achievement or ability.
  • a wide sampling of content or objectives.
  • a reduced guessing factor when compared to true-false items.
  • different response alternatives which can provide diagnostic feedback.

Limitations in Using Multiple-Choice Items

Multiple-choice items...

  • are difficult and time consuming to construct.
  • lead an instructor to favor simple recall of facts.
  • place a high degree of dependence on the student's reading ability and instructor's writing ability.

Suggestions For Writing Multiple-Choice Test Items

Item alternatives.

13. Use at least four alternatives for each item to lower the probability of getting the item correct by guessing (see the arithmetic sketch after this list).

14. Randomly distribute the correct response among the alternative positions throughout the test, so that alternatives a, b, c, d, and e each appear as the correct response in approximately equal proportions.

15. Use the alternatives "none of the above" and "all of the above" sparingly. When used, such alternatives should occasionally be used as the correct response.
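
As a rough sketch of the guessing arithmetic behind suggestion 13 (the test length used below is illustrative, not from the original text): with k alternatives per item, the chance of answering an item correctly by blind guessing is 1/k, so the expected number of items answered correctly by guessing alone on an n-item test is n/k.

    P(correct by guessing) = 1/k        E(items correct by guessing alone) = n/k

    On a 40-item test:  true-false (k = 2) → 40/2 = 20 items expected by chance;  four alternatives (k = 4) → 40/4 = 10 items expected by chance.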

A true-false item can be written in one of three forms: simple, complex, or compound. Answers can consist of only two choices (simple), more than two choices (complex), or two choices plus a conditional completion response (compound). An example of each type of true-false item follows:

Sample True-False Item: Simple

Sample True-False Item: Complex

Sample True-False Item: Compound

Advantages in Using True-False Items

True-False items can provide...

  • the widest sampling of content or objectives per unit of testing time.
  • an objective measurement of student achievement or ability.

Limitations In Using True-False Items

True-false items...

  • incorporate an extremely high guessing factor. For simple true-false items, each student has a 50/50 chance of correctly answering the item without any knowledge of the item's content.
  • can often lead an instructor to write ambiguous statements due to the difficulty of writing statements which are unequivocally true or false.
  • do not discriminate between students of varying ability as well as other item types.
  • can often include more irrelevant clues than do other item types.
  • can often lead an instructor to favor testing of trivial knowledge.

Suggestions For Writing True-False Test Items

In general, matching items consist of a column of stimuli presented on the left side of the exam page and a column of responses placed on the right side of the page. Students are required to match the response associated with a given stimulus. For example:

Sample Matching Test Item

Advantages in Using Matching Items

Matching items...

  • require short periods of reading and response time, allowing you to cover more content.
  • provide objective measurement of student achievement or ability.
  • provide highly reliable test scores.
  • provide scoring efficiency and accuracy.

Limitations in Using Matching Items

Matching items...

  • have difficulty measuring learning objectives requiring more than simple recall of information.
  • are difficult to construct due to the problem of selecting a common set of stimuli and responses.

Suggestions for Writing Matching Test Items

5.  Keep matching items brief, limiting the list of stimuli to under 10.

6.  Include more responses than stimuli to help prevent answering through the process of elimination.

7.  When possible, reduce the amount of reading time by including only short phrases or single words in the response list.

The completion item requires the student to answer a question or to finish an incomplete statement by filling in a blank with the correct word or phrase. For example,

Sample Completion Item

According to Freud, personality is made up of three major systems, the _________, the ________ and the ________.

Advantages in Using Completion Items

Completion items...

  • can provide a wide sampling of content.
  • can efficiently measure lower levels of cognitive ability.
  • can minimize guessing as compared to multiple-choice or true-false items.
  • can usually provide an objective measure of student achievement or ability.

Limitations in Using Completion Items

Completion items...

  • are difficult to construct so that the desired response is clearly indicated.
  • are more time consuming to score when compared to multiple-choice or true-false items.
  • are more difficult to score since more than one answer may have to be considered correct if the item was not properly prepared.

Suggestions for Writing Completion Test Items

7.  Avoid lifting statements directly from the text, lecture or other sources.

8.  Limit the required response to a single word or phrase.

The essay test is probably the most popular of all types of teacher-made tests. In general, a classroom essay test consists of a small number of questions to which the student is expected to demonstrate his/her ability to (a) recall factual knowledge, (b) organize this knowledge and (c) present the knowledge in a logical, integrated answer to the question. An essay test item can be classified as either an extended-response essay item or a short-answer essay item. The latter calls for a more restricted or limited answer in terms of form or scope. An example of each type of essay item follows.

Sample Extended-Response Essay Item

Explain the difference between the S-R (Stimulus-Response) and the S-O-R (Stimulus-Organism-Response) theories of personality. Include in your answer (a) brief descriptions of both theories, (b) supporters of both theories and (c) research methods used to study each of the two theories. (10 pts., 20 minutes)

Sample Short-Answer Essay Item

Identify research methods used to study the S-R (Stimulus-Response) and S-O-R (Stimulus-Organism-Response) theories of personality. (5 pts., 10 minutes)

Advantages In Using Essay Items

Essay items...

  • are easier and less time consuming to construct than are most other item types.
  • provide a means for testing students' ability to compose an answer and present it in a logical manner.
  • can efficiently measure higher order cognitive objectives (e.g., analysis, synthesis, evaluation).

Limitations In Using Essay Items

Essay items...

  • cannot measure a large amount of content or objectives.
  • generally provide low test and test scorer reliability.
  • require an extensive amount of instructor's time to read and grade.
  • generally do not provide an objective measure of student achievement or ability (subject to bias on the part of the grader).

Suggestions for Writing Essay Test Items

4.  Ask questions that will elicit responses on which experts could agree that one answer is better than another.

5.  Avoid giving the student a choice among optional items as this greatly reduces the reliability of the test.

6.  It is generally recommended for classroom examinations to administer several short-answer items rather than only one or two extended-response items.

Suggestions for Scoring Essay Items

Example Essay Item and Grading Models

"Americans are a mixed-up people with no sense of ethical values. Everyone knows that baseball is far less necessary than food and steel, yet they pay ball players a lot more than farmers and steelworkers."

WHY? Use 3-4 sentences to indicate how an economist would explain the above situation.

Analytical Scoring

Global Quality

Assign scores or grades based on the overall quality of the written response as compared to an ideal answer. Or, compare the overall quality of a response to other student responses by sorting the papers into three stacks.

Then read each stack again and sort it into three more stacks.

In total, nine discriminations can be used to assign test grades in this manner. The number of stacks or discriminations can vary to meet your needs.
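
To make the stack procedure concrete, here is one way the nine groups might be formed and labeled (the labels are illustrative, not from the original):

    First sort:   High | Medium | Low
    Second sort:  High → H+, H, H−;   Medium → M+, M, M−;   Low → L+, L, L−
    Result:       nine ordered stacks (H+ down to L−) that can be mapped onto letter grades or score bands.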

  • Try not to allow factors which are irrelevant to the learning outcomes being measured to affect your grading (e.g., handwriting, spelling, neatness).
  • Read and grade all class answers to one item before going on to the next item.
  • Read and grade the answers without looking at the students' names to avoid possible preferential treatment.
  • Occasionally shuffle papers during the reading of answers to help avoid any systematic order effects (e.g., Sally's "B" work always followed Jim's "A" work and thus looked more like "C" work).
  • When possible, ask another instructor to read and grade your students' responses.

Another form of a subjective test item is the problem solving or computational exam question. Such items present the student with a problem situation or task and require a demonstration of work procedures and a correct solution, or just a correct solution. This kind of test item is classified as a subjective type of item due to the procedures used to score item responses. Instructors can assign full or partial credit to either correct or incorrect solutions depending on the quality and kind of work procedures presented. An example of a problem solving test item follows.

Example Problem Solving Test Item

It was calculated that 75 men could complete a strip on a new highway in 70 days. When work was scheduled to commence, it was found necessary to send 25 men on another road project. How many days longer will it take to complete the strip? Show your work for full or partial credit.
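
One way to work this sample problem, assuming the job scales in man-days of labor (this worked solution is added for illustration and is not part of the original item):

    Total work:          75 men × 70 days = 5,250 man-days
    Crew after change:   75 − 25 = 50 men
    New duration:        5,250 man-days ÷ 50 men = 105 days
    Answer:              105 − 70 = 35 days longer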

Advantages In Using Problem Solving Items

Problem solving items...

  • minimize guessing by requiring the students to provide an original response rather than to select from several alternatives.
  • are easier to construct than are multiple-choice or matching items.
  • can most appropriately measure learning objectives which focus on the ability to apply skills or knowledge in the solution of problems.
  • can measure an extensive amount of content or objectives.

Limitations in Using Problem Solving Items

Problem solving items...

  • require an extensive amount of instructor time to read and grade.
  • generally do not provide an objective measure of student achievement or ability (subject to bias on the part of the grader when partial credit is given).

Suggestions For Writing Problem Solving Test Items

6.  Ask questions that elicit responses on which experts could agree that one solution and one or more work procedures are better than others.

7.  Work through each problem before classroom administration to double-check accuracy.

A performance test item is designed to assess the ability of a student to perform correctly in a simulated situation (i.e., a situation in which the student will be ultimately expected to apply his/her learning). The concept of simulation is central in performance testing; a performance test will simulate to some degree a real life situation to accomplish the assessment. In theory, a performance test could be constructed for any skill and real life situation. In practice, most performance tests have been developed for the assessment of vocational, managerial, administrative, leadership, communication, interpersonal and physical education skills in various simulated situations. An illustrative example of a performance test item is provided below.

Sample Performance Test Item

Assume that some of the instructional objectives of an urban planning course include the development of the student's ability to effectively use the principles covered in the course in various "real life" situations common for an urban planning professional. A performance test item could measure this development by presenting the student with a specific situation which represents a "real life" situation. For example,

An urban planning board makes a last minute request for the professional to act as consultant and critique a written proposal which is to be considered in a board meeting that very evening. The professional arrives before the meeting and has one hour to analyze the written proposal and prepare his critique. The critique presentation is then made verbally during the board meeting; reactions of members of the board or the audience include requests for explanation of specific points or informed attacks on the positions taken by the professional.

The performance test designed to simulate this situation would require the student being tested to role-play the professional's part, while students or faculty act the other roles in the situation. Various aspects of the "professional's" performance would then be observed and rated by several judges with the necessary background. The ratings could then be used both to provide the student with a diagnosis of his/her strengths and weaknesses and to contribute to an overall summary evaluation of the student's abilities.

Advantages In Using Performance Test Items

Performance test items...

  • can most appropriately measure learning objectives which focus on the ability of the students to apply skills or knowledge in real life situations.
  • usually provide a degree of test validity not possible with standard paper and pencil test items.
  • are useful for measuring learning objectives in the psychomotor domain.

Limitations In Using Performance Test Items

Performance test items...

  • are difficult and time consuming to construct.
  • are primarily used for testing students individually and not for testing groups. Consequently, they are relatively costly, time consuming, and inconvenient forms of testing.
  • generally do not provide an objective measure of student achievement or ability (subject to bias on the part of the observer/grader).

Suggestions For Writing Performance Test Items

  • Prepare items that elicit the type of behavior you want to measure.
  • Clearly identify and explain the simulated situation to the student.
  • Make the simulated situation as "life-like" as possible.
  • Provide directions which clearly inform the students of the type of response called for.
  • When appropriate, clearly state time and activity limitations in the directions.
  • Adequately train the observer(s)/scorer(s) to ensure that they are fair in scoring the appropriate behaviors.

III. Two Methods for Assessing Test Item Quality

This section presents two methods for collecting feedback on the quality of your test items. The two methods include using self-review checklists and student evaluation of test item quality. You can use the information gathered from either method to identify strengths and weaknesses in your item writing. 

Checklist for Evaluating Test Items

EVALUATE YOUR TEST ITEMS BY CHECKING THE SUGGESTIONS WHICH YOU FEEL YOU HAVE FOLLOWED.  

Grading Essay Test Items

Student Evaluation of Test Item Quality

Using ICES Questionnaire Items to Assess Your Test Item Quality

The following set of ICES (Instructor and Course Evaluation System) questionnaire items can be used to assess the quality of your test items. The items are presented with their original ICES catalogue number. You are encouraged to include one or more of the items on the ICES evaluation form in order to collect student opinion of your item writing quality.

IV. Assistance Offered by the Center for Innovation in Teaching and Learning (CITL)

The information on this page is intended for self-instruction. However, CITL staff members will consult with faculty who wish to analyze and improve their test item writing. The staff can also consult with faculty about other instructional problems. Instructors wishing to acquire CITL assistance can contact [email protected]

V. References for Further Reading

Ebel, R. L. (1965). Measuring educational achievement. Prentice-Hall.
Ebel, R. L. (1972). Essentials of educational measurement. Prentice-Hall.
Gronlund, N. E. (1976). Measurement and evaluation in teaching (3rd ed.). Macmillan.
Mehrens, W. A., & Lehmann, I. J. (1973). Measurement and evaluation in education and psychology. Holt, Rinehart & Winston.
Nelson, C. H. (1970). Measurement and evaluation in the classroom. Macmillan.
Payne, D. A. (1974). The assessment of learning: Cognitive and affective. D.C. Heath & Co.
Scannell, D. P., & Tracy, D. B. (1975). Testing and measurement in the classroom. Houghton Mifflin.
Thorndike, R. L. (1971). Educational measurement (2nd ed.). American Council on Education.


Succeed at Exams


What is a problem-solving exam?

  • Problem-solving exams come in a variety of formats, from multiple choice, to short answer, to long calculations.
  • They frequently test your ability to apply the problem-solving skills you’ve learned in lectures, labs, and readings to new types of questions.

What are some strategies for studying for problem-solving exams?

  • Pay attention to example problems emphasized in class, the text, and assignments, especially those that appear in more than one of these places.
  • Don’t assume, however, that the same or similar problems will appear on the exam. The exam will likely test your ability to apply what you’ve learned about solving problems to new types of questions, rather than your ability to memorize and regurgitate examples you’ve already seen.
  • Focus on the process the instructor used for solving the problem. Think about your own problem-solving strategies.
  • Practise, practise, practise! The more problems, and more importantly, the more types of problems you solve, the better prepared you’ll be.
  • Look for connections between concepts and equations and note how to choose the correct equation in complex practice problems.
  • Generate your own test questions with a study group or partner. Practise answering questions within a limited time frame.
  • Review previous tests, quizzes, or midterms, and figure out why you lost marks before.

What are some strategies for writing problem-solving exams?

  • Look over the entire exam before beginning and budget your time according to how much each question is worth. Leave enough time to read over your answers at the end of the exam.
  • Before starting, find a blank page on your exam and write down equations, concepts, and constants that you memorized.
  • Do the easy questions first to warm up your brain and calm exam nerves.
  • Read the questions carefully and rephrase them in your own words.
  • Keep track of all units. Convert values to keep the units consistent. Be aware of +/- signs.
  • Clearly mark assumptions, if they are necessary, and place them at or near the beginning of the solution whenever possible.

What can I do after the exam?

  • It can be helpful to look over your exam if you get it back to see where you have gone wrong, and what you have done well. If your instructor doesn’t routinely return exams, ask if you can see your exam to learn from your errors.
  • Use this information to help you study more effectively next time.
  • Check out the error analysis LibGuide to help you go over feedback.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Teaching Problem Solving

Tips and Techniques

Communicate

  • Have students identify specific problems, difficulties, or confusions. Don’t waste time working through problems that students already understand.
  • If students are unable to articulate their concerns, determine where they are having trouble by asking them to identify the specific concepts or principles associated with the problem.
  • In a one-on-one tutoring session, ask the student to work his/her problem out loud. This slows down the thinking process, making it more accurate and allowing you to assess understanding.
  • When working with larger groups, you can ask students to provide a written “two-column solution”: have students write up their solution to a problem by putting all their calculations in one column and all of their reasoning (in complete sentences) in the other column. This helps them to think critically about their own problem solving and helps you to more easily identify where they may be having problems (see the Two-Column Solution examples for Math and Physics, and the sketch below).
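
A minimal sketch of the two-column format, using a simple algebra problem of my own rather than the linked Math or Physics examples:

    Calculations          | Reasoning (in complete sentences)
    3x − 5 = 16           | Start from the equation given in the problem.
    3x = 21               | Add 5 to both sides to isolate the term containing x.
    x = 7                 | Divide both sides by 3 to solve for x.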

Encourage Independence

  • Model the problem solving process rather than just giving students the answer. As you work through the problem, consider how a novice might struggle with the concepts and make your thinking clear.
  • Have students work through problems on their own. Ask directing questions or give helpful suggestions, but  provide only minimal assistance and only when needed to overcome obstacles.
  • Don’t fear group work! Students can frequently help each other, and talking about a problem helps them think more critically about the steps needed to solve the problem. Additionally, group work helps students realize that problems often have multiple solution strategies, some that might be more effective than others.

Be sensitive

  • Frequently, when working problems, students are unsure of themselves. This lack of confidence may hamper their learning. It is important to recognize this when students come to us for help, and to give each student some feeling of mastery. Do this by providing positive reinforcement to let students know when they have mastered a new concept or skill.

Encourage Thoroughness and Patience

  • Try to communicate that the process is more important than the answer so that the student learns that it is OK to not have an instant solution. This is learned through your acceptance of his/her pace of doing things, through your refusal to let anxiety pressure you into giving the right answer, and through your example of problem solving through a step-by-step process.

Expert vs. Novice Problem Solvers

Experts (teachers) in a particular field are often so fluent in solving problems from that field that they can find it difficult to articulate the problem solving principles and strategies they use to novices (students) in their field because these principles and strategies are second nature to the expert. To teach students problem solving skills, a teacher should be aware of principles and strategies of good problem solving in his or her discipline.

The mathematician George Polya captured the problem solving principles and strategies he used in his discipline in the book How to Solve It: A New Aspect of Mathematical Method (Princeton University Press, 1957). The book includes a summary of Polya’s problem solving heuristic as well as advice on the teaching of problem solving.


Problem-solving Tests

The single best way to prepare for problem-solving tests is to solve problems—lots of them. Be sure to work problems not previously assigned.

Before the Test

  • Go over class notes and reading. Identify the major concepts and formulas from both.
  • Highlight topics or problems your instructor emphasized. Note why these items are important.
  • Look for fundamental problem types. Typically a course has recognizable groups or types of problems. Make sure you can tell them apart and know how to approach them.

Solve a Few

  • What concepts, formulas, rules and methods can I apply?
  • How do I begin?
  • Have I seen this problem before? Is it like other problems?
  • Could I work this problem another way or simplify what I did?
  • How does my solution compare with examples from the book and class?
  • Next to each problem-solving step, write what you did. Spell out what you did and why in your own words. This will make problem-solving techniques more concrete in your mind.
  • Practice working problems out of sequence. For example, work a problem from Chapter 7, Chapter 5, then Chapter 10. This will reveal how problems relate to each other and simulate the test-taking experience.
  • Work with a time limit. Aim to solve as many problems as you will have on the test within the test time limit (e.g. 30 problems in 50 minutes).
  • Create a practice test. Try cutting and pasting a test together, using homework as a source for questions as well as similar problems from your textbook.

During the Test

  • Write down what you need. Before starting the test, turn it over and jot down all the formulas, relationships, and definitions you need to remember.
  • Review the test. Skim questions and develop a plan for your work. If any thoughts come to you immediately, write them in the margin.
  • Start with easier problems. Begin with those for which you can identify a solution method quickly. This will reduce anxiety and facilitate clear thinking.
  • Watch the clock. Allow more time for high point value problems, and reserve time at the end for reviewing your work and fixing any emergencies.
  • Try all test problems. If your mind goes blank, relax for a moment and contemplate the problem. Or mark it and return to it later.
  • For more difficult questions, have a plan. Be certain that you understand the problem. Mark key words, identify the givens and unknowns in your own words, sketch a diagram or picture of the problem, or try to anticipate the form and characteristics of the solution. For complex problems, list the formulas you consider relevant to the solution, then decide which you will need to get started.

If you still have no solution method, try these tips.

  • If possible, write out an equation to express the relationships among some or all the givens and unknowns.
  • Think back to similar practice problems.
  • Work backwards. Ask yourself, “What do I need to get the answer?”
  • Solve a simpler form of the problem or substitute simple numbers for unknowns; try to reduce the amount of abstract thinking required.
  • Break a problem into a series of smaller problems, then work each part.
  • Guess an answer and then check it. The checking process may suggest a solution method.

If all else fails, mark the problem and return to it later. You may find clues in subsequent problems that will help you find a solution. If you’re running out of time and still have problems remaining, try to set the problem up in a solution plan. This means you’ll have a chance of receiving partial credit.

Analyzing Returned Problem-solving Tests

  • Read the comments and suggestions from your professor.
  • Locate the source of the test questions. Did they come from the lectures, textbook, or homework?
  • Note any alterations. How were the problems changed from those in the notes, text, and homework?
  • Did your errors result from carelessness? For example, did you fail to carry a negative sign from one step to another?
  • Did you misread questions? For example, did you fail to account for all the given data in your solution method?
  • Could you produce the formulas, or did you recall them incorrectly?
  • Did you consistently miss the same kind of problem?
  • Did you have difficulty on the test because you were too anxious to focus on the questions?
  • Were you unable to finish the test because you ran out of time?
  • Were you unable to solve problems because you didn’t practice similar ones?

Evidence-Based Teaching Guides → Problem Solving

Instruction Followed by Problem Solving

  • In the PLTL approach, the instruction phase takes place in the traditional classroom, often in the form of lecture, and the problem-solving phase takes place as students work in collaborative groups (typically ranging from 6-10 students) facilitated by a trained undergraduate peer leader for 90−120 minutes each week.
  • This approach provides facilitated help to students in their courses, improves students’ problem-solving skills, enhances students’ communication abilities, and provides an active-learning experience for students.
  • The peer leader should not help solve the problems with the students in the group, but guides them to discuss their reasoning by asking probing questions and to equally participate by using different collaborative learning strategies (such as round robin, scribe, and pairs). Students decide as a group whether the answer is correct or not, which encourages the students to consider the problem more deeply.
  • PLTL is used in many STEM undergraduate disciplines including biology, chemistry, mathematics, physics, psychology, and computer science, and in all types of institutions. It has been used at all different levels of undergraduate courses. If done well, PLTL can improve course grades, course and series retention, and standardized and course exam performance, and can lower DWF (drop, withdrawal, and failure) rates.
  • PLTL can also benefit the peer leaders themselves, who self-report greater content learning, improved study skills, improved interpersonal skills, increased leadership skills, and greater confidence.
  • In the constructivist framework, teaching is not the transmission of knowledge from the instructor to the student. The instructor is a facilitator or guide, giving structure to the learning process.
  • Students construct meaning (e.g., develop concepts and models) through active involvement with the material and by making sense of their experiences.
  • Social constructivism assumes that students’ understanding and sense-making are developed jointly in collaboration with other students.
  • The peer leader is considered to be an effective guide because they are in the zone of proximal development (ZPD) of the students in their PLTL group.
  • The PLTL program is integral to the course and integrated with other course components.
  • Peer leader training typically covers how to effectively create a community of practice within their group such that students will make joint decisions while solving the problems, discuss multiple approaches to solve the problems, and practice professional social and communication skills;
  • practice with questioning strategies to support students in deepening their discussion to include explanations for their ideas and problem-solving processes;
  • learning about how students learn based on psychology and education research, and how to apply this information while facilitating their group.
  • The problems used in PLTL sessions should require students to work collaboratively to solve them;
  • encourage students to engage deeply with the content (i.e., include prompts asking them to explain their reasoning or process), disciplinary vocabulary (i.e., include prompts asking them to define terms in their own words), and essential skills;
  • become more complex throughout the problem set while ensuring that the students are always within their Zone of Proximal Development.
  • Organizational arrangements promote active learning via focus on group size, room space, length of session, and low noise level.
  • The institution and department encourage and support innovative teaching.
  • In worked examples, the instruction takes the form of example problems. These problems include a problem statement and a step-by-step procedure for solving the problem, intended to show how an expert might solve this type of problem. After this explicit instruction, students complete practice problems like the worked examples.
  • Worked examples plus practice problems have been found to be beneficial when compared to instruction followed by problem solving alone. This benefit is observed for novices learning to solve complex problems but is lost as learners become more expert in the domain and is not observed for simple problems.
  • Worked examples provide guidance that can help students learn to do analogous problems (near transfer) and may have similar benefits to productive failure and scaffolded guided inquiry for near transfer.
  • Worked examples help students in early-to-intermediate stages of cognitive skill development as they are learning to abstract general principles for solving a given type of problem.
  • Comparing worked examples that focus on different types of problems can also help students identify deep features and abstract general principles that help them know when to use a given problem solving approach.
  • Problem-solving practice that incorporates strategies like retrieval and interleaving becomes more effective as students seek to become faster and more accurate.
  • The presentation of worked examples can be improved by integrating sources of information (e.g., images integrated with explanatory text or auditory explanations),
  • Including visual cues to help students readily follow the explanation,
  • Fostering identification of subgoals within a problem, either by labeling or visually separating chunks of a problem solution corresponding to a subgoal.
  • Lessons should include at least two worked examples for a type of problem.
  • Worked examples should be accompanied by practice problems and should be interspersed throughout a lesson rather than combined in one section of the lesson.
  • Different problem types should use similar cover stories to emphasize deep problem features.
  • Relate solutions to abstract principles
  • Compare different examples and self-explain key similarities and differences.
  • If multiple worked examples for a given type of problem are used, it can be beneficial to remove guidance in stages (also known as fading). Backwards fading (leaving blanks later in the problems first, then earlier and earlier) has been found to be more beneficial than forward fading (see the illustration below).
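
As an illustration of fading (the algebra problem is my own example, not taken from the guide), a fully worked example shows every step, while a backward-faded version blanks the final step for the student to complete; later examples blank earlier steps as well:

    Fully worked example        Backward-faded version
    2x + 6 = 14                 2x + 6 = 14
    2x = 14 − 6 = 8             2x = 14 − 6 = 8
    x = 8 ÷ 2 = 4               x = __________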


How to assess reasoning skills


Identifying individuals with excellent reasoning skills is a common goal when making new hires. The ability of your employees to analyze information, think critically, and draw logical conclusions is crucial in today’s dynamic professional landscape. 

Pre-employment assessments offer great value by effectively assessing these essential capabilities. 

TestGorilla’s assessments objectively gauge a candidate’s ability to solve problems, evaluate arguments, and draw logical inferences. By leveraging these assessments, you can secure candidates with the cognitive skills necessary for analytical thinking and decision-making.

Table of contents

  • What is a reasoning skills assessment?
  • Why are reasoning skills important?
  • What skills and traits do employees with good reasoning have?
  • Tests for evaluating reasoning skills
  • How TestGorilla can help you find candidates with reasoning skills

A reasoning skills assessment is a valuable tool that can provide insights into a candidate’s ability to analyze information, think critically, and make logical deductions. 

This assessment aims to evaluate an individual’s cognitive skills related to problem-solving, decision-making, and analytical thinking.

There are several types of cognitive ability tests that can aid in assessing reasoning. During a reasoning skills assessment, candidates are presented with various scenarios, questions, or problems that require them to apply logical thinking and problem-solving techniques. 

It can involve evaluating arguments, identifying patterns, making inferences, or solving puzzles. 

Assessments often use standardized tests or exercises that measure different aspects of reasoning. They’re designed to objectively evaluate a candidate’s cognitive abilities rather than simply relying on qualifications or experience. 

Using a reasoning skills assessment, you can make more informed decisions about a candidate’s aptitude for sound reasoning, problem-solving, and decision-making.

Why are reasoning skills important?

Effective problem-solving

Employees with solid reasoning skills can tackle complex problems with clarity and efficiency. They can analyze information, identify patterns, and make logical connections, enabling them to devise smart ways to meet challenges. 

Their problem-solving ability enhances productivity, streamlines work processes, and drives continuous organizational improvement. This is why you need analytic skills testing in your hiring process if you want to find the best candidates.

Quality decision-making

Reasoning skills contribute to effective decision-making. Employees who think critically and can logically evaluate information are more likely to make informed decisions based on evidence and careful analysis. 

Their ability to weigh options, consider potential outcomes, and anticipate risks helps mitigate errors.

Adaptability and flexibility

Individuals who can think critically and analyze situations from different angles are better equipped to embrace new challenges, adjust their approach, and find new strategies. 

Their adaptability fosters resilience, enabling them to thrive in fast-paced industries and contribute to organizational growth and success.

Enhanced innovation

Reasoning skills are at the core of innovative thinking. Employees who excel in reasoning can identify gaps, find opportunities, and connect seemingly unrelated ideas or concepts. 

Their ability to analyze data, draw logical conclusions, and come up with creative new tactics drives innovation. Hiring individuals with superb reasoning skills encourages the development of groundbreaking new ideas.

Effective risk management

Employees with exemplary reasoning abilities can evaluate potential risks, weigh their impact, and consider mitigation strategies. 

Their ability to anticipate challenges and make calculated decisions reduces the likelihood of costly errors or setbacks, contributing to effective risk management within your organization.

Continued learning and growth

People with great reasoning skills tend to be lifelong learners. They have a natural curiosity and a desire to expand their knowledge and skills. 

Their ability to think critically and adapt enables them to embrace new information, learn from experiences, and grow professionally. 

Effective communication and collaboration

Employees with reasoning skills can think critically and express their ideas clearly. They can engage in meaningful discussions, contribute valuable insights, and articulate their viewpoints. 

They can also understand and respect diverse perspectives, leading to enhanced teamwork, collaboration, and the generation of new, exciting courses of action through collective intelligence.

What skills and traits do employees with good reasoning have?

Critical thinking

Individuals with good reasoning skills demonstrate strong critical thinking abilities. They can analyze information objectively, evaluate arguments, and identify logical inconsistencies. 

Their critical thinking skills enable them to approach problems and challenges with a logical and rational mindset, enabling them to make sound decisions and solve complex issues effectively.

Problem-solving aptitude

Excellent reasoning skills often go hand in hand with exceptional problem-solving aptitude. Candidates who excel in reasoning can break down complex problems into manageable components, identify patterns, and come up with innovative new strategies. 

They exhibit a natural curiosity, a willingness to explore different approaches, and the ability to think outside the box, enabling them to overcome obstacles and find creative resolutions.

Analytical thinking

A key trait of individuals with good reasoning skills is their ability to think analytically. They can dissect complex information, identify key components, and draw connections between various data points. 

With their analytical thinking skills, they can examine data objectively, discern trends or patterns, and make informed decisions based on evidence and logical deductions.

Logical reasoning

Strong reasoning skills are often indicative of individuals who possess logical reasoning abilities. They can follow sequences, identify cause-and-effect relationships, and draw conclusions based on deductive or inductive reasoning. 

Their logical reasoning skills enable them to evaluate options, anticipate potential outcomes, and choose the most appropriate course of action.

Flexible thinking

Employees with good reasoning skills often exhibit cognitive flexibility. They can adapt their thinking and approach to different situations, incorporating new information and adjusting their perspectives as needed. 

Their cognitive flexibility lets them consider multiple viewpoints, explore alternative options, and navigate complex challenges with an open mind. They re-evaluate assumptions and revise their thinking based on new insights or evidence.

Communication skills

For reasoning skills to be effective in the workplace, communication is key. It’s important that employees can articulate their thoughts clearly, present logical arguments, and express complex ideas in a concise manner. 

The ability to communicate effectively helps to convey the reasoning process, engage in meaningful discussions, and collaborate with others, fostering better teamwork and understanding within the organization. 

Workplace communication tests can evaluate candidates’ ability to communicate at work.

Individuals with good reasoning skills demonstrate a natural curiosity and a thirst for continuous learning. 

They have a genuine interest in expanding their knowledge, exploring new ideas, and seeking out information to enhance their understanding. 

Their curiosity drives them to stay updated on industry trends, engage in self-improvement, and continuously develop their reasoning abilities.

When it comes to assessing a candidate’s reasoning skills, it’s important to delve deeper beyond surface-level observations. Understanding their critical thinking, problem-solving, and decision-making abilities is crucial. That’s where TestGorilla can lend a hand. 

Our extensive test library is a treasure trove of options to suit your needs. You can mix and match tests to create an assessment that aligns perfectly with your company’s requirements. 

Whether you’re searching for top-notch analysts or logical thinkers who thrive in challenging situations, our tests can help you discover exceptional candidates with the cognitive skills to excel.

Here are some of our most popular tests for assessing reasoning skills:

Critical Thinking test

At TestGorilla, we understand the significance of this test in evaluating a candidate’s ability to analyze information, make logical connections, and approach problems from multiple perspectives.

By incorporating the Critical Thinking test into your reasoning skills assessment, you gain valuable insights into an individual’s cognitive abilities and capacity to think critically in real-world scenarios. 

This test goes beyond simple memorization or rote learning; it assesses how candidates can apply their knowledge, reason through complex situations, and arrive at sound conclusions.

Verbal Reasoning test

Our Verbal Reasoning test is essential because it assesses language comprehension, critical thinking, and problem-solving abilities. It evaluates an individual’s capacity to understand written information and draw logical conclusions. 

This test also indirectly measures language proficiency and communication skills. Verbal reasoning tests are widely used because they predict academic and occupational success, and they provide a fair and accessible assessment tool for individuals from diverse backgrounds.

Spatial Reasoning test

TestGorilla’s Spatial Reasoning test assesses a candidate’s capacity to perceive and understand spatial relationships, shapes, and patterns. 

This skill is particularly relevant in fields such as engineering, architecture, design, and logistics, where professionals often encounter complex spatial problems. 

The Spatial Reasoning test also assesses a candidate’s capacity to mentally visualize and manipulate objects in space. These abilities are essential for tasks that involve spatial planning, such as interpreting maps, organizing physical spaces, or understanding 3D models. 

Candidates who perform well in spatial reasoning tests demonstrate a heightened ability to think ahead, anticipate outcomes, and develop effective strategies based on spatial information. 

Numerical Reasoning test

The Numerical Reasoning test provides valuable insights into a job candidate’s reasoning skills, particularly in terms of quantitative analysis, problem-solving, and logical thinking. 

By assessing a candidate’s proficiency in interpreting numerical data and making accurate deductions, this test assists you in identifying those who possess the numerical acumen necessary for roles involving financial analysis, data-driven decision-making, and problem-solving using quantitative methods.

Mechanical Reasoning test

While not all job roles require mechanical reasoning, this test is pertinent for positions that involve machinery, engineering, or technical operations, providing crucial insights into a candidate’s reasoning abilities in these areas.

The Mechanical Reasoning test evaluates a candidate’s understanding of mechanical principles and ability to apply that knowledge to solve problems. 

This test presents candidates with scenarios and questions that require them to analyze mechanical systems, interpret diagrams, and make logical deductions.

Problem Solving test

Problem-solving tests evaluate a candidate’s aptitude for analyzing issues from different perspectives, breaking them down into manageable components, and applying logical reasoning to reach effective resolutions. 

The Problem Solving test measures a candidate’s ability to think critically, make sound judgments, and adapt their problem-solving approach as necessary. 

Strong problem-solving skills are not limited to specific industries or job roles; they are highly transferable and valuable across various fields, including business, technology, healthcare, and customer service.

Attention to Detail (Textual) test

TestGorilla’s Attention to Detail (Textual) test offers valuable insights into a job candidate’s reasoning skills, particularly in assessing their ability to analyze and comprehend written information with precision and accuracy.

In most professional settings, the ability to pay close attention to detail is paramount. The Attention to Detail (Textual) test assesses a candidate’s proficiency in reading, comprehending, and scrutinizing written information, ensuring accuracy and completeness.

Big 5 (OCEAN) test

Reasoning skills are not solely dependent on cognitive abilities but are also influenced by an individual’s personality traits. 

The Big 5 (OCEAN) test assesses a candidate’s personality dimensions, providing a deeper understanding of their approach to challenges, level of openness to new ideas, organizational skills, propensity for collaboration, and emotional stability. 

For example, candidates with a high score in Conscientiousness demonstrate meticulous attention to detail and a structured approach to problem-solving, while those who get a high score in Openness exhibit creativity and a willingness to explore new ways of moving forward. 

By considering these traits alongside reasoning skills, you can gain a comprehensive understanding of a candidate’s potential to excel in tasks requiring critical thinking and reasoning.

Understanding Instructions test

The Understanding Instructions test plays a useful role in evaluating a job candidate’s reasoning skills, specifically their ability to understand and execute tasks based on given instructions accurately. 

This test focuses on assessing an individual’s attention to detail, critical thinking, and capacity to analyze and interpret instructions. 

It offers valuable insights into a candidate’s logical reasoning, problem-solving skills, and potential for success in roles that require close adherence to guidelines.

If you’re looking to identify candidates with exceptional reasoning skills, TestGorilla is here to support your hiring journey. With our extensive range of scientifically designed tests, we provide you with a powerful tool to assess and evaluate critical thinking and problem-solving abilities. 

By incorporating TestGorilla’s assessments into your hiring process, you’ll gain valuable insights into each candidate’s capacity to analyze, strategize, and make informed decisions, setting the stage for building a team of exceptional talent.

At TestGorilla, we understand that finding individuals who can think critically and adapt to complex challenges is crucial for your company’s success. Our tests are carefully crafted to gauge candidates’ logical reasoning, analytical skills, and cognitive abilities, giving you a comprehensive understanding of their reasoning prowess. 

By relying on TestGorilla’s innovative assessment platform, you can confidently identify top-tier candidates who will contribute fresh perspectives, creativity, and ingenuity to your organization.

Let us help you identify candidates with the critical thinking, problem-solving, and decision-making abilities your company needs to thrive.

Sign up for TestGorilla’s free plan today and experience the power of our reasoning skills assessments firsthand.


Self-Assessment • 20 min read

How Good Is Your Problem Solving?

Use a systematic approach.

By the Mind Tools Content Team


Good problem-solving skills are fundamentally important if you're going to be successful in your career.

But problems are something that we don't particularly like. They're time-consuming, they muscle their way into already packed schedules, they force us to think about an uncertain future, and they never seem to go away!

That's why, when faced with problems, most of us try to eliminate them as quickly as possible. But have you ever chosen the easiest or most obvious solution – and then realized that you have entirely missed a much better solution? Or have you found yourself fixing just the symptoms of a problem, only for the situation to get much worse?

To be an effective problem-solver, you need to be systematic and logical in your approach. This quiz helps you assess your current approach to problem solving. By improving this, you'll make better overall decisions. And as you increase your confidence with solving problems, you'll be less likely to rush to the first solution – which may not necessarily be the best one.

Once you've completed the quiz, we'll direct you to tools and resources that can help you make the most of your problem-solving skills.

How Good Are You at Solving Problems?

Instructions.

For each statement, click the button in the column that best describes you. Please answer questions as you actually are (rather than how you think you should be), and don't worry if some questions seem to score in the 'wrong direction'. When you are finished, please click the 'Calculate My Total' button at the bottom of the test.

Answering these questions should have helped you recognize the key steps associated with effective problem solving.

This quiz is based on Dr Min Basadur's Simplexity Thinking problem-solving model. This eight-step process follows a circular pattern, within which current problems are solved and new problems are identified on an ongoing basis. This assessment has not been validated and is intended for illustrative purposes only.

Below, we outline the tools and strategies you can use for each stage of the problem-solving process. Enjoy exploring these stages!

Step 1: Find the Problem (Questions 7, 12)

Some problems are very obvious; however, others are not so easily identified. As part of an effective problem-solving process, you need to look actively for problems – even when things seem to be running fine. Proactive problem solving helps you avoid emergencies and allows you to be calm and in control when issues arise.

These techniques can help you do this:

PEST Analysis helps you pick up changes to your environment that you should be paying attention to. Make sure too that you're watching changes in customer needs and market dynamics, and that you're monitoring trends that are relevant to your industry.

Risk Analysis helps you identify significant business risks.

Failure Modes and Effects Analysis helps you identify possible points of failure in your business process, so that you can fix these before problems arise.

After Action Reviews help you scan recent performance to identify things that can be done better in the future.

Where you have several problems to solve, our articles on Prioritization and Pareto Analysis help you think about which ones you should focus on first.

Step 2: Find the Facts (Questions 10, 14)

After identifying a potential problem, you need information. What factors contribute to the problem? Who is involved with it? What solutions have been tried before? What do others think about the problem?

If you move forward to find a solution too quickly, you risk relying on imperfect information that's based on assumptions and limited perspectives, so make sure that you research the problem thoroughly.

Step 3: Define the Problem (Questions 3, 9)

Now that you understand the problem, define it clearly and completely. Writing a clear problem definition forces you to establish specific boundaries for the problem. This keeps the scope from growing too large, and it helps you stay focused on the main issues.

A great tool to use at this stage is CATWOE. With this process, you analyze potential problems by looking at them from six perspectives: those of its Customers; Actors (people within the organization); the Transformation, or business process; the World-view, or top-down view of what's going on; the Owner; and the wider organizational Environment. By looking at a situation from these perspectives, you can open your mind and come to a much sharper and more comprehensive definition of the problem.

Cause and Effect Analysis is another good tool to use here, as it helps you think about the many different factors that can contribute to a problem. This helps you separate the symptoms of a problem from its fundamental causes.

Step 4: Find Ideas (Questions 4, 13)

With a clear problem definition, start generating ideas for a solution. The key here is to be flexible in the way you approach a problem. You want to be able to see it from as many perspectives as possible. Looking for patterns or common elements in different parts of the problem can sometimes help. You can also use metaphors and analogies to help analyze the problem, discover similarities to other issues, and think of solutions based on those similarities.

Traditional brainstorming and reverse brainstorming are very useful here. By taking the time to generate a range of creative solutions to the problem, you'll significantly increase the likelihood that you'll find the best possible solution, not just a semi-adequate one. Where appropriate, involve people with different viewpoints to expand the volume of ideas generated.

Tip: Don't evaluate your ideas until step 5. If you do, this will limit your creativity at too early a stage.

Step 5: Select and Evaluate (Questions 6, 15)

After finding ideas, you'll have many options that must be evaluated. It's tempting at this stage to charge in and start discarding ideas immediately. However, if you do this without first determining the criteria for a good solution, you risk rejecting an alternative that has real potential.

Decide what elements are needed for a realistic and practical solution, and think about the criteria you'll use to choose between potential solutions.

Paired Comparison Analysis, Decision Matrix Analysis and Risk Analysis are useful techniques here, as are many of the specialist resources available within our Decision-Making section. Enjoy exploring these!

Step 6: Plan (Questions 1, 16)

You might think that choosing a solution is the end of a problem-solving process. In fact, it's simply the start of the next phase in problem solving: implementation. This involves lots of planning and preparation. If you haven't already developed a full Risk Analysis in the evaluation phase, do so now. It's important to know what to be prepared for as you begin to roll out your proposed solution.

The type of planning that you need to do depends on the size of the implementation project that you need to set up. For small projects, all you'll often need are Action Plans that outline who will do what, when, and how. Larger projects need more sophisticated approaches – you'll find out more about these in the article What is Project Management? And for projects that affect many other people, you'll need to think about Change Management as well.

Here, it can be useful to conduct an Impact Analysis to help you identify potential resistance as well as alert you to problems you may not have anticipated. Force Field Analysis will also help you uncover the various pressures for and against your proposed solution. Once you've done the detailed planning, it can also be useful at this stage to make a final Go/No-Go Decision, making sure that it's actually worth going ahead with the selected option.

Step 7: Sell the Idea (Questions 5, 8)

As part of the planning process, you must convince other stakeholders that your solution is the best one. You'll likely meet with resistance, so before you try to “sell” your idea, make sure you've considered all the consequences.

As you begin communicating your plan, listen to what people say, and make changes as necessary. The better the overall solution meets everyone's needs, the greater its positive impact will be! For more tips on selling your idea, read our article on Creating a Value Proposition and use our Sell Your Idea Skillbook.

Step 8: Act (Questions 2, 11)

Finally, once you've convinced your key stakeholders that your proposed solution is worth running with, you can move on to the implementation stage. This is the exciting and rewarding part of problem solving, which makes the whole process seem worthwhile.

This action stage is an end, but it's also a beginning: once you've completed your implementation, it's time to move into the next cycle of problem solving by returning to the scanning stage. By doing this, you'll continue improving your organization as you move into the future.

Problem solving is an exceptionally important workplace skill.

Being a competent and confident problem solver will create many opportunities for you. By using a well-developed model like Simplexity Thinking for solving problems, you can approach the process systematically, and be comfortable that the decisions you make are solid.

Given the unpredictable nature of problems, it's very reassuring to know that, by following a structured plan, you've done everything you can to resolve the problem to the best of your ability.

This assessment has not been validated and is intended for illustrative purposes only. It is just one of many Mind Tools quizzes that can help you to evaluate your abilities in a wide range of important career skills.

If you want to reproduce this quiz, you can purchase downloadable copies in our Store.


Problem Solving Test

About the test:

The problem solving test evaluates a candidate's ability to understand instructions, analyze data and respond to complex problems or situations. The questions are designed to get insights into their problem solving, learning agility and coachability.

Covered skills:

  • Abstract reasoning
  • Deductive reasoning
  • Pattern matching
  • Critical thinking
  • Inductive reasoning
  • Spatial reasoning

Test Duration

Difficulty Level

  • 4 Logical Reasoning MCQs
  • 4 Data Interpretation MCQs
  • 4 Spatial Reasoning MCQs
  • 4 Abstract Reasoning MCQs
  • 4 Critical Thinking MCQs

Availability

Ready to use

The Adaface Problem Solving Assessment Test is the most accurate way to shortlist freshers

Tests for on-the-job skills.

The Problem Solving Test helps recruiters and hiring managers identify qualified candidates from a pool of resumes, and helps in making objective hiring decisions. It reduces the administrative overhead of interviewing too many candidates and saves time by filtering out unqualified candidates at the first step of the hiring process.

The test screens for the following skills that hiring managers look for in candidates:

  • Logical reasoning
  • Data interpretation
  • Spatial reasoning
  • Abstract reasoning
  • Critical thinking
  • Deductive reasoning
  • Inductive reasoning
  • Pattern matching

No trick questions


Traditional assessment tools use trick questions and puzzles for the screening, which creates a lot of frustration among candidates about having to go through irrelevant screening assessments.

The main reason we started Adaface is that traditional pre-employment assessment platforms are not a fair way for companies to evaluate candidates. At Adaface, our mission is to help companies find great candidates by assessing on-the-job skills required for a role.

  • Non-googleable questions

We have a very high focus on the quality of questions that test for on-the-job skills. Every question is non-googleable, and we have a very high bar for the level of subject matter experts we onboard to create these questions. We have crawlers to check if any of the questions are leaked online. If/when a question gets leaked, we get an alert, change the question for you, and let you know.

These are just a small sample from our library of 10,000+ questions. The actual questions on this Problem Solving Test will be non-googleable.

1200+ customers in 75 countries


With Adaface, we were able to optimise our initial screening process by upwards of 75%, freeing up precious time for both hiring managers and our talent acquisition team alike!

Brandon Lee, Head of People, Love, Bonito

Designed for elimination, not selection

The most important thing while implementing the pre-employment Problem Solving Test in your hiring process is that it is an elimination tool, not a selection tool. In other words: you want to use the test to eliminate the candidates who do poorly on the test, not to select the candidates who come out at the top. While they are super valuable, pre-employment tests do not paint the entire picture of a candidate’s abilities, knowledge, and motivations. Multiple easy questions are more predictive of a candidate's ability than fewer hard questions. Harder questions are often "trick" based questions, which do not provide any meaningful signal about the candidate's skillset.

1 click candidate invites

Email invites: You can send candidates an email invite to the Problem Solving Test from your dashboard by entering their email address.

Public link: You can create a public link for each test that you can share with candidates.

API or integrations: You can invite candidates directly from your ATS by using our pre-built integrations with popular ATS systems or building a custom integration with your in-house ATS.


Detailed scorecards & benchmarks

High completion rate.

Adaface tests are conversational, low-stress, and take just 25-40 mins to complete.

This is why Adaface has the highest test-completion rate (86%), which is more than 2x better than traditional assessments.


Advanced Proctoring

  • ChatGPT protection
  • Screen proctoring
  • Plagiarism detection
  • User authentication
  • IP proctoring
  • Web proctoring
  • Webcam proctoring
  • Full-screen proctoring
  • Copy-paste protection

About the Problem Solving Online Test

Why you should use a pre-employment problem solving test

The Problem Solving Test makes use of scenario-based questions to test for on-the-job skills as opposed to theoretical knowledge, ensuring that candidates who do well on this screening test have the relevant skills. The questions are designed to cover the following on-the-job aspects:

  • Logical reasoning and problem-solving skills
  • Data interpretation and analysis
  • Spatial reasoning and visualization
  • Abstract reasoning and pattern recognition
  • Critical thinking and analytical skills
  • Deductive reasoning and logical deductions
  • Inductive reasoning and logical conclusions
  • Pattern matching and identifying trends
  • Spatial reasoning and mental rotation
  • Understanding complex problems and breaking them down

Once the test is sent to a candidate, the candidate receives a link in email to take the test. For each candidate, you will receive a detailed report with skills breakdown and benchmarks to shortlist the top candidates from your pool.

What topics are covered in the Problem Solving Test?

Abstract reasoning is the ability to analyze and solve problems that involve complex patterns or scenarios. It assesses an individual's capacity to think logically and make connections between different elements, even when the information provided is vague or unfamiliar. This skill is included in the test to evaluate the candidate's ability to think critically and approach problems from an abstract perspective.

Critical thinking is the process of objectively evaluating and analyzing information to form judgments or make decisions. It involves logical reasoning, evidence-based analysis, and the ability to identify and weigh different perspectives. Evaluating candidates' critical thinking skills in this test helps recruiters assess their potential to solve problems effectively and make informed decisions in various work situations.

Deductive reasoning involves drawing logical conclusions based on specific premises or principles. It requires the ability to apply general rules or theories to specific situations and make predictions or solve problems based on the given information. This skill is measured in the test to gauge candidates' ability to make logical connections and reach accurate conclusions using deductive reasoning.

Inductive reasoning is the ability to derive general principles or rules based on specific instances or observations. It involves identifying patterns or trends and using them to make predictions or solve problems. Assessing candidates' inductive reasoning skills in this test helps recruiters evaluate their ability to analyze information, identify underlying patterns, and make reliable predictions or decisions.

Pattern matching is the cognitive ability to identify and recognize similarities or regularities within a set of stimuli or information. It involves discerning patterns, sequences, or relationships in order to solve problems or make accurate judgments. Measuring candidates' pattern matching skills in this test provides insights into their aptitude for recognizing patterns and applying them to problem-solving in a range of contexts.

Spatial reasoning refers to the capacity to mentally visualize and manipulate spatial relationships or configurations. It involves understanding and interpreting spatial information, such as shapes, directions, and distances, and using this understanding to solve problems or navigate spatial environments. Evaluating candidates' spatial reasoning skills in this test helps recruiters assess their ability to think three-dimensionally, solve spatial problems, and comprehend spatial data.

Full list of covered topics

The actual topics of the questions in the final test will depend on your job description and requirements. However, here's a list of topics you can expect the questions for the Problem Solving Test to be based on.

What roles can I use the Problem Solving Test for?

  • Entry level roles
  • Programmers

How is the Problem Solving Test customized for senior candidates?

For intermediate/ experienced candidates, we customize the assessment questions to include advanced topics and increase the difficulty level of the questions. This might include adding questions on topics like

  • Identifying cause and effect relationships
  • Analyzing and interpreting data
  • Applying logical thinking to real-world scenarios
  • Identifying patterns and making predictions
  • Evaluating arguments and making sound judgments
  • Understanding and analyzing complex systems
  • Recognizing and solving problems in a structured manner
  • Utilizing critical thinking to find innovative solutions
  • Applying deductive reasoning to reach logical conclusions
  • Applying inductive reasoning to make generalizations
  • Identifying and analyzing patterns in complex data


The hiring managers felt that, through the technical questions they asked during the panel interviews, they were able to tell which candidates had better scores and differentiate them from those who did not score as well. They are highly satisfied with the quality of candidates shortlisted with the Adaface screening.

Human Resources Manager

Singapore Government

Problem Solving Hiring Test FAQs

Can I combine multiple skills into one custom assessment?

Yes, absolutely. Custom assessments are set up based on your job description, and will include questions on all must-have skills you specify. Here's a quick guide on how you can request a custom test.

Do you have any anti-cheating or proctoring features in place?

We have the following anti-cheating features in place:

  • Secure browser

Read more about the proctoring features.

How do I interpret test scores?

The primary thing to keep in mind is that an assessment is an elimination tool, not a selection tool. A skills assessment is optimized to help you eliminate candidates who are not technically qualified for the role; it is not optimized to help you find the best candidate for the role. So the ideal way to use an assessment is to decide on a threshold score (typically 55%; we help you benchmark) and invite all candidates who score above the threshold to the next rounds of interviews.
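As a purely hypothetical illustration (not Adaface code or an Adaface API), the elimination logic might be sketched like this in R, with made-up candidate names, scores, and boundary handling:

```r
# Hypothetical sketch of threshold-based elimination (not Adaface code);
# the 55% cut-off and candidate data are invented for the example.
threshold <- 55

candidates <- data.frame(
  name  = c("Asha", "Ben", "Chen", "Dina"),
  score = c(72, 48, 61, 55)   # assessment scores in percent
)

# Candidates at or above the threshold move on to interviews; no ranking implied.
shortlist <- candidates[candidates$score >= threshold, ]
print(shortlist)
```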

What experience level can I use this test for?

Each Adaface assessment is customized to your job description/ ideal candidate persona (our subject matter experts will pick the right questions for your assessment from our library of 10000+ questions). This assessment can be customized for any experience level.

Does every candidate get the same questions?

Yes, it makes it much easier for you to compare candidates. Options for MCQ questions and the order of questions are randomized. We have anti-cheating/ proctoring features in place. In our enterprise plan, we also have the option to create multiple versions of the same assessment with questions of similar difficulty levels.

I'm a candidate. Can I try a practice test?

No. Unfortunately, we do not support practice tests at the moment. However, you can use our sample questions for practice.

What is the cost of using this test?

You can check out our pricing plans.

Can I get a free trial?

Yes, you can sign up for free and preview this test.

I just moved to a paid plan. How can I request a custom assessment?

Here is a quick guide on how to request a custom assessment on Adaface.

Related Tests

Numerical Reasoning Test

  • 15 Numerical Reasoning MCQs

Logical Reasoning Test

  • 15 Logical Reasoning MCQs

Abstract Reasoning Test

  • 15 Abstract Reasoning MCQs

Inductive Reasoning Test

  • 15 Inductive Reasoning MCQs

Deductive Reasoning Test

  • 15 Deductive Reasoning MCQs

Spatial Reasoning Test

  • 15 Spatial Reasoning MCQs

Critical Thinking Test

  • 14 Critical Thinking MCQs

Error Checking Test

  • 15 Error Checking MCQs


Original Research Article

Mathematical Problem-Solving Through Cooperative Learning—The Importance of Peer Acceptance and Friendships


  • 1 Department of Education, Uppsala University, Uppsala, Sweden
  • 2 Department of Education, Culture and Communication, Mälardalen University, Västerås, Sweden
  • 3 School of Natural Sciences, Technology and Environmental Studies, Södertörn University, Huddinge, Sweden
  • 4 Faculty of Education, Gothenburg University, Gothenburg, Sweden

Mathematical problem-solving constitutes an important area of mathematics instruction, and there is a need for research on instructional approaches supporting student learning in this area. This study aims to contribute to previous research by studying the effects of an instructional approach of cooperative learning on students’ mathematical problem-solving in heterogeneous classrooms in grade five, in which students with special needs are educated alongside their peers. The intervention combined a cooperative learning approach with instruction in problem-solving strategies including mathematical models of multiplication/division, proportionality, and geometry. The teachers in the experimental group received training in cooperative learning and mathematical problem-solving, and implemented the intervention for 15 weeks. The teachers in the control group received training in mathematical problem-solving and provided instruction as they would usually. Students (269 in the intervention and 312 in the control group) participated in tests of mathematical problem-solving in the areas of multiplication/division, proportionality, and geometry before and after the intervention. The results revealed significant effects of the intervention on student performance in overall problem-solving and problem-solving in geometry. The students who received higher scores on social acceptance and friendships on the pre-test also received higher scores on the selected tests of mathematical problem-solving. Thus, the cooperative learning approach may lead to gains in mathematical problem-solving in heterogeneous classrooms, but social acceptance and friendships may also greatly impact students’ results.

Introduction

The research on instruction in mathematical problem-solving has progressed considerably during recent decades. Yet, there is still a need to advance our knowledge on how teachers can support their students in carrying out this complex activity ( Lester and Cai, 2016 ). Results from the Program for International Student Assessment (PISA) show that only 53% of students from the participating countries could solve problems requiring more than direct inference and using representations from different information sources ( OECD, 2019 ). In addition, OECD (2019) reported a large variation in achievement with regard to students’ diverse backgrounds. Thus, there is a need for instructional approaches to promote students’ problem-solving in mathematics, especially in heterogeneous classrooms in which students with diverse backgrounds and needs are educated together. Small group instructional approaches have been suggested as important to promote learning of low-achieving students and students with special needs ( Kunsch et al., 2007 ). One such approach is cooperative learning (CL), which involves structured collaboration in heterogeneous groups, guided by five principles to enhance group cohesion ( Johnson et al., 1993 ; Johnson et al., 2009 ; Gillies, 2016 ). While CL has been well-researched in whole classroom approaches ( Capar and Tarim, 2015 ), few studies of the approach exist with regard to students with special educational needs (SEN; McMaster and Fuchs, 2002 ). This study contributes to previous research by studying the effects of the CL approach on students’ mathematical problem-solving in heterogeneous classrooms, in which students with special needs are educated alongside with their peers.

Group collaboration through the CL approach is structured in accordance with five principles of collaboration: positive interdependence, individual accountability, explicit instruction in social skills, promotive interaction, and group processing ( Johnson et al., 1993 ). First, the group tasks need to be structured so that all group members feel dependent on each other in the completion of the task, thus promoting positive interdependence. Second, for individual accountability, the teacher needs to assure that each group member feels responsible for his or her share of work, by providing opportunities for individual reports or evaluations. Third, the students need explicit instruction in social skills that are necessary for collaboration. Fourth, the tasks and seat arrangements should be designed to promote interaction among group members. Fifth, time needs to be allocated to group processing, through which group members can evaluate their collaborative work to plan future actions. Using these principles for cooperation leads to gains in mathematics, according to Capar and Tarim (2015) , who conducted a meta-analysis on studies of cooperative learning and mathematics, and found an increase of .59 on students’ mathematics achievement scores in general. However, the number of reviewed studies was limited, and researchers suggested a need for more research. In the current study, we focused on the effect of CL approach in a specific area of mathematics: problem-solving.

Mathematical problem-solving is a central area of mathematics instruction, constituting an important part of preparing students to function in modern society ( Gravemeijer et al., 2017 ). In fact, problem-solving instruction creates opportunities for students to apply their knowledge of mathematical concepts, integrate and connect isolated pieces of mathematical knowledge, and attain a deeper conceptual understanding of mathematics as a subject ( Lester and Cai, 2016 ). Some researchers suggest that mathematics itself is a science of problem-solving and of developing theories and methods for problem-solving ( Hamilton, 2007 ; Davydov, 2008 ).

Problem-solving processes have been studied from different perspectives ( Lesh and Zawojewski, 2007 ). Pólya’s (1948) problem-solving heuristics have largely influenced our perceptions of problem-solving, including four principles: understanding the problem, devising a plan, carrying out the plan, and looking back and reflecting upon the suggested solution. Schoenfeld (2016) suggested the use of specific problem-solving strategies for different types of problems, which take into consideration metacognitive processes and students’ beliefs about problem-solving. Further, models and modelling perspectives on mathematics ( Lesh and Doerr, 2003 ; Lesh and Zawojewski, 2007 ) emphasize the importance of engaging students in model-eliciting activities in which problem situations are interpreted mathematically, as students make connections between problem information and knowledge of mathematical operations, patterns, and rules ( Mousoulides et al., 2010 ; Stohlmann and Albarracín, 2016 ).

Not all students, however, find it easy to solve complex mathematical problems. Students may experience difficulties in identifying solution-relevant elements in a problem or visualizing appropriate solution to a problem situation. Furthermore, students may need help recognizing the underlying model in problems. For example, in two studies by Degrande et al. (2016) , students in grades four to six were presented with mathematical problems in the context of proportional reasoning. The authors found that the students, when presented with a word problem, could not identify an underlying model, but rather focused on superficial characteristics of the problem. Although the students in the study showed more success when presented with a problem formulated in symbols, the authors pointed out a need for activities that help students distinguish between different proportional problem types. Furthermore, students exhibiting specific learning difficulties may need additional support in both general problem-solving strategies ( Lein et al., 2020 ; Montague et al., 2014 ) and specific strategies pertaining to underlying models in problems. The CL intervention in the present study focused on supporting students in problem-solving, through instruction in problem-solving principles ( Pólya, 1948 ), specifically applied to three models of mathematical problem-solving—multiplication/division, geometry, and proportionality.

Students’ problem-solving may be enhanced through participation in small group discussions. In a small group setting, all the students have the opportunity to explain their solutions, clarify their thinking, and enhance understanding of a problem at hand ( Yackel et al., 1991 ; Webb and Mastergeorge, 2003 ). In fact, small group instruction promotes students’ learning in mathematics by providing students with opportunities to use language for reasoning and conceptual understanding ( Mercer and Sams, 2006 ), to exchange different representations of the problem at hand ( Fujita et al., 2019 ), and to become aware of and understand groupmates’ perspectives in thinking ( Kazak et al., 2015 ). These opportunities for learning are created through dialogic spaces characterized by openness to each other’s perspectives and solutions to mathematical problems ( Wegerif, 2011 ).

However, group collaboration is not only associated with positive experiences. In fact, studies show that some students may not be given equal opportunities to voice their opinions, due to academic status differences ( Langer-Osuna, 2016 ). Indeed, problem-solvers struggling with complex tasks may experience negative emotions, leading to uncertainty of not knowing the definite answer, which places demands on peer support ( Jordan and McDaniel, 2014 ; Hannula, 2015 ). Thus, especially in heterogeneous groups, students may need additional support to promote group interaction. Therefore, in this study, we used a cooperative learning approach, which, in contrast to collaborative learning approaches, puts greater focus on supporting group cohesion through instruction in social skills and time for reflection on group work ( Davidson and Major, 2014 ).

Although the cooperative learning approach is intended to promote cohesion and peer acceptance in heterogeneous groups ( Rzoska and Ward, 1991 ), previous studies indicate that challenges in group dynamics may lead to unequal participation ( Mulryan, 1992 ; Cohen, 1994 ). Peer-learning behaviours may impact students’ problem-solving ( Hwang and Hu, 2013 ), and working in groups with peers who are seen as friends may enhance students’ motivation to learn mathematics ( Deacon and Edwards, 2012 ). With the importance of peer support in mind, this study set out to investigate whether the results of the intervention using the CL approach are associated with students’ peer acceptance and friendships.

The Present Study

In previous research, the CL approach has been shown to be a promising approach to teaching and learning mathematics ( Capar and Tarim, 2015 ), but fewer studies have been conducted in whole-class approaches in general and with students with SEN in particular ( McMaster and Fuchs, 2002 ). This study aims to contribute to previous research by investigating the effect of a CL intervention on students’ mathematical problem-solving in grade 5. With regard to the complexity of mathematical problem-solving ( Lesh and Zawojewski, 2007 ; Degrande et al., 2016 ; Stohlmann and Albarracín, 2016 ), the CL approach in this study was combined with problem-solving principles pertaining to three underlying models of problem-solving—multiplication/division, geometry, and proportionality. Furthermore, considering the importance of peer support in problem-solving in small groups ( Mulryan, 1992 ; Cohen, 1994 ; Hwang and Hu, 2013 ), the study investigated how peer acceptance and friendships were associated with the effect of the CL approach on students’ problem-solving abilities. The study aimed to find answers to the following research questions:

a) What is the effect of CL approach on students’ problem-solving in mathematics?

b) Are social acceptance and friendship associated with the effect of CL on students’ problem-solving in mathematics?

Participants

The participants were 958 students in grade 5 and their teachers. According to power analyses prior to the start of the study, 1,020 students and 51 classes were required, with an expected effect size of 0.30 and power of 80%, provided that there are 20 students per class and intraclass correlation is 0.10. An invitation to participate in the project was sent to teachers in five municipalities via e-mail. Furthermore, the information was posted on the website of Uppsala university and distributed via Facebook interest groups. As shown in Figure 1 , teachers of 1,165 students agreed to participate in the study, but informed consent was obtained only for 958 students (463 in the intervention and 495 in the control group). Further attrition occurred at pre- and post-measurement, resulting in 581 students’ tests as a basis for analyses (269 in the intervention and 312 in the control group). Fewer students (n = 493) were finally included in the analyses of the association of students’ social acceptance and friendships and the effect of CL on students’ mathematical problem-solving (219 in the intervention and 274 in the control group). The reasons for attrition included teacher drop out due to sick leave or personal circumstances (two teachers in the control group and five teachers in the intervention group). Furthermore, some students were sick on the day of data collection and some teachers did not send the test results to the researchers.
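As a rough back-of-the-envelope sketch (the authors do not report the exact formula or software they used), the reported requirement can be approximately reproduced by inflating a standard two-group sample-size calculation with the design effect for cluster randomization. The two-sample normal approximation and the design-effect correction are our assumptions, not the authors' stated method:

```r
# Rough reproduction of the reported power analysis (assumption-laden sketch).
alpha <- 0.05; power <- 0.80
d     <- 0.30   # expected effect size
m     <- 20     # students per class
icc   <- 0.10   # intraclass correlation

# Per-group n for a two-group comparison, ignoring clustering
z <- qnorm(1 - alpha / 2) + qnorm(power)
n_per_group <- 2 * (z / d)^2                 # ~175 students per group

# Inflate by the design effect for cluster randomization
deff    <- 1 + (m - 1) * icc                 # 2.9
n_total <- 2 * n_per_group * deff            # ~1,012 students before rounding to whole classes
classes <- ceiling(n_total / m)              # 51 classes
classes * m                                  # 1,020 students, matching the reported requirement
```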

FIGURE 1. Flow chart for participants included in data collection and data analysis.

As seen in Table 1 , classes in both intervention and control groups included 27 students on average. For 75% of the classes, there were 33–36% of students with SEN. In Sweden, no formal medical diagnosis is required for the identification of students with SEN. It is teachers and school welfare teams who decide students’ need for extra adaptations or special support ( Swedish National Educational Agency, 2014 ). The information on individual students’ type of SEN could not be obtained due to regulations on the protection of information about individuals ( SFS 2009 ). Therefore, the information on the number of students with SEN on class level was obtained through teacher reports.

TABLE 1. Background characteristics of classes and teachers in intervention and control groups.

Intervention

The intervention using the CL approach lasted for 15 weeks, and the teachers worked with the CL approach three to four lessons per week. First, the teachers participated in 2 days of training on the CL approach, using a specially elaborated CL manual ( Klang et al., 2018 ). The training focused on the five principles of the CL approach (positive interdependence, individual accountability, explicit instruction in social skills, promotive interaction, and group processing). Following the training, the teachers introduced the CL approach in their classes and focused on group-building activities for 7 weeks. Then, 2 days of training were provided to the teachers, in which the CL approach was embedded in activities in mathematical problem-solving and reading comprehension. Educational materials containing mathematical problems in the areas of multiplication and division, geometry, and proportionality were distributed to the teachers ( Karlsson and Kilborn, 2018a ). In addition to the specific problems, adapted for the CL approach, the educational materials contained guidance for the teachers, in which problem-solving principles ( Pólya, 1948 ) were presented as steps in problem-solving. Following the training, the teachers applied the CL approach in mathematical problem-solving lessons for 8 weeks.

Solving a problem is a matter of goal-oriented reasoning, starting from the understanding of the problem to devising its solution by using known mathematical models. This presupposes that the current problem is chosen from a known context ( Stillman et al., 2008 ; Zawojewski, 2010 ). This differs from typical textbook problem-solving, which aims to train already known formulas and procedures ( Hamilton, 2007 ). Moreover, it is important that students learn modelling according to their current abilities and conditions ( Russel, 1991 ).

In order to create similar conditions in the experimental group and the control group, the teachers were supposed to use the same educational material ( Karlsson and Kilborn, 2018a ; Karlsson and Kilborn, 2018b ), written in light of the specified view of problem-solving. The educational material is divided into three areas—multiplication/division, geometry, and proportionality—and begins with a short teachers’ guide, where a view of problem-solving is presented, based on the work of Pólya (1948) and Lester and Cai (2016). The tasks are constructed in such a way that conceptual knowledge is in focus, not formulas and procedural knowledge.

Implementation of the Intervention

To ensure the implementation of the intervention, the researchers visited each teacher’s classroom twice during the two phases of the intervention period, as described above. During each visit, the researchers observed the lesson, using a checklist comprising the five principles of the CL approach. After the lesson, the researchers gave written and oral feedback to each teacher. As seen in Table 1 , in 18 of the 23 classes, the teachers implemented the intervention in accordance with the principles of CL. In addition, the teachers were asked to report on the use of the CL approach in their teaching and the use of problem-solving activities embedding CL during the intervention period. As shown in Table 1 , teachers in only 11 of 23 classes reported using the CL approach and problem-solving activities embedded in the CL approach at least once a week.

Control Group

The teachers in the control group received 2 days of instruction in enhancing students’ problem-solving and reading comprehension. The teachers were also supported with educational materials including mathematical problems ( Karlsson and Kilborn, 2018b ) and problem-solving principles ( Pólya, 1948 ). However, none of the activities during training or in the educational materials included the CL approach. As seen in Table 1 , only 10 of 25 teachers reported devoting at least one lesson per week to mathematical problem-solving.

Tests of Mathematical Problem-Solving

Tests of mathematical problem-solving were administered before and after the intervention, which lasted for 15 weeks. The tests were focused on the models of multiplication/division, geometry, and proportionality. The three models were chosen based on the syllabus of the subject of mathematics in grades 4 to 6 in the Swedish National Curriculum ( Swedish National Educational Agency, 2018 ). In addition, the intention was to create a variation of types of problems to solve. For each of these three models, there were two tests, a pre-test and a post-test. Each test contained three tasks with increasing difficulty ( Supplementary Appendix SA ).

The tests of multiplication and division (Ma1) were chosen from different contexts and began with a one-step problem, while the following two tasks were multi-step problems. Concerning multiplication, many students in grade 5 still understand multiplication as repeated addition, causing significant problems, as this conception is not applicable to multiplication beyond natural numbers ( Verschaffel et al., 2007 ). This might be a hindrance in developing multiplicative reasoning ( Barmby et al., 2009 ). The multi-step problems in this study were constructed to support the students in multiplicative reasoning.

Concerning the geometry tests (Ma2), it was important to consider a paradigm shift concerning geometry in education that occurred in the mid-20th century, when strict Euclidean geometry gave way to other aspects of geometry like symmetry, transformation, and patterns. van Hiele (1986) prepared a new taxonomy for geometry in five steps, from a visual to a logical level. Therefore, in the tests there was a focus on properties of quadrangles and triangles, and how to determine areas by reorganising figures into new patterns. This means that structure was more important than formulas.

The construction of tests of proportionality (M3) was more complicated. Firstly, tasks on proportionality can be found in many different contexts, such as prescriptions, scales, speeds, discounts, interest, etc. Secondly, the mathematical model is complex and requires good knowledge of rational numbers and ratios ( Lesh et al., 1988 ). It also requires a developed view of multiplication, useful in operations with real numbers, not only as repeated addition, an operation limited to natural numbers ( Lybeck, 1981 ; Degrande et al., 2016 ). A linear structure of multiplication as repeated addition leads to limitations in terms of generalization and development of the concept of multiplication. This became evident in a study carried out in a Swedish context ( Karlsson and Kilborn, 2018c ). Proportionality can be expressed as a/b = c/d or as a/b = k. The latter can also be expressed as a = b∙k, where k is a constant that determines the relationship between a and b. Common examples of k are speed (km/h), scale, and interest (%). An important pre-knowledge in order to deal with proportions is to master fractions as equivalence classes like 1/3 = 2/6 = 3/9 = 4/12 = 5/15 = 6/18 = 7/21 = 8/24 … ( Karlsson and Kilborn, 2020 ). It was important to take all these aspects into account when constructing and assessing the solutions of the tasks.
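A brief worked example of our own (not taken from the study's test items) may make the a = b·k form concrete, using speed as the constant k:

```latex
% Illustrative only: an invented speed example of proportionality as a = b*k.
\[
  \frac{a}{b} = k \quad\Longrightarrow\quad a = b \cdot k,
  \qquad
  k = \frac{150\ \text{km}}{3\ \text{h}} = 50\ \text{km/h},
  \qquad
  a = 5\ \text{h} \cdot 50\ \text{km/h} = 250\ \text{km}.
\]
```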

The tests were graded by an experienced teacher of mathematics (the fourth author) and two students in their final year of teacher training. Prior to grading, acceptable levels of inter-rater reliability were achieved by independent rating of students’ solutions and discussions in which differences between the graders were resolved. Each student response was assigned one point when it contained a correct answer and two points when the student provided argumentation for the correct answer and elaborated on the explanation of his or her solution. The assessment was thus based on quality aspects with a focus on conceptual knowledge. As each subtest contained three questions, it generated three student solutions. So, scores for each subtest ranged from 0 to 6 points and the total scores from 0 to 18 points. To ascertain that pre- and post-tests were equivalent in degree of difficulty, the tests were administered to an additional sample of 169 students in grade 5. The test for each model was conducted separately, as students participated in the pre- and post-test for each model during the same lesson. The order of tests was switched for half of the students in order to avoid the effect of the order in which the pre- and post-tests were presented. Correlation between students’ performance on pre- and post-test was .39 (p < 0.001) for tests of multiplication/division; .48 (p < 0.001) for tests of geometry; and .56 (p < 0.001) for tests of proportionality. Thus, the degree of difficulty may have differed between pre- and post-test.
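For concreteness, the scoring scheme described above can be sketched as follows (our illustration; the item scores are invented):

```r
# Each item is scored 0 (incorrect), 1 (correct answer), or 2 (correct answer with
# argumentation and explanation); three items per subtest, three subtests per student.
item_scores <- matrix(
  c(2, 1, 0,    # multiplication/division (Ma1)
    1, 1, 2,    # geometry (Ma2)
    0, 2, 1),   # proportionality (Ma3)
  nrow = 3, byrow = TRUE
)

subtest_totals <- rowSums(item_scores)   # each subtest ranges from 0 to 6
total_score    <- sum(item_scores)       # total ranges from 0 to 18
```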

Measures of Peer Acceptance and Friendships

To investigate students’ peer acceptance and friendships, peer nominations collected pre- and post-intervention were used. Students were asked to nominate the peers with whom they preferred to work in groups and the peers with whom they preferred to be friends. Negative peer nominations were avoided due to ethical considerations raised by teachers and parents (Child and Nind, 2013). Unlimited nominations were used, as these are considered to have high ecological validity (Cillessen and Marks, 2017). Peer nominations were used as a measure of social acceptance, and reciprocated nominations were used as a measure of friendship. The number of nominations for each student was aggregated and divided by the number of nominators to create a proportion of nominations for each student (Velásquez et al., 2013).
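A minimal sketch of how these measures could be computed from raw nominations (the data layout and column names are assumptions for illustration only):

```r
# Illustrative sketch: peer acceptance as the proportion of received nominations,
# and friendship as reciprocated nominations. 'noms' is assumed to contain one row
# per nomination with columns: classroom, nominator, nominee.
library(dplyr)

nominators_per_class <- noms %>%
  group_by(classroom) %>%
  summarise(n_nominators = n_distinct(nominator), .groups = "drop")

acceptance <- noms %>%
  count(classroom, nominee, name = "n_received") %>%
  left_join(nominators_per_class, by = "classroom") %>%
  mutate(prop_nominated = n_received / n_nominators)  # peer acceptance score

# A friendship is a reciprocated nomination: A nominated B and B nominated A.
friendships <- noms %>%
  inner_join(noms,
             by = c("classroom", "nominator" = "nominee", "nominee" = "nominator"))
```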

Statistical Analyses

Multilevel regression analyses were conducted in R with the lme4 package (Bates et al., 2015) to account for the nested structure of the data. Students’ classroom membership was treated as a level 2 variable. First, we used a model in which students’ results on the tests of problem-solving were modelled as a function of time (pre and post) and group (intervention and control). Second, the same model was applied to the subgroups of students who performed above and below the median at pre-test, to explore whether the CL intervention had a differential effect on student performance. In this second model, results could not be obtained for the geometry tests for the subgroup below the median or for the tests of proportionality for the subgroup above the median; a likely reason is the skewed distribution of the students in these subgroups. Therefore, another model was applied that investigated students’ performance in mathematics at both pre- and post-test as a function of group. Third, students’ scores on social acceptance and friendships were added as an interaction term to the first model, since in our previous study students’ social acceptance changed as a result of the same CL intervention (Klang et al., 2020).
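The following is a minimal sketch of these models in lme4 (the variable names and exact specification are assumptions; the article does not report its code):

```r
# Illustrative sketch of the multilevel models described above.
library(lme4)
library(lmerTest)  # adds p-values for the fixed effects

# Model 1: problem-solving score as a function of time (pre/post) and group
# (intervention/control), with a random intercept for classroom (level 2).
m1 <- lmer(score ~ time * group + (1 | classroom), data = long_data)
summary(m1)
confint(m1, method = "Wald")  # confidence intervals for the estimates

# Model 3: the first model with pre-test social acceptance added as an interaction term.
m3 <- lmer(score ~ time * group * acceptance_pre + (1 | classroom), data = long_data)
```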

The assumptions for the multilevel regression were checked during the analyses (Snijders and Bosker, 2012). The assumption of normality of residuals was met, as verified by visual inspection of quantile-quantile plots. For the subgroups, however, the plotted residuals deviated somewhat from the straight line. The number of outliers, defined as observations with a studentized residual value greater than ±3, varied from 0 to 5, but none of the outliers had a Cook’s distance value larger than 1. The assumption of no multicollinearity was met, as the variance inflation factors (VIF) did not exceed 10. Before the analyses, cases with missing data were deleted listwise.
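A minimal sketch of the assumption checks reported above (the exact functions the authors used are not stated; the packages below are one reasonable choice):

```r
# Illustrative diagnostics for the model m1 sketched earlier.
library(performance)    # check_collinearity() for variance inflation factors
library(influence.ME)   # observation-level influence measures for lme4 models

# Normality of residuals: visual inspection of a quantile-quantile plot.
qqnorm(resid(m1)); qqline(resid(m1))

# Outliers: scaled residuals beyond +/- 3 (approximating studentized residuals).
sum(abs(resid(m1, scaled = TRUE)) > 3)

# Cook's distance per observation; values above 1 would flag influential cases.
infl <- influence(m1, obs = TRUE)
max(cooks.distance(infl))

# Multicollinearity: variance inflation factors should stay below 10.
check_collinearity(m1)
```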

What Is the Effect of the CL Approach on Students’ Problem-Solving in Mathematics?

As seen in the regression coefficients in Table 2, the CL intervention had a significant effect on students’ total mathematical problem-solving scores and on their scores in problem-solving in geometry (Ma2). Judging by the mean values, students in the intervention group appeared to have lower scores on problem-solving in geometry at the start but reached the level of problem-solving of the control group by the end of the intervention. The intervention did not have a significant effect on students’ performance in problem-solving related to the models of multiplication/division and proportionality.


TABLE 2 . Mean scores (standard deviation in parentheses) and unstandardized multilevel regression estimates for tests of mathematical problem-solving.

The question is, however, whether the CL intervention affected students with different pre-test scores differently. Table 2 includes the regression coefficients for the subgroups of students who performed below and above the median at pre-test. As seen in the table, the CL approach did not have a significant effect on students’ problem-solving when the sample was divided into these subgroups. A small negative effect was found for the intervention group in comparison to the control group, but the confidence intervals (CI) for the effect indicate that it was not significant.

Are Social Acceptance and Friendships Associated With the Effect of CL on Students’ Problem-Solving in Mathematics?

As seen in Table 3, students’ peer acceptance and friendships at pre-test were significantly associated with the effect of the CL approach on students’ mathematical problem-solving scores. Changes in students’ peer acceptance and friendships were not significantly associated with the effect of the CL approach on students’ mathematical problem-solving. Consequently, it can be concluded that being nominated by one’s peers and having friends at the start of the intervention may be an important factor in whether participation in group work structured in accordance with the CL approach leads to gains in mathematical problem-solving.


TABLE 3 . Mean scores (standard deviation in parentheses) and unstandardized multilevel regression estimates for tests of mathematical problem-solving, including scores of social acceptance and friendship in the model.

Discussion

In light of the limited number of studies on the effects of CL on students’ problem-solving in whole classrooms (Capar and Tarim, 2015), and for students with SEN in particular (McMaster and Fuchs, 2002), this study sought to investigate whether the CL approach embedded in problem-solving activities has an effect on students’ problem-solving in heterogeneous classrooms. The need for the study was justified by the challenge of providing equitable mathematics instruction to heterogeneous student populations (OECD, 2019). Small-group instructional approaches such as CL are considered promising in this regard (Kunsch et al., 2007). The results showed a significant effect of the CL approach on students’ problem-solving in geometry and on total problem-solving scores. In addition, with regard to the importance of peer support in problem-solving (Deacon and Edwards, 2012; Hwang and Hu, 2013), the study explored whether the effect of CL on students’ problem-solving was associated with students’ social acceptance and friendships. The results showed that students’ peer acceptance and friendships at pre-test were significantly associated with the effect of the CL approach, while change in students’ peer acceptance and friendships from pre- to post-test was not.

The results of the study confirm previous research on the effect of the CL approach on students’ mathematical achievement (Capar and Tarim, 2015). The specific contribution of the study is that it was conducted in classrooms, 75% of which were composed of 33–36% students with SEN. Thus, while a previous review revealed inconclusive findings on the effects of CL on student achievement (McMaster and Fuchs, 2002), the current study adds to the evidence of the effect of the CL approach in heterogeneous classrooms, in which students with special needs are educated alongside their peers. In a small-group setting, students have opportunities to discuss their ideas for solutions to the problem at hand, providing explanations and clarifications, and thus to enhance their understanding of problem-solving (Yackel et al., 1991; Webb and Mastergeorge, 2003).

In this study, in accordance with previous research on mathematical problem-solving (Lesh and Zawojewski, 2007; Degrande et al., 2016; Stohlmann and Albarracín, 2016), the CL approach was combined with training in problem-solving principles (Pólya, 1948) and with educational materials providing support for instruction in the underlying mathematical models. The intention of the study was to provide evidence for the effectiveness of the CL approach over and above instruction in problem-solving, as the problem-solving materials were accessible to teachers of both the intervention and control groups. However, due to implementation challenges, not all teachers in the intervention and control groups reported using the educational materials and training as expected. Thus, it is not possible to draw conclusions about the effectiveness of the CL approach alone. However, in everyday classroom instruction it may be difficult to separate the content of instruction from the activities that are used to mediate this content (Doerr and Tripp, 1999; Gravemeijer, 1999).

Furthermore, for successful instruction in mathematical problem-solving, scaffolding for content needs to be combined with scaffolding for dialogue (Kazak et al., 2015). From a dialogical perspective (Wegerif, 2011), students may need scaffolding into new ways of thinking, involving questioning their understandings and providing arguments for their solutions, in order to create dialogic spaces in which different solutions are voiced and negotiated. In this study, small-group instruction through the CL approach aimed to support discussions in small groups, but the study relies solely on quantitative measures of students’ mathematical performance. Video recordings of students’ discussions might have yielded important insights into the dialogic relationships that arose in group discussions.

Despite the positive findings of the CL approach on students’ problem-solving, it is important to note that the intervention did not have an effect on students’ problem-solving pertaining to the models of multiplication/division and proportionality. Although CL is assumed to be a promising instructional approach, the number of studies on its effect on students’ mathematical achievement is still limited (Capar and Tarim, 2015). Thus, further research is needed on how CL interventions can be designed to promote students’ problem-solving in other areas of mathematics.

The results of this study show that the effect of the CL intervention on students’ problem-solving was associated with students’ initial scores of social acceptance and friendships. Thus, it is possible to assume that students who were popular among their classmates and had friends at the start of the intervention also made greater gains in mathematical problem-solving as a result of the CL intervention. This finding is in line with Deacon and Edwards (2012), who showed the importance of friendships for students’ motivation to learn mathematics in small groups. However, the effect of the CL intervention was not associated with change in students’ social acceptance and friendship scores. These results indicate that students who came to be nominated by a greater number of classmates, or who gained a greater number of friends during the intervention, did not benefit more from the CL intervention. With regard to previously reported inequalities in cooperation in heterogeneous groups (Cohen, 1994; Mulryan, 1992; Langer-Osuna, 2016) and the importance of peer behaviours for problem-solving (Hwang and Hu, 2013), teachers should consider creating inclusive norms and supportive peer relationships when using the CL approach. The demands of solving complex problems may create negative emotions and uncertainty (Hannula, 2015; Jordan and McDaniel, 2014), and peer support may be essential in such situations.

Limitations

The conclusions from the study must be interpreted with caution due to a number of limitations. First, due to the regulation on the protection of individuals (SFS 2009:400), the researchers could not obtain information on the type of SEN for individual students, which limited the study’s possibilities for investigating the effects of the CL approach for these students. Second, not all teachers in the intervention group implemented the CL approach embedded in problem-solving activities, and not all teachers in the control group reported using the educational materials on problem-solving. The insufficient levels of implementation pose a significant challenge to the internal validity of the study. Third, the additional investigation exploring the equivalence in difficulty between the pre- and post-tests, which included 169 students, revealed weak to moderate correlations between students’ performance scores, which may also indicate challenges to the internal validity of the study.

Implications

The results of the study have some implications for practice. Based on the significant effect of the CL intervention on students’ problem-solving, the CL approach appears to be promising for promoting students’ problem-solving. However, as the results of the CL approach were not significant for all subtests of problem-solving, and due to the insufficient levels of implementation, it is not possible to draw firm conclusions about the importance of the CL intervention for students’ problem-solving. Furthermore, it appears to be important to create opportunities for peer contacts and friendships when the CL approach is used in mathematical problem-solving activities.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by the Uppsala Ethical Regional Committee, Dnr. 2017/372. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.

Author Contributions

NiK was responsible for the project and participated in data collection and data analyses. NaK and WK were responsible for the intervention, with a special focus on the educational materials and tests in mathematical problem-solving. PE participated in the planning of the study and the data analyses, including coordinating the analyses of students’ tests. MK participated in designing and planning the study as well as in data collection and data analyses.

Funding

The project was funded by the Swedish Research Council under Grant 2016-04679.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We would like to express our gratitude to teachers who participated in the project.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2021.710296/full#supplementary-material

Barmby, P., Harries, T., Higgins, S., and Suggate, J. (2009). The array representation and primary children's understanding and reasoning in multiplication. Educ. Stud. Math. 70 (3), 217–241. doi:10.1007/s10649-008-9145-1


Bates, D., Mächler, M., Bolker, B., and Walker, S. (2015). Fitting Linear Mixed-Effects Models Using lme4. J. Stat. Soft. 67 (1), 1–48. doi:10.18637/jss.v067.i01

Capar, G., and Tarim, K. (2015). Efficacy of the cooperative learning method on mathematics achievement and attitude: A meta-analysis research. Educ. Sci-theor Pract. 15 (2), 553–559. doi:10.12738/estp.2015.2.2098

Child, S., and Nind, M. (2013). Sociometric methods and difference: A force for good - or yet more harm. Disabil. Soc. 28 (7), 1012–1023. doi:10.1080/09687599.2012.741517

Cillessen, A. H. N., and Marks, P. E. L. (2017). Methodological choices in peer nomination research. New Dir. Child Adolesc. Dev. 2017, 21–44. doi:10.1002/cad.20206


Clarke, B., Cheeseman, J., and Clarke, D. (2006). The mathematical knowledge and understanding young children bring to school. Math. Ed. Res. J. 18 (1), 78–102. doi:10.1007/bf03217430

Cohen, E. G. (1994). Restructuring the classroom: Conditions for productive small groups. Rev. Educ. Res. 64 (1), 1–35. doi:10.3102/00346543064001001

Davidson, N., and Major, C. H. (2014). Boundary crossings: Cooperative learning, collaborative learning, and problem-based learning. J. Excell. Coll. Teach. 25 (3-4), 7.


Davydov, V. V. (2008). Problems of Developmental Instruction: A Theoretical and Experimental Psychological Study. New York: Nova Science Publishers, Inc.

Deacon, D., and Edwards, J. (2012). Influences of friendship groupings on motivation for mathematics learning in secondary classrooms. Proc. Br. Soc. Res. into Learn. Math. 32 (2), 22–27.

Degrande, T., Verschaffel, L., and van Dooren, W. (2016). “Proportional word problem solving through a modeling lens: a half-empty or half-full glass?,” in Posing and Solving Mathematical Problems, Research in Mathematics Education . Editor P. Felmer.

Doerr, H. M., and Tripp, J. S. (1999). Understanding how students develop mathematical models. Math. Thinking Learn. 1 (3), 231–254. doi:10.1207/s15327833mtl0103_3

Fujita, T., Doney, J., and Wegerif, R. (2019). Students' collaborative decision-making processes in defining and classifying quadrilaterals: a semiotic/dialogic approach. Educ. Stud. Math. 101 (3), 341–356. doi:10.1007/s10649-019-09892-9

Gillies, R. (2016). Cooperative learning: Review of research and practice. Ajte 41 (3), 39–54. doi:10.14221/ajte.2016v41n3.3

Gravemeijer, K. (1999). How Emergent Models May Foster the Constitution of Formal Mathematics. Math. Thinking Learn. 1 (2), 155–177. doi:10.1207/s15327833mtl0102_4

Gravemeijer, K., Stephan, M., Julie, C., Lin, F.-L., and Ohtani, M. (2017). What mathematics education may prepare students for the society of the future? Int. J. Sci. Math. Educ. 15 (S1), 105–123. doi:10.1007/s10763-017-9814-6

Hamilton, E. (2007). “What changes are needed in the kind of problem-solving situations where mathematical thinking is needed beyond school?,” in Foundations for the Future in Mathematics Education. Editors R. Lesh, E. Hamilton, and J. Kaput (Mahwah, NJ: Lawrence Erlbaum), 1–6.

Hannula, M. S. (2015). “Emotions in problem solving,” in Selected Regular Lectures from the 12th International Congress on Mathematical Education. Editor S. J. Cho. doi:10.1007/978-3-319-17187-6_16

Hwang, W.-Y., and Hu, S.-S. (2013). Analysis of peer learning behaviors using multiple representations in virtual reality and their impacts on geometry problem solving. Comput. Edu. 62, 308–319. doi:10.1016/j.compedu.2012.10.005

Johnson, D. W., Johnson, R. T., and Johnson Holubec, E. (2009). Circle of Learning: Cooperation in the Classroom . Gurgaon: Interaction Book Company .

Johnson, D. W., Johnson, R. T., and Johnson Holubec, E. (1993). Cooperation in the Classroom . Gurgaon: Interaction Book Company .

Jordan, M. E., and McDaniel, R. R. (2014). Managing uncertainty during collaborative problem solving in elementary school teams: The role of peer influence in robotics engineering activity. J. Learn. Sci. 23 (4), 490–536. doi:10.1080/10508406.2014.896254

Karlsson, N., and Kilborn, W. (2018a). Inclusion through learning in group: tasks for problem-solving. [Inkludering genom lärande i grupp: uppgifter för problemlösning] . Uppsala: Uppsala University .

Karlsson, N., and Kilborn, W. (2018c). It's enough if they understand it. A study of teachers' and students' perceptions of multiplication and the multiplication table [Det räcker om de förstår den. En studie av lärares och elevers uppfattningar om multiplikation och multiplikationstabellen]. Södertörn Stud. Higher Educ., 175.

Karlsson, N., and Kilborn, W. (2018b). Tasks for problem-solving in mathematics. [Uppgifter för problemlösning i matematik] . Uppsala: Uppsala University .

Karlsson, N., and Kilborn, W. (2020). “Teacher’s and student’s perception of rational numbers,” in Interim Proceedings of the 44th Conference of the International Group for the Psychology of Mathematics Education, Interim Vol., Research Reports. Editors M. Inprasitha, N. Changsri, and N. Boonsena (Khon Kaen, Thailand: PME), 291–297.

Kazak, S., Wegerif, R., and Fujita, T. (2015). Combining scaffolding for content and scaffolding for dialogue to support conceptual breakthroughs in understanding probability. ZDM Math. Edu. 47 (7), 1269–1283. doi:10.1007/s11858-015-0720-5

Klang, N., Olsson, I., Wilder, J., Lindqvist, G., Fohlin, N., and Nilholm, C. (2020). A cooperative learning intervention to promote social inclusion in heterogeneous classrooms. Front. Psychol. 11, 586489. doi:10.3389/fpsyg.2020.586489

Klang, N., Fohlin, N., and Stoddard, M. (2018). Inclusion through learning in group: cooperative learning [Inkludering genom lärande i grupp: kooperativt lärande] . Uppsala: Uppsala University .

Kunsch, C. A., Jitendra, A. K., and Sood, S. (2007). The effects of peer-mediated instruction in mathematics for students with learning problems: A research synthesis. Learn. Disabil Res Pract 22 (1), 1–12. doi:10.1111/j.1540-5826.2007.00226.x

Langer-Osuna, J. M. (2016). The social construction of authority among peers and its implications for collaborative mathematics problem solving. Math. Thinking Learn. 18 (2), 107–124. doi:10.1080/10986065.2016.1148529

Lein, A. E., Jitendra, A. K., and Harwell, M. R. (2020). Effectiveness of mathematical word problem solving interventions for students with learning disabilities and/or mathematics difficulties: A meta-analysis. J. Educ. Psychol. 112 (7), 1388–1408. doi:10.1037/edu0000453

Lesh, R., and Doerr, H. (2003). Beyond Constructivism: Models and Modeling Perspectives on Mathematics Problem Solving, Learning and Teaching . Mahwah, NJ: Erlbaum .

Lesh, R., Post, T., and Behr, M. (1988). “Proportional reasoning,” in Number Concepts and Operations in the Middle Grades . Editors J. Hiebert, and M. Behr (Hillsdale, N.J.: Lawrence Erlbaum Associates ), 93–118.

Lesh, R., and Zawojewski, J. (2007). “Problem solving and modeling,” in Second Handbook of Research on Mathematics Teaching and Learning: A Project of the National Council of Teachers of Mathematics. Editor F. K. Lester (Charlotte, NC: Information Age Pub), Vol. 2.

Lester, F. K., and Cai, J. (2016). “Can mathematical problem solving be taught? Preliminary answers from 30 years of research,” in Posing and Solving Mathematical Problems. Research in Mathematics Education .

Lybeck, L. (1981). “Archimedes in the classroom. [Arkimedes i klassen],” in Göteborg Studies in Educational Sciences (Göteborg: Acta Universitatis Gotoburgensis ), 37.

McMaster, K. N., and Fuchs, D. (2002). Effects of Cooperative Learning on the Academic Achievement of Students with Learning Disabilities: An Update of Tateyama-Sniezek's Review. Learn. Disabil Res Pract 17 (2), 107–117. doi:10.1111/1540-5826.00037

Mercer, N., and Sams, C. (2006). Teaching children how to use language to solve maths problems. Lang. Edu. 20 (6), 507–528. doi:10.2167/le678.0

Montague, M., Krawec, J., Enders, C., and Dietz, S. (2014). The effects of cognitive strategy instruction on math problem solving of middle-school students of varying ability. J. Educ. Psychol. 106 (2), 469–481. doi:10.1037/a0035176

Mousoulides, N., Pittalis, M., Christou, C., and Stiraman, B. (2010). “Tracing students’ modeling processes in school,” in Modeling Students’ Mathematical Modeling Competencies . Editor R. Lesh (Berlin, Germany: Springer Science+Business Media ). doi:10.1007/978-1-4419-0561-1_10

Mulryan, C. M. (1992). Student passivity during cooperative small groups in mathematics. J. Educ. Res. 85 (5), 261–273. doi:10.1080/00220671.1992.9941126

OECD (2019). PISA 2018 Results (Volume I): What Students Know and Can Do . Paris: OECD Publishing . doi:10.1787/5f07c754-en


Pólya, G. (1948). How to Solve it: A New Aspect of Mathematical Method . Princeton, N.J.: Princeton University Press .

Russel, S. J. (1991). “Counting noses and scary things: Children construct their ideas about data,” in Proceedings of the Third International Conference on the Teaching of Statistics. Editor D. Vere-Jones (Dunedin, NZ: University of Otago), 141–164.

Rzoska, K. M., and Ward, C. (1991). The effects of cooperative and competitive learning methods on the mathematics achievement, attitudes toward school, self-concepts and friendship choices of Maori, Pakeha and Samoan Children. New Zealand J. Psychol. 20 (1), 17–24.

Schoenfeld, A. H. (2016). Learning to think mathematically: Problem solving, metacognition, and sense making in mathematics (reprint). J. Edu. 196 (2), 1–38. doi:10.1177/002205741619600202

SFS 2009:400. Offentlighets- och sekretesslag [Law on publicity and confidentiality]. Retrieved on the 14th of October from https://www.riksdagen.se/sv/dokument-lagar/dokument/svensk-forfattningssamling/offentlighets--och-sekretesslag-2009400_sfs-2009-400

Snijders, T. A. B., and Bosker, R. J. (2012). Multilevel Analysis. An Introduction to Basic and Advanced Multilevel Modeling . 2nd Ed. London: SAGE .

Stillman, G., Brown, J., and Galbraith, P. (2008). “Research into the teaching and learning of applications and modelling in Australasia,” in Research in Mathematics Education in Australasia 2004–2007. Editors H. Forgasz, A. Barkatas, A. Bishop, B. Clarke, S. Keast, W. Seah, and P. Sullivan (Rotterdam: Sense Publishers), 141–164. doi:10.1163/9789087905019_009

Stohlmann, M. S., and Albarracín, L. (2016). What is known about elementary grades mathematical modelling. Edu. Res. Int. 2016, 1–9. doi:10.1155/2016/5240683

Swedish National Educational Agency (2014). Support measures in education – on leadership and incentives, extra adaptations and special support [Stödinsatser I utbildningen – om ledning och stimulans, extra anpassningar och särskilt stöd] . Stockholm: Swedish National Agency of Education .

Swedish National Educational Agency (2018). Syllabus for the subject of mathematics in compulsory school. Retrieved in July 2021 from https://www.skolverket.se/undervisning/grundskolan/laroplan-och-kursplaner-for-grundskolan/laroplan-lgr11-for-grundskolan-samt-for-forskoleklassen-och-fritidshemmet?url=-996270488%2Fcompulsorycw%2Fjsp%2Fsubject.htm%3FsubjectCode%3DGRGRMAT01%26tos%3Dgr&sv.url=12.5dfee44715d35a5cdfa219f

van Hiele, P. (1986). Structure and Insight. A Theory of Mathematics Education . London: Academic Press .

Velásquez, A. M., Bukowski, W. M., and Saldarriaga, L. M. (2013). Adjusting for Group Size Effects in Peer Nomination Data. Soc. Dev. 22 (4), a–n. doi:10.1111/sode.12029

Verschaffel, L., Greer, B., and De Corte, E. (2007). “Whole number concepts and operations,” in Second Handbook of Research on Mathematics Teaching and Learning: A Project of the National Council of Teachers of Mathematics . Editor F. K. Lester (Charlotte, NC: Information Age Pub ), 557–628.

Webb, N. M., and Mastergeorge, A. (2003). Promoting effective helping behavior in peer-directed groups. Int. J. Educ. Res. 39 (1), 73–97. doi:10.1016/S0883-0355(03)00074-0

Wegerif, R. (2011). “Theories of Learning and Studies of Instructional Practice,” in Theories of learning and studies of instructional Practice. Explorations in the learning sciences, instructional systems and Performance technologies . Editor T. Koschmann (Berlin, Germany: Springer ). doi:10.1007/978-1-4419-7582-9

Yackel, E., Cobb, P., and Wood, T. (1991). Small-group interactions as a source of learning opportunities in second-grade mathematics. J. Res. Math. Edu. 22 (5), 390–408. doi:10.2307/749187

Zawojewski, J. (2010). “Problem solving versus modeling,” in Modeling Students’ Mathematical Modeling Competencies: ICTMA. Editors R. Lesh, P. Galbraith, C. R. Haines, and A. Hurford (New York, NY: Springer), 237–243. doi:10.1007/978-1-4419-0561-1_20

Keywords: cooperative learning, mathematical problem-solving, intervention, heterogeneous classrooms, hierarchical linear regression analysis

Citation: Klang N, Karlsson N, Kilborn W, Eriksson P and Karlberg M (2021) Mathematical Problem-Solving Through Cooperative Learning—The Importance of Peer Acceptance and Friendships. Front. Educ. 6:710296. doi: 10.3389/feduc.2021.710296

Received: 15 May 2021; Accepted: 09 August 2021; Published: 24 August 2021.


Copyright © 2021 Klang, Karlsson, Kilborn, Eriksson and Karlberg. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Nina Klang, [email protected]


Kahoot! stands with Ukraine

Kahoot! is committed to supporting Ukrainian educators and learners affected by the current crisis. To protect the integrity of our platform and our users, we will suspend offering Kahoot!’s services in Russia, with the exception of self-study.


Ukrainian educators and learners need our support

We are deeply troubled and concerned by the violence and loss of life resulting from the Russian invasion of Ukraine. We stand with the people of Ukraine and we hope for the swiftest and most peaceful possible end to the current crisis. 

Kahoot! has received a number of requests from schools and educators in Ukraine requesting the help of our services to continue teaching despite the disruption of the war. We have supported each of these and we are now offering Kahoot! EDU solutions for free for both K-12 and higher education institutions for one year to Ukrainian schools in need. In addition, we are fast-tracking translation and localization of the Kahoot! platform into Ukrainian. 

Suspending commercial services and sales in Russia

Our commercial footprint in the Russian market is very limited. We do not have offices or representation in the country, nor do we have any physical operations or data services there. The overwhelming majority of our users in Russia are teachers and students using our free service.

Kahoot! is abiding by the international sanctions regime, and does not allow sales to sanctioned individuals or entities in Russia. Shortly after the Russian invasion of Ukraine, Kahoot! initiated a process to suspend offering of all commercial services in Russia. This includes but is not limited to online sales, assisted sales, app store sales and prohibiting sales to Russian corporations and organizations.

Prioritizing safe and secure use of the Kahoot! platform

As part of our mission to make learning awesome, and as education remains a fundamental human right, we offer teachers, students and personal users free access to our platform. We do this in more than 200 countries and regions in a spirit similar to public commons services, such as Wikipedia. 

Similarly, inclusivity is one of Kahoot!’s overarching values. As such, our aim is to, whenever and wherever possible, offer children, schools and others the opportunity to use digital tools for impactful education and learning, irrespective of their background or location. This has been our guiding principle also for offering our service in Russia.

Among our first responses to the crisis was to swiftly expand our global moderation team’s monitoring on all Russia-related content to safeguard the integrity of the platform. 

However, as the situation continues to escalate, it is vital that we are able to ensure that our platform is used according to our own guidelines and standards. Therefore, in addition to suspending sales, we will be taking all possible and necessary steps to suspend access to Kahoot! services in Russia, with the eventual exception of self-study mode which will feature only content verified by Kahoot!.

This will enable students, school children and other individual users to continue their learning journeys both safely and responsibly. We will continue to assess ways in which our services can be offered safely and responsibly to support all learners and educators, also those based in Russia. 

Supporting our employees 

At Kahoot!, we are not just a team in name, we are a team in practice. As such, we are committed to the well-being of our employees, especially those with ties to Ukraine, or those that in other ways are particularly affected by the war. We are providing these colleagues with any support we can. 

Acknowledging the current situation, the Kahoot! Group made an emergency aid donation to Save the Children and the Norwegian Refugee Council. This is a contribution to support life-saving assistance and protection for innocent Ukrainian children, families and refugees. 

As the situation in Ukraine continues to develop our teams across the company are actively monitoring the crisis so that we can respond in the most responsible and supportive way possible. 

Our hearts go out to the people of Ukraine, their loved ones, and anyone affected by this crisis. 



PROTOCOL: Problem solving before instruction (PS‐I) to promote learning and motivation in child and adult students

Eduardo González-Cabañes

1 Department of Psychology, Psychology Faculty, University of Oviedo, Oviedo Asturias, Spain

Trinidad Garcia

Catherine Chase

2 Department of Human Development, Teachers College, Columbia University, New York New York, USA

Jose Carlos Núñez


This is the protocol for a Campbell systematic review. The purpose of this review is to synthesize the evidence about the efficacy of problem solving before instruction (PS‐I) to promote learning and motivation in students. Specifically, this review is designed to answer the following questions:

  • To what degree does PS‐I affect learning and motivation, relative to alternative learning approaches?
  • To what extent is the efficacy of PS‐I associated with the use of different design features within it, including the use of group work, contrasting cases, and metacognitive guidance in the initial problem‐solving activity, and the use of explanations that build upon students' solutions in the explicit instruction phase?
  • To what extent is the relative efficacy of PS‐I associated with the contextual factors of the activities used as control, the age of students, the duration of the interventions, and the learning domain?
  • What is the quality of the existing evidence for evaluating these questions, in terms of the number of studies included and potential biases derived from publication and methodological restrictions?

1. BACKGROUND

1.1. Description of the condition

A typical form of instruction is for teachers to explain a new concept or procedure and then ask students to apply it in a set of activities. However, some research suggests that it may be more beneficial to first give students the opportunity to problem‐solve in relation to the new contents before providing any explicit instruction on them. This review asks how these two approaches to instruction compare in promoting motivation and learning. For example, before explaining how to measure statistical variability, how to solve an equation, or a psychological theory that accounts for our attentional experiences, would it be better if students first tried to find their own solutions to these problems (e.g., Carnero‐Sierra, 2020; Fyfe, 2014; Kapur, 2012), or would it be better to start by providing them with instructions and concepts for solving the problems?

Problem‐solving activities are highly valued in education because they offer the opportunity for students to practice at their own pace (Jackson,  2021 ). Allowing students to take their time is an important part of the reflection processes. However, deep reflection processes are not always activated in problem‐solving activities. When students know the basic procedures to solve them, the problems often become a routine that students solve mechanically without devoting enough attention to the structural aspects (Moore,  1998 ). To encourage these reflection processes, it might be useful to give students the opportunity to problem‐solve before they receive any explanation about the relevant procedures (Schwartz,  2004 ).

These types of interventions that combine an initial phase of problem‐solving and a following phase of explicit instruction have been formulated in different specific approaches, such as the Productive Failure approach (Kapur,  2012a ), the Invention approach (Schwartz,  2004 ), or problem‐solving before instruction (Loibl,  2014a ). In this review we will generally refer to all these related approaches as problem‐solving before instruction (PS‐I).

It has been argued that PS‐I interventions can have important implications in both learning and motivation. Specifically, generating solutions in the initial problem‐solving phase can help students become more aware of their knowledge gaps (Loibl,  2014 ), activate and differentiate prior knowledge (Kapur,  2012 ), and adopt healthy motivations (Belenky,  2012 ). In this regard, several studies have shown that students who learned through PS‐I, in comparison to students who directly received explanations of the target concepts and procedures, reported higher interest in the content taught (Glogger‐Frey,  2015 ; Weaver,  2018 ). Also, they demonstrated higher understanding of the content and greater capacity to transfer this understanding to novel situations (Glogger‐Frey,  2017 ; Kapur,  2014 ; Schwartz,  2011 ; Weaver,  2018 ).

However, PS‐I can sometimes produce negative reactions related to learning and motivation. During the initial problem‐solving, students can feel overchallenged with the search for many possible solutions. They might spend most of the time paying attention to irrelevant aspects (Clark,  2012 ). Also, the uncertainty of not finding correct solutions in this task can be frustrating, and students might end up acquiring a passive role (Clark,  2012 ). There are some studies that have shown greater negative affect (Lamnina,  2019 ), lower motivation (Glogger‐Frey,  2015 ), and reduced learning (Fyfe,  2014 ; Glogger‐Frey,  2015 ; Newman,  2019 ) for students in PS‐I interventions than for students in alternative interventions.

Considering this variability of results in the literature, it is important to systematically review the evidence concerning the efficacy of PS‐I, and concerning the conditions that can influence this efficacy. The review may also have important implications for educational practice. Many instructors have negative attitudes towards the uncertainty of starting lessons with problem‐solving activities in which students can experience initial failures and negative affect (Pan, 2020), and empirical evidence on PS‐I's efficacy might help these instructors reduce this uncertainty and make more informed decisions.

In terms of learning, it is important to evaluate to what extent PS‐I can promote the development of conceptual knowledge, which refers to the understanding of the content taught, and transfer, which refers to the capacity to apply this understanding to novel situations. Several national and international evaluations suggest that a great proportion of students learn by memorizing procedures (Mallart,  2014 ; OECD,  2016 ; Silver,  2000 ). For example, in the Spanish math examinations to access the university, it was noted that the majority of students passed the exams because they were able to solve problems that were similar to those seen in class. However, the majority failed to answer comprehension questions, or to correctly solve problems where they had to flexibly apply the procedures learnt (Mallart,  2014 ). Considering that real‐life situations generally differ from class situations, acquiring deeper forms of knowledge is of great relevance in the future autonomy of students, especially in a world that is increasingly changing because of globalization and development of new technologies (OECD,  2014 ).

Of no less importance is the potential efficacy of PS‐I to promote motivation for learning. Several evaluations suggest that a great proportion of students are not motivated to learn class content (Council National Research,  2003 ; OECD,  2018 ). A recent PISA evaluation with high school students from 79 countries showed that most students reported studying before or after their last class, but only 48% of these students reported interest as one of their motives (OECD,  2018 ). Promoting motivation for learning is of great importance, because, rather than just being a predisposition that can help learning (Chiu,  2008 ; Liu,  2020 ; Mega,  2014 ), it is a main factor that determines the well‐being of students during the learning process (Ryan,  2009 ).

Four reviews have provided interesting evidence about the comparison of PS‐I versus other educational interventions (Darabi,  2018 ; Jackson,  2021 ; Loibl,  2017 ; Sinha,  2021 ). They suggested a general efficacy of PS‐I to promote learning (Darabi,  2018 ), and more specifically to promote conceptual knowledge and transfer, but not procedural knowledge (Loibl,  2017 ; Sinha,  2021 ). Furthermore, the qualitative review of Loibl,  2017 and the meta‐analysis of Sinha,  2021 suggested that PS‐I was associated with higher efficacy when it was presented with guidance strategies to help students become more aware of their knowledge gaps and focus their attention on relevant features of the contents. However, these reviews are limited because they only used a small number of databases or search techniques for the identification of studies. Additionally, there are important aspects not addressed in these previous reviews, such as the evaluation of motivational outcomes, or the consideration of the types of control activities used for the comparisons.

The upcoming review aims to address these aspects and to update the studies included in these previous reviews. We will review studies in which PS‐I interventions are systematically compared with alternative interventions that provide the same contents and activities but provide explicit instruction from the beginning, and that quantify the results with conceptual knowledge tests, transfer tests, and self‐reports of motivation for learning. Additional exploratory analyses might include studies that either have more general measures of learning or lack such strict control of the equivalence of learning activities between conditions. The general goal is to provide educators and policy makers with information that can help them make decisions about introducing PS‐I in educational practice, and about the various factors that can influence the efficacy of PS‐I.

1.2. Description of the intervention

The uniqueness of PS‐I educational interventions resides in the combination of two phases: an initial problem‐solving activity to explore a concept that students have not yet learned, and a subsequent explicit instruction phase to explain the concept. The initial problem‐solving activity consists of one or several problems that students can explore with their prior knowledge but must develop their own criteria to solve, because the main solution procedures are based on concepts they have not yet learned, and no content‐related guidance is given during problem‐solving. Students are not expected to find the correct solutions. However, this initial exploration is thought to prepare them to learn from the subsequent explicit instruction. The second phase of explicit instruction consists of any activity in which students can read or listen to explanations of the target concepts, such as a lecture, a video, or an interactive discussion with concept explanations.

A typical approach in which PS‐I has been conducted is called Invention (Schwartz,  2004 ). In this approach the initial problem‐solving activity is generally formulated with invention goals, which refers to instructions to infer a general procedure, rule, or concept. Also, the data of the problem is often presented in small data sets or contrasting cases, which are examples that differ on a few key features (e.g., Section 1a of Table  1 ). This type of data presentation has been applied in a great variety of learning areas, including physics (Schwartz,  2011 ), statistics (González‐Cabañes,  2021 ), and educational sciences (Glogger‐Frey,  2015 ). The combination of invention goals and contrasting cases is meant to encourage students to discern and actively integrate relevant problem features (Schwartz,  2004 ; Schwartz,  2011 ; refer to How the intervention might work for a further description of these features).

TABLE 1. Examples of design features in problem solving before instruction (PS‐I).

Note: Table adapted from Loibl (2017) using transformations of the PS‐I intervention in Kapur (2012) as examples.

Another typical PS‐I approach is Productive Failure (Kapur,  2012a ). Studies following this approach present students with rich problems, in which the data is complex and relevant features are not highlighted (e.g., Section 1b of Table  1 ). The problem generally allows several possible solutions, and the ensuing explicit instruction includes explanations that build on students’ solutions, commenting on the affordances and limitations of students’ solutions in comparison to the affordances and limitations of the correct solutions (e.g., Section 3a of Table  1 ; refer to How the intervention might work for a further description). The Productive Failure approach has also been used in a great variety of learning areas, including physics (Kapur,  2010 ), statistics (Kapur,  2012 ), maths (Mazziotti,  2019 ), and biology (Song,  2018 ). It is emphasized in this approach that ‘failures’ to reach the correct solutions in the initial problem are not conceived as failures, but as exploration opportunities that help students activate prior knowledge, and as opportunities to comprehend relevant features and relations when these solutions are compared with the correct ones.

To give a sense of the variability across PS‐I interventions, it is also important to consider the context in which the interventions are implemented. PS‐I interventions are generally implemented by the teachers or by the researchers who conduct the evaluations. They can be applied to different types of domains. In the literature they have been generally applied in math or science domains such as statistics or physics (e.g., González‐Cabañes,  2020 ; Kapur,  2014 ; Newman,  2019 ; Schwartz,  2011 ), but also in other domains such as psychology, or pedagogy (e.g., Carnero‐Sierra,  2020 ; Glogger‐Frey,  2015 ; Schwartz,  1998 ). PS‐I methods have been applied successfully with students from age 12 through adulthood (e.g., González‐Cabañes,  2020 ; Kapur,  2014 ; Schwartz,  2011 ), and a few studies have used PS‐I methods with primary school children (e.g., Chase,  2017 ; Mazziotti,  2019 ). PS‐I interventions have also been conducted in collaborative contexts, in which groups of two or more students work on the initial problem‐solving activity (e.g., Glogger‐Frey,  2017 ; Kapur,  2012a ), or in individual contexts, in which students work by themselves on the learning activities (e.g., Glogger‐Frey,  2015 ; González‐Cabañes,  2020 ). For the purpose of generalizing our findings broadly, in the upcoming review we will include studies conducted with students of any age, in any type of course, with collaborative or individual work, and with any kind of implementer.

There is also great potential variability regarding the intensity of PS‐I interventions. The duration of the initial problem‐solving phase generally spans between 12 min (e.g., González‐Cabañes,  2020 ) and 100 min (e.g., Kapur,  2012 ). The number of problems included in this initial activity can also vary, generally ranging between one (e.g., Glogger‐Frey,  2015 ; Weaver,  2018 ) and two (e.g., Chase,  2017 ; Glogger‐Frey,  2017 ). In regard to the times in which PS‐I is implemented within an intervention, generally PS‐I is only applied once for one specific lesson (e.g., Chase,  2017 ; Glogger‐Frey,  2017 ; González‐Cabañes,  2020 ; Kapur,  2012 ; Weaver,  2018 ), but there are other studies that applied it repeatedly over a longer time frame (e.g., Likourezos,  2017 ). Considering all these factors and the different potential durations of the explicit instruction phase, there are interventions with a great variety of total time invested.

It is just as important to consider the variability of the interventions used as controls to compare the efficacy of PS‐I. PS‐I is often compared with what are generally called ‘Instruction before Problem‐Solving’ interventions (I‐PS). Like PS‐I, these interventions also include a problem‐solving phase related to the target contents, but only after students have received some explicit instruction about them. This initial instruction is often provided through a lecture (e.g., Kapur, 2014; Weaver, 2018) or through worked examples to study (e.g., Schwartz, 2011). Worked examples are problems that show the resolution procedures. Interventions with no problem‐solving phase are also used, in which the initial problem‐solving activity of PS‐I is substituted with the study of worked examples (e.g., Glogger‐Frey, 2015; Glogger‐Frey, 2017). Lastly, other comparison interventions are more similar to PS‐I in the sense that they also start with the initial problem‐solving activity, but they provide some content guidance along with it; for example, it is common that some parts of the solution procedures are given in written form (e.g., Likourezos, 2017; Newman, 2019). All of these types of comparisons will be considered in this review insofar as they include the same activities in the instruction phase as the PS‐I intervention. Yet, separate meta‐analyses will be conducted for each type of comparison.
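Purely as an illustration of how such separate syntheses could be run (the protocol does not commit to specific software; the metafor package and the column names below are assumptions):

```r
# Illustrative sketch: a random-effects meta-analysis fitted separately for each
# type of comparison condition (e.g., I-PS, worked-example study, guided exploration).
library(metafor)

# 'effects' is assumed to hold one standardized mean difference (yi) with its
# sampling variance (vi) per study, plus the type of control condition.
for (ctrl in unique(effects$comparison_type)) {
  dat <- effects[effects$comparison_type == ctrl, ]
  fit <- rma(yi = yi, vi = vi, data = dat, method = "REML")
  cat("\nComparison with:", ctrl, "\n")
  print(fit)
}
```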

1.3. How the intervention might work

There are several mechanisms of PS‐I interventions that can influence learning and motivation, either positively or negatively. These mechanisms can interact, either compensating or reinforcing each other. Figure  1 depicts a proposal of these mechanisms, which is based on the theoretical proposal of Loibl ( 2017 ) regarding the cognitive PS‐I mechanisms that influence learning, but it aims to integrate motivational mechanisms within it.

FIGURE 1. Theoretical model of different variables that might be associated with the efficacy of problem solving before instruction (PS‐I).

1.3.1. Potential PS‐I learning mechanisms

One potential mechanism through which PS‐I can favour learning is the opportunity in the initial problem‐solving activity to activate, differentiate, and generate prior knowledge in relation to the concepts that will be explained later (Kapur,  2012a ). As students try to explore different solutions, they can become familiar with the problem situation and relevant features of the concepts to be explained. This familiarization can help students to more easily understand and integrate the explanations given later.

Furthermore, PS‐I also gives students a creative role during the exploration of the initial problem. Students can generate solutions using their own ideas, including ideas seen in previous classes, but also ideas from real-life experiences. Ideas from real-life experiences are accessible to all students and can constitute additional and important support for integrated learning (Kapur, 2011a).

Several studies support the efficacy of activating prior knowledge with this creative component of generating personal ideas. Relative to other interventions in which prior knowledge is activated through exploratory activities without this creative component, PS‐I led to greater conceptual understanding and transfer at the end of the lesson (Glogger‐Frey, 2017; Kapur, 2011; Kapur, 2014a; Schwartz, 2011). It is also interesting that the number of solutions generated by students during the initial problem‐solving phase of PS‐I has been associated with greater conceptual knowledge and transfer, regardless of whether the generated solutions were right or wrong (Kapur, 2011a; Kapur, 2012).

Another complementary mechanism of PS‐I that can favour learning is its potential to increase awareness of knowledge gaps. Humans often process information superficially, and unconsciously use this superficial knowledge to support a false illusion of understanding (Kahneman,  2011 ). In this regard, the experience of impasses within the initial problem can help students to become more aware of their knowledge gaps, which in turn can facilitate further exploration and recognition of deep features (Chi,  2000 ; VanLehn,  1999 ).

In support of these claims, several studies showed that students in PS‐I interventions reported higher awareness of knowledge gaps than students in alternative interventions in which the new concepts were directly explained from the beginning (Glogger‐Frey, 2015; Loibl, 2014). Also, students in PS‐I interventions had a better memory of the structural components of the problems presented than students in other interventions in which the same problems were presented after some explanations (Schwartz, 2011) or together with explanations (Glogger‐Frey, 2017).

As discussed in Loibl ( 2017 ), it is important to consider the synergy that can occur between these mechanisms. It is likely that the greater the creative role assumed by the students, the greater the number of solutions they try, and the greater their activation of prior knowledge. In turn, it is also likely that the greater number of solutions attempted leads to greater opportunities to make mistakes, find impasses, and become aware of their learning gaps (see diagonal arrows in Figure  1 towards conceptual knowledge). The process can also be recursive. As learners become more aware of their learning gaps, prior knowledge activation is also more likely to be related to relevant aspects of concepts.

Lastly, another mechanism that can synergically reinforce these learning processes is the potential effect of PS‐I to increase motivation for learning, which is discussed in the following section. Motivation is defined as our desire to engage in the learning activity (Núñez,  2009 ), and can increase engagement in all learning processes previously mentioned (see wide white arrow coming out of Motivation for learning in Figure  1 ).

1.3.2. Potential PS‐I motivation mechanisms

PS‐I can increase motivation for learning through several potential mechanisms (see horizontal arrows in Figure 1). First, it can be facilitated by the previously hypothesized PS‐I effect of promoting awareness of knowledge gaps. Some theories assume that we are intrinsically driven to acquire knowledge, and the mere perception of knowledge gaps often triggers curiosity, or the desire to fill that gap (Golman, 2018; Loewenstein, 1994). Several studies have shown that students who learned through PS‐I experienced greater curiosity and interest than students who started the learning process with explicit instruction (Glogger‐Frey, 2015; Lamnina, 2019; Weaver, 2018). Also, the study by Glogger‐Frey (2015) found associations between the perception of knowledge gaps in PS‐I and curiosity.

The creative role that students can adopt in PS‐I can stimulate achievement motivation, which refers to the motivation to perform well in the learning activities (Núñez,  2009 ). Students in the initial problem‐solving activity of PS‐I are creating information, rather than just assimilating information given from outside, which can trigger a sense of responsibility and ownership in the task of constructing knowledge (Ryan,  2009 ; Urdan,  2006 ). Based on that, some students might try to perform the best they can to see their performance capabilities, and to maximize their learning. Literature in this sense is very scarce, but it was observed in one study that high school students in a PS‐I condition for learning geometry experienced higher achievement motivation than students in more guided interventions (Likourezos,  2017 ).

It is also important to consider the interrelation between learning and motivation. The hypothesized higher learning in PS‐I can lead to a higher sense of self‐efficacy for students in this condition, which in turn can increase motivation for learning (Núñez,  2009 ).

1.3.3. Potential negative effects of PS‐I on learning

Although experiencing impasses can have important benefits by triggering awareness of knowledge gaps, it might also have negative implications. Given students' inexperience with the topic, they are likely to spend time attending to irrelevant information when trying to resolve the impasses. In turn, this can generate extraneous cognitive load (Clark,  2012 ), which refers to the saturation of attentional capacity caused by processing irrelevant aspects, and can reduce the attentional resources available to focus on important information (Sweller,  2019 ). Therefore, this extraneous cognitive load can interfere with the recognition of deep features during the initial problem (Clark,  2012 ).

In this regard, several studies have shown that students in the initial problem‐solving activity of PS‐I reported higher extraneous cognitive load than students who faced similar problems that included explanations of the solution processes (Glogger‐Frey,  2015 ; Glogger‐Frey,  2017 ; Likourezos,  2017 ). In turn, this higher extraneous cognitive load was associated with lower learning (Glogger‐Frey,  2015 ; Glogger‐Frey,  2017 ), even when final learning was higher for students in the PS‐I condition (Glogger‐Frey,  2017 ). Overall, these results suggest a complex interaction of negative and positive effects in PS‐I interventions, in which the positive mechanisms we have described are balanced against the extraneous cognitive load that some students can experience (see the dashed arrow from extraneous cognitive load towards recognition of deep features in Figure  1 ).

1.3.4. Potential negative effects of PS‐I on motivation

A potential factor that can demotivate students in PS‐I interventions is the frustration they can feel in the initial problem‐solving phase (Clark,  2012 ). Frustration can arise in the initial problem‐solving activity because of the extraneous cognitive load experienced or because of the sensation of failing to reach the correct solution during impasses. In turn, frustration can have demotivating effects, such as reducing intrinsic motivation (Loderer,  2018 ) or contributing to fatigue (Pekrun,  2011 ; Pekrun,  2012 ). Recent studies have shown higher frustration (González‐Cabañes,  2020 ) or negative affect (Lamnina,  2019 ) in PS‐I students versus students in a typical instruction condition in which the learning process started with explanations.

1.3.5. Hypotheses about general effects

Considering this variety of potential positive and negative mechanisms, it is of great importance to study the final effects on learning and motivation. In line with the previous reviews of Loibl ( 2017 ) and Sinha ( 2021 ), we expect that PS‐I, in comparison with alternative interventions, will be associated with higher performance in post‐tests of conceptual knowledge taken after the lesson, but not with performance in concurrent post‐tests of procedural knowledge. Procedural knowledge can be acquired through memorization, and therefore the described PS‐I mechanisms of promoting activation of prior knowledge and awareness of knowledge gaps might have little influence on it. Yet, we expect that these mental processes can greatly impact conceptual knowledge, which refers to the understanding of principles and relationships that underlie concepts and procedures. Conceptual knowledge relies not only on memorization, but also on the identification of structural features of the concepts. These mental processes can also have a great impact on transfer. Transfer can be facilitated by the activation of prior knowledge, as it can rely on the integration of prior knowledge with new knowledge (Loibl,  2017 ), and more generally by the acquisition of conceptual knowledge (Mayer,  2003 ). Only with a clear mental representation of how the procedures work can we perceive whether they generalize to other contexts.

Although there is no previous review regarding motivation for learning, we expect that PS‐I interventions will be associated with higher scores in self‐reports of interest taken after the lesson, because of the effects PS‐I can have on achievement motivation and curiosity.

1.3.6. Factors that can moderate the efficacy of PS‐I

In spite of these hypothesized general effects of PS‐I on learning and motivation, a great variety of factors can moderate them. Among them are design features of the interventions, the intensity of the interventions, the age of students, the learning domain, and the activities used as control. It is important to note that, as previously described, learning and motivation can benefit each other, and therefore we will consider all moderators as potentially influencing both.

Design features

The different design features used in PS‐I interventions can have different effects on the previously described mechanisms, and therefore in the general efficacy of PS‐I on learning and motivation. Below we describe some of the design features frequently discussed and used in the literature, which we will consider as potential moderators in the present review.

  • Contrasting cases . Contrasting cases are a form of guidance often used to present the data of the initial problem (e.g., Loibl,  2020 ; Schwartz,  2011 ). They consist of examples that differ on a few features that are relevant to the target knowledge (Schwartz,  2004 ). For example, in the contrasting cases shown in Section 1 of Table  1 , which were designed for learning about statistical variability of data distributions, the distributions of scores for players A and B differ in range, but not in other features such as the mean, the number of scores, or the spread of scores. In contrast, the distributions of players B and C differ in the spread of the scores, but not in the range or other characteristics. It has been argued that comparing the cases can help students focus on the relevant features of the problem (Loibl,  2017 ; Salmerón,  2013 ; Schwartz,  2004 ). Also, contrasting cases can help students become aware of their knowledge gaps, because students can rank the cases and evaluate their own solutions against them (Loibl,  2017 ; Schwartz,  2004 ). Finally, contrasting cases may reduce extraneous cognitive load during problem‐solving and its associated frustration.
  • Metacognitive guidance . The initial problem is often presented with metacognitive guidance during problem solving (e.g., Holmes,  2014 ; Roll,  2012 ). Metacognitive guidance refers to prompts that do not address content, but rather are meant to stimulate conscious mental strategies such as monitoring and reflection processes. This type of guidance can stimulate mental processes that can lead students to become more aware of knowledge gaps and to recognize deep features (Holmes,  2014 ; Roll,  2012 ). For example, the metacognitive guidance in Section 2 of Table  1 can trigger students to reflect on critical features they perceive in the data distributions and the limitations of the solution ideas they generate.
  • Collaborative work . Allowing students to work on the initial problem‐solving activity in small groups, rather than asking them to work individually, might influence the relative efficacy of PS‐I (Mazziotti,  2019 ). Collaborative problem‐solving is a context that brings opportunities for elaborating and critiquing ideas (Kapur,  2012a ; Webb,  2014 ). Several studies have found that problem‐solving in pairs was associated with higher performance than working individually (Teasley,  1995 ; Webb,  1993 ). Also, the extent to which students engage in dialectical argumentation with each other's ideas and explain their problem‐solving strategies has been associated with higher problem‐solving achievement and higher acquisition of conceptual knowledge (Asterhan,  2009 ; Webb,  2014 ). Based on this, we expect that working on the initial problem in groups will be associated with higher efficacy for PS‐I than working individually.
  • Building explanations of the explicit instruction phase on students' solutions . PS‐I interventions often include explanations in the explicit instruction phase that draw students' attention to the affordances and limitations of the typical solutions students generated in the previous problem‐solving phase (e.g., Kapur,  2012a ; Kapur,  2014 ; Loibl,  2014a ). An example can be seen in Section 3 of Table  1 . It can be considered a form of guidance for the problem‐solving activity that, rather than being given during the problem‐solving phase, is provided afterwards to help students reorganize the ideas they activated while solving the problem. It has been argued that this feedback can help students to become more aware of their knowledge gaps, and to focus on the relevant features within the complexity of the target concepts (Kapur,  2012a ; Loibl,  2017 ).

Duration of the PS‐I intervention

The duration of the PS‐I intervention can have an important effect on its efficacy to promote learning and motivation, because of a higher dosage of the hypothesized PS‐I effects. We expect a higher efficacy of PS‐I in longer interventions.

Age of students

As considered in the prior review of Sinha ( 2021 ), age might be associated with the relative efficacy of PS‐I because of the relation between age and metacognitive development. Metacognition refers to the awareness of our own mental processes and the control we have over them (Schraw,  1994 ), and has been argued to have an important influence on several PS‐I mechanisms (Glogger‐Frey,  2017 ). First, it can help students to become aware of their knowledge gaps. Students with low metacognitive skills might not relate the limitations of the solutions they generated with the solutions explained later (Roll,  2012 ). Second, metacognition can also help students to discern what information is relevant from the information that is not, which can reduce the extraneous cognitive load experienced during the initial problem‐solving phase and its associated frustration. However, these metacognitive capacities might develop slowly with age (Veenman,  2005 ). Based on these assumptions, we expect that the higher the age of the students, the higher the efficacy of PS‐I to promote conceptual knowledge, transfer, and motivation for learning.

Learning domain

The learning domain in which PS‐I is applied might have an important influence on the efficacy of PS‐I. In math and science domains (e.g., statistics, physics) conceptual structures are often abstract and complex, and the deep learning processes expected to be promoted in PS‐I interventions might be more significant in these domains. Specifically, in this review we expect that higher efficacy of PS‐I will be found in math and science domains than in other domains.

Control conditions used for comparison

The types of control conditions used to compare PS‐I can have a great influence on the relative efficacy of PS‐I. In the literature we have identified several types of comparative interventions:

  • 1. Instruction with lecture before problem‐solving . These interventions share with the PS‐I intervention both the problem‐solving phase and the other learning activities in the instruction phase, but instead of introducing the contents with the problem‐solving activity, they introduce them with a lecture about the target concepts, in which students listen to the explanations at a pace set by the instructor.
  • 2. Instruction with worked‐examples exploration before problem‐solving . These interventions share with the PS‐I intervention both the problem‐solving phase and the other learning activities in the instruction phase, but instead of introducing the contents with the problem‐solving activity, they introduce them with a worked example that students study at their own pace.
  • 3. Instruction with worked example exploration before further instruction . These interventions do not include problem‐solving activities. They share with the PS‐I intervention all the learning activities in the instruction phase (none of which involve problem‐solving), and instead of the initial problem‐solving activity of PS‐I, they start with a worked example in which students study the solution procedures at their own pace.
  • 4. Problem‐solving with content guidance before instruction . These interventions share with the PS‐I intervention all the learning activities in the instruction phase and also start with the initial problem‐solving activity used in PS‐I, but provide students with some content guidance during it.

We expect that PS‐I would lead to higher benefits in terms of learning and motivation than these control conditions. Although extraneous cognitive load and frustration might be lower in the initial phases of these control conditions than in PS‐I, the initial problem‐solving phase of PS‐I gives students an opportunity to acquire a creative role and to experience impasses that are not given in other conditions.

Nevertheless, we expect that the relative efficacy of PS‐I would be higher when compared with conditions that introduce the concepts with a lecture (1), rather than when they are introduced with worked‐examples or problems with content guidance (2‐4). In these latter control conditions, students start by exploring information at their own pace, which, similarly to PS‐I, gives them the opportunity to activate prior knowledge before receiving explanations from the professor. As we have previously described, activation of prior knowledge can potentially favour the assimilation of explanations (Carriedo,  1995 ; Smith,  1992 ; Sweller,  2019 ). This pattern of results would be in line with the results of Newman,  2019 , in which the advantage of PS‐I in terms of learning was higher when compared against the introduction of concepts with a lecture, rather than when PS‐I was compared against interventions that introduced concepts with worked‐examples or problems with content guidance.

Additionally, among these last comparative interventions (2‐4) we also hypothesize that the relative efficacy of PS‐I will be higher when compared with interventions that do not include a problem‐solving phase (3) than when this problem‐solving phase is included (2, 4). Problem‐solving activities can help students to reflect and reason about the target concept, and missing these types of activities can have implications for the conceptual knowledge acquired.

1.4. Why it is important to do this review

There are four reviews in the literature that have provided interesting insights into the efficacy of PS‐I (Darabi,  2018 ; Jackson,  2021 ; Loibl,  2017 ; Sinha,  2021 ).

First, the review by Jackson ( 2021 ) is a qualitative review that included studies from educational databases and conferences addressing factors that can influence learning from failures in STEM courses. It included studies about PS‐I interventions, where failure is expected in the initial problem, but also other types of studies that addressed learning from failures. Considering the results of the 35 papers included, they discussed that the efficacy of learning from failure could depend on factors such as whether students conceptualize the failures as learning opportunities, the promotion of positive affective reactions such as persistence, the promotion of a classroom climate in which failures are discussed and embraced, and the use of failures as stimuli to identify misconceptions and induce thoughtfulness about key features. Although the presence of these factors can be important in the efficacy of PS‐I to promote learning and motivation, this review was limited in that these factors were not addressed quantitatively.

The review of Loibl ( 2017 ) focused specifically on the efficacy of PS‐I to promote learning, and used a vote‐counting procedure to synthesize the literature. Their results suggested that the efficacy of PS‐I depended on the type of learning outcome considered. Across the 34 studies they identified, most studies reported no significant differences between PS‐I and other alternative approaches in terms of procedural knowledge, which just refers to the capacity to reproduce memorized procedures covered in class. However, when the evaluation was made in terms of conceptual knowledge or transfer, PS‐I generally led to more positive results. For example, out of 17 studies, they identified 10 studies where transfer was significantly higher in PS‐I approaches, 1 study in which it was higher in alternative approaches, and 6 studies showing no significant difference. They also explored the effect of different PS‐I design features, for which they proposed an interesting moderator: whether PS‐I was presented in combination with techniques oriented to foster awareness of knowledge gaps and recognition of deep features, such as contrasting cases or building instruction on students' solutions. They found that when any of these forms of guidance was present, a significant difference in favour of PS‐I was more likely to be found.

However, there are important aspects of the scope and methods of this review that need to be addressed. First, the results were analysed using a vote‐counting procedure instead of meta‐analysis techniques. Second, their results could easily be contaminated by publication and availability bias because, beyond checking the reference lists of some of the located studies, they did not try to find studies within the grey literature. Finally, they did not consider the outcome of motivation for learning, nor some of the other potential moderating factors discussed here, such as the type of control activities used for comparing PS‐I or the intensity of PS‐I interventions.

The review by Darabi ( 2018 ) provided some meta‐analytic evidence about the general efficacy of PS‐I on learning. However, the scope of this review was larger and less specific. Their goal was to evaluate educational approaches based on learning from failures, which included PS‐I approaches, but also other failure‐driven approaches. In spite of that, out of the 23 studies that they ended up identifying, 22 were about the effectiveness of PS‐I. The results suggested that students who learned through failure‐driven approaches acquired more knowledge than students who learned through alternative approaches, with a moderate effect size ( g  = 0.43). They also explored the influence of interesting moderators such as age or intensity of the intervention, for which they found no significant results.

Nevertheless, this review also has important limitations. Beyond mixing results of PS‐I interventions with other failure‐driven interventions, its results can be biased by the mixing of different types of learning outcomes. They aggregated together outcomes that referred to procedural knowledge, conceptual knowledge, and transfer, which according to the review of Loibl ( 2017 ) can lead to very different results. Also, these results might be affected by availability and publication bias, as suggested by their post‐hoc analyses. They searched only a few databases, using a narrow set of keywords, which can explain why, in spite of being more recent than the Loibl ( 2017 ) review, they identified considerably fewer studies.

Lastly, the review of Sinha ( 2021 ) focused specifically on the efficacy of PS‐I interventions to promote learning in comparison to other interventions in which explicit instruction was provided from the beginning. To date, it is the review that includes the largest number of studies on this topic. Sinha and Kapur ( 2021 ) included 53 studies, selected from the studies that had cited in Google Scholar some of the reference articles of productive failure, a specific PS‐I approach. Their review also had the advantage of analysing the results with meta‐analysis techniques. The results showed a significant moderate effect in favour of PS‐I ( g  = 0.36) versus alternative interventions, using an aggregation of measures that included tests of conceptual knowledge and transfer. Their moderation analyses showed that this effect was higher when using PS‐I design features such as group work in the problem‐solving phase ( g  = 0.49), or building the explanations of the explicit instruction phase on students' solutions ( g  = 0.56). The use of other design features, the duration of the interventions, age, and learning discipline did not show significant differences.

However, some aspects of this review need to be complemented. First, the search was limited to studies citing pioneer papers of productive failure in Google Scholar, which can miss PS‐I studies that are not available in this source or that are disconnected from the productive failure literature. Second, it did not consider the different types of control interventions used to compare the efficacy of PS‐I. Also, it did not include motivational outcomes. Lastly, most of the effect sizes reported were based on a substantial body of studies in which equivalence between conditions was not kept in terms of learning materials. Only the effect size they reported for the subgroup of experimental studies is expected to be free from these studies ( g  = 0.25).

While trying to overcome these methodological limitations, the present review aims to update the evidence of these four reviews. It also aims to consider a greater variety of outcomes and moderators. Regarding the outcomes, rather than just considering different types of learning, it will also consider motivation for learning. Regarding factors that can moderate the efficacy of PS‐I, rather than just considering different PS‐I design features and contextual factors such as duration or learning domain, it will also consider the different types of control activities used to compare the efficacy of PS‐I. Lastly, it will provide separate results for the main analyses, in which equivalence of materials between PS‐I and other interventions is maintained, and for additional exploratory analyses, in which such equivalence is not necessarily maintained.

Results of this review can have important implications when considering whether or not to introduce PS‐I into educational practice. The use of PS‐I is currently very scarce (Pan,  2020 ), and it is important to offer updated evidence of whether it can contribute to the promotion of motivation, conceptual knowledge, and the capacity to transfer learning. This evidence can help instructors to reduce the uncertainty of trying it. Also, it can offer them guidance about which design features or contextual factors contribute to its efficacy.

2. OBJECTIVES

The purpose of this review is to synthesize the evidence about the efficacy of PS‐I to promote learning and motivation in students. Specifically, this review is designed to answer the following questions:

  • To what degree does PS‐I affect learning and motivation, relative to alternative learning approaches?
  • To what extent is the efficacy of PS‐I associated with the use of different design features within it, including the use of group work, contrasting cases, and metacognitive guidance in the initial problem‐solving activity, and the use of explanations that build upon students' solutions in the explicit instruction phase?
  • To what extent is the relative efficacy of PS‐I associated with the contextual factors of activities used as control, age of students, duration of the interventions, and learning domain?
  • What is the quality of the existent evidence to evaluate these questions in terms of number of studies included and potential biases derived from publication and methodological restrictions?

3.1. Criteria for considering studies for this review

3.1.1. Types of studies

All studies included in the review will fulfil the following requirements:

  • They must involve a comparison of at least one group that goes through PS‐I with at least one comparative group that goes through an alternative intervention in which the teaching of the target concepts starts by providing students with some content.
  • They will either be randomized controlled trials or quasi‐experimental designs in which different students are assigned to the PS‐I conditions and the control conditions. For both types of designs, we will include studies in which the unit of assignment is either the students or students' groups (e.g., class groups, work groups). Also, for both types of designs, we will include studies in which the assignment method is random, quasi‐random, or even not random.

Nevertheless, we will exclude studies if the assignment leads to any difference between the PS‐I group and the comparative group that can affect learning (e.g., if one group belongs to class groups or schools with recognized better performance than the other group), or if pre‐existing differences between these two groups in terms of age, gender, or previous knowledge are statistically significant, as indicated by inferential statistical tests for group comparisons, using a level of statistical significance of p  ˂ .05. This exclusion criterion would apply for both quasi‐experimental designs and randomized controlled trials. Studies where teaching time is not the same for both groups will also be excluded.

In regard to the equivalence of teaching contents, we will have different inclusion criteria for the main analyses and complementary exploratory analyses. For the main analyses, we will only include studies in which the PS‐I group and the control group receive the same contents about the target concepts, and in which the learning activities are also the same but with the exception that the PS‐I group would perform a problem‐solving activity at the beginning of the intervention, and, during the same amount of time but not necessarily at the same time, the comparative group would perform alternative activities covering the same contents.

For the additional exploratory analyses, we will include studies in which such an equivalence of contents and activities is not maintained, which often occurs in studies that use a business‐as‐usual comparative condition. For example, in some studies the explicit instruction phase of the PS‐I condition includes explanations that build on the students’ generated solutions, while such explanations are not given in the comparative condition (e.g., Kapur,  2012a ).

3.1.2. Types of participants

The studies must have child or adult students as participants. We will not have an exclusion criterion based on age. Students from any developmental stage can potentially benefit from PS‐I. Eligible samples would also include populations at risk because of socio‐economic disadvantage, such as students from specific ethnicities or minorities, inner‐city schools, prison populations, or students who have poor school performance. These populations are important for inclusive education policies, and all of them have the potential to benefit from PS‐I. Samples consisting exclusively of people with a specific psychological diagnosis will be excluded because of the complexity of interpreting the variability generated by these populations.

3.1.3. Types of interventions

Studies eligible for this review will have to examine the effectiveness of PS‐I, and therefore will be required to have at least one group of students go through this approach, which will be defined by the following components:

  • Students start the learning process with a problem‐solving activity that targets concepts they have not yet learned,
  • For which they are given time to develop solutions on their own,
  • And that will be followed by a separate phase of explicit instruction about the new concepts, in which students can listen to or read these concepts.

Within PS‐I interventions there are possible additional characteristics that we might consider in the moderation analyses, but not within our inclusion criteria. Specifically, for the initial problem‐solving phase we will consider the presence of (a) contrasting cases; (b) metacognitive guidance; and (c) collaborative work. For the posterior instruction phase we will consider the presence of explanations that build upon students' solutions. Table  1 shows examples for these variables. We will also consider interventions with different durations, which can range from one session to several sessions.

It is important to note that we will exclude studies where students are faced with novel problems but they are not given the opportunity to solve them on their own. Examples of this situation include studies where students have access to external sources of information from the beginning of the problem‐solving activity (e.g., Tandogan,  2007 ), or where the problem only acts as a scenario to stimulate students' expression of their first intuitions.

Regarding the comparison condition, eligible studies have to include at least one control group that is given the same learning materials as the PS‐I group but, instead of going through the initial problem‐solving activity, works through an alternative activity. Examples of these alternative activities include: (a) the same problem‐solving activity used as a practice activity, provided to students once they have received all or part of the instruction about the target concepts; (b) the same problem‐solving activity presented in the form of a worked example; (c) other alternative activities that maintain a balance between the two interventions in terms of time and content covered.

Examples of eligible studies are summarized below:

  • Study 1 in Kapur ( 2014 ) assigned several statistics classes, composed of 75 9th grade students, into two learning conditions. In the PS‐I condition, students first had one hour to solve a novel problem about designing variability measures (phase 1), then they received another hour of explicit instruction about the standard deviation concept accompanied with practice problems (phase 2). In the control condition, students went through the same two phases but in reverse order. In the post‐test, students in the PS‐I condition outperformed those in the control condition in conceptual knowledge and transfer of learning, but not in procedural knowledge. During the learning process, students' engagement did not differ between conditions, but mental effort was higher for students in the PS‐I condition.
  • A study by Likourezos ( 2017 ) assigned, within their classes, 72 8th grade students into three learning conditions that spanned six 1‐h sessions of geometry. In the PS‐I condition the sessions were composed of two phases: a 30‐min phase in which students solved novel problems, followed by an explicit instruction phase of 30 min. In one control condition the initial problems were substituted by worked‐out examples, which were the same problems used in the PS‐I condition but fully solved and accompanied by explanations that students could study. In a second control condition, which the authors called partially guided, these worked examples only included the final solution, and students had to figure out the process. Post‐test results showed no significant differences between conditions in transfer of learning or procedural knowledge. Yet, some differences were found during the learning process in terms of motivation and the cognitive load students experienced.

3.1.4. Types of outcome measures

Primary outcomes.

Eligible studies must report either outcomes for learning or motivation for learning after the intervention.

In terms of learning we will consider two of the primary outcomes already analysed in the review of Loibl ( 2017 ), conceptual knowledge and transfer:

  • Conceptual knowledge is defined as the understanding of the structure and relationships that underlie a taught concept or procedure. It is generally measured by testing students in principle‐based reasoning, where they have to explain the why of different situations or procedures, or in the ability to connect different representations (refer to the conceptual knowledge post‐test in Kapur,  2012 for an example).
  • Transfer is defined as the ability to adapt the learned concepts to new situations. It is generally measured by asking students to solve problems that have no explicit relation to the concepts learned, and that are novel in the sense that they have a new structure or occur in a new context compared to the problems students have previously seen (refer to the transfer post‐test in Kapur,  2012 for an example).

Measures of conceptual knowledge and transfer reported in the literature are generally not previously validated. They are generally created for the specific contents seen in each study. To be as comprehensive as possible we will include studies with un‐validated measures as long as the items correspond to our definitions of conceptual learning or transfer. We will only consider measures of students' performance, based on knowledge tests completed by students after the end of the interventions.

Concerning motivation, the planned primary outcome will be motivation for learning, which we define as a desire to engage in learning about the topic that has been taught. For its measurement we will primarily consider self‐report measures of interest, which refers to the perception of caring about learning the topic, and which is generally measured with questionnaires that ask students about the intensity with which they hold different motives for learning. As a second priority for measuring motivation for learning we will also consider self‐report measures of curiosity. Curiosity is an important component of interest, but it is more specific in the sense that it refers to the desire to know. It is generally measured through questionnaires that ask students about the intensity with which they feel this sensation. The PS‐I literature has often used measures of interest or curiosity that have not been previously validated. To be as inclusive as possible, studies with non‐validated measures will be considered, but only as long as the items correspond with our definition of motivation for learning. Measures of engagement in the learning task will not be considered as indicators of motivation for learning, because engagement can be influenced by factors not related to motivation, such as the task requirements.

Finally, it is important to consider that in the literature these measures are often measured at different time points during and after PS‐I interventions. For the main analysis we will consider the first measurement taken at the end of the learning process. Yet, other measurement times might be considered if available for several studies.

Secondary outcomes

We will code, and potentially consider as secondary outcomes:

  • Procedural knowledge, defined as the ability to correctly apply the learned procedures (Loibl,  2017 ). It is generally measured by testing students in the ability to carry out a set of steps, such as solving plug‐and‐chug problems, or questions to develop a set of learned procedures.
  • General measures of learning that mix items of procedural knowledge, conceptual knowledge, and/or transfer. These types of measures can be common in applied studies that use a typical exam to evaluate performance.
  • Factors that can influence the learning process, such as engagement, cognitive load, or number of solutions generated during the problem‐solving activity.
  • Factors that can influence motivation, such as self‐efficacy, anxiety and frustration.
  • Outcomes related to implementing the activities, such as work load experienced by the professors who implement the activity.

3.2. Search methods for identification of studies

Different sources will be searched to include published and unpublished international studies written in English or Spanish, with no date restriction. Although we might have problems scanning studies written in languages other than English or Spanish, no language limits will be applied in the searches.

3.2.1. Electronic searches

Based on the guidelines and lists of databases of Kugley ( 2017 ) for selecting electronic searches, we will search within the following sources that include journal articles, conference proceedings, government documents, and dissertations:

  • Databases for scientific literature, either with a general scope or with a scope focused on education. Across them, we will search in all six indexes of Web of Science, PsycINFO, ERIC, MEDLINE, Google Scholar, Academic Search Complete (EBSCO), Education Abstracts (EBSCO), Education Full Text (EBSCO), SciELO, and Dialnet.
  • Databases that are broadly open to grey literature. Across them, we will search in EBSCO Open Dissertations, ProQuest Dissertations & Theses Global, EThOS, TESEO, and the Networked Digital Library for Theses and Dissertations.

Within these databases, we will use a combination of keywords that refer to PS‐I interventions (e.g., ‘Problem‐solving’ AND ‘Explicit instruction’ OR ‘Problem‐solving before Instruction’ OR ‘Productive Failure’ OR ‘Inventing to Prepare for Future Learning’). To make the output more specific, this combination may be restricted with a combination of keywords that refer to our primary outcomes (e.g., ‘learning’ OR ‘motivation’) and/or a combination of keywords referring to our eligible population (e.g., ‘students’ OR ‘pupils’). Appendix 1 shows an example of a strategy search in PsycINFO.

3.2.2. Searching other resources

Beyond electronic searches, we will use other sources, including:

  • Citations associated with systematic reviews and relevant studies. Specifically, we will search in the list of references of previous systematic reviews about PS‐I (Darabi,  2018 ; Jackson,  2021 ; Loibl,  2020 ; Sinha,  2021 ). Additionally, we will use Google Scholar to search across the documents that have cited either these reviews or the reports that are considered pioneers in PS‐I approaches (Kapur,  2008 ; Kapur,  2012a ; Schwartz,  1998 ; Schwartz,  2004 ). Lastly, the review team will check reference lists of included studies, and the citations in Google Scholar to these included studies.
  • Conference proceedings of educational conferences. We will search in proceedings of conferences celebrated in the last 15 years of the European Educational Research Association (EERA) and the International Society of the Learning Sciences (ISLS).
  • Documents from international and national organizations. We will search for publications in the OECD ( https://www.oecd-ilibrary.org/ ), the UNESCO ( https://www.unesco.org/es/ideas-data/publications ), the World Bank ( https://www.worldbank.org/en/research ), the Eurydice Network ( https://eacea.ec.europa.eu/national-policies/eurydice/ ), the US Department of Education ( https://www.ed.gov/ ), and the Spanish Department of Education and Professional Training ( https://www.educacionyfp.gob.es/portada.html ).
  • Hand searches of journals. Four journals that frequently publish about PS‐I interventions will be hand searched for documents published in the last 5 years: Instructional Science , Learning and Instruction , Cognition and Instruction , and Journal of Educational Psychology .
  • Communications with international experts. After finishing the search in other sources, we will email all the corresponding authors of the identified studies to ask them about additional studies they may know of, including unpublished studies. This email will contain a comprehensive list of the included articles along with the inclusion criteria.

3.3. Data collection and analysis

3.3.1. Selection of studies

Study selection will be done through the software Covidence. After eliminating duplicated manuscripts, we will screen the titles and abstracts of the remaining manuscripts to evaluate their potential inclusion. Among these pre‐selected manuscripts, we will screen the full texts to consider if they meet our inclusion criteria. For these two screening processes, 20% of the manuscripts will be screened individually by two members of the team. If for a given subset of manuscripts, the level of agreement is below 80%, the subset will be screened again until reaching this standard. The level of agreement will be reported. Disagreements will be resolved by discussion until reaching consensus. If disagreements persist, a third reviewer in the team will be consulted.

3.3.2. Data extraction and management

Data of the primary studies will be directly introduced in two forms in a Microsoft Access document that can be found in the following link, together with the coding manual: https://www.dropbox.com/sh/u3nr12ayilaezps/AADQngLciNF_gGLrRSpqKtofa?dl=0 .

The first form is the Reports Form, and will be used to code information about each report that, after screening, contains any study suspected to be included in the review. It includes variables related to the following:

  • Title of the report.
  • Year of publication and type of publication.
  • Authors and affiliations.
  • Studies contained in the report.

The second form is the Studies Form, and will be used to code information of each study in the reports that has been preliminarily accepted to inclusion. It includes variables related to the following:

  • Setting (e.g., public vs. private institutions, special education units, topic taught).
  • Sample characteristics (e.g., sex ratio, age mean).
  • Design features of the PS‐I and comparative interventions (e.g., use of contrasting cases, group work, metacognitive guidance).
  • Information related to risk of bias (e.g., assignment procedures, control of extraneous variables, attrition)
  • Implementation characteristics (e.g., person who delivers the intervention, duration of intervention).
  • Types of control groups being compared.
  • Characteristics of measures used (e.g., internal reliability).
  • Characteristics of the effect sizes (e.g., time passed from the end of the intervention to the measurement).
  • Effect sizes for different subgroups (e.g., effect sizes of subsamples with different levels of prior knowledge).

This form automatically calculates effect sizes and their related statistics after introducing the means, standard deviations, and sample sizes reported in the primary studies. In cases where this information is not reported, the coders will use the Campbell Collaboration online calculator to calculate effect sizes from other reported values.
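As an illustration of this conversion step, the following Python sketch shows one common case: approximating Cohen's d from an independent‐samples t statistic and the two group sizes. The numbers and the helper name are hypothetical; the actual conversions will follow the formulas implemented in the Campbell Collaboration calculator.

import math

def d_from_t(t, n1, n2):
    # Cohen's d approximated from an independent-samples t statistic
    # and the two group sizes: d = t * sqrt(1/n1 + 1/n2)
    return t * math.sqrt(1 / n1 + 1 / n2)

# Hypothetical example: a study reports t = 2.10 for groups of 38 and 37 students
print(round(d_from_t(t=2.10, n1=38, n2=37), 3))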

To evaluate the coding reliability, the Studies Form will be completed by two coders for a random selection of 10% of the studies. Discrepancies will be resolved by further review of the reports and by discussion until an agreement is reached. If we identify relevant variables during the coding process, they will be added to the questionnaire.

3.3.3. Assessment of risk of bias in included studies

Risk of bias will be assessed using several items in the Studies Form (refer to Data extraction and management) that address the five domains of the Cochrane Risk of Bias Tool for Randomized Trials (Sterne,  2019 ). However, in comparison to this tool, we adapted some items in each domain to the specific context of this review:

  • Randomization process: we will code for whether the units of assignment are students or students' groups, and whether assignment is random. We will also code the identification of baseline differences in terms of gender, age, previous knowledge, or other relevant variable identified, and whether this data is reported.
  • Deviations from the intended intervention: we will code whether the PS‐I interventions and the control interventions were implemented in the same place, at the same time, with the same implementers, with the same durations, with similar levels of attrition, and covering the same contents. We will also code whether implementers were blind, and whether a pre‐test including problem‐solving activities related to the contents to be covered was used, which can create a PS‐I effect in the control interventions and therefore contaminate the results (Newman,  2019 ).
  • Missing outcome data: missing data higher than 5% for any relevant comparison will be identified.
  • Measurement of the outcome: appropriateness of the measure will be coded regarding whether the items correspond to the definition of the construct, using a Likert type scale (yes, probably yes, probably no, no, cannot tell). Other factors that will be coded include whether the measure was previously validated, and reliability indicators in terms of internal reliability and inter‐rater reliability.
  • Selection in the reported result: the coder will assess the probability that the reported assessments or analyses were selected on the basis of the findings, using a Likert type scale (yes, probably yes, probably no, no, cannot tell).

For each of these five dimensions, coders will assess the degree of risk of bias (low, high, or some concerns). In case of assessing them as ‘high’ or ‘some concerns’, they will describe the specific effect sizes affected by this judgement, the direction in which the potential bias is suspected to affect (favours experimental, favours comparator, towards null, away from null, unpredictable), and the reasons behind it.

After evaluating these questions, coders will re‐evaluate whether some or all effect sizes taken from the study should be analysed according to the inclusion criteria. They will also classify these effect sizes into three categories referring to the general risk of biases: low, some concerns, or high.

  • Low risk of bias will be assigned to studies in which two requirements are fulfilled: (a) participants are randomly assigned to conditions (the unit of assignment is the participant and the method of assignment is fully random), and (b) there is enough information to assume equivalence between groups and interventions.
  • Some concerns for risk of bias will be assigned to studies in which only one of these two requirements is fulfilled.
  • High risk of bias will be assigned to studies in which neither of these two requirements is fulfilled.

In case of selecting options ‘high’ or ‘some concerns’, descriptions about the specific effect sizes affected by this assessment, the direction of the potential bias, and the reasons behind it will be added.

3.3.4. Measures of treatment effect

For the three primary outcomes of conceptual knowledge, transfer, and motivation for learning, we will use standardized mean difference effect sizes, or Cohen's d (Cohen,  1988 ), to estimate the effect of PS‐I interventions in comparison with other interventions used as a control, as indicated in the following formula:

d = \frac{\overline{X}_{\text{PS-I}} - \overline{X}_{\text{control}}}{SD_{\text{pooled}}}

where the numerator is the difference of the PS‐I group mean minus the control group mean, and the denominator is the pooled standard deviation for the two comparison groups. Larger effect sizes will represent a higher quantity of the outcome in the PS‐I group in comparison to the control group. Once these effect sizes are obtained, they will be adjusted with the small‐sample correction factor to provide unbiased estimates (Hedges,  1981 ), and 95% confidence intervals will be calculated from them.
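The following Python sketch illustrates this computation, including Hedges' small‐sample correction and the 95% confidence interval. The input values and the function name are hypothetical and are only meant to make the calculation concrete.

import math

def smd_hedges_g(m_psi, sd_psi, n_psi, m_ctrl, sd_ctrl, n_ctrl):
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_psi - 1) * sd_psi**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_psi + n_ctrl - 2))
    # Cohen's d: PS-I mean minus control mean, divided by the pooled SD
    d = (m_psi - m_ctrl) / sd_pooled
    # Hedges' small-sample correction factor
    j = 1 - 3 / (4 * (n_psi + n_ctrl - 2) - 1)
    g = j * d
    # Sampling variance of g and its 95% confidence interval
    var_d = (n_psi + n_ctrl) / (n_psi * n_ctrl) + d**2 / (2 * (n_psi + n_ctrl))
    se_g = math.sqrt(j**2 * var_d)
    return g, (g - 1.96 * se_g, g + 1.96 * se_g)

# Hypothetical example: PS-I group outperforms control on a conceptual-knowledge post-test
print(smd_hedges_g(m_psi=7.4, sd_psi=1.8, n_psi=38, m_ctrl=6.5, sd_ctrl=2.0, n_ctrl=37))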

3.3.5. Unit of analysis issues

To prevent the inclusion of the same effect size twice in one analysis, effect sizes for different constructs and different evaluation moments will be analysed separately. In cases where one study provides more than one measure for one of the constructs we have defined, we will select only one measure. First, for that selection we will follow the priorities already specified for the primary outcomes (refer to Primary outcomes). Second, if the possibility to select two outcomes remains, we will select a previously validated measure over a non‐validated measure. Last, if the possibility to select several outcomes remains, we will select the measure that is most similar to those used by the other studies.

To prevent a study that has been published in several reports from being included several times in the analyses, at the end of the coding process we will look for non‐obvious duplicates by checking for repetitions in key variables such as authors, date of publication, or effect sizes.

In multi‐arm primary studies that compare two PS‐I groups with one control group, we will follow one of two options, in line with the recommendations in Higgins ( 2019 ), to avoid counting the values of the control group twice in the aggregated analyses: (a) when the two PS‐I groups are similar, we will treat them as a single group; (b) when they are not similar, the sample size of the control group will be divided in half before being compared with each PS‐I group. A similar strategy, but in reverse, will be followed when a study compares one PS‐I group with two control groups.

Clustering issues

In the PS‐I literature it is common that the units of assignment to conditions are not the students, but clusters of students, either class groups or working groups (pairs or small groups of students that work together in the interventions). To correct for the artificial reduction in the standard error of the effect size estimates due to this clustering, we will follow the recommendation in Higgins ( 2019 ) of multiplying the standard error by the square root of the 'design effect', whose formula is

\text{Design effect} = 1 + (M - 1) \times ICC

where M is the average cluster size and ICC is the intracluster correlation coefficient.
For studies in which the intracluster correlation coefficient is not reported, we will use the coefficient of similar studies included in the review.
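A minimal sketch of this adjustment follows, assuming the design‐effect formula given above; the standard error, cluster size, and ICC shown are hypothetical values chosen only for illustration.

import math

def cluster_adjusted_se(se, avg_cluster_size, icc):
    # Inflate the standard error of an effect size to account for cluster assignment
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return se * math.sqrt(design_effect)

# Hypothetical example: classes of about 25 students assigned to conditions, assumed ICC of .05
print(round(cluster_adjusted_se(se=0.23, avg_cluster_size=25, icc=0.05), 3))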

3.3.6. Dealing with missing data

To deal with missing data, authors from primary studies will be contacted via email. In case the requested information is not received, the study will be reported, but the effects for which there is missing data will not be included in the analyses.

3.3.7. Assessment of heterogeneity

We will evaluate the variability across studies using the Q statistic and its associated chi‐square test for inference. Additionally, we will provide the I² statistic as an indicator of the approximate proportion of variation that is due to between‐study heterogeneity rather than sampling error. Lastly, we will estimate τ² as an absolute measure of the magnitude of variation between studies.
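As an illustration, the following sketch computes Q, I², and a DerSimonian–Laird estimate of τ² from a set of effect sizes and sampling variances. The values are hypothetical, and the DerSimonian–Laird estimator is shown only as one common option, not as a commitment of this protocol.

def heterogeneity(effects, variances):
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1 / v for v in variances]
    pooled_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q statistic
    q = sum(wi * (yi - pooled_fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    # I^2: proportion of variation attributable to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    return q, i2, tau2

# Hypothetical effect sizes (Hedges' g) and variances from five studies
print(heterogeneity([0.41, 0.25, 0.60, 0.10, 0.35], [0.04, 0.05, 0.03, 0.06, 0.05]))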

3.3.8. Assessment of reporting biases

To estimate the impact of publication bias, we will use funnel plots in combination with trim‐and‐fill analyses. Additionally, we will analyse the risk of publication bias with Egger's regression test and Kendall's tau test.
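For illustration only, a minimal version of Egger's regression test can be sketched as follows: standardized effects are regressed on precision, and an intercept that differs from zero suggests funnel‐plot asymmetry. The data are hypothetical, and dedicated meta‐analysis software will be used for the actual analyses.

from scipy import stats  # requires scipy >= 1.6 for intercept_stderr

def egger_test(effects, ses):
    # Regress standardized effects (effect / SE) on precision (1 / SE);
    # a non-zero intercept indicates small-study / funnel-plot asymmetry
    precision = [1 / se for se in ses]
    standardized = [y / se for y, se in zip(effects, ses)]
    result = stats.linregress(precision, standardized)
    t_intercept = result.intercept / result.intercept_stderr
    df = len(effects) - 2
    p_value = 2 * stats.t.sf(abs(t_intercept), df)
    return result.intercept, p_value

# Hypothetical effect sizes and standard errors from six included studies
print(egger_test([0.41, 0.25, 0.60, 0.10, 0.35, 0.52], [0.20, 0.22, 0.17, 0.25, 0.21, 0.30]))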

3.3.9. Data synthesis

Analyses will include a descriptive summary of the contextual characteristics, methodological characteristics, sample characteristics, and outcome characteristics of the included studies.

PS‐I interventions and control interventions will be compared using averaged effect sizes based on the standardized mean difference, weighted with the inverse‐variance method. Separate averages will be reported for each of the three primary outcomes of motivation for learning, conceptual knowledge, and transfer. In turn, for each of these outcomes, separate meta‐analyses will be performed for the comparison of PS‐I interventions with each type of control intervention (as defined in the section Control conditions used for comparison, four different types of control interventions have been identified: instruction with lecture before problem‐solving, instruction with worked‐examples exploration before problem‐solving, instruction with worked examples exploration before further instruction, and problem‐solving with content guidance before instruction; other types of control interventions might be identified during the review process).

A random effects model will be assumed. This option was chosen instead of a fixed effects model because we expect that a great variety of factors would influence the effect sizes, and therefore it is difficult to assume a common effect size for the studies (Borenstein,  2010 ). The 95% confidence intervals will be reported for the averaged effect sizes. Funnel plots will be used to visually represent their aggregation.
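The following sketch illustrates the random‐effects pooling described here: each study is weighted by the inverse of its sampling variance plus τ². The studies and the τ² value are hypothetical and continue the example above.

import math

def random_effects_pool(effects, variances, tau2):
    # Random-effects inverse-variance weights add tau^2 to each study's variance
    w = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Reusing the hypothetical studies above with an assumed tau^2 of 0.02
print(random_effects_pool([0.41, 0.25, 0.60, 0.10, 0.35],
                          [0.04, 0.05, 0.03, 0.06, 0.05], tau2=0.02))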

The comparison between PS‐I and several types of control activities might be complemented with network meta‐analysis, as long as the comparisons fulfil the transitivity assumption, which will be checked by observing the distribution of significant moderators in each comparison. For the network meta‐analyses, we will report a network plot to describe the direct and indirect evidence available across interventions. Effect sizes between treatments will be reported with 95% confidence intervals using a random effects model, and p  ˂ .05 will be considered statistically significant.

Beyond these main analyses, we will conduct exploratory analyses, which will include similar comparisons between PS‐I interventions and control interventions, but we will consider secondary outcomes and studies in which there is not strict equivalence of learning materials between the PS‐I interventions and the control interventions.

3.3.10. Subgroup analysis and investigation of heterogeneity

For all of the separate meta‐analyses in which PS‐I is compared with each of the control activities on each of the three primary outcomes, in cases where we find significant statistical heterogeneity, we will perform moderation analyses to identify factors associated with the efficacy of PS‐I. Correlations between potential moderators will precede these analyses to identify whether the effects of different moderators might be confounded with each other, and to identify potential groupings of moderating variables.

Moderation analyses will be performed individually for each of the variables discussed in How the intervention might work. Specifically, within design features of PS‐I, we will test for the use in the initial problem‐solving activity of contrasting cases (yes vs. no), metacognitive guidance (yes vs. no), and collaborative work (yes vs. no), and for the use in the explicit instruction phase of explanations that build upon students' solutions (yes vs. no). Within contextual factors, we will test for the duration of the intervention in minutes, the average age of the sample in years, and the learning domain (math‐related domains vs. other domains). These individual analyses will also be performed with the general risk of bias variable (low risk vs. some concerns vs. high risk of bias). For the categorical variables we will perform subgroup analyses, and for the continuous variables we will perform individual meta‐regression analyses. Further combinations of moderating variables are not initially hypothesized. A minimum aggregation of three studies will be considered necessary for the analyses to be performed.
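As an illustration of the meta‐regression step for a continuous moderator such as intervention duration, the following sketch fits a weighted regression with inverse‐variance (plus τ²) weights. The data are hypothetical, and dedicated meta‐analysis packages will be used for the actual analyses, since they handle the standard errors of meta‐regression coefficients more appropriately than ordinary weighted least squares.

import numpy as np
import statsmodels.api as sm

def meta_regression(effects, variances, moderator, tau2=0.0):
    # Weight each study by the inverse of its sampling variance plus tau^2,
    # then regress the effect sizes on the moderator
    weights = 1.0 / (np.asarray(variances) + tau2)
    X = sm.add_constant(np.asarray(moderator, dtype=float))
    model = sm.WLS(np.asarray(effects), X, weights=weights).fit()
    return model.params, model.pvalues

# Hypothetical studies: effect sizes, variances, and intervention durations in minutes
params, pvals = meta_regression([0.41, 0.25, 0.60, 0.10, 0.35],
                                [0.04, 0.05, 0.03, 0.06, 0.05],
                                [60, 45, 120, 40, 90], tau2=0.02)
print(params, pvals)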

3.3.11. Sensitivity analysis

We will conduct sensitivity analyses to determine the impact of several decisions, such as removing studies with outlier effect sizes, removing unpublished studies, removing studies with high risk of bias, or using alternative ways for coding or including moderator variables in the analyses.

3.3.12. Summary of findings and assessment of the certainty of the evidence

This is the protocol for a Campbell review whose objective is exploring the efficacy of the educational strategy of Problem‐solving before Instruction (PS‐I) to promote learning and motivation in child and adult students.

CONTRIBUTIONS OF AUTHORS

  • Content: Catherine Chase, Eduardo González‐Cabañes, Trinidad García, and José Carlos Núñez.
  • Systematic review methods: González‐Cabañes, Garcia, Chase, and Núñez.
  • Information retrieval: González‐Cabañes, García, and Chase. We count on the advisory assistance of librarians at our universities.

DECLARATIONS OF INTEREST

None of the researchers involved in the team have financial or personal interests in the results of this review, nor belong to any organization with such interests. All of us have published studies on the problem solving before instruction (PS‐I) method. This review is designed as an independent study and procedures will be detailed to allow replication from perspectives different than ours.

SOURCES OF SUPPORT

Internal sources

  • No sources of support provided

External sources

  • Ministry of Universities of the Government of Spain, Spain: scholarship to conduct PhD studies (grant number: FPU16/05802)
  • Ministry of Economy, Industry, and Competitiveness of the Government of Spain, Spain: research project (reference: PID2019‐107201GB‐100)
  • Principality of Asturias, Spain: research project (reference: FC‐GRUPIN‐IDI/2018/000199)

Supporting information

Supporting information.

ACKNOWLEDGEMENTS

This research is being funded by the Principality of Asturias (reference: FC‐GRUPIN‐IDI/2018/000199), by the Ministry of Economy, Industry, and Competitiveness of the Government of Spain (reference: PID2019‐107201GB‐100), and by a predoctoral grant from the Ministry of Universities of Spain (grant number: FPU16/05802). We would like to thank Cheryl von Asten for her contribution to editing the English of the manuscript, Juan Botella for teaching and assisting us with methodological questions, and Jarson Varela for his consulting assistance with the creation of coding forms.

González‐Cabañes, E., Garcia, T., Chase, C., & Núñez, J. C. (2023). PROTOCOL: Problem solving before instruction (PS‐I) to promote learning and motivation in child and adult students. Campbell Systematic Reviews, 19, e1337. 10.1002/cl2.1337
