Specifications that use this resource:

  • AS and A-level Psychology 7181; 7182

Lesson activity: practical activities for research methods

This resource contains ideas for relevant and engaging practical activities which can either be incorporated into your teaching of the research methods section of the psychology specification or followed independently by your students.

Activity 1: investigating short term memory

Research suggests that Short Term Memory (STM) cannot hold very much information. You are going to design and carry out an experiment to see whether the capacity of STM differs between two groups: A-level students and older people.

Generate a hypothesis for this study. Justify the direction of your hypothesis. Identify the IV and DV in this experiment.

Write a set of standardised instructions for your participants which will:

  • gain their consent to take part
  • enable them to carry out the task appropriately.

Decide upon and justify the materials you will use, including the:

  • length of words
  • type of words
  • number of words
  • word presentation.
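However you design your word lists, scoring should be applied in exactly the same way to both groups. Below is a minimal sketch of one hypothetical scoring rule (free recall, order ignored; the words and responses are invented for illustration):

```python
# Hypothetical word list and scoring rule: one point per word correctly
# recalled, regardless of order (a serial-recall rule would also need order).
word_list = ["cat", "tree", "lamp", "fish", "door", "milk", "ring"]

def score_recall(presented, recalled):
    """Number of presented words that appear in the participant's recall."""
    return len(set(presented) & set(recalled))

print(score_recall(word_list, ["cat", "milk", "door", "sun", "ring"]))  # 4
```

Note that under this rule an intrusion (here, "sun") simply scores nothing; you would need to decide and justify how to treat such errors.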

Participants

Decide upon and justify your choice of participants for the two conditions (the two age groups). Identify and justify your sampling method.

Ethical issues

Before you collect your data, identify and address any relevant ethical issues which may arise from the study you have designed. For example, consider how participants will be debriefed afterwards.

Once you have collected your results, produce a summary table which includes appropriate measures of central tendency. Also generate an appropriate graphical display. Ensure these are appropriately labelled and have a title.
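The measures of central tendency can be calculated by hand or, as a quick check, with Python's standard library. A sketch using made-up recall scores:

```python
import statistics

# Hypothetical recall scores (words correctly recalled per participant)
a_level_students = [7, 6, 8, 7, 5, 7, 6, 8]
older_people = [5, 4, 6, 5, 5, 7, 4, 6]

for name, scores in [("A-level students", a_level_students),
                     ("Older people", older_people)]:
    print(f"{name}: mean={statistics.mean(scores):.2f}, "
          f"median={statistics.median(scores)}, "
          f"mode={statistics.mode(scores)}")
```

In your summary table you would still need to justify which measure of central tendency is most appropriate for your data.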

Ask another student to interpret your table and graph for the rest of the class.

Activity 2: investigating handedness

Research suggests that around 10% of the population are left handed or 'sinistral'.

You are going to design a study to compare two types of A-level student. You are aiming to see whether left handedness is more common in some subject groups, such as art students or geographers.

Consider whether you will carry out an observation – eg by counting the number of left handers and right handers from within lessons you attend, or whether you will use a verbal survey of students in the common room.

Describe any materials needed for your chosen method. Remember, if someone wanted to replicate your study they would need to know exactly what you did.

Describe and justify your choice of the A-level subject groups you have chosen for this investigation. Include information about the size of the sample in each condition.

Consider and compare at least two ethical issues associated with each method before deciding which one you will use. How, for example, would you gain consent from students you are observing? How would you gain consent for a verbal survey?

Identify and justify the type of data (level of measurement) you will collect (will it be nominal, ordinal or interval?).

Consider two potential methodological variables associated with the use of your chosen method. For example, are demand characteristics likely to be a problem?

Once you have collected your data, summarise it into a correctly labelled pie chart for each of the subject groups you measured (eg artists and geographers). Do your findings reflect 10% left handedness in both groups?
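The pie chart slices come straight from the percentage of left and right handers in each group. A sketch of the arithmetic with invented counts:

```python
# Hypothetical handedness counts for the two subject groups
counts = {
    "art students": {"left": 4, "right": 21},
    "geographers": {"left": 2, "right": 23},
}

for subject, c in counts.items():
    total = c["left"] + c["right"]
    pct_left = 100 * c["left"] / total
    # Compare each percentage against the expected 10% baseline
    print(f"{subject}: {pct_left:.0f}% left handed (n={total})")
```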

In order to practise the skill of reference writing, find three references for studies which have investigated handedness. Include them here in an academically accepted format.

Hint: look at the reference section of an academic textbook. What do you notice about their order and format?

Activity 3: investigating gender stereotyping

It has been suggested by some researchers that males and females are often gender stereotyped by others when it comes to their expected and/or perceived roles and behaviours. Your task is to investigate whether gender stereotyping occurs in product marketing aimed at children.

To do this, you could:

  • examine online promotional material
  • look at television advertisements
  • examine children's comics.

Generate a suitable aim and hypothesis for your study. Justify your choice of a directional or non-directional hypothesis.

Decide upon a specific age range for the children targeted by your chosen media source.

In pairs, decide upon an appropriate working definition of stereotyping for your study. In other words, clearly 'operationalise' the concept you will measure.

You might decide to measure:

  • the number of times that boys or girls interact with particular toys
  • how often certain colours are used to promote toys for girls and boys
  • the type of words used to promote toys for girls and boys
  • the actions associated with certain toys (physical or passive).

Well operationalised definitions make it much easier to identify your IV and DV.

Once you have collected your data, summarise it into a correctly labelled graph and present your findings to the rest of the class in a five minute presentation.

Ask your peers for questions about your investigation and answer one or two of these.

Discussions

Research findings are an important tool for informing social change.

In two or three paragraphs, discuss the possible social and/or developmental implications of your findings.

Activity 4: investigating aggression

Researchers have come up with theories to try and explain why people become aggressive. One explanation is to do with 'nurture'. That is, we learn to be aggressive from environmental influences such as computer games.

Your aim is to compare the perceived level of aggression in games designed for two different age groups: those under 12 and those over 18 years of age.

In a small group, generate the names of six computer games intended for play by individuals over the age of 18 and six games which are intended for children under 12. Randomly select three games from the 'over 18' list and three from the 'under 12' list. Produce a random list of these six games.

How and why would random selection be used to produce the list?
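Drawing names from a hat works, but random selection can also be sketched with Python's `random` module (the game names here are placeholders, not real titles):

```python
import random

over_18_games = ["O1", "O2", "O3", "O4", "O5", "O6"]    # placeholder names
under_12_games = ["U1", "U2", "U3", "U4", "U5", "U6"]

# Randomly select three games from each age category, then shuffle the
# combined list so the final presentation order is also random.
selected = random.sample(over_18_games, 3) + random.sample(under_12_games, 3)
random.shuffle(selected)
print(selected)
```

Random selection removes researcher bias in which games end up on the list, and shuffling the combined list avoids presenting all the games from one category together.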

Selection of game raters

Select an equal number of male and female students aged eighteen or over. Their job will be to rate the games for levels of aggression. Explain and justify your choice of game raters. For example, why would you need a balance of male and female raters?

Ask the raters to give each game on the list a rating for aggression from 1 to 10 (where 1 = no aggression and 10 = high levels of aggression).

Calculate an appropriate measure of central tendency for each 'over 18s' game and each 'under 12s' game.

Carry out an 'eyeball test' to see which set of games appears to have the highest levels of aggression. The ones designed for under 12s or over 18s?

Which statistical test would you use if you wanted to see whether there were significantly different levels of aggression in games for older and younger people? Justify your choice of test.
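One defensible answer is the Mann-Whitney U test, since the ratings are ordinal and the two sets of games are unrelated, though you should justify your own choice. The U statistic itself is just pairwise counting, sketched here with invented ratings (significance would still be checked against a critical-values table):

```python
def mann_whitney_u(group_a, group_b):
    """U for group_a: one point per pair where a beats b, half a point per tie."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical median aggression ratings per game
over_18_ratings = [8.5, 7.0, 9.0]
under_12_ratings = [2.0, 3.5, 1.5]

print(mann_whitney_u(over_18_ratings, under_12_ratings))  # 9.0 of a possible 9
```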

In a paragraph or two, explain the methodological and ethical issues arising when asking people to rate levels of aggression in computer games.

Explain the possible implication of your findings relative to theories of aggression.

Activity 5: investigating age and sleep patterns

Research has shown that the human body clock is very important in determining sleep and wake patterns. Your task will be to design a study to investigate the relationship between age and sleep duration.

Generate an appropriate directional hypothesis for this correlational study.

Design a response sheet for people to complete in order to record the amount of time they sleep over a number of nights. You will need to consider how many nights, which days of the week and how they are to record their sleep (eg minutes/hours/clock times). Justify your choices.

What other information will you need on this sheet to enable you to carry out the study? For example, how will you record the age of your participants?

In terms of sampling, who will be your target population and what type of sampling will you use? Justify your choices. Decide upon and operationalise the age groups you hope to measure. You should aim to include a wide age range and therefore address the ethical requirements associated with these, particularly with regard to any participants under 16 years of age.

Once you have collected your data, produce a suitable scattergraph to show the relationship between age and sleep duration.

Do the results appear to support your predictions? Justify your answer.

Which statistical test would you use to look for a significant relationship between age and sleep duration? Why would you choose to use this test?
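A common answer here is Spearman's rho, because the design is correlational and the test makes few assumptions about the data; again, justify your own choice. A minimal sketch of the d-squared formula with invented, tie-free data:

```python
def rank_positions(values):
    """Rank 1 = smallest value; assumes no tied values, for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def spearman_rho(xs, ys):
    """Spearman's rho: 1 - 6*sum(d^2) / (n*(n^2 - 1)), no tied ranks."""
    n = len(xs)
    rx, ry = rank_positions(xs), rank_positions(ys)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d_squared) / (n * (n * n - 1))

ages = [8, 15, 21, 34, 45, 60, 72]                  # hypothetical participants
sleep_hours = [10.5, 9.0, 8.0, 7.5, 7.0, 6.5, 6.0]  # hours per night
print(spearman_rho(ages, sleep_hours))  # -1.0: perfect negative correlation
```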

In two or three paragraphs, and as part of the 'Discussion' section of a psychological investigation, briefly consider the possible methodological implications of your findings, particularly with regard to confounding variables within the study.

In order to practise the skill of reference writing, find three references for studies which have investigated sleep. Include them here in an academically accepted format.

Activity 6: investigating cognitive psychology

The Cognitive Approach in psychology places a great deal of importance on the influence of higher thought processes on decision making and behaviour.

Your task is to design and carry out a study to investigate the possible influence of expectation and perceptual bias on decision making processes.

Participants will simply be asked to rate the suitability of someone who has applied for a particular (named) job. Think carefully about what this could be.

You should produce a short and credible education/career summary for a fictitious individual. This could include a list of their GCSE results, A-levels, degree details and work experience. You may decide not to include all of these depending on the job vacancy you have chosen to use.

People often have preconceptions regarding ability and a person's age or gender, so look at one of these factors. If you choose age, then produce two identical versions of the CV, differing only in terms of the person's specific age. The applicant's name could be an extraneous variable in this study. How will you control this EV?

Identify and justify an appropriate participant sample and sampling method.

Half of the participants should see the 'young' CV version, and be asked to rate the suitability of the person for the vacancy. You will need to devise a suitable rating scale for this and a clear set of instructions for participants to follow.

The remaining participants will rate the 'older' candidate.

Identify, explain and justify the experimental design used in your study. Is it repeated measures, independent groups or matched pairs? Would it have been possible to use a different design to the one you have used? Explain your answer.

Summarise your findings using descriptive statistics, perhaps a table and a graph.

In two or three paragraphs, explain the implications of your findings with regard to any age bias you may or may not have found.

Activity 7: investigating stress

People often report feeling high levels of stress at certain points in their lives. Students, for example, often feel stressed in the run up to examinations.

Your task is to devise a self-report measure to try and find the possible reasons for examination stress in AS/A Level students or GCSE students.

You should devise a questionnaire asking students to list and briefly describe possible reasons for examination stress in students.

You should emphasise in your brief that their answers may not necessarily be a reflection of their own stressors and that their answers will be confidential and anonymous. Write a set of ethically sound procedures to explain how this will be achieved.

Decide upon a sample of students. Informed consent must be addressed. If you decide to sample GCSE students, for example, you must gain consent from parents or those in loco parentis as well as from the students themselves. Explain why and how this will be done and evidenced.

Identify and justify whether the students' responses will generate qualitative or quantitative data. Identify one strength and one weakness of the data type you have collected in this investigation.

From the answers given, think about how you could summarise these to generate a suitable graph. This could include identifying types or categories of stressor. You could then calculate the percentage of students who identified these as potential stressors.
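Turning categorised responses into percentages for the graph is simple counting. A sketch with invented, already-categorised answers (here each student names one main stressor):

```python
from collections import Counter

# Hypothetical categorised responses, one main stressor per student
responses = ["workload", "fear of failure", "workload", "parental pressure",
             "workload", "fear of failure", "time management", "workload"]

counts = Counter(responses)
n = len(responses)
for stressor, count in counts.most_common():
    print(f"{stressor}: {100 * count / n:.1f}% of students")
```

If students can list several stressors each, you would instead calculate the percentage of students mentioning each category, so the percentages need not sum to 100.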

Ask another student (who was not involved with your investigation) to interpret and describe the results using your graph. This will tell you whether your graph is a clear summary of your results.

Write two or three paragraphs to consider the implications of your findings. This might, for example, be ways of helping to reduce examination stress.

Activity 8: investigating social development

It has often been suggested that small animals, including humans, are born with certain physical features (such as large eyes) that encourage others to take care of them.

You will design a study to see whether babies look cuter when their eyes are open compared to when their eyes are closed.

You will need two photographs – one of you as a baby/toddler with your eyes wide open and one of you at a similar age with your eyes closed. If you are creative, you could use the same photograph manipulated in a photographic software programme.

Only the face should be visible. Explain the methodological reasons for using the same photograph and two other controls you consider relevant to this investigation.

These might, for example, include a justification of the size of the photograph or whether it is in colour or black and white.

Explain the ethical reasons for using photos of yourself in this study.

You will then ask people to rate the cuteness of the 'two' babies using an independent groups (unrelated) design. How will you allocate people to the 'open eyes' and 'closed eyes' conditions? Justify your answer.

Devise a suitable 'cuteness' rating scale for this study. Justify how long you will give participants to rate the photograph. Explain why participants will not be given unlimited time to give their ratings. Generate an appropriate set of instructions, a brief and debrief for use in this study.

Produce an appropriate graph from the data collected. Which statistical test would be appropriate for analysing this data? Justify your choice. Explain whether the test you have chosen is parametric or non-parametric.

Produce an abstract (summary) which could be used when writing up this study. Try to keep this to a maximum of 200 words, but include reference to: the aim of the study, theory behind the study, how it was tested, participants, summary of findings and a conclusion.

Activity 9: investigating food preference

Many theories have been offered to explain food preference in humans, some biological and others based on environmental influence. For example, it is said that more people are now choosing to eat vegetarian diets than ever before.

You will carry out a study to record:

  • whether more males or females are vegetarian
  • how long male and female participants have been vegetarian (in an attempt to identify which gender has been vegetarian the longest).

Participants in this study should be over 16 years of age. Explain why.

Design your study to gain participants using volunteer sampling. How will you achieve this? Outline the main methodological problems arising from using a volunteer sampling method for this investigation. Outline and justify a better way of sampling in this study which could contribute to more valid results.

Decide whether this will take the form of written responses to a simple questionnaire or a verbal survey of participants. Design and justify your materials accordingly.

Whichever method you choose, you should plan and produce an appropriate set of procedures for your investigation. This way, you will know exactly what you intend to do and/or say to participants and what they have to do/say during the investigation.

A 'Procedures' section, when written up, would normally be:

  • written in the past tense
  • inclusive of all steps and 'verbatim' instructions (find out what this means)
  • written in the third person.

So although you must plan this prospectively, you must write it up retrospectively. Try doing this by writing up your 'Procedure' in this way.

Identify and justify the type of data you will collect in each part of the study.

Produce a summary of your findings using appropriate descriptive statistics.

Include a written conclusion of your findings.

Eating behaviour can be a sensitive topic for some people. Perhaps their diet is governed by illness or other personal factors. Outline at least two ways in which you will ensure that your participants are not placed in a position of psychological discomfort by taking part in your study.

Activity 10: investigating validity

When psychologists design studies, they have to consider the validity of their research. That is, are they really measuring what they set out to measure?

You will be considering issues of validity in this exercise when you attempt to design the materials for a study intending to measure social influence. Obedience is one form of social influence; conformity is another.

In small groups, collect and agree upon ten celebrity faces for use in this investigation. What will you need to consider when choosing the faces for this study? Perhaps how well known the person is or their gender.

Explain how these and other factors might impact upon the validity of your study.

You will need to duplicate these photographs. One set of the faces will remain 'whole', whilst the other set should only show the eyes of the same celebrities.

One way of testing validity is through 'face validity'. In this case, the researchers would be asking whether the measure looks, at face value, as though it measures obedience.

Each group should generate a set of questions intended to measure obedience. The class should then look at all of the questions generated and explain whether the questions designed to test obedience actually look as though they do this.

What if one of the questions reads 'Your neighbour asks you to move her dustbin. Do you?' or 'All of your friends make a noise in the library. Do you join in?'

Are these valid measures of obedience or something else? Justify your answer.

If such questions were to be used in a study, how would the participants' responses be recorded? Would it be through yes/no answers or some other measure? Describe and justify a way of measuring obedience other than through yes/no responses.

Identify and explain at least two potential methodological issues which might arise in such a study of obedience.

Identify and explain at least two ethical issues which might arise. One of these should relate to confidentiality.

As an alternative task, you could start to look around your school, supermarkets etc for posters/signs which encourage obedience. Categorise the techniques used, eg obedience through fear, and consider which technique is more likely to cause obedience in the real world.

Activity 11: investigating holism v reductionism

One of many important debates in psychology is that of Holism versus Reductionism. In Cognitive Psychology, for example, this can be seen in theories of face recognition. The holistic view would argue that we need to see a whole face in order to identify it. The reductionist view argues that single features alone are sufficient.

You will carry out an investigation to test holism and reductionism in face recognition.

Design/participants

Using an independent groups (unrelated) design, randomly allocate 10 people to Condition 1 (whole face) and 10 people to Condition 2 (eyes only). Explain why the independent groups design would be used. Could you use a different design in this study?
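Random allocation can be achieved by shuffling the participant list and splitting it in half; a sketch with placeholder participant IDs:

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # placeholder IDs P01..P20

random.shuffle(participants)      # put participants into a random order
condition_1 = participants[:10]   # whole face
condition_2 = participants[10:]   # eyes only
print("Condition 1 (whole face):", condition_1)
print("Condition 2 (eyes only):", condition_2)
```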

The participants simply have to name the celebrity. You will time them using a stopwatch to see how long it takes to name all ten celebrities in each condition (whole or eyes). Devise a suitable system for accurately recording total reaction time.

Carry out a pilot study with two or three people prior to the main study in order to test and improve the procedure and/or materials. You may, for example, have to consider what you will do if the participant answers incorrectly, or takes a long time to answer.

Produce a summary table and graph to summarise the findings from your study. Which side of the debate seems to be supported? Explain your answer.

Name and justify an appropriate parametric test which could be used to analyse your data. Name and justify the use of an alternative non-parametric test.

Activity 12: investigating honesty

Some researchers believe that when we are being truthful, our eyes look to the left, but if we are not being honest, we gaze to the right, and that this process is reversed for left handed people. Other researchers are not so sure.

Your task is to design two ways in which this could be tested.

Design an observational study which could be carried out in a sixth form setting using stratified sampling.

You will need to describe:

  • how the researcher could consistently determine gaze direction
  • the questions asked in order to elicit truthful and non-truthful answers
  • the type of observation undertaken and why
  • the ethical issues associated with a study of this nature
  • how the stratified sample would be achieved.
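One way of achieving the stratified sample is to take a proportional random sample from each stratum (here the strata and numbers are invented, using year group as the stratifying variable):

```python
import random

# Hypothetical sixth-form population split into strata by year group
strata = {
    "Year 12": [f"Y12-{i}" for i in range(60)],  # 60 students
    "Year 13": [f"Y13-{i}" for i in range(40)],  # 40 students
}
sample_size = 20
population = sum(len(members) for members in strata.values())

sample = []
for name, members in strata.items():
    share = round(sample_size * len(members) / population)  # proportional share
    sample.extend(random.sample(members, share))

print(len(sample))  # 20: 12 from Year 12 and 8 from Year 13
```

Each stratum is therefore represented in the sample in the same proportion as in the population, while selection within each stratum remains random.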

Design a second experiment in which eye gaze direction could be measured through the use of more physiological means such as an EOG (look online for this).

  • the ethical issues associated with a study of this nature and how they differ from the observation study described above
  • an appropriate brief and debrief and how these might differ from those given in the observation study described above
  • the type of experiment undertaken and why. For example, would this be a lab experiment or a field experiment, and why?

In order to practise the skill of reference writing, find three references for studies which have investigated this topic. Include them here in an academically accepted format.

Activity 13: investigating reliability

When psychologists design studies, they have to consider the reliability (consistency) of their findings. That is, if someone else were to carry out the same study, would they get the same or very similar results?

The Psychology teacher should select a 4 or 6 mark memory question from a past Psychology examination paper. Students will be answering and then double marking this. The mark scheme should be kept confidential at this point.

Students should then consider and devise a system whereby they are each randomly allocated an identification number. This will replace their names on their answer to the question they are about to answer. Justify this in terms of appropriate ethical and/or methodological issues.

In silence, students should then write their answer to the question set. An appropriate time limit should be set for this task. Students should consider what this should be, basing their decision on the amount of time you would normally expect to allocate to a 4 (or 6) mark question.

The answer papers should then be randomly allocated to other members of the class for marking according to the mark scheme.

This process should be done twice. This will allow you to consider inter-marker reliability regarding the marking of the students' answers.

As a possible control of potential EVs, explain why it is important to ensure that the second marker does not know or see the mark awarded by the first marker. Describe how this could be achieved.

The two marks awarded to each anonymous student should then be examined. What would you expect to find if the marking is reliable? Briefly outline the results and what these mean in terms of inter-marker reliability.

Explain how inter-marker reliability could be checked statistically.
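If the marks can be treated as interval data, one statistical check is to correlate the first and second marks: a high positive coefficient suggests good inter-marker reliability. A sketch of Pearson's r with invented marks (Spearman's rho would be the non-parametric alternative):

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

first_marker = [4, 3, 6, 2, 5, 4]    # hypothetical marks out of 6
second_marker = [4, 2, 6, 2, 5, 3]
print(pearson_r(first_marker, second_marker))
```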

Produce an appropriate graphical display of the findings.

As a potential discussion/improvement point, briefly explain why the answers might have been better word processed than hand written.

Document URL https://www.aqa.org.uk/resources/psychology/as-and-a-level/psychology/teach/practical-activities-for-research-methods

Last updated 07 Sep 2020


Other Teaching Psychology Sites

YouTube Channel for Psychology Videos (including many social psychology videos) - Steven Ross has started an excellent compendium of video clips from movies, TV shows, advertisements, news programs, and other sources. It is well organized in playlists to find video clips in many areas of psychology. Luvze - (previously Science of Relationships) Ben Le, Gary Lewandowski, Tim Loving, and Marci Gleason have created a website entitled Science of Relationships. It will keep us up to date on research in the field, answer questions submitted to the site, and provide observations on relationships in pop culture and on references in the media. They will even have a "Fact Checker" section that will verify and debunk claims. As they say, it will be their version of Mythbusters, "but without stuff exploding." GoCognitive Web Project - "The goal of the GoCognitive web project is the creation of an online center for teaching in cognitive psychology and cognitive neuroscience. We are providing online demonstrations of cognitive and neurological phenomena as well as video content related to research in neuroscience. Many of the demonstrations are downloadable and the content modifiable so that they can be used in small-scale research projects by students." Teaching Research and Statistics in Psychology - TeachPsychScience.org is a new site created by Gary Lewandowski, Natalie Ciarocco, and David Strohmetz. "TeachPsychScience.org provides links to online demonstrations, descriptions of class demonstrations, suggestions for class/lab activities, class assignments, lecture materials, and/or student exercises." The site was supported by funding from the APS Teaching Fund, a great place to go to create such resources. Marianne Miserandino's Personality Pedagogy - an excellent Wiki-based site that presents lots of resources for instructors of personality and related courses -- the Wiki format also allows you to add to the site. Correlation or Causation? 
- This page contains a large number of science news headlines that link to the actual popular press articles. Unfortunately, many of the headlines confuse correlation and causation. Some activities/assignments are included on the page which describe how you can use this resource in a variety of courses.  

Other Resources

Jon Mueller's Authentic Assessment Toolbox - a how-to text on creating authentic tasks, rubrics and standards for measuring and improving student learning Assessments of Information Literacy available online Jon's Book Assessing Critical Skills We ask students to memorize reams of information that they will rarely if ever use again, but we often fail to teach them the critical skills needed to meet the daily challenges of the 21st century, skills such as information literacy, collaboration, metacognitive reflection, and self-assessment. One reason we have shied away from teaching such skills is that we are unsure how to assess them. Thus, in this text I offer a detailed set of steps and examples, including rubrics, of how to summatively and formatively assess the skills our students need. For K-16 educators. Table of Contents

ORIGINAL RESEARCH article

Active learning in research methods classes is associated with higher knowledge and confidence, though not evaluations or satisfaction.

Peter J. Allen*

  • School of Psychology and Speech Pathology, Curtin University, Perth, WA, Australia

Research methods and statistics are regarded as difficult subjects to teach, fueling investigations into techniques that increase student engagement. Students enjoy active learning opportunities like hands-on demonstrations, authentic research participation, and working with real data. However, enhanced enjoyment does not always correspond with enhanced learning and performance. In this study, we developed a workshop activity in which students participated in a computer-based experiment and used class-generated data to run a range of statistical procedures. To enable evaluation, we developed a parallel, didactic/canned workshop, which was identical to the activity-based version, except that students were told about the experiment and used a pre-existing/canned dataset to perform their analyses. Tutorial groups were randomized to one of the two workshop versions, and 39 students completed a post-workshop evaluation questionnaire. A series of generalized linear mixed models suggested that, compared to the students in the didactic/canned condition, students exposed to the activity-based workshop displayed significantly greater knowledge of the methodological and statistical issues addressed in class, and were more confident about their ability to use this knowledge in the future. However, overall evaluations and satisfaction between the two groups were not reliably different. Implications of these findings and suggestions for future research are discussed.

Introduction

A cornerstone of educational practice is the notion that the more engaged the learner, the more interested, passionate and motivated they will become, and the better the outcome will typically be vis-à-vis their learning. This causal chain, of sorts, thus predicts that higher rates of student retention, better grades, and higher levels of satisfaction and enjoyment are more likely to follow when a student is genuinely curious and involved in their study. However, student engagement appears to be more difficult to achieve in some areas of study compared to others. For instance, within psychology, research methods and statistics are widely regarded as ‘difficult’ subjects to teach (e.g., Conners et al., 1998 ). Student attitudes toward these topics are often negative ( Murtonen, 2005 ; Sizemore and Lewandowski, 2009 ), and their interest in them is low ( Vittengl et al., 2004 ; Rottinghaus et al., 2006 ). This lack of engagement is likely to impact student outcomes, contributing to poorer grades and higher rates of attrition. However, a basic understanding of research methods is essential in order for students to gain a fuller appreciation of the literature underpinning their later academic or professional careers. Thus, there appears to be a clear and growing need to identify teaching strategies that are maximally effective at removing barriers to learning research methods. This view is echoed by recent calls to reform traditional methods for teaching research methods and statistics, and it finds support from recent research. For example, in the Guidelines for Assessment and Instruction in Statistics Education (GAISE; Aliaga et al., 2005 ) college report, published by the American Statistical Association, a number of recommendations are highlighted with regard to the teaching of statistics in higher education.
These recommendations include emphasizing the development of statistical literacy and thinking, making use of real data, focusing on conceptual understanding (rather than procedures or formulae), promoting active learning, making use of technology and administering assessment appropriate to evaluating learning in the classroom.

The view that teaching research methods and statistics may require a particular kind of approach is further supported by a recent meta-analysis by Freeman et al. (2014) . In their analysis, traditional methods of teaching statistics (e.g., lecturing to classes) were shown to be less effective, in terms of student exam performance and student satisfaction and enjoyment, than in other subjects of study. The challenge facing teachers of statistics and research methods, therefore, is to make research methods more applied, relevant and engaging for students, whilst simultaneously improving students’ understanding of statistics, their grades, and attendance rates ( Hogg, 1991 ; Lovett and Greenhouse, 2000 ). In this article, we focus on the possible benefits of implementing two of the recommendations highlighted in the GAISE report. These are: (1) the use of real data, and (2) the use of an active learning methodology. We describe a study that examines the ways in which incorporating these recommendations into the teaching of research methods and statistics may positively affect student outcomes.

When applied to the teaching of research methods, active learning approaches typically involve students carrying out research, rather than merely reading about, or listening to instructors talk about it. Active learning in research methods and statistics classes may include taking part in demonstrations designed to illustrate methodological and statistical concepts, participating in authentic research, and working with data the students have been responsible for collecting. A great deal of work has explored the impact of active learning using ‘hands-on’ demonstrations of both statistical processes (e.g., Riniolo and Schmidt, 1999 ; Sciutto, 2000 ; Christopher and Marek, 2002 ; Fisher and Richards, 2004 ) and methodological concepts (e.g., Renner, 2004 ; Eschman et al., 2005 ; Madson, 2005 ). Importantly, the use of active learning methods in research methods and statistics appears to be successful at increasing levels of satisfaction and enjoyment and reducing failure rates ( Freeman et al., 2014 ). Against this backdrop of findings, it might then seem reasonable to assume that the effects of active learning would further contribute toward positive outcomes, for example on exam performance. However, this is not found to be the case. While students may report higher levels of enjoyment and usefulness of active learning demonstrations, these are not consistently associated with more beneficial learning outcomes ( Elliott et al., 2010 , though see also Owen and Siakaluk, 2011 ). Put another way, the subjective evaluation of one’s enjoyment of a subject does not bear a direct relationship to the amount of knowledge acquired, or the extent to which one can apply knowledge in a given area (see e.g., Christopher and Marek, 2002 ; Copeland et al., 2010 ).

With regard to the use of real datasets in class exercises and assessments, this too has been proposed to hold a number of advantages ( Aliaga et al., 2005 ). The advantages include: increased student interest; the opportunity for students to learn about the relationships between research design, variables, hypotheses, and data collection; the ability for students to use substantive features of the data set (e.g., the combination of variables measured, or the research question being addressed) as a mnemonic device to aid later recall of particular statistical techniques; and the added benefit that using real data can provide opportunities for learning about interesting psychological phenomena, as well as how statistics should be calculated and interpreted ( Singer and Willett, 1990 ). Additionally, a number of studies have shown that when real, class-generated data are used, students report higher levels of enjoyment, an enhanced understanding of key concepts, and are likely to endorse the use of real data in future classes (see e.g., Lutsky, 1986 ; Stedman, 1993 ; Thompson, 1994 ; Chapdelaine and Chapman, 1999 ; Lipsitz, 2000 ; Ragozzine, 2002 ; Hamilton and Geraci, 2004 ; Marek et al., 2004 ; Morgan, 2009 ; Neumann et al., 2010 , 2013 ).

Overall, the benefits of using active learning and real data within research methods and statistics classes show much promise. However, to better understand how the implementation of these strategies results in positive outcomes, further empirical investigation is needed. First, we note a lack of research that has simultaneously targeted outcomes of satisfaction, evaluation and knowledge (i.e., performance). Each of these outcomes likely plays an important role in influencing student engagement. In this study we assess students on each of these components. Second, we eliminate a potential design confound that may have affected previous research, by ensuring highly similar contexts in both our intervention and our control group. The same instructors were used in both instances. In this way, we may be more confident that any effects we observe are more likely due to our manipulation (i.e., active learning versus control) than to student-instructor interactions.

Motivated by a desire to increase student engagement in our undergraduate statistics and research methods courses, we developed a series of activities for a 1.5-h workshop. In each of these activities, students participated in a computer-based psychological experiment, engaged in class discussions and activities around the methods used in the experiment, and then used data generated by the class to run a range of data handling and statistical procedures. In this paper, we describe an evaluation of the first of these workshop activities in terms of (a) its subjective appeal to students; and, (b) its pedagogic effectiveness. It was hypothesized that, compared to control participants who were provided with the same content, but delivered using a didactic presentation and canned dataset, students who participated in the activity-based (active learning + real data) workshop would (H1) evaluate the workshop more favorably; (H2) report higher levels of satisfaction with the workshop; (H3) achieve higher scores on a short multiple-choice quiz assessing their knowledge of key learning concepts addressed in the workshop; and (H4) report significantly higher confidence about their ability to demonstrate skills and knowledge acquired and practiced in the workshop.

Materials and Methods

A non-equivalent groups (quasi-experimental) design was employed in this study, with intact tutorial classes randomly assigned to the two workshop versions. These workshop versions were equivalent in content, but differed in delivery format. The activity-based version of the workshop began with a computer-based experiment in which the students participated, and contained activities that required students to analyze data collected in class. The canned dataset version of the workshop differed in that it began with a short description of the computer-based experiment (presented by the same instructors as the activity-based workshop), but was otherwise equivalent to the activity-based workshop. As much as possible, the workshops were identical in all other respects. The independent variable in this study was workshop type, of which there were two levels: activity-based and didactic/canned. The four dependent variables were: (1) evaluations, (2) overall satisfaction, (3) knowledge, and (4) confidence.

Participants

Participants were recruited from a participant pool, within which students are required to participate in at least 10 points worth of research during each semester (or complete an alternate written activity). One point was awarded for participating in the current study. A total of 39 participants were obtained for final analysis. Initial comparisons between the activity-based group ( n = 25; M age = 22.43, SD = 4.95; 68% female; M final grade = 61.12, SD = 14.54) and the didactic/canned group ( n = 14; M age = 25.93, SD = 12.27; 78.6% female; M final grade = 61.42, SD = 11.90) indicated that there were no reliable group differences in age, t (15.59) = -1.22, p = 0.230, d = 0.37, gender distribution, χ 2 (1, N = 39) = 0.50, p = 0.482, φ = 0.11, or final semester grades, t (36) = -0.066, p = 0.948, d = 0.02.

This research complies with the guidelines for the conduct of research involving human participants, as published by the Australian National Health and Medical Research Council ( The National Health, Medical Research Council, the Australian Research Council, and the Australian Vice-Chancellors’ Committee [NH&MRC], 2007 ). Prior to recruitment of any participants, the study was reviewed and approved by the Human Research Ethics Committee at Curtin University. Consent was indicated by the submission of an online evaluation questionnaire, as described in the participant information immediately preceding it.

Materials and Measures

The activity-based version of the workshop commenced with students participating in a short computer-based experiment designed to examine the effects of processing depth on recall. Class members were randomized to one of two processing conditions, imagine and rehearse , then asked to remember a list of 12 words presented on screen at a rate of one word every 2 s. Members of the imagine condition were encouraged to engage in deep processing by being instructed to “try to imagine each concept as vividly as possible such that you are able to remember it later.” Members of the rehearse condition were encouraged to engage in shallow processing by being instructed to “try to rehearse each word silently such that you are able to remember it later.” All students then completed multiplication problems for 150 s as a distractor task. Finally, all students were presented with 24 words, 12 of which were ‘old’ (i.e., appeared on the original list) and 12 of which were ‘new’. They were asked to indicate whether each of the 24 words was ‘old’ or ‘new’ by pressing a relevant keyboard button.
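As a concrete illustration, the recognition phase can be scored as the proportion of the 24 old/new judgements a student gets right. The sketch below is hypothetical: the actual task was implemented in Java, and the word lists and responses shown here are invented.

```python
# Illustrative scoring for the old/new recognition test described above.
# The real task was a Java program; these words and responses are made up.

def score_recognition(responses, old_words):
    """Return the proportion of test words classified correctly.

    responses: dict mapping each test word to an 'old' or 'new' judgement.
    old_words: set of words that appeared on the original study list.
    """
    correct = sum(
        1 for word, judgement in responses.items()
        if judgement == ("old" if word in old_words else "new")
    )
    return correct / len(responses)

studied = {"apple", "river", "candle"}  # study-phase words (abbreviated)
responses = {"apple": "old", "river": "old", "candle": "new",
             "window": "new", "marble": "old", "forest": "new"}
accuracy = score_recognition(responses, studied)  # 4 of 6 judgements correct
```

Classification accuracy computed in this way serves as the outcome students go on to analyze later in the workshop.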

This task was developed in Java by the second author, as existing commercial software packages were unsuitable for our purposes due to high annual licensing fees (e.g., St James et al., 2005 ), or an insufficient feature set (e.g., Francis et al., 2008 ). It was hosted on a private webserver, and accessed by students using a standard web browser (e.g., Firefox). The data generated by each student were saved to a MySQL database accessible to the class tutor from his/her networked workstation. Following their participation, students were provided with a brief written summary of the experiment, and asked to work together to address a series of questions about its key methodological features. These questions prompted students to identify and operationalize independent and dependent variables, write research and null hypotheses, visualize experimental designs using standard notation, and consider the purpose of randomization.

While the students worked on these questions, the tutor downloaded the class data and collated them into an SPSS data file that was subsequently uploaded to a network drive for students to access. After a brief class discussion around the methodology of the experiment, students were directed to open the SPSS data file, and commence work on a series of questions that required various data handling techniques and statistical analyses to answer. Specifically, students were required to identify the appropriate statistical test to compare the two conditions on classification accuracy, and then run, interpret and report (in APA style) an independent samples t -test (including assumption testing, and an effect size). The workshop concluded with a class discussion around the statistical analyses, findings and interpretation.
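The core of the analysis students performed in SPSS can be sketched in Python. The group scores below are invented for illustration, and the p-value step (which requires the t distribution) is omitted; only the t statistic and Cohen’s d are computed.

```python
# A minimal sketch of the workshop's analysis (the class itself used SPSS).
# Computes the classical independent samples t statistic and Cohen's d for
# two hypothetical groups of recognition-accuracy scores.
from statistics import mean, variance
from math import sqrt

def independent_t_and_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    ma, mb = mean(group_a), mean(group_b)
    va, vb = variance(group_a), variance(group_b)   # sample variances
    # Pooled variance, as used by the classical (equal-variance) t-test
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / sqrt(pooled * (1 / na + 1 / nb))
    d = (ma - mb) / sqrt(pooled)                     # Cohen's d
    return t, d

imagine  = [0.92, 0.88, 0.96, 0.83, 0.90]   # deep-processing condition
rehearse = [0.79, 0.75, 0.88, 0.71, 0.82]   # shallow-processing condition
t, d = independent_t_and_d(imagine, rehearse)
```

In the workshop itself, students would additionally test assumptions (e.g., homogeneity of variance) and report the full result in APA style.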

The didactic/canned version of the workshop was identical to the activity-based version, except it began with a short description of the computer-based experiment (presented by the class tutor with the aid of PowerPoint slides), and required students to analyze a canned data set, rather than class-generated data.

Evaluation Questionnaire

The online evaluation questionnaire contained five sections, measuring the four DVs and capturing key demographic data. It is reproduced in full in the Appendix (available as Supplementary Material Data Sheet 1).

Section 1 (evaluations)

Section 1 of the online questionnaire contained 13 items assessing students’ evaluations of the workshop. Although there are numerous measures that have been developed to allow students to evaluate units and courses, a review of the literature indicated that there are currently no instruments suitable for evaluating specific activities embedded within a unit or course. Consequently, this measure was developed specifically for the purposes of the current research (although inspired by the single-item measures that are frequently used in evaluations of teaching activities reported elsewhere). Participants responded to each item on a 7-point scale ranging from 1 ( Strongly disagree ) to 7 ( Strongly agree ), and examples of items on this measure include “this workshop was useful” and “this workshop was an effective way of teaching research methods and statistics.” Although a small sample size limited our ability to examine the factor structure of this measure (for example, Pett et al. (2003) suggest a minimum of 10–15 cases per item for exploratory factor analysis), Cronbach’s alpha was 0.96, indicating that it was internally consistent. Responses to the 13 items were summed to provide an overall index of how favorably students rated the workshop.
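Cronbach’s alpha for a summed scale like this one is computed from the item variances and the variance of the total scores. A pure-Python sketch with invented responses (a 4-item scale for brevity; the formula is identical for the 13-item measure):

```python
# Cronbach's alpha from raw item responses; the data below are made up.
from statistics import variance

def cronbach_alpha(items):
    """items: list of per-item response lists, aligned by respondent."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's sum
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Four items answered by five respondents on a 7-point scale (illustrative)
items = [[7, 6, 5, 6, 4],
         [6, 6, 4, 7, 4],
         [7, 5, 5, 6, 3],
         [6, 7, 4, 6, 4]]
alpha = cronbach_alpha(items)
```

Values approaching 1, as reported for this scale (0.96), indicate that responses to the items covary strongly, i.e., high internal consistency.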

Section 2 (satisfaction)

The second section of the online questionnaire was a single item measure of overall satisfaction with the workshop, which respondents answered on a scale ranging from 1 ( Very Dissatisfied ) to 10 ( Very Satisfied ). The correlation between this single item measure and the sum of responses to the 13-item evaluation scale was r = 0.91, suggesting that they measured overlapping constructs.
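The figure reported here is a standard Pearson correlation between each respondent’s single-item satisfaction rating and their summed 13-item evaluation score. A minimal sketch with invented data:

```python
# Pearson's r between two paired score lists; the data below are made up.
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

satisfaction = [8, 6, 9, 4, 7]        # single item, 1-10 scale
evaluation   = [78, 61, 85, 42, 70]   # summed 13-item scale (range 13-91)
r = pearson_r(satisfaction, evaluation)
```

A correlation of this magnitude is what motivates the conclusion above that the two measures tap overlapping constructs.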

Section 3 (knowledge)

Five multiple-choice questions were used to assess knowledge of the key learning outcomes addressed in the workshop. Each question provided four response options, of which only one was correct, thus total scores on this measure ranged from 0 to 5.

Section 4 (confidence)

This section of the questionnaire asked respondents to indicate on a 4-point scale ranging from 1 ( Not at all confident ) to 4 ( Very confident ) their confidence regarding their ability to apply seven specific skills developed in the workshop, assuming access to their notes and textbook. For example, “run and interpret an independent samples t -test using SPSS.” Again, the small sample size limited our ability to examine the factor structure of this measure, although Cronbach’s alpha was 0.84, indicating that it was internally consistent. Responses to the items on this measure were summed to provide an overall index of student confidence.

Section 5 (demographics)

The final section of the evaluation questionnaire asked students to specify their age, gender, and the day/time of the workshop they attended. The day/time information was used to assign participants to the levels of the independent variable.

Procedure

Before the start of semester, tutorial classes were block-randomized to the two workshop versions. The workshop was then delivered as part of the normal tutorial schedule. Participants were provided with an information sheet outlining the nature of the current study, and it was stressed that their involvement was (a) entirely voluntary, and (b) anonymous to the unit’s teaching staff. At the end of the workshop, students were reminded about the research, and asked to complete the online evaluation questionnaire (linked from the unit’s Blackboard site) within 48 h of the class finishing. Prior to accessing the online questionnaire, participants were presented with an online version of the information sheet hosted on our school website, as recommended by Allen and Roberts (2010) .

Each hypothesis was tested with a Generalized Linear Mixed Model (GLMM), implemented via SPSS GENLINMIXED (version 22), with an alpha level of 0.0125 (to protect against the inflated risk of making Type 1 errors when conducting multiple comparisons on a single data set), and robust parameter estimation. GLMM is preferable to a series of independent samples t -tests or ordinary least squares (OLS) regression analyses, as it can accommodate dependencies arising from nested data structures (in this instance, 39 students nested in seven classes, facilitated by three tutors), non-normal outcome variables, and small, unequal group sizes. In each GLMM, there were two random effects (class and tutor) 1 and one fixed effect (condition) specified. A normal probability distribution was assumed for each outcome variable, and each was linked to the fixed effect with an identity function.
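The 0.0125 alpha level corresponds to a Bonferroni-style correction: the conventional 0.05 divided by the four hypothesis tests (one per outcome variable). The sketch below shows the familywise error rate such a correction guards against, under the simplifying assumption of independent tests:

```python
# Why alpha = 0.0125: a Bonferroni correction across four hypothesis tests.

def bonferroni_alpha(nominal_alpha, n_tests):
    """Per-test alpha that bounds the familywise error at nominal_alpha."""
    return nominal_alpha / n_tests

def familywise_error(per_test_alpha, n_tests):
    """Chance of at least one Type 1 error across independent tests."""
    return 1 - (1 - per_test_alpha) ** n_tests

corrected = bonferroni_alpha(0.05, 4)            # 0.0125, as in the study
uncorrected_risk = familywise_error(0.05, 4)     # ~0.185 without correction
corrected_risk = familywise_error(corrected, 4)  # back under 0.05
```

Without the correction, roughly an 18.5% chance of at least one false positive would accumulate over the four tests; the corrected per-test alpha keeps the familywise risk below 0.05.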

Results

The fixed effects from the four GLMMs are summarized in Table 1 , where it can be seen that members of the activity-based condition scored significantly higher than members of the didactic/canned condition on the knowledge and confidence measures, but not the evaluation and satisfaction measures. When indexed using Hedges’ g, the knowledge and confidence effects could be characterized as ‘large’ and ‘small,’ respectively.

TABLE 1. Summary of differences between the two conditions on the four outcome variables.

Discussion

We have focused on the implementation of two recommended strategies for teaching research methods and statistics: using real data, and following an active learning approach. Our results showed no reliable differences between groups in their rated evaluation of (H1), or satisfaction with (H2), the workshops. Those participants in the activity-based workshop were statistically no different in their views to those in the didactic/canned workshop. Indeed, it is interesting to note that both groups rated the workshops to be below-average (i.e., below the neutral-point) on the evaluation and satisfaction measures, suggesting that their views regarding the workshops were somewhere between ambivalent and negative. Overall, these findings were not as we predicted. Rather, we expected students in the activity-based workshop to find more satisfaction with their workshop and evaluate their learning experience more favorably. In-line with our predictions, however, was the finding that on the outcome measure of knowledge/performance, the activity-based group did significantly outperform those in the didactic/canned workshop (H3). Thus, while the groups did not differ in their apparent engagement, they nevertheless achieved different levels of knowledge. Also noteworthy was the finding that the activity-based group were reliably different to the didactic/canned group in their reported levels of confidence to later apply the skills developed in the workshop (H4).

Seemingly, the results of this study sit at odds with the ‘causal chain’ we described in the introduction. One possible explanation is that for student satisfaction to be positively affected, students need to see the results of their engaged learning first, and perhaps these positive attitudes require time to accumulate. In our study, participants did not have this opportunity. A more interesting possibility is that rather than greater engagement being instrumental in promoting greater levels of satisfaction and enjoyment, which in turn promote learning, one’s level of satisfaction is in fact rather separate to the process of learning. If so, this would indicate that a combination of teaching strategies is needed to produce positive outcomes and student engagement. Accordingly, our results would be consistent with previous research that suggests exposure to research methods and statistics in an engaging environment can improve students’ knowledge without necessarily affecting their attitudes (e.g., Sizemore and Lewandowski, 2009 ). This latter interpretation offers up a variety of potentially interesting research avenues. Minimally, the results of this study caution against tailoring content in educational curricula based on students’ reported levels of satisfaction.

Limitations

While the results of the current study raise intriguing questions about the relationship between academic outcomes and self-reported student satisfaction and evaluations, it is important to note a number of possible limitations to the approach we took. The first of these concerns the relatively small, unequal number of participants in the activity-based ( n = 25) versus canned/didactic ( n = 14) groups. Clearly, to be more confident in our results, this study requires replication with a larger, more evenly spread sample. A second sampling limitation concerns the randomization of intact groups to conditions. Ideally, we would have randomized individual participants to either the activity-based or didactic/canned workshop, allowing for a true experimental test of each hypothesis. However, this was not possible because students self-select into classes based on personal preferences and commitments.

A further possible limitation concerns the analytical approach we chose. Had we opted for another approach, for example independent samples t -tests, no reliable differences would have emerged ( p s = 0.385–0.839) and the implications of our study would be quite different. However, because participants were recruited across a number of tutorial groups ( n = 7) supervised by a number of instructors ( n = 3), we deemed the use of GLMM procedures to be most appropriate. This is because GLMM is aptly suited to dealing with hierarchical data, and clustering effects that may have been present within nested groups of tutorials and instructors. GLMM has the further advantage over the t -test in that it may be more robust when dealing with unequal sample sizes ( Bolker et al., 2009 ). Although our analysis showed no such clustering effects, in light of the sampling limitations, GLMM remained most suited to the data.

This paper describes the implementation and quasi-experimental evaluation of a relatively short (1.5 h) class activity in which students participated in an authentic computer-based psychological experiment, engaged in class discussion around its methods, and then used class-generated data to run a range of data handling procedures and statistical tests. Results indicated that students who participated in this activity scored significantly higher than participants in a parallel didactic/canned class on measures of knowledge and confidence, but not on overall evaluations or satisfaction. In contrast to the view that student satisfaction is paramount in achieving positive learning outcomes, the results of the current study suggest that, at least during some points in the learning process, one’s level of satisfaction has little effect. This would indicate that a combination of teaching strategies is needed to produce both positive outcomes and student engagement. Future research that employs large-scale, fully randomized experimental designs may have the best chance of revealing these strategies ( Wilson-Doenges and Gurung, 2013 ).

Author Contributions

PA conceived and designed the study and analyzed the data. PA and FB co-authored this manuscript. FB programmed the experimental task used as one level of the IV, wrote the documentation and spreadsheets used by the tutors to aggregate the data for class use, and contributed to the overall design of the study.

Funding

This research was supported by an eScholar grant awarded to the first author by the Centre for eLearning, Curtin University, Australia.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors would like to acknowledge Dr. Robert T. Kane from the School of Psychology and Speech Pathology at Curtin University for his statistical advice.

Supplementary Material

The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fpsyg.2016.00279

  • ^ Note that for five of the eight tests of random effects, the variances were negative, and consequently set at zero during analyses. For iterative procedures (e.g., maximum likelihood estimation), this can occur when the variance attributable to a random effect is relatively small, and the random effect is having a negligible impact on the outcome of the analyses. The remaining three random effects were non-significant, with Wald’s Z ranging from 0.298 to 0.955 ( p = 0.765 to 0.340). Despite their non-significance in the current context, the random effects of class and tutor were retained in our analyses, based on the common recommendation that non-independence of observations attributable to a study’s design ought to be routinely accounted for, regardless of the estimated magnitude of its impact ( Murray and Hannan, 1990 ; Bolker et al., 2009 ; Thiele and Markussen, 2012 ; Barr et al., 2013 ).

Aliaga, M., Cobb, G., Cuff, C., Garfield, J., Gould, R., Lock, R., et al. (2005). Guidelines for Assessment and Instruction in Statistics Education: College Report. Alexandria, VA: American Statistical Association.

Allen, P. J., and Roberts, L. D. (2010). The ethics of outsourcing survey research. Int. J. Technoethics 1, 35–48. doi: 10.4018/jte.2010070104

Barr, D. J., Levy, R., Scheepers, C., and Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: keep it maximal. J. Mem. Lang. 68, 255–278. doi: 10.1016/j.jml.2012.11.001

Bolker, B. J., Brooks, M. E., Clark, C. J., Geange, S. W., Poulsen, J. R., Stevens, M. H. H., et al. (2009). Generalized linear mixed models: a practical guide for ecology and evolution. Trends Ecol. Evol. 24, 127–135. doi: 10.1016/j.tree.2008.10.008

Chapdelaine, A., and Chapman, B. L. (1999). Using community-based research projects to teach research methods. Teach. Psychol. 26, 101–105. doi: 10.1207/s15328023top2602_4

Christopher, A. N., and Marek, P. (2002). A sweet tasting demonstration of random occurrences. Teach. Psychol. 29, 122–125. doi: 10.1207/S15328023TOP2902_09

Conners, F. A., Mccown, S. M., and Roskos-Ewoldsen, B. (1998). Unique challenges in teaching undergraduate statistics. Teach. Psychol. 25, 40–42. doi: 10.1207/s15328023top2501_12

Copeland, D. E., Scott, J. R., and Houska, J. A. (2010). Computer-based demonstrations in cognitive psychology: benefits and costs. Teach. Psychol. 37, 141–145. doi: 10.1080/00986281003626680

Elliott, L. J., Rice, S., Trafimow, D., Madson, L., and Hipshur, M. F. (2010). Research participation versus classroom lecture: a comparison of student learning. Teach. Psychol. 37, 129–131. doi: 10.1080/00986281003626862

Eschman, A., St James, J., Schneider, W., and Zuccolotto, A. (2005). PsychMate: providing psychology majors the tools to do real experiments and learn empirical methods. Behav. Res. Methods 37, 301–311. doi: 10.3758/BF03192698

Fisher, L. A., and Richards, D. St. P. (2004). Random walks as motivational material in introductory statistics and probability courses. Am. Stat. 58, 310–316. doi: 10.1198/000313004X5482

Francis, G., Neath, I., and VanHorn, D. (2008). CogLab on a CD: Version 2.0 , 4th Edn. Belmont, CA: Wadsworth.

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., et al. (2014). Active learning increases student performance in science, engineering, and mathematics. Proc. Natl. Acad. Sci. U.S.A. 111, 8410–8415. doi: 10.1073/pnas.1319030111

Hamilton, M., and Geraci, L. (2004). Converting an experimental laboratory course from paper and pencil to computer. Teach. Psychol. 31, 141–143. doi: 10.1207/s15328023top3102_7

Hogg, R. V. (1991). Statistical education: improvements are badly needed. Am. Stat. 45, 342–343. doi: 10.2307/2684473

Lipsitz, A. (2000). Research methods with a smile: a gender difference exercise that teaches methodology. Teach. Psychol. 27, 111–113. doi: 10.1207/S15328023TOP2702_07

Lovett, M. C., and Greenhouse, J. B. (2000). Applying cognitive theory to statistics instruction. Am. Stat. 54, 196–206. doi: 10.2307/2685590

Lutsky, N. (1986). Undergraduate research experience through the analysis of data sets in psychology courses. Teach. Psychol. 13, 119–122. doi: 10.1207/s15328023top1303_4

Madson, L. (2005). Demonstrating the importance of question wording on surveys. Teach. Psychol. 32, 40–43. doi: 10.1207/s15328023top3201_9

Marek, P., Christopher, A. N., and Walker, B. J. (2004). Learning by doing: research methods with a theme. Teach. Psychol. 31, 128–131. doi: 10.1207/s15328023top3102_6

Morgan, D. L. (2009). Using single-case design and personalized behavior change projects to teach research methods. Teach. Psychol. 36, 267–269. doi: 10.1080/00986280903175715

Murray, D. M., and Hannan, P. J. (1990). Planning for the appropriate analysis in school-based drug-use prevention studies. J. Consult. Clin. Psychol. 58, 458–468. doi: 10.1037/0022-006X.58.4.458

Murtonen, M. (2005). University students’ research orientations: do negative attitudes exist toward quantitative methods? Scand. J. Educ. Res. 49, 263–280. doi: 10.1080/00313830500109568

Neumann, D. L., Hood, M., and Neumann, M. M. (2013). Using real-life data when teaching statistics: student perceptions of this strategy in an introductory statistics course. Stat. Educ. Res. J. 12, 59–70.

Neumann, D. L., Neumann, M. M., and Hood, M. (2010). The development and evaluation of a survey that makes use of student data to teach statistics. J. Stat. Educ. 18, 1–19.

PubMed Abstract | Google Scholar

Owen, W. J., and Siakaluk, P. D. (2011). A demonstration of the analysis of variance using physical movement and space. Teach. Psychol. 38, 151–154. doi: 10.1177/0098628311411779

Pett, M. A., Lackey, N. R., and Sullivan, J. J. (2003). Making Sense of Factor Analysis: The Use of Factor Analysis for Instrument Development in Health Care Research. Thousand Oaks, CA: Sage.

Ragozzine, F. (2002). SuperLab LT: evaluation and uses in teaching experimental psychology. Teach. Psychol. 29, 251–254. doi: 10.1207/S15328023TOP2903_13

Renner, M. J. (2004). Learning the Rescorla-Wagner model of Pavlovian conditioning: an interactive simulation. Teach. Psychol. 31, 146–148. doi: 10.1207/s15328023top3102_9

Riniolo, T. C., and Schmidt, L. A. (1999). Demonstrating the gambler’s fallacy in an introductory statistics class. Teach. Psychol. 26, 198–200. doi: 10.1207/S15328023TOP260308

Rottinghaus, P. J., Gaffey, A. R., Borgen, F. H., and Ralston, C. A. (2006). Diverse pathways of psychology majors: vocational interests, self-efficacy, and intentions. Career Dev. Q. 55, 85–93. doi: 10.1002/j.2161-0045.2006.tb00007.x

Sciutto, M. J. (2000). Demonstration of factors affecting the F ratio. Teach. Psychol. 27, 52–53. doi: 10.1207/S15328023TOP2701_12

Singer, J. D., and Willett, J. B. (1990). Improving the teaching of applied statistics: putting the data back into data analysis. Am. Stat. 44, 223–230. doi: 10.2307/2685342

Sizemore, O. J., and Lewandowski, G. W. Jr. (2009). Learning might not equal liking: research methods course changes knowledge but not attitudes. Teach. Psychol. 36, 90–95. doi: 10.1080/00986280902739727

St James, J. D., Schneider, W., and Eschman, A. (2005). PsychMate: Experiments for Teaching Psychology (Version 2.0). Pittsburgh, PA: Psychology Software Tools.

Stedman, M. E. (1993). Statistical pedagogy: employing student-generated data sets in introductory statistics. Psychol. Rep. 72, 1036–1038. doi: 10.2466/pr0.1993.72.3.1036

The National Health, Medical Research Council, the Australian Research Council, and the Australian Vice-Chancellors’ Committee [NH&MRC].(2007). National Statement on Ethical Conduct in Human Research 2007 (Updated May 2015). Canberra, ACT: Author.

Thiele, J., and Markussen, B. (2012). Potential of GLMM in modelling invasive spread. CAB Rev. 7, 1–10. doi: 10.1079/PAVSNNR20127016

Thompson, W. B. (1994). Making data analysis realistic: incorporating research into statistics courses. Teach. Psychol. 21, 41–43. doi: 10.1207/s15328023top2101_9

Vittengl, J. R., Bosley, C. Y., Brescia, S. A., Eckardt, E. A., Neidig, J. M., Shelver, K. S., et al. (2004). Why are some undergraduates more (and others less) interested in psychological research? Teach. Psychol. 31, 91–97. doi: 10.1207/s15328023top3102_3

Wilson-Doenges, G., and Gurung, R. A. R. (2013). Benchmarks for scholarly investigations of teaching and learning. Aust. J. Psychol. 65, 63–70 doi: 10.1111/ajpy.12011

Keywords: active learning, research methods, statistics, computer based experiments, authentic data, canned data

Citation: Allen PJ and Baughman FD (2016) Active Learning in Research Methods Classes Is Associated with Higher Knowledge and Confidence, Though not Evaluations or Satisfaction. Front. Psychol. 7:279. doi: 10.3389/fpsyg.2016.00279

Received: 29 July 2015; Accepted: 12 February 2016; Published: 01 March 2016.


Copyright © 2016 Allen and Baughman. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Peter J. Allen, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.


Research Methods Teaching Activity: Reliably Valid Bullseye

Last updated 2 Dec 2022


This is a simple game which can be used to introduce the concepts of reliability and validity.


The concepts of reliability and validity are often confused by students, which can cost them marks across many topics.

There are various alternatives for this activity, limited only by your student numbers, resources, space and imagination!

This resource comes complete with instructions and a colour bullseye target that can be used in class.

Download this free tutor2u resource


Teach Psych Science

Activity: Ethics in Psychological Research


This in-class activity (created by Dr. Jamie S. Hughes ) may be used with upper level undergraduates or new graduate students in research methods courses. It is designed for use with small collaborative groups and requires about 50 minutes of class time. Students will apply their knowledge about Belmont principles, APA ethical guidelines, and IRB review. Given information about a study, students will determine which category of IRB review is most appropriate, identify applicable Belmont principles and decide whether or not the study conforms to the principles, perform a risk and benefit analysis, and finally redesign studies to minimize risks.


Research Methods in Psychology

Prerequisites: A lower-division general psychology course

Course Outline

Course Objectives

  • Describe the methods and procedures used to carry out and evaluate psychological research
  • Discuss the critical issues that must be considered to evaluate "scientific evidence" found in journals, magazines, newspapers, and news programs
  • Explain the standards recommended by the American Psychological Association to publish research and papers in the behavioral sciences

What You Learn

  • The history of psychology as a science
  • Ethics in psychological research, including developing ideas, finding research articles, and ethics in scholarly communication
  • Defining and evaluating measurements
  • Data analysis
  • Essential features of experimental design
  • Single-factor designs
  • Factorial designs
  • Correlational research
  • Quasi-experimental designs
  • Observational methods
  • Small N designs

How You Learn

  • Reading assignments
  • Discussion forum assignments
  • Self-check tests
  • Laboratory-based quizzes
  • Midterm exams
  • Proctored final exam

Is This Course for Me?

This is an excellent introductory course for students interested in psychological research, and is a required course in the Post-Baccalaureate Program for Counseling and Psychology Professions.

Section 058

Type: Online, Fixed Date

Instructor:

Jiuqing Cheng

Cost: $910.00


Section Notes

Drop deadline: 6 days after start date.

This is a fixed-date class with asynchronous, online instruction . 

Students will receive access to their online section within the Canvas Learning Management System on the scheduled start date. There are weekly deadlines for assignments, and students will complete assignments within the specific dates posted in their online class.

Final exam will be online. 



Guidelines for Classroom Activities Involving Research Methods

Definitions

  • Graduate thesis and capstone projects are clearly understood as research and fall within the IRB purview when human participants are involved.
  • If a student's proposed research project involving human subjects may result in a formal publication or presentation (beyond Utica University's Student Research Day), involves more than minimal risk to the subjects, or in any other way meets the federal definition of "research" according to 45 CFR 46, the student must receive IRB approval before beginning the study.
  • There may be instances when a student or instructor wishes to use data for research that was previously collected for educational purposes. An application should be submitted to the IRB when a student or instructor wishes to analyze the data with the intent of contributing to generalizable knowledge.


Research Methods In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are testable statements predicting the results of an investigation; they can be supported or refuted by the evidence collected.

There are four types of hypotheses:
  • Null Hypotheses (H0 ) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative Hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference ….’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the psychologist would accept the alternative hypothesis and reject the null. The opposite applies if no difference is found.
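The decision rule above can be sketched with a simple permutation test on hypothetical data. The scores below (digit-span results for two groups, echoing the STM activity), the group sizes, and the 0.05 significance threshold are all illustrative assumptions, not taken from the text:

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=5000, seed=42):
    """Two-tailed permutation test: estimates the probability of a mean
    difference at least this large arising if the null hypothesis were true."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign scores to groups at random
        perm_a = pooled[:len(group_a)]
        perm_b = pooled[len(group_a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            count += 1
    return count / n_permutations

# Hypothetical digit-span scores for two age groups
students = [7, 8, 6, 9, 7, 8, 7, 9]
older = [5, 6, 5, 7, 6, 5, 6, 6]

p = permutation_test(students, older)
if p < 0.05:
    print("Reject the null hypothesis; accept the alternative")
else:
    print("Retain the null hypothesis")
```

A permutation test is only one way of making this decision; a t-test or other inferential test applies the same accept/reject logic.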

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which a study’s findings can be applied to the larger population from which the sample was drawn.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling : when a system is used to select participants. Picking every Nth person from all possible participants. N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
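Three of these techniques can be sketched in a few lines of Python. The sampling frame of 100 people, the sample size of 10, and the handedness strata below are hypothetical, chosen only to make the proportions easy to check:

```python
import random

# Hypothetical sampling frame of 100 people; we need a sample of 10
population = [f"P{i:03d}" for i in range(1, 101)]
sample_size = 10
rng = random.Random(1)

# Random sampling: every member has an equal chance of selection
random_sample = rng.sample(population, sample_size)

# Systematic sampling: pick every Nth person,
# where N = population size / required sample size
step = len(population) // sample_size  # N = 10
systematic_sample = population[::step]

# Stratified sampling: sample each subgroup in proportion to its size
strata = {"left-handed": population[:10], "right-handed": population[10:]}
stratified_sample = []
for members in strata.values():
    k = round(sample_size * len(members) / len(population))
    stratified_sample.extend(rng.sample(members, k))
```

With 10% of the frame left-handed, the stratified sample contains 1 left-hander and 9 right-handers, mirroring the population proportions.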

Experiments always have an independent and dependent variable .

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

They can be natural characteristics of the participant, such as intelligence, gender, or age, or situational features of the environment, such as lighting or noise.

Demand characteristics are a type of extraneous variable that arises when participants work out the aims of the study and begin to behave accordingly.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and administered them because they believed that was what was required of them.

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization. 
  • Matched participants design : each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design (within-groups): each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 
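Random allocation and counterbalancing can both be sketched briefly. The 12 hypothetical participants and the condition labels "A" and "B" below are illustrative assumptions:

```python
import random

participants = [f"P{i}" for i in range(1, 13)]  # 12 hypothetical participants
rng = random.Random(7)
rng.shuffle(participants)  # random allocation helps reduce participant variables

# Independent design: first half to condition A, second half to condition B
group_a, group_b = participants[:6], participants[6:]

# Repeated measures design with counterbalancing: alternate which condition
# each participant completes first, so order effects cancel out across the sample
orders = {}
for i, p in enumerate(participants):
    orders[p] = ("A", "B") if i % 2 == 0 else ("B", "A")
```

Half the participants complete condition A first and half complete condition B first, so practice and fatigue effects affect both conditions equally.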

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as the person concerned and also their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.


  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient . This is a value between -1 and +1, and the closer it is to -1 or +1, the stronger the relationship between the variables. It can be positive, e.g. 0.63, or negative, e.g. -0.63.


A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation alone cannot establish causation, as a third variable may be responsible for the association. 
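Spearman’s rho can be computed by ranking each set of scores and then taking the Pearson correlation of the ranks. The revision-hours data below are hypothetical, chosen only to illustrate a strong positive correlation:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank(values):
    """Ranks (1 = smallest), with tied values sharing their mean rank."""
    s = sorted(values)
    return [(2 * s.index(v) + 1 + s.count(v)) / 2 for v in values]

def spearman_rho(xs, ys):
    """Spearman's rho: the Pearson correlation computed on the ranks."""
    return pearson(rank(xs), rank(ys))

# Hypothetical data: hours of revision vs. test score
hours = [1, 2, 3, 4, 5, 6]
score = [52, 60, 59, 71, 80, 84]
print(round(spearman_rho(hours, score), 2))  # 0.94, a strong positive correlation
```

In practice a statistics package would also report a significance level for the coefficient; this sketch computes only the coefficient itself.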


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way. 

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; participants can raise whatever topics they feel are relevant, in their own way, and the interviewer poses follow-up questions in response to their answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

The questionnaire’s other practical advantages are that it is cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods:
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. There could be ethical problems of deception and consent with this particular observation method.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed. The observation of participants’ behavior is from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period. The participants must share a common factor or characteristic, such as age, demographic, or occupation.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, it is described as reliable.

  • Test-retest reliability: assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability: the extent to which there is agreement between two or more observers.

Meta-Analysis

A meta-analysis is a systematic review that involves identifying an aim and then searching for research studies that have addressed similar aims/hypotheses.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the validity of the conclusions, as they are based on a wider range of studies and a larger overall sample.

Weaknesses: The research designs of the included studies can vary, so they are not truly comparable.

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess the methods and design used, the originality and validity of the findings, and the article's content, structure and language.

Feedback from the reviewers determines whether the article is accepted. The article may be: accepted as it is, accepted with revisions, sent back to the author to revise and resubmit, or rejected without the possibility of resubmission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers' comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal, whereas in practice there are many problems. For example, it slows publication down and may prevent unusual or novel work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online in which everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g. reaction time or number of mistakes. It represents how much, how long, or how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.
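As a sketch of how a tally of behavioral categories turns observed events into quantitative data, here is a minimal Python example (the category names and event record are made up):

```python
from collections import Counter

# Hypothetical record of coded behaviours from a structured observation
events = ["smile", "talk", "smile", "frown", "talk", "smile"]

# The tally converts individual qualitative events into quantitative counts
tally = Counter(events)
print(tally.most_common())  # [('smile', 3), ('talk', 2), ('frown', 1)]
```

The resulting counts can then be summarised with measures of central tendency or displayed graphically.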

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity: whether the test appears to measure what it is supposed to measure ‘on the face of it’. This is checked by ‘eyeballing’ the measure or by passing it to an expert.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we use p < 0.05 (as it strikes a balance between making a Type I and a Type II error), but p < 0.01 is used in research where an error could cause harm, such as trials of a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
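The decision rule above can be sketched as a small Python function (the p-values here are invented for illustration). It also shows the trade-off: lowering alpha from 0.05 to 0.01 makes a Type I error less likely but a Type II error more likely.

```python
def decide(p_value: float, alpha: float = 0.05) -> str:
    """Apply the significance decision rule: reject the null if p < alpha."""
    if p_value < alpha:
        return "reject the null hypothesis (significant)"
    return "accept the null hypothesis (not significant)"

# The same result can be significant or not depending on the level chosen:
print(decide(0.03))              # significant at the conventional p < 0.05
print(decide(0.03, alpha=0.01))  # not significant at the stricter p < 0.01
```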

Ethical Issues

  • Informed consent is when participants are able to make an informed judgment about whether to take part. However, obtaining it may cause them to guess the aims of the study and change their behavior.
  • To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that participants would fully understand.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study, but debriefing cannot turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Withdrawal can cause bias, as those who stay may be more obedient, and some may not withdraw because they were given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm. The researcher should avoid risks greater than those experienced in everyday life and should stop the study if any harm is suspected. However, harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though full anonymity may not always be possible, as it is sometimes possible to work out who the participants were.
