
Research Methods Key Term Glossary

Last updated 22 Mar 2021


This key term glossary provides brief definitions for the core terms and concepts covered in Research Methods for A Level Psychology.

Don't forget to also make full use of our research methods study notes and revision quizzes to support your studies and exam revision.

Aim

The researcher’s area of interest – what they are looking at (e.g. to investigate helping behaviour).

Bar chart

A graph that shows the data in the form of categories (e.g. behaviours observed) that the researcher wishes to compare.

Behavioural categories

Key behaviours, or collections of behaviours, that the researcher conducting the observation will pay attention to and record.

Case study

In-depth investigation of a single person, group or event, where data are gathered from a variety of sources and by using several different methods (e.g. observations & interviews).

Closed questions

Questions where there are fixed choices of responses, e.g. yes/no. They generate quantitative data.

Co-variables

The variables investigated in a correlation

Concurrent validity

Comparing a new test with another test of the same thing to see if they produce similar results. If they do then the new test has concurrent validity

Confidentiality

Unless agreed beforehand, participants have the right to expect that all data collected during a research study will remain confidential and anonymous.

Confounding variable

An extraneous variable that varies systematically with the IV so we cannot be sure of the true source of the change to the DV

Content analysis

Technique used to analyse qualitative data which involves coding the written data into categories – converting qualitative data into quantitative data.

Control group

A group that is treated normally and gives us a measure of how people behave when they are not exposed to the experimental treatment (e.g. allowed to sleep normally).

Controlled observation

An observation study where the researchers control some variables – often takes place in a laboratory setting.

Correlational analysis

A mathematical technique where the researcher looks to see whether scores for two covariables are related
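As a minimal sketch of what a correlational analysis computes, the Python below calculates Pearson's correlation coefficient (r) for two hypothetical co-variables – hours of revision and test score. The data and function name are illustrative, not from the glossary.

```python
# A sketch of correlational analysis: Pearson's r for two hypothetical
# co-variables (hours of revision vs. test score).
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

hours = [1, 2, 3, 4, 5]        # co-variable 1 (hypothetical)
scores = [52, 58, 61, 70, 74]  # co-variable 2 (hypothetical)
r = pearson_r(hours, scores)   # r near +1 = strong positive correlation
```

An r close to +1 indicates a positive correlation, close to -1 a negative correlation, and close to 0 no relationship between the co-variables.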

Counterbalancing

A way of trying to control for order effects in a repeated measures design, e.g. half the participants do condition A followed by B and the other half do B followed by A
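The allocation described above (half the participants do A then B, the other half B then A) can be sketched in a few lines of Python; the participant labels and function name are hypothetical.

```python
# A sketch of counterbalancing in a repeated measures design:
# half the participants complete condition A first, the other half B first.
import random

def counterbalance(participants):
    """Return {participant: order}, split evenly between A->B and B->A."""
    shuffled = participants[:]
    random.shuffle(shuffled)       # random allocation to the two orders
    half = len(shuffled) // 2
    orders = {p: ("A", "B") for p in shuffled[:half]}
    orders.update({p: ("B", "A") for p in shuffled[half:]})
    return orders

orders = counterbalance(["P1", "P2", "P3", "P4"])
# Two participants get A->B and two get B->A, balancing out order effects.
```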

Covert observation

Also known as an undisclosed observation as the participants do not know their behaviour is being observed

Critical value

The value that a test statistic must reach in order for the null hypothesis to be rejected and the alternative hypothesis accepted.

Debriefing

After completing the research, the true aim is revealed to the participant. The aim of debriefing is to return the person to the state he/she was in before taking part.

Deception

Involves misleading participants about the purpose of a study.

Demand characteristics

Occur when participants try to make sense of the research situation they are in and try to guess the purpose of the research or try to present themselves in a good way.

Dependent variable

The variable that is measured to tell you the outcome.

Descriptive statistics

Analysis of data that helps describe, show or summarize data in a meaningful way

Directional hypothesis

A one-tailed hypothesis that states the direction of the difference or relationship (e.g. boys are more helpful than girls).

Dispersion measure

A dispersion measure shows how a set of data is spread out, examples are the range and the standard deviation

Double blind control

Participants are not told the true purpose of the research and the experimenter is also blind to at least some aspects of the research design.

Ecological validity

The extent to which the findings of a research study are able to be generalized to real-life settings

Ethical guidelines

These are provided by the BPS - they are the ‘rules’ by which all psychologists should operate, including those carrying out research.

Ethical issues

There are 3 main ethical issues that occur in psychological research – deception, lack of informed consent and lack of protection of participants.

Evaluation apprehension

Participants’ behaviour is distorted as they fear being judged by observers

Event sampling

A target behaviour is identified and the observer records it every time it occurs

Experimental group

The group that received the experimental treatment (e.g. sleep deprivation)

External validity

Whether it is possible to generalise the results beyond the experimental setting.

Extraneous variable

Variables that, if not controlled, may affect the DV and provide a false impression that an IV has produced changes when it hasn’t.

Face validity

Simple way of assessing whether a test measures what it claims to measure which is concerned with face value – e.g. does an IQ test look like it tests intelligence.

Field experiment

An experiment that takes place in a natural setting where the experimenter manipulates the IV and measures the DV

Histogram

A graph that is used for continuous data (e.g. test scores). There should be no space between the bars, because the data is continuous.

Hypothesis

This is a formal statement or prediction of what the researcher expects to find. It needs to be testable.

Independent groups design

An experimental design where each participant takes part in only one condition of the IV.

Independent variable

The variable that the experimenter manipulates (changes).

Inferential statistics

Inferential statistics are ways of analyzing data using statistical tests that allow the researcher to make conclusions about whether a hypothesis was supported by the results.

Informed consent

Psychologists should ensure that all participants are helped to understand fully all aspects of the research before they agree (give consent) to take part

Inter-observer reliability

The extent to which two or more observers are observing and recording behaviour in the same way

Internal validity

In relation to experiments, whether the results were due to the manipulation of the IV rather than other factors such as extraneous variables or demand characteristics.

Interval level data

Data measured in fixed units with equal distance between points on the scale

Investigator effects

These result from the effects of a researcher’s behaviour and characteristics on an investigation.

Laboratory experiment

An experiment that takes place in a controlled environment where the experimenter manipulates the IV and measures the DV

Matched pairs design

An experimental design where pairs of participants are matched on important characteristics and one member allocated to each condition of the IV

Mean

Measure of central tendency calculated by adding all the scores in a set of data together and dividing by the total number of scores.

Measures of central tendency

A measurement of data that indicates where the middle of the information lies e.g. mean, median or mode

Median

Measure of central tendency calculated by arranging scores in a set of data from lowest to highest and finding the middle score.

Meta-analysis

A technique where rather than conducting new research with participants, the researchers examine the results of several studies that have already been conducted

Mode

Measure of central tendency which is the most frequently occurring score in a set of data.
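The three measures of central tendency defined above can be computed directly; the Python sketch below uses a hypothetical set of scores.

```python
# The three measures of central tendency on a hypothetical set of scores.
import statistics

scores = [3, 5, 5, 6, 7, 8, 8, 8, 10]

mean = sum(scores) / len(scores)    # add all scores, divide by how many
median = statistics.median(scores)  # middle score once sorted (here: 7)
mode = statistics.mode(scores)      # most frequently occurring score (here: 8)
```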

Natural experiment

An experiment where the change in the IV already exists rather than being manipulated by the experimenter

Naturalistic observation

An observation study conducted in the environment where the behaviour would normally occur

Negative correlation

A relationship exists between two covariables where as one increases, the other decreases

Nominal level data

Frequency count data that consists of the number of participants falling into categories. (e.g. 7 people passed their driving test first time, 6 didn’t).

Non-directional hypothesis

A two-tailed hypothesis that does not predict the direction of the difference or relationship (e.g. girls and boys are different in terms of helpfulness).

Normal distribution

An arrangement of a data that is symmetrical and forms a bell shaped pattern where the mean, median and mode all fall in the centre at the highest peak

Observed value

The value that you have obtained from conducting your statistical test

Observer bias

Occurs when the observers know the aims of the study or the hypotheses and allow this knowledge to influence their observations.

Open questions

Questions where there is no fixed response and participants can give any answer they like. They generate qualitative data.

Operationalising variables

This means clearly describing the variables (IV and DV) in terms of how they will be manipulated (IV) or measured (DV).

Opportunity sample

A sampling technique where participants are chosen because they are easily available

Order effects

Order effects can occur in a repeated measures design and refers to how the positioning of tasks influences the outcome e.g. practice effect or boredom effect on second task

Ordinal level data

Data that is capable of being put into rank order (e.g. places in a beauty contest, or ratings for attractiveness).

Overt observation

Also known as a disclosed observation, as the participants give their permission for their behaviour to be observed.

Participant observation

Observation study where the researcher actually joins the group or takes part in the situation they are observing.

Peer review

Before going to publication, a research report is sent to other psychologists who are knowledgeable in the research topic for them to review the study and check for any problems.

Pilot study

A small scale study conducted to ensure the method will work according to plan. If it doesn’t then amendments can be made.

Positive correlation

A relationship exists between two covariables where as one increases, so does the other

Presumptive consent

Asking a group of people from the same target population as the sample whether they would agree to take part in such a study; if they say yes, it is presumed that the actual sample would also have agreed.

Primary data

Information that the researcher has collected him/herself for a specific purpose e.g. data from an experiment or observation

Prior general consent

Before participants are recruited they are asked whether they are prepared to take part in research where they might be deceived about the true purpose

Probability

How likely something is to happen – can be expressed as a number (0.5) or a percentage (a 50% chance of tossing a coin and getting heads).

Protection of participants

Participants should be protected from physical or psychological harm, including stress – the risk of harm must be no greater than that to which they are exposed in everyday life.

Qualitative data

Descriptive information that is expressed in words

Quantitative data

Information that can be measured and written down with numbers.

Quasi experiment

An experiment often conducted in controlled conditions where the IV simply exists so there can be no random allocation to the conditions

Questionnaire

A set of written questions that participants fill in themselves

Random sampling

A sampling technique where everyone in the target population has an equal chance of being selected

Randomisation

Refers to the practice of using chance methods (e.g. flipping a coin) to allocate participants to the conditions of an investigation.

Range

A measure of dispersion calculated by subtracting the lowest score from the highest score in a set of data.

Reliability

Whether something is consistent. In the case of a study, whether it is replicable.

Repeated measures design

An experimental design where each participant takes part in both/all conditions of the IV.

Representative sample

A sample that closely matches the target population as a whole in terms of key variables and characteristics.

Retrospective consent

Once the true nature of the research has been revealed, participants should be given the right to withdraw their data if they are not happy.

Right to withdraw

Participants should be aware that they can leave the study at any time, even if they have been paid to take part.

Sample

A group of people that are drawn from the target population to take part in a research investigation.

Scattergram

Used to plot correlations where each pair of values is plotted against each other to see if there is a relationship between them.

Secondary data

Information that someone else has collected e.g. the work of other psychologists or government statistics

Semi-structured interview

Interview that has some pre-determined questions, but the interviewer can develop others in response to answers given by the participant

Sign test

A statistical test used to analyse the direction of differences of scores between the same or matched pairs of subjects under two experimental conditions.
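As an illustration of how this test's observed value is obtained, the sketch below records the sign of each pair's difference and takes the less frequent sign as S. The scores are hypothetical, and ties (zero differences) are dropped, as is conventional.

```python
# A sketch of the sign test's observed value S: record the sign of each
# difference; S is the count of the less frequent sign. (Hypothetical data.)

def sign_test_statistic(condition_a, condition_b):
    diffs = [a - b for a, b in zip(condition_a, condition_b)]
    pos = sum(1 for d in diffs if d > 0)
    neg = sum(1 for d in diffs if d < 0)
    n = pos + neg        # ties (zero differences) are excluded from n
    s = min(pos, neg)    # observed value, compared against a critical value
    return s, n

s, n = sign_test_statistic([12, 15, 9, 14, 11, 13], [10, 16, 7, 9, 8, 10])
# For significance, S must be equal to or less than the critical value.
```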

Significance

If the result of a statistical test is significant it is highly unlikely to have occurred by chance

Single-blind control

Participants are not told the true purpose of the research

Skewed distribution

An arrangement of data that is not symmetrical, as data is clustered to one end of the distribution.

Social desirability bias

Participants’ behaviour is distorted as they modify this in order to be seen in a positive light.

Standard deviation

A measure of the average spread of scores around the mean. The greater the standard deviation, the more spread out the scores are.
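The two dispersion measures defined above (range and standard deviation) can be computed on a hypothetical data set as follows; the variable names are illustrative.

```python
# A sketch of the two dispersion measures on a hypothetical data set.
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]

data_range = max(scores) - min(scores)  # highest score minus lowest score
sd = statistics.pstdev(scores)          # spread of scores around the mean
# A larger standard deviation means the scores are more spread out.
```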

Standardised instructions

The instructions given to each participant are kept identical – to help prevent experimenter bias.

Standardised procedures

In every step of the research all the participants are treated in exactly the same way and so all have the same experience.

Stratified sample

A sampling technique where groups of participants are selected in proportion to their frequency in the target population

Structured interview

Interview where the questions are fixed and the interviewer reads them out and records the responses

Structured observation

An observation study using predetermined coding scheme to record the participants' behaviour

Systematic sample

A sampling technique where every nth person in a list of the target population is selected
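Three of the sampling techniques defined above (random, systematic and stratified) can be sketched in Python on a hypothetical numbered target population; the stratum sizes and sampling fraction are assumptions for illustration.

```python
# A sketch of three sampling techniques on a hypothetical population of 100.
import random

population = list(range(1, 101))  # target population of 100 people

# Random sample: everyone has an equal chance of selection.
random_sample = random.sample(population, 10)

# Systematic sample: every nth person in the list (here n = 10).
systematic_sample = population[::10]

# Stratified sample: select from each stratum in proportion to its size,
# e.g. hypothetical strata of 60 and 40 people, each sampled at 10%.
strata = {"stratum_a": population[:60], "stratum_b": population[60:]}
stratified_sample = [p for group in strata.values()
                     for p in random.sample(group, len(group) // 10)]
```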

Target population

The group that the researchers draws the sample from and wants to be able to generalise the findings to

Temporal validity

Refers to how likely it is that the time period when a study was conducted has influenced the findings and whether they can be generalised to other periods in time

Test-retest reliability

Involves presenting the same participants with the same test or questionnaire on two separate occasions and seeing whether there is a positive correlation between the two

Thematic analysis

A method for analysing qualitative data which involves identifying, analysing and reporting patterns within the data

Time sampling

A way of sampling the behaviour that is being observed by recording what happens in a series of fixed time intervals.

Type 1 error

A false positive – where you accept the alternative/experimental hypothesis when it is in fact false.

Type 2 error

A false negative – where you accept the null hypothesis when it is in fact false.

Unstructured interview

Also known as a clinical interview; there are no fixed questions, just general aims, and it is more like a conversation.

Unstructured observation

Observation where there is no checklist, so every behaviour seen is written down in as much detail as possible.

Validity

Whether something is true – measures what it sets out to measure.

Volunteer sample

A sampling technique where participants put themselves forward to take part in research, often by answering an advertisement

© 2002-2024 Tutor2u Limited. Company Reg no: 04489574. VAT reg no 816865400.

  • USC Libraries
  • Research Guides

Organizing Your Social Sciences Research Paper

Glossary of research terms.


This glossary is intended to assist you in understanding commonly used terms and concepts when reading, interpreting, and evaluating scholarly research. Also included are common words and phrases defined within the context of how they apply to research in the social and behavioral sciences.

  • Acculturation -- refers to the process of adapting to another culture, particularly in reference to blending in with the majority population [e.g., an immigrant adopting American customs]. However, acculturation also implies that both cultures add something to one another, but still remain distinct groups unto themselves.
  • Accuracy -- a term used in survey research to refer to the match between the target population and the sample.
  • Affective Measures -- procedures or devices used to obtain quantified descriptions of an individual's feelings, emotional states, or dispositions.
  • Aggregate -- a total created from smaller units. For instance, the population of a county is an aggregate of the populations of the cities, rural areas, etc. that comprise the county. As a verb, it refers to totaling data from smaller units into a large unit.
  • Anonymity -- a research condition in which no one, including the researcher, knows the identities of research participants.
  • Baseline -- a control measurement carried out before an experimental treatment.
  • Behaviorism -- school of psychological thought concerned with the observable, tangible, objective facts of behavior, rather than with subjective phenomena such as thoughts, emotions, or impulses. Contemporary behaviorism also emphasizes the study of mental states such as feelings and fantasies to the extent that they can be directly observed and measured.
  • Beliefs -- ideas, doctrines, tenets, etc. that are accepted as true on grounds which are not immediately susceptible to rigorous proof.
  • Benchmarking -- systematically measuring and comparing the operations and outcomes of organizations, systems, processes, etc., against agreed upon "best-in-class" frames of reference.
  • Bias -- a loss of balance and accuracy in the use of research methods. It can appear in research via the sampling frame, random sampling, or non-response. It can also occur at other stages in research, such as while interviewing, in the design of questions, or in the way data are analyzed and presented. Bias means that the research findings will not be representative of, or generalizable to, a wider population.
  • Case Study -- the collection and presentation of detailed information about a particular participant or small group, frequently including data derived from the subjects themselves.
  • Causal Hypothesis -- a statement hypothesizing that the independent variable affects the dependent variable in some way.
  • Causal Relationship -- the relationship established that shows that an independent variable, and nothing else, causes a change in a dependent variable. It also establishes how much of a change is shown in the dependent variable.
  • Causality -- the relation between cause and effect.
  • Central Tendency -- any way of describing or characterizing typical, average, or common values in some distribution.
  • Chi-square Analysis -- a common non-parametric statistical test which compares an expected proportion or ratio to an actual proportion or ratio.
  • Claim -- a statement, similar to a hypothesis, which is made in response to the research question and that is affirmed with evidence based on research.
  • Classification -- ordering of related phenomena into categories, groups, or systems according to characteristics or attributes.
  • Cluster Analysis -- a method of statistical analysis where data that share a common trait are grouped together. The data is collected in a way that allows the data collector to group data according to certain characteristics.
  • Cohort Analysis -- group by group analytic treatment of individuals having a statistical factor in common to each group. Group members share a particular characteristic [e.g., born in a given year] or a common experience [e.g., entering a college at a given time].
  • Confidentiality -- a research condition in which no one except the researcher(s) knows the identities of the participants in a study. It refers to the treatment of information that a participant has disclosed to the researcher in a relationship of trust and with the expectation that it will not be revealed to others in ways that violate the original consent agreement, unless permission is granted by the participant.
  • Confirmability / Objectivity -- the findings of the study could be confirmed by another person conducting the same study.
  • Construct -- refers to any of the following: something that exists theoretically but is not directly observable; a concept developed [constructed] for describing relations among phenomena or for other research purposes; or, a theoretical definition in which concepts are defined in terms of other concepts. For example, intelligence cannot be directly observed or measured; it is a construct.
  • Construct Validity -- seeks an agreement between a theoretical concept and a specific measuring device, such as observation.
  • Constructivism -- the idea that reality is socially constructed. It is the view that reality cannot be understood outside of the way humans interact, and that knowledge is constructed, not discovered. Constructivists believe that learning is more active and self-directed than either behaviorism or cognitive theory would postulate.
  • Content Analysis -- the systematic, objective, and quantitative description of the manifest or latent content of print or nonprint communications.
  • Context Sensitivity -- awareness by a qualitative researcher of factors such as values and beliefs that influence cultural behaviors.
  • Control Group -- the group in an experimental design that receives either no treatment or a different treatment from the experimental group. This group can thus be compared to the experimental group.
  • Controlled Experiment -- an experimental design with two or more randomly selected groups [an experimental group and control group] in which the researcher controls or introduces the independent variable and measures the dependent variable at least two times [pre- and post-test measurements].
  • Correlation -- a common statistical analysis, usually abbreviated as r, that measures the degree of relationship between pairs of interval variables in a sample. The range of correlation is from -1.00 to zero to +1.00. Also, a non-cause and effect relationship between two variables.
  • Covariate -- a product of the correlation of two related variables times their standard deviations. Used in true experiments to measure the difference of treatment between them.
  • Credibility -- a researcher's ability to demonstrate that the object of a study is accurately identified and described based on the way in which the study was conducted.
  • Critical Theory -- an evaluative approach to social science research, associated with Germany's neo-Marxist “Frankfurt School,” that aims to criticize as well as analyze society, opposing the political orthodoxy of modern communism. Its goal is to promote human emancipatory forces and to expose ideas and systems that impede them.
  • Data -- factual information [as measurements or statistics] used as a basis for reasoning, discussion, or calculation.
  • Data Mining -- the process of analyzing data from different perspectives and summarizing it into useful information, often to discover patterns and/or systematic relationships among variables.
  • Data Quality -- this is the degree to which the collected data [results of measurement or observation] meet the standards of quality to be considered valid [trustworthy] and  reliable [dependable].
  • Deductive -- a form of reasoning in which conclusions are formulated about particulars from general or universal premises.
  • Dependability -- being able to account for changes in the design of the study and the changing conditions surrounding what was studied.
  • Dependent Variable -- a variable that varies due, at least in part, to the impact of the independent variable. In other words, its value “depends” on the value of the independent variable. For example, in the variables “gender” and “academic major,” academic major is the dependent variable, meaning that your major cannot determine whether you are male or female, but your gender might indirectly lead you to favor one major over another.
  • Deviation -- the distance between the mean and a particular data point in a given distribution.
  • Discourse Community -- a community of scholars and researchers in a given field who respond to and communicate to each other through published articles in the community's journals and presentations at conventions. All members of the discourse community adhere to certain conventions for the presentation of their theories and research.
  • Discrete Variable -- a variable that is measured solely in whole units, such as, gender and number of siblings.
  • Distribution -- the range of values of a particular variable.
  • Effect Size -- the amount of change in a dependent variable that can be attributed to manipulations of the independent variable. A large effect size exists when the value of the dependent variable is strongly influenced by the independent variable. It is the mean difference on a variable between experimental and control groups divided by the standard deviation on that variable of the pooled groups or of the control group alone.
  • Emancipatory Research -- research is conducted on and with people from marginalized groups or communities. It is led by a researcher or research team who is either an indigenous or external insider; is interpreted within intellectual frameworks of that group; and, is conducted largely for the purpose of empowering members of that community and improving services for them. It also engages members of the community as co-constructors or validators of knowledge.
  • Empirical Research -- the process of developing systematized knowledge gained from observations that are formulated to support insights and generalizations about the phenomena being researched.
  • Epistemology -- concerns knowledge construction; asks what constitutes knowledge and how knowledge is validated.
  • Ethnography -- method to study groups and/or cultures over a period of time. The goal of this type of research is to comprehend the particular group/culture through immersion into the culture or group. Research is completed through various methods but, since the researcher is immersed within the group for an extended period of time, more detailed information is usually collected during the research.
  • Expectancy Effect -- any unconscious or conscious cues that convey to the participant in a study how the researcher wants them to respond. Expecting someone to behave in a particular way has been shown to promote the expected behavior. Expectancy effects can be minimized by using standardized interactions with subjects, automated data-gathering methods, and double blind protocols.
  • External Validity -- the extent to which the results of a study are generalizable or transferable.
  • Factor Analysis -- a statistical test that explores relationships among data. The test explores which variables in a data set are most related to each other. In a carefully constructed survey, for example, factor analysis can yield information on patterns of responses, not simply data on a single response. Larger tendencies may then be interpreted, indicating behavior trends rather than simply responses to specific questions.
  • Field Studies -- academic or other investigative studies undertaken in a natural setting, rather than in laboratories, classrooms, or other structured environments.
  • Focus Groups -- small, roundtable discussion groups charged with examining specific topics or problems, including possible options or solutions. Focus groups usually consist of 4-12 participants, guided by moderators to keep the discussion flowing and to collect and report the results.
  • Framework -- the structure and support that may be used as both the launching point and the on-going guidelines for investigating a research problem.
  • Generalizability -- the extent to which research findings and conclusions from a specific study can be applied to other groups or situations and to the population at large.
  • Grey Literature -- research produced by organizations outside of commercial and academic publishing that publish materials, such as, working papers, research reports, and briefing papers.
  • Grounded Theory -- practice of developing other theories that emerge from observing a group. Theories are grounded in the group's observable experiences, but researchers add their own insight into why those experiences exist.
  • Group Behavior -- behaviors of a group as a whole, as well as the behavior of an individual as influenced by his or her membership in a group.
  • Hypothesis -- a tentative explanation based on theory to predict a causal relationship between variables.
  • Independent Variable -- the conditions of an experiment that are systematically manipulated by the researcher. A variable that is not impacted by the dependent variable, and that itself impacts the dependent variable. In the earlier example of "gender" and "academic major," (see Dependent Variable) gender is the independent variable.
  • Individualism -- a theory or policy having primary regard for the liberty, rights, or independent actions of individuals.
  • Inductive -- a form of reasoning in which a generalized conclusion is formulated from particular instances.
  • Inductive Analysis -- a form of analysis based on inductive reasoning; a researcher using inductive analysis starts with answers, but formulates questions throughout the research process.
  • Insiderness -- a concept in qualitative research that refers to the degree to which a researcher has access to and an understanding of persons, places, or things within a group or community based on being a member of that group or community.
  • Internal Consistency -- the extent to which all questions or items assess the same characteristic, skill, or quality.
  • Internal Validity -- the rigor with which the study was conducted [e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and was not measured]. It is also the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore. In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity.
  • Life History -- a record of an event/events in a respondent's life told [written down, but increasingly audio or video recorded] by the respondent from his/her own perspective in his/her own words. A life history is different from a "research story" in that it covers a longer time span, perhaps a complete life, or a significant period in a life.
  • Margin of Error -- the permissible or acceptable deviation from a target or specific value; the allowance for slight error, miscalculation, or changing circumstances in a study.
  • Measurement -- process of obtaining a numerical description of the extent to which persons, organizations, or things possess specified characteristics.
  • Meta-Analysis -- an analysis combining the results of several studies that address a set of related hypotheses.
  • Methodology -- a theory or analysis of how research does and should proceed.
  • Methods -- systematic approaches to the conduct of an operation or process. It includes steps of procedure, application of techniques, systems of reasoning or analysis, and the modes of inquiry employed by a discipline.
  • Mixed-Methods -- a research approach that uses two or more methods from both the quantitative and qualitative research categories. It is also referred to as blended methods, combined methods, or methodological triangulation.
  • Modeling -- the creation of a physical or computer analogy to understand a particular phenomenon. Modeling helps in estimating the relative magnitude of various factors involved in a phenomenon. A successful model can be shown to account for unexpected behavior that has been observed, to predict certain behaviors, which can then be tested experimentally, and to demonstrate that a given theory cannot account for certain phenomena.
  • Models -- representations of objects, principles, processes, or ideas often used for imitation or emulation.
  • Naturalistic Observation -- observation of behaviors and events in natural settings without experimental manipulation or other forms of interference.
  • Norm -- the norm in statistics is the average or usual performance. For example, students usually complete their high school graduation requirements when they are 18 years old. Even though some students graduate when they are younger or older, the norm is that any given student will graduate when he or she is 18 years old.
  • Null Hypothesis -- the proposition, to be tested statistically, that the experimental intervention has "no effect," meaning that the treatment and control groups will not differ as a result of the intervention. Investigators usually hope that the data will demonstrate some effect from the intervention, thus allowing the investigator to reject the null hypothesis.
  • Ontology -- a discipline of philosophy that explores the science of what is, the kinds and structures of objects, properties, events, processes, and relations in every area of reality.
  • Panel Study -- a longitudinal study in which a group of individuals is interviewed at intervals over a period of time.
  • Participant -- individuals whose physiological and/or behavioral characteristics and responses are the object of study in a research project.
  • Peer-Review -- the process in which the author of a book, article, or other type of publication submits his or her work to experts in the field for critical evaluation, usually prior to publication. This is standard procedure in publishing scholarly research.
  • Phenomenology -- a qualitative research approach concerned with understanding certain group behaviors from that group's point of view.
  • Philosophy -- critical examination of the grounds for fundamental beliefs and analysis of the basic concepts, doctrines, or practices that express such beliefs.
  • Phonology -- the study of the ways in which speech sounds form systems and patterns in language.
  • Policy -- governing principles that serve as guidelines or rules for decision making and action in a given area.
  • Policy Analysis -- systematic study of the nature, rationale, cost, impact, effectiveness, implications, etc., of existing or alternative policies, using the theories and methodologies of relevant social science disciplines.
  • Population -- the target group under investigation. The population is the entire set under consideration. Samples are drawn from populations.
  • Position Papers -- statements of official or organizational viewpoints, often recommending a particular course of action or response to a situation.
  • Positivism -- a doctrine in the philosophy of science, positivism argues that science can only deal with observable entities known directly to experience. The positivist aims to construct general laws, or theories, which express relationships between phenomena. Observation and experiment is used to show whether the phenomena fit the theory.
  • Predictive Measurement -- use of tests, inventories, or other measures to determine or estimate future events, conditions, outcomes, or trends.
  • Principal Investigator -- the scientist or scholar with primary responsibility for the design and conduct of a research project.
  • Probability -- the chance that a phenomenon will occur randomly. As a statistical measure, it is expressed as p [the p value].
  • Questionnaire -- structured sets of questions on specified subjects that are used to gather information, attitudes, or opinions.
  • Random Sampling -- a process used in research to draw a sample of a population strictly by chance, yielding no discernible pattern beyond chance. Random sampling can be accomplished by first numbering the population, then selecting the sample according to a table of random numbers or using a random-number computer generator. The sample is said to be random because there is no regular or discernible pattern or order. Random sample selection is used under the assumption that sufficiently large samples assigned randomly will exhibit a distribution comparable to that of the population from which the sample is drawn. The random assignment of participants increases the probability that differences observed between participant groups are the result of the experimental intervention.
  • Reliability -- the degree to which a measure yields consistent results. If the measuring instrument [e.g., survey] is reliable, then administering it to similar groups would yield similar results. Reliability is a prerequisite for validity. An unreliable indicator cannot produce trustworthy results.
  • Representative Sample -- sample in which the participants closely match the characteristics of the population, and thus, all segments of the population are represented in the sample. A representative sample allows results to be generalized from the sample to the population.
  • Rigor -- degree to which research methods are scrupulously and meticulously carried out in order to recognize important influences occurring in an experimental study.
  • Sample -- the population researched in a particular study. Usually, attempts are made to select a "sample population" that is considered representative of groups of people to whom results will be generalized or transferred. In studies that use inferential statistics to analyze results or which are designed to be generalizable, sample size is critical, generally the larger the number in the sample, the higher the likelihood of a representative distribution of the population.
  • Sampling Error -- the degree to which the results from the sample deviate from those that would be obtained from the entire population, because of random error in the selection of respondents and the corresponding reduction in reliability.
  • Saturation -- a situation in which data analysis begins to reveal repetition and redundancy and when new data tend to confirm existing findings rather than expand upon them.
  • Semantics -- the relationship between symbols and meaning in a linguistic system. Also, the cuing system that connects what is written in the text to what is stored in the reader's prior knowledge.
  • Social Theories -- theories about the structure, organization, and functioning of human societies.
  • Sociolinguistics -- the study of language in society and, more specifically, the study of language varieties, their functions, and their speakers.
  • Standard Deviation -- a measure of variation that indicates the typical distance between the scores of a distribution and the mean; it is determined by taking the square root of the average of the squared deviations in a given distribution. It can be used to indicate the proportion of data within certain ranges of scale values when the distribution conforms closely to the normal curve.
  • Statistical Analysis -- application of statistical processes and theory to the compilation, presentation, discussion, and interpretation of numerical data.
  • Statistical Bias -- characteristics of an experimental or sampling design, or the mathematical treatment of data, that systematically affects the results of a study so as to produce incorrect, unjustified, or inappropriate inferences or conclusions.
  • Statistical Significance -- a finding that the difference between the outcomes of the control and experimental groups is large enough that it is unlikely to be due solely to chance; the probability that the null hypothesis can be rejected at a predetermined significance level [0.05 or 0.01].
  • Statistical Tests -- researchers use statistical tests to make quantitative decisions about whether a study's data indicate a significant effect from the intervention and allow the researcher to reject the null hypothesis. That is, statistical tests show whether the differences between the outcomes of the control and experimental groups are great enough to be statistically significant. If differences are found to be statistically significant, it means that the probability [likelihood] that these differences occurred solely due to chance is relatively low. Most researchers agree that a significance value of .05 or less [i.e., less than a 5% probability that differences this large would occur by chance alone] sufficiently determines significance.
  • Subcultures -- ethnic, regional, economic, or social groups exhibiting characteristic patterns of behavior sufficient to distinguish them from the larger society to which they belong.
  • Testing -- the act of gathering and processing information about individuals' ability, skill, understanding, or knowledge under controlled conditions.
  • Theory -- a general explanation about a specific behavior or set of events that is based on known principles and serves to organize related events in a meaningful way. A theory is not as specific as a hypothesis.
  • Treatment -- the stimulus or intervention administered to participants in an experiment; the manipulation of the independent variable whose effect on the dependent variable is measured.
  • Trend Samples -- method of sampling different groups of people at different points in time from the same population.
  • Triangulation -- a multi-method or pluralistic approach, using different methods in order to focus on the research topic from different viewpoints and to produce a multi-faceted set of data. Also used to check the validity of findings from any one method.
  • Unit of Analysis -- the basic observable entity or phenomenon being analyzed by a study and for which data are collected in the form of variables.
  • Validity -- the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.
  • Variable -- any characteristic or trait that can vary from one person to another [race, gender, academic major] or for one person over time [age, political beliefs].
  • Weighted Scores -- scores in which the components are modified by different multipliers to reflect their relative importance.
  • White Paper -- an authoritative report that often states the position or philosophy about a social, political, or other subject, or a general explanation of an architecture, framework, or product technology written by a group of researchers. A white paper seeks to contain unbiased information and analysis regarding a business or policy problem that the researchers may be facing.
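Several of the statistical terms in the glossary above (mean, standard deviation, null hypothesis, p value, statistical significance) can be made concrete with a short computation. Below is a minimal sketch in Python using only the standard library; the two groups of scores are invented for illustration, and the significance test shown is a simple permutation test rather than any specific procedure from the sources cited here:

```python
import random
import statistics

# Invented example data: scores for a control and an experimental group.
control = [72, 75, 68, 70, 74, 69, 71, 73]
treated = [78, 82, 75, 80, 77, 79, 81, 76]

# Standard deviation: the typical distance between scores and the mean.
print("control mean:", statistics.mean(control))
print("control sd:", statistics.stdev(control))

# Null hypothesis: the treatment has "no effect", i.e. both groups come
# from the same population. A permutation test estimates how often a
# difference at least as large as the observed one would arise by chance
# alone if group labels were assigned at random (the p value).
observed = statistics.mean(treated) - statistics.mean(control)
pooled = control + treated
random.seed(42)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = (statistics.mean(pooled[:len(treated)])
            - statistics.mean(pooled[len(treated):]))
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / trials
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")

# If p falls below the conventional .05 threshold, the null hypothesis
# is rejected at the 5% significance level.
```

A permutation test is a convenient illustration here because it needs no distributional assumptions: it directly estimates how often a difference as large as the observed one appears when group membership is random, which is exactly what the null hypothesis asserts.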

Elliot, Mark, Fairweather, Ian, Olsen, Wendy Kay, and Pampaka, Maria. A Dictionary of Social Research Methods. Oxford, UK: Oxford University Press, 2016; Free Social Science Dictionary. Socialsciencedictionary.com [2008]; Glossary. Institutional Review Board. Colorado College; Glossary of Key Terms. Writing@CSU. Colorado State University; Glossary A-Z. Education.com; Glossary of Research Terms. Research Mindedness Virtual Learning Resource. Centre for Human Service Technology. University of Southampton; Miller, Robert L. and Brewer, John D. The A-Z of Social Research: A Dictionary of Key Social Science Research Concepts. London: SAGE, 2003; Jupp, Victor. The SAGE Dictionary of Social and Cultural Research Methods. London: SAGE, 2006.

  • Last Updated: Apr 19, 2024 11:16 AM
  • URL: https://libguides.usc.edu/writingguide


Qualitative and Quantitative Research: Glossary of Key Terms

This glossary provides definitions of many of the terms used in the guides to conducting qualitative and quantitative research. The definitions were developed by members of the research methods seminar (E600) taught by Mike Palmquist in the 1990s and 2000s.

Members of the Research Methods Seminar (E600) taught by Mike Palmquist in the 1990s and 2000s. (1994-2022). Glossary of Key Terms. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=90


Chapter 1: Introduction to Research Methods

1.4 Understanding Key Research Concepts and Terms

In this textbook you will be exposed to many terms and concepts associated with research methods, particularly as they relate to the research planning decisions you must make along the way. Figure 1.1 will help you contextualize many of these terms and understand the research process. This general chart begins with two key concepts: ontology and epistemology, advances through other concepts, and concludes with three research methodological approaches: qualitative, quantitative and mixed methods.

Research does not end with making decisions about the type of methods you will use; we could argue that the work is just beginning at this point. Figure 1.3 does not represent an all-encompassing list of concepts and terms related to research methods. Keep in mind that each strategy has its own data collection and analysis approaches associated with the various methodological approaches you choose. Figure 1.1 is intended to provide a general overview of the research concept. You may want to keep this figure handy as you read through the various chapters.


Figure 1.3: Shows the research paradigms and research process © JIBC 2019

Ontology & Epistemology

Thinking about what you know and how you know what you know involves questions of ontology and epistemology. Perhaps you have heard these concepts before in a philosophy class? These concepts are relevant to the work of sociologists as well. As sociologists (those who undertake socially-focused research), we want to understand some aspect of our social world. Usually, we are not starting with zero knowledge. In fact, we usually start with some understanding of three concepts: 1) what is; 2) what can be known about what is; and, 3) what the best mechanism happens to be for learning about what is (Saylor Academy, 2012). In the following sections, we will define these concepts and provide an example of the terms, ontology and epistemology.

Ontology comes from a Greek word meaning the study, theory, or science of being. Ontology is concerned with “what is”, or the nature of reality (Saunders, Lewis, & Thornhill, 2009). It can involve some very large and difficult-to-answer questions, such as:

  • What is the purpose of life?
  • What, if anything, exists beyond our universe?
  • What categories do the things that exist belong to?
  • Is there such a thing as objective reality?
  • What does the verb “to be” mean?

Ontology comprises two aspects: objectivism and subjectivism. Objectivism means that social entities exist externally to the social actors who are concerned with their existence. Subjectivism means that social phenomena are created from the perceptions and actions of the social actors who are concerned with their existence (Saunders, et al., 2009). Figure 1.2 provides an example of similar research projects to be undertaken by two different students. While the projects being proposed by the students are similar, they each have different research questions. Read the scenario and then answer the questions that follow.

Subjectivist and objectivist approaches (adapted from Saunders et al., 2009)

Ana is an Emergency & Security Management Studies (ESMS) student at a local college. She is just beginning her capstone research project and she plans to do research at the City of Vancouver. Her research question is: What is the role of City of Vancouver managers in the Emergency Management Department (EMD) in enabling positive community relationships? She will be collecting data related to the roles and duties of managers in enabling positive community relationships.

Robert is also an ESMS student at the same college. He, too, will be undertaking his research at the City of Vancouver. His research question is: What is the effect of the City of Vancouver’s corporate culture in enabling EMD managers to develop a positive relationship with the local community? He will be collecting data related to perceptions of corporate culture and its effect on enabling positive community-emergency management department relationships.

Before the students begin collecting data, they learn that six months ago, the long-time emergency department manager and assistant manager both retired. They have been replaced by two senior staff managers who have Bachelor’s degrees in Emergency Services Management. These new managers are considered more up-to-date and knowledgeable on emergency services management, given their specialized academic training and practical on-the-job work experience in this department. The new managers have essentially the same job duties and operate under the same procedures as the managers they replaced. When Ana and Robert approach the managers to ask them to participate in their separate studies, the new managers state that they are just new on the job and probably cannot answer the research questions; they decline to participate. Ana and Robert are worried that they will need to start all over again with a new research project. They return to their supervisors to get their opinions on what they should do.

Before reading about their supervisors’ responses, answer the following questions:

  • Is Ana’s research question indicative of an objectivist or a subjectivist approach?
  • Is Robert’s research question indicative of an objectivist or a subjectivist approach?
  • Given your answer in question 1, which managers could Ana interview (new, old, or both) for her research study? Why?
  • Given your answer in question 2, which managers could Robert interview (new, old, or both) for his research study? Why?

Ana’s supervisor tells her that her research question is set up for an objectivist approach. Her supervisor tells her that in her study the social entity (the City) exists in reality external to the social actors (the managers), i.e., there is a formal management structure at the City that has largely remained unchanged since the old managers left and the new ones started. The procedures remain the same regardless of whoever occupies those positions. As such, Ana, using an objectivist approach, could state that the new managers have job descriptions which describe their duties and that they are a part of a formal structure with a hierarchy of people reporting to them and to whom they report. She could further state that this hierarchy, which is unique to this organization, also resembles hierarchies found in other similar organizations. As such, she can argue that the new managers will be able to speak about the role they play in enabling positive community relationships. Their answers would likely be no different than those of the old managers, because the management structure and the procedures remain the same. Therefore, she could go back to the new managers and ask them to participate in her research study.

Robert’s supervisor tells him that his research is set up for a subjectivist approach. In his study, the social phenomenon (the effect of corporate culture on the relationship with the community) is created from the perceptions and consequent actions of the social actors (the managers); i.e., the corporate culture at the City continually influences the process of social interaction, and these interactions influence perceptions of the relationship with the community. The relationship is in a constant state of revision. As such, Robert, using a subjectivist approach, could state that the new managers may have had few interactions with the community members to date and therefore may not be fully cognizant of how the corporate culture affects the department’s relationship with the community. While it would be important to get the new managers’ perceptions, he would also need to speak with the previous managers to get their perceptions from the time they were employed in their positions. This is because the community-department relationship is in a state of constant revision, which is influenced by the various managers’ perceptions of the corporate culture and its effect on their ability to form positive community relationships. Therefore, he could go back to the current managers and ask them to participate in his study, and also ask the department to contact the previous managers to see if they would be willing to participate as well.

As you can see, the research question of each study guides the decision as to whether the researcher should take a subjectivist or an objectivist ontological approach. This decision, in turn, guides their approach to the research study, including whom they should interview.

Epistemology

Epistemology has to do with knowledge. Rather than dealing with questions about what is, epistemology deals with questions of how we know what is.  In sociology, there are many ways to uncover knowledge. We might interview people to understand public opinion about a topic, or perhaps observe them in their natural environment. We could avoid face-to-face interaction altogether by mailing people surveys to complete on their own or by reading people’s opinions in newspaper editorials. Each method of data collection comes with its own set of epistemological assumptions about how to find things out (Saylor Academy, 2012). There are two main subsections of epistemology: positivist and interpretivist philosophies. We will examine these philosophies or paradigms in the following sections.

Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Research Methods -- Quantitative, Qualitative, and More: Overview

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and supports and resources at UC Berkeley. 

As Patten and Newhart note in the book Understanding Research Methods , "Research methods are the building blocks of the scientific enterprise. They are the "how" for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge...Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more.  This guide is an introduction, but if you don't see what you need here, always contact your subject librarian, and/or take a look to see if there's a library research guide that will answer your question. 

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods. Here is the online guide to this one-stop collection, and some helpful links are below:

  • SAGE Research Methods
  • Little Green Books  (Quantitative Methods)
  • Little Blue Books  (Qualitative Methods)
  • Dictionaries and Encyclopedias  
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video--see methods come to life
  • Methodspace -- a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools! From this link, check out pages for each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab -- supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings.
  • Dryad -- a simple self-service tool for researchers to use in publishing their datasets. It provides tools for the effective publication of and access to research data.
  • Geospatial Innovation Facility (GIF) -- provides leadership and training across a broad array of integrated mapping technologies on campus.
  • Research Data Management -- a UC Berkeley guide and consulting service for research data management issues.

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data , and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA) .  Program for easy web-based analysis of survey data.

Consultants

  • D-Lab/Data Science Discovery Consultants Request help with your research project from peer consultants.
  • Research Data Management (RDM) consulting Meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services A service in which advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB / CPHS Qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships) OURS supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops.
  • Sponsored Projects Sponsored projects works with researchers applying for major external grants.
  • Last Updated: Apr 3, 2023 3:14 PM
  • URL: https://guides.lib.berkeley.edu/researchmethods


Research Methods | Definition, Types, Examples

Research methods are specific procedures for collecting and analysing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs quantitative : Will your data take the form of words or numbers?
  • Primary vs secondary : Will you collect original data yourself, or will you use data that have already been collected by someone else?
  • Descriptive vs experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyse the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analysing data
  • Examples of data analysis methods
  • Frequently asked questions about methodology

Data are the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .

You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs secondary data

Primary data are any original information that you collect for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary data are information that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesise existing knowledge, analyse historical trends, or identify patterns on a large scale, secondary data might be a better choice.

Descriptive vs experimental data

In descriptive research, you collect data about your study subject without intervening. The validity of your research will depend on your sampling method.

In experimental research, you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design.

To conduct an experiment, you need to be able to vary your independent variable, precisely measure your dependent variable, and control for confounding variables. If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.


Your data analysis methods will depend on the type of data you collect and how you prepare them for analysis.

Data can often be analysed both quantitatively and qualitatively. For example, survey responses could be analysed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that were collected:

  • From open-ended survey and interview questions, literature reviews, case studies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods.

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that were collected either:

  • During an experiment.
  • Using probability sampling methods.

Because the data are collected and analysed in a statistically valid way, the results of quantitative analysis can be easily standardised and shared among researchers.
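
As a minimal illustration of quantitative analysis, the sketch below uses Python’s standard library to compute a frequency count and an average for some invented survey ratings:

```python
from collections import Counter
from statistics import mean

# Hypothetical Likert-style ratings (1 = strongly disagree, 5 = strongly agree)
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

frequencies = Counter(responses)  # how often each rating was chosen
average = mean(responses)         # the mean rating

print(frequencies[4])  # 4 participants chose a rating of 4
print(average)         # 3.9
```

The same responses could also be analysed qualitatively if they were accompanied by open-ended comments.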

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.



Research Methods In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements predicting the results of an investigation, which can be supported or refuted by the evidence collected.

There are four types of hypotheses:
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlational study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. These are typically written ‘There will be a difference…’

All research has an alternative hypothesis (either one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and the results are analysed, psychologists must accept one hypothesis and reject the other.

So, if a difference is found, the psychologist would accept the alternative hypothesis and reject the null. The opposite applies if no difference is found.

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which research findings can be applied to the larger population from which the sample was drawn.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling : when a system is used to select participants, picking every Nth person from a list of all possible participants, where N = the number of people in the research population ÷ the number of people needed for the sample.
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
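
The systematic sampling interval described above can be sketched in code; the population of 20 and the sample size of 5 below are invented for illustration:

```python
# Hypothetical numbered list of the research population
population = list(range(1, 21))  # 20 people
sample_size = 5

n = len(population) // sample_size  # sampling interval: N = 20 / 5 = 4
sample = population[n - 1::n]       # pick every Nth person

print(sample)  # [4, 8, 12, 16, 20]
```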

Experiments always have an independent and dependent variable .

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that arises when participants work out the aims of the research study and begin to behave in line with what they think is expected.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design) : each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design : each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design (within-groups design) : each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better in the second condition because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 
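
Counterbalancing can be sketched as alternating condition orders across participants, so that each condition is equally often first; the condition labels here are invented:

```python
CONDITIONS = ("A", "B")  # hypothetical condition labels

def order_for(participant_index: int) -> tuple:
    """Give even-numbered participants order AB and odd-numbered participants BA."""
    if participant_index % 2 == 0:
        return CONDITIONS        # A then B
    return CONDITIONS[::-1]      # B then A

orders = [order_for(i) for i in range(4)]
print(orders)  # [('A', 'B'), ('B', 'A'), ('A', 'B'), ('B', 'A')]
```

With this scheme, any order effect (practice, tiredness, boredom) is spread equally across the two conditions rather than loading onto one of them.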

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. It uses information from a range of sources, such as from the person concerned and also from their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.


  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient. This is a value between -1 and +1: the closer the value is to ±1, the stronger the relationship between the variables. The coefficient can be positive (e.g. +0.63) or negative (e.g. -0.63).
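
As an illustrative sketch (in practice a statistics package would be used), Spearman’s rho can be computed by ranking each variable and applying the formula rho = 1 - 6Σd²/(n(n² - 1)), where d is the difference between paired ranks. This simple version assumes no tied scores, and the data are invented:

```python
def spearmans_rho(x, y):
    """Spearman's rank correlation coefficient, assuming no tied scores."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank  # rank 1 = smallest value
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical scores from six participants
hours_revised = [2, 5, 1, 3, 4, 6]
test_score = [40, 70, 30, 50, 65, 80]
print(spearmans_rho(hours_revised, test_score))  # 1.0, a perfect positive correlation
```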


A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not always prove causation, as a third variable may be involved. 


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; the participant can raise whatever topics he/she feels are relevant, and follow-up questions are posed in response to the participant’s answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

Other practical advantages of questionnaires are that they are cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. There could be ethical problems of deception and consent with this particular observation method.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed. The observation of participants’ behavior is from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, then it is described as being reliable.

  • Test-retest reliability :  assessing the same person on two different occasions which shows the extent to which the test produces the same answers.
  • Inter-observer reliability : the extent to which there is an agreement between two or more observers.
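
A simple way to quantify inter-observer reliability is the proportion of observation intervals in which two observers recorded the same behavioural category; the sketch below uses invented data:

```python
def percent_agreement(observer_a, observer_b):
    """Proportion of intervals where the two observers' records match."""
    matches = sum(a == b for a, b in zip(observer_a, observer_b))
    return matches / len(observer_a)

# Hypothetical behavioural categories recorded over ten intervals
a = ["play", "talk", "play", "idle", "talk", "play", "idle", "talk", "play", "play"]
b = ["play", "talk", "idle", "idle", "talk", "play", "idle", "play", "play", "play"]
print(percent_agreement(a, b))  # 0.8, i.e. the observers agreed on 8 of 10 intervals
```

In practice, chance-corrected measures such as Cohen’s kappa are often preferred, since two observers can agree to some extent by chance alone.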

Meta-Analysis

A meta-analysis is a systematic review that involves identifying an aim and then searching for research studies that have addressed similar aims/hypotheses.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the validity of the conclusions, as they are based on a wider range of studies.

Weaknesses: Research designs in studies can vary, so they are not truly comparable.

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewer determines whether the article is accepted. The article may be: Accepted as it is, accepted with revisions, sent back to the author to revise and re-submit or rejected without the possibility of submission.

The editor makes the final decision on whether to accept or reject the research report, based on the reviewers’ comments/recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer reviews may be an ideal, whereas in practice there are lots of problems. For example, it slows publication down and may prevent unusual, new work being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving on the internet where everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data e.g. reaction time or number of mistakes. It represents how much or how long, how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : does the test measure what it’s supposed to measure ‘on the face of it’. This is done by ‘eyeballing’ the measuring or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we use p < 0.05 (as it strikes a balance between making a Type I and a Type II error), but p < 0.01 is used in research where an error could cause harm, such as trials of a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
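The decision rule described above can be sketched in a few lines of code. This is an illustrative sketch, not part of the glossary; the function name and example p-values are hypothetical:

```python
# Illustrative sketch of the significance decision rule described above.
# The function name and example p-values are hypothetical.

def decide(p_value, alpha=0.05):
    """Reject the null hypothesis when p < alpha, otherwise retain it."""
    if p_value < alpha:
        return "reject null (significant)"
    return "retain null (not significant)"

print(decide(0.03))              # significant at the usual p < 0.05 level
print(decide(0.03, alpha=0.01))  # not significant at the stricter p < 0.01
```

Note the trade-off the glossary describes: lowering alpha from 0.05 to 0.01 guards against a type I error but increases the risk of a type II error.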

Ethical Issues

  • Informed consent is when participants are able to make an informed judgment about whether to take part. However, obtaining it may cause them to guess the aims of the study and change their behavior.
  • To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that the participants would understand what they are agreeing to.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • However, withdrawal can bias the sample, as the participants who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they are spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names instead, though this may not always be enough, as it is sometimes possible to work out who the participants were.


2.2 Research Methods

Learning Objectives

By the end of this section, you should be able to:

  • Recall the six steps of the scientific method
  • Differentiate between four kinds of research methods: surveys, field research, experiments, and secondary data analysis.
  • Explain the appropriateness of specific research approaches for specific topics.

Sociologists examine the social world, see a problem or interesting pattern, and set out to study it. They use research methods to design a study. Planning the research design is a key step in any sociological study. Sociologists generally choose from widely used methods of social investigation: primary source data collection such as surveys, participant observation, ethnography, case studies, unobtrusive observations, and experiments, or secondary data analysis, the use of existing sources. Every research method comes with pluses and minuses, and the topic of study strongly influences which method or methods are put to use. When you are conducting research, think about the best way to gather knowledge about your topic, and think of yourself as an architect: an architect needs a blueprint to build a house, and as a sociologist your blueprint is your research design, including your data collection method.

When entering a particular social environment, a researcher must be careful. There are times to remain anonymous and times to be overt. There are times to conduct interviews and times to simply observe. Some participants need to be thoroughly informed; others should not know they are being observed. A researcher wouldn’t stroll into a crime-ridden neighborhood at midnight, calling out, “Any gang members around?”

Making sociologists’ presence invisible is not always realistic for other reasons. That option is not available to a researcher studying prison behaviors, early education, or the Ku Klux Klan. Researchers can’t just stroll into prisons, kindergarten classrooms, or Klan meetings and unobtrusively observe behaviors without attracting attention. In situations like these, other methods are needed. Researchers choose methods that best suit their study topics, protect research participants or subjects, and that fit with their overall approaches to research.

As a research method, a survey collects data from subjects who respond to a series of questions about behaviors and opinions, often in the form of a questionnaire or an interview. The survey is one of the most widely used scientific research methods. The standard survey format allows individuals a level of anonymity in which they can express personal ideas.

At some point, most people in the United States respond to some type of survey. The 2020 U.S. Census is an excellent example of a large-scale survey intended to gather sociological data. Since 1790, the United States has conducted a census to gather demographic data about its residents; the first consisted of just six questions. Currently, the Census is received by residents of the United States and five territories and consists of 12 questions.

Not all surveys are considered sociological research, however, and many surveys people commonly encounter focus on identifying marketing needs and strategies rather than testing a hypothesis or contributing to social science knowledge. Questions such as, “How many hot dogs do you eat in a month?” or “Were the staff helpful?” are not usually designed as scientific research. The Nielsen Ratings determine the popularity of television programming through scientific market research. However, polls conducted by television programs such as American Idol or So You Think You Can Dance cannot be generalized, because they are administered to an unrepresentative population, a specific show’s audience. You might receive polls through your cell phones or emails, from grocery stores, restaurants, and retail stores. They often provide you incentives for completing the survey.

Sociologists conduct surveys under controlled conditions for specific purposes. Surveys gather different types of information from people. While surveys are not great at capturing the ways people really behave in social situations, they are a great method for discovering how people feel, think, and act—or at least how they say they feel, think, and act. Surveys can track preferences for presidential candidates or reported individual behaviors (such as sleeping, driving, or texting habits) or information such as employment status, income, and education levels.

A survey targets a specific population, people who are the focus of a study, such as college athletes, international students, or teenagers living with type 1 (juvenile-onset) diabetes. Most researchers choose to survey a small sector of the population, or a sample, a manageable number of subjects who represent a larger population. The success of a study depends on how well a population is represented by the sample. In a random sample, every person in a population has the same chance of being chosen for the study. As a result, a Gallup Poll, if conducted as a nationwide random sampling, should be able to provide an accurate estimate of public opinion whether it contacts 2,000 or 10,000 people.
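The defining property of a random sample, that every member of the population has an equal chance of selection, can be sketched with Python's standard library. The population and names here are hypothetical:

```python
import random

# Hypothetical population of 10,000 people (the names are illustrative).
population = [f"person_{i}" for i in range(10_000)]

random.seed(42)  # seeded only so the example is reproducible
sample = random.sample(population, k=100)  # each person equally likely to be chosen

print(len(sample))       # 100
print(len(set(sample)))  # 100 -- random.sample draws without replacement
```

The same principle scales to any sample size; what matters for representativeness is that selection is random, not how large the population is.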

After selecting subjects, the researcher develops a specific plan to ask questions and record responses. It is important to inform subjects of the nature and purpose of the survey up front. If they agree to participate, researchers thank subjects and offer them a chance to see the results of the study if they are interested. The researcher presents the subjects with an instrument, which is a means of gathering the information.

A common instrument is a questionnaire. Subjects often answer a series of closed-ended questions. The researcher might ask yes-or-no or multiple-choice questions, allowing subjects to choose possible responses to each question. This kind of questionnaire collects quantitative data, data in numerical form that can be counted and statistically analyzed. Just count up the number of “yes” and “no” responses or correct answers, and chart them into percentages.
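Tallying closed-ended responses into percentages, as described above, is a simple computation. A minimal sketch, using hypothetical responses:

```python
from collections import Counter

# Hypothetical closed-ended responses to a yes/no survey question.
responses = ["yes", "no", "yes", "yes", "no"]

counts = Counter(responses)  # tallies each distinct answer
percentages = {answer: 100 * n / len(responses) for answer, n in counts.items()}

print(percentages)  # {'yes': 60.0, 'no': 40.0}
```

In a real survey the same tally would feed directly into a bar chart or frequency table.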

Questionnaires can also ask more complex questions with more complex answers—beyond “yes,” “no,” or checkbox options. These types of inquiries use open-ended questions that require short essay responses. Participants willing to take the time to write those answers might convey personal religious beliefs, political views, goals, or morals. The answers are subjective and vary from person to person. An example might be: “How do you plan to use your college education?”

Some topics that investigate internal thought processes are impossible to observe directly and are difficult to discuss honestly in a public forum. People are more likely to share honest answers if they can respond to questions anonymously. This type of personal explanation is qualitative data, conveyed through words. Qualitative information is harder to organize and tabulate. The researcher will end up with a wide range of responses, some of which may be surprising. The benefit of written opinions, though, is the wealth of in-depth material that they provide.

An interview is a one-on-one conversation between the researcher and the subject, and it is a way of conducting surveys on a topic. However, participants are free to respond as they wish, without being limited by predetermined choices. In the back-and-forth conversation of an interview, a researcher can ask for clarification, spend more time on a subtopic, or ask additional questions. In an interview, a subject will ideally feel free to open up and answer questions that are often complex. There are no right or wrong answers. The subject might not even know how to answer the questions honestly.

Questions such as “How does society’s view of alcohol consumption influence your decision whether or not to take your first sip of alcohol?” or “Did you feel that the divorce of your parents would put a social stigma on your family?” involve so many factors that the answers are difficult to categorize. A researcher needs to avoid steering or prompting the subject to respond in a specific way; otherwise, the results will prove to be unreliable. The researcher will also benefit from gaining a subject’s trust, from empathizing or commiserating with a subject, and from listening without judgment.

Surveys often collect both quantitative and qualitative data. For example, a researcher interviewing people who are incarcerated might receive quantitative data, such as demographics – race, age, sex, that can be analyzed statistically. For example, the researcher might discover that 20 percent of incarcerated people are above the age of 50. The researcher might also collect qualitative data, such as why people take advantage of educational opportunities during their sentence and other explanatory information.

The survey can be carried out online, over the phone, by mail, or face-to-face. When researchers collect data outside a laboratory, library, or workplace setting, they are conducting field research, which is our next topic.

Field Research

The work of sociology rarely happens in limited, confined spaces. Rather, sociologists go out into the world. They meet subjects where they live, work, and play. Field research refers to gathering primary data from a natural environment. To conduct field research, the sociologist must be willing to step into new environments and observe, participate, or experience those worlds. In field work, the sociologists, rather than the subjects, are the ones out of their element.

The researcher interacts with or observes people and gathers data along the way. The key point in field research is that it takes place in the subject’s natural environment, whether it’s a coffee shop or tribal village, a homeless shelter or the DMV, a hospital, airport, mall, or beach resort.

While field research often begins in a specific setting, the study’s purpose is to observe specific behaviors in that setting. Field work is optimal for observing how people think and behave. It seeks to understand why they behave that way. However, researchers may struggle to narrow down cause and effect when there are so many variables floating around in a natural environment. And while field research looks for correlation, its small sample size does not allow for establishing a causal relationship between two variables. Indeed, much of the data gathered in sociology do not identify a cause and effect but a correlation.

Sociology in the Real World

Beyoncé and Lady Gaga as Sociological Subjects

Sociologists have studied Lady Gaga and Beyoncé and their impact on music, movies, social media, fan participation, and social equality. In their studies, researchers have used several research methods including secondary analysis, participant observation, and surveys from concert participants.

In their study, Click, Lee & Holiday (2013) interviewed 45 Lady Gaga fans who utilized social media to communicate with the artist. These fans viewed Lady Gaga as a mirror of themselves and a source of inspiration. Like her, they embrace not being a part of mainstream culture. Many of Lady Gaga’s fans are members of the LGBTQ community. They see the song “Born This Way” as a rallying cry and answer her calls for “Paws Up” with a physical expression of solidarity: outstretched arms and fingers bent and curled to resemble monster claws.

Sascha Buchanan (2019) made use of participant observation to study the relationship between two fan groups, that of Beyoncé and that of Rihanna. She observed award shows sponsored by iHeartRadio, MTV EMA, and BET that pit one group against another as they competed for Best Fan Army, Biggest Fans, and FANdemonium. Buchanan argues that the media thus sustains a myth of rivalry between the two most commercially successful Black women vocal artists.

Participant Observation

In 2000, a comic writer named Rodney Rothman wanted an insider’s view of white-collar work. He slipped into the sterile, high-rise offices of a New York “dot com” agency. Every day for two weeks, he pretended to work there. His main purpose was simply to see whether anyone would notice him or challenge his presence. No one did. The receptionist greeted him. The employees smiled and said good morning. Rothman was accepted as part of the team. He even went so far as to claim a desk, inform the receptionist of his whereabouts, and attend a meeting. He published an article about his experience in The New Yorker called “My Fake Job” (2000). Later, he was discredited for allegedly fabricating some details of the story and The New Yorker issued an apology. However, Rothman’s entertaining article still offered fascinating descriptions of the inside workings of a “dot com” company and exemplified the lengths to which a writer, or a sociologist, will go to uncover material.

Rothman had conducted a form of study called participant observation , in which researchers join people and participate in a group’s routine activities for the purpose of observing them within that context. This method lets researchers experience a specific aspect of social life. A researcher might go to great lengths to get a firsthand look into a trend, institution, or behavior. A researcher might work as a waitress in a diner, experience homelessness for several weeks, or ride along with police officers as they patrol their regular beat. Often, these researchers try to blend in seamlessly with the population they study, and they may not disclose their true identity or purpose if they feel it would compromise the results of their research.

At the beginning of a field study, researchers might have a question: “What really goes on in the kitchen of the most popular diner on campus?” or “What is it like to be homeless?” Participant observation is a useful method if the researcher wants to explore a certain environment from the inside.

Field researchers simply want to observe and learn. In such a setting, the researcher will be alert and open minded to whatever happens, recording all observations accurately. Soon, as patterns emerge, questions will become more specific, observations will lead to hypotheses, and hypotheses will guide the researcher in analyzing data and generating results.

In a study of small towns in the United States conducted by sociological researchers Robert S. Lynd and Helen Merrell Lynd, the team altered their purpose as they gathered data. They initially planned to focus their study on the role of religion in U.S. towns. As they gathered observations, they realized that the effect of industrialization and urbanization was the more relevant topic of this social group. The Lynds did not change their methods, but they revised the purpose of their study.

This shaped the structure of Middletown: A Study in Modern American Culture , their published results (Lynd & Lynd, 1929).

The Lynds were upfront about their mission. The townspeople of Muncie, Indiana, knew why the researchers were in their midst. But some sociologists prefer not to alert people to their presence. The main advantage of covert participant observation is that it allows the researcher access to authentic, natural behaviors of a group’s members. The challenge, however, is gaining access to a setting without disrupting the pattern of others’ behavior. Becoming an inside member of a group, organization, or subculture takes time and effort. Researchers must pretend to be something they are not. The process could involve role playing, making contacts, networking, or applying for a job.

Once inside a group, some researchers spend months or even years pretending to be one of the people they are observing. However, as observers, they cannot get too involved. They must keep their purpose in mind and apply the sociological perspective. That way, they illuminate social patterns that are often unrecognized. Because information gathered during participant observation is mostly qualitative, rather than quantitative, the end results are often descriptive or interpretive. The researcher might present findings in an article or book and describe what he or she witnessed and experienced.

This type of research is what journalist Barbara Ehrenreich conducted for her book Nickel and Dimed . One day over lunch with her editor, Ehrenreich mentioned an idea. How can people exist on minimum-wage work? How do low-income workers get by? she wondered. Someone should do a study . To her surprise, her editor responded, Why don’t you do it?

That’s how Ehrenreich found herself joining the ranks of the working class. For several months, she left her comfortable home and lived and worked among people who lacked, for the most part, higher education and marketable job skills. Undercover, she applied for and worked minimum wage jobs as a waitress, a cleaning woman, a nursing home aide, and a retail chain employee. During her participant observation, she used only her income from those jobs to pay for food, clothing, transportation, and shelter.

She discovered the obvious, that it’s almost impossible to get by on minimum wage work. She also experienced and observed attitudes many middle and upper-class people never think about. She witnessed firsthand the treatment of working class employees. She saw the extreme measures people take to make ends meet and to survive. She described fellow employees who held two or three jobs, worked seven days a week, lived in cars, could not pay to treat chronic health conditions, got randomly fired, submitted to drug tests, and moved in and out of homeless shelters. She brought aspects of that life to light, describing difficult working conditions and the poor treatment that low-wage workers suffer.

The book she wrote upon her return to her real life as a well-paid writer has been widely read and used in many college classrooms.

Ethnography

Ethnography is the immersion of the researcher in the natural setting of an entire social community to observe and experience their everyday life and culture. The heart of an ethnographic study focuses on how subjects view their own social standing and how they understand themselves in relation to a social group.

An ethnographic study might observe, for example, a small U.S. fishing town, an Inuit community, a village in Thailand, a Buddhist monastery, a private boarding school, or an amusement park. These places all have borders. People live, work, study, or vacation within those borders. People are there for a certain reason and therefore behave in certain ways and respect certain cultural norms. An ethnographer would commit to spending a determined amount of time studying every aspect of the chosen place, taking in as much as possible.

A sociologist studying a tribe in the Amazon might watch the way villagers go about their daily lives and then write a paper about it. To observe a spiritual retreat center, an ethnographer might sign up for a retreat and attend as a guest for an extended stay, observe and record data, and collate the material into results.

Institutional Ethnography

Institutional ethnography is an extension of basic ethnographic research principles that focuses intentionally on everyday concrete social relationships. Developed by Canadian sociologist Dorothy E. Smith (1990), institutional ethnography is often considered a feminist-inspired approach to social analysis and primarily considers women’s experiences within male-dominated societies and power structures. Smith’s work is seen to challenge sociology’s exclusion of women, both academically and in the study of women’s lives (Fenstermaker, n.d.).

Historically, social science research tended to objectify women and ignore their experiences except as viewed from the male perspective. Modern feminists note that describing women, and other marginalized groups, as subordinates helps those in authority maintain their own dominant positions (Social Sciences and Humanities Research Council of Canada, n.d.). Smith’s three major works explored what she called “the conceptual practices of power” and are still considered seminal works in feminist theory and ethnography (Fenstermaker, n.d.).

Sociological Research

The Making of Middletown: A Study in Modern U.S. Culture

In 1924, a young married couple named Robert and Helen Lynd undertook an unprecedented ethnography: to apply sociological methods to the study of one U.S. city in order to discover what “ordinary” people in the United States did and believed. Choosing Muncie, Indiana (population about 30,000) as their subject, they moved to the small town and lived there for eighteen months.

Ethnographers had been examining other cultures for decades—groups considered minorities or outsiders—like gangs, immigrants, and the poor. But no one had studied the so-called average American.

Recording interviews and using surveys to gather data, the Lynds objectively described what they observed. Researching existing sources, they compared Muncie in 1890 to the Muncie they observed in 1924. Most Muncie adults, they found, had grown up on farms but now lived in homes inside the city. As a result, the Lynds focused their study on the impact of industrialization and urbanization.

They observed that Muncie was divided into business and working class groups. They defined business class as dealing with abstract concepts and symbols, while working class people used tools to create concrete objects. The two classes led different lives with different goals and hopes. However, the Lynds observed, mass production offered both classes the same amenities. Like wealthy families, the working class was now able to own radios, cars, washing machines, telephones, vacuum cleaners, and refrigerators. This was an emerging material reality of the 1920s.

As the Lynds worked, they divided their manuscript into six chapters: Getting a Living, Making a Home, Training the Young, Using Leisure, Engaging in Religious Practices, and Engaging in Community Activities.

When the study was completed, the Lynds encountered a big problem. The Rockefeller Foundation, which had commissioned the book, claimed it was useless and refused to publish it. The Lynds asked if they could seek a publisher themselves.

Middletown: A Study in Modern American Culture was not only published in 1929 but also became an instant bestseller, a status unheard of for a sociological study. The book sold out six printings in its first year of publication, and has never gone out of print (Caplow, Hicks, & Wattenberg, 2000).

Nothing like it had ever been done before. Middletown was reviewed on the front page of the New York Times. Readers in the 1920s and 1930s identified with the citizens of Muncie, Indiana, but they were equally fascinated by the sociological methods and the use of scientific data to define ordinary people in the United States. The book was proof that social data was important—and interesting—to the U.S. public.

Sometimes a researcher wants to study one specific person or event. A case study is an in-depth analysis of a single event, situation, or individual. To conduct a case study, a researcher examines existing sources like documents and archival records, conducts interviews, engages in direct observation and even participant observation, if possible.

Researchers might use this method to study a single case of a foster child, drug lord, cancer patient, criminal, or rape victim. However, a major criticism of the case study as a method is that while offering depth on a topic, it does not provide enough evidence to form a generalized conclusion. In other words, it is difficult to make universal claims based on just one person, since one person does not verify a pattern. This is why most sociologists do not use case studies as a primary research method.

However, case studies are useful when the single case is unique. In these instances, a single case study can contribute tremendous insight. For example, a feral child, also called “wild child,” is one who grows up isolated from human beings. Feral children grow up without social contact and language, which are elements crucial to a “civilized” child’s development. These children mimic the behaviors and movements of animals, and often invent their own language. There are only about one hundred cases of “feral children” in the world.

As you may imagine, a feral child is a subject of great interest to researchers. Feral children provide unique information about child development because they have grown up outside of the parameters of “normal” growth and nurturing. And since there are very few feral children, the case study is the most appropriate method for researchers to use in studying the subject.

At age three, a Ukrainian girl named Oxana Malaya suffered severe parental neglect. She lived in a shed with dogs, and she ate raw meat and scraps. Five years later, a neighbor called authorities and reported seeing a girl who ran on all fours, barking. Officials brought Oxana into society, where she was cared for and taught some human behaviors, but she never became fully socialized. She has been designated as unable to support herself and now lives in a mental institution (Grice 2011). Case studies like this offer a way for sociologists to collect data that may not be obtained by any other method.

Experiments

You have probably tested some of your own personal social theories. “If I study at night and review in the morning, I’ll improve my retention skills.” Or, “If I stop drinking soda, I’ll feel better.” Cause and effect. If this, then that. When you test the theory, your results either prove or disprove your hypothesis.

One way researchers test social theories is by conducting an experiment , meaning they investigate relationships to test a hypothesis—a scientific approach.

There are two main types of experiments: lab-based experiments and natural or field experiments. In a lab setting, the research can be controlled so that more data can be recorded in a limited amount of time. In a natural or field-based experiment, the time it takes to gather the data cannot be controlled but the information might be considered more accurate since it was collected without interference or intervention by the researcher.

As a research method, either type of sociological experiment is useful for testing if-then statements: if a particular thing happens (cause), then another particular thing will result (effect). To set up a lab-based experiment, sociologists create artificial situations that allow them to manipulate variables.

Classically, the sociologist selects a set of people with similar characteristics, such as age, class, race, or education. Those people are divided into two groups. One is the experimental group and the other is the control group. The experimental group is exposed to the independent variable(s) and the control group is not. To test the benefits of tutoring, for example, the sociologist might provide tutoring to the experimental group of students but not to the control group. Then both groups would be tested for differences in performance to see if tutoring had an effect on the experimental group of students. As you can imagine, in a case like this, the researcher would not want to jeopardize the accomplishments of either group of students, so the setting would be somewhat artificial. The test would not count toward a grade on a student’s permanent record, for example.
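Comparing the two groups described above reduces to comparing summary statistics. A minimal sketch of the tutoring example, using hypothetical test scores:

```python
# Hypothetical test scores for the tutoring experiment described above.
tutored_scores = [78, 85, 91, 74, 88]  # experimental group (received tutoring)
control_scores = [70, 76, 81, 68, 79]  # control group (no tutoring)

def mean(scores):
    return sum(scores) / len(scores)

# A positive difference is consistent with a tutoring effect, but a statistical
# test would still be needed to judge whether the difference is significant.
difference = mean(tutored_scores) - mean(control_scores)
print(round(difference, 1))  # 8.4
```

Because the groups were matched on characteristics like age and education beforehand, the difference in means can more plausibly be attributed to the independent variable, the tutoring.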

And if a researcher told the students they would be observed as part of a study on measuring the effectiveness of tutoring, the students might not behave naturally. This is called the Hawthorne effect —which occurs when people change their behavior because they know they are being watched as part of a study. The Hawthorne effect is unavoidable in some research studies because sociologists have to make the purpose of the study known. Subjects must be aware that they are being observed, and a certain amount of artificiality may result (Sonnenfeld 1985).

A real-life example will help illustrate the process. In 1971, Frances Heussenstamm, a sociology professor at California State University at Los Angeles, had a theory about police prejudice. To test her theory, she conducted research. She chose fifteen students from three ethnic backgrounds: Black, White, and Hispanic. She chose students who routinely drove to and from campus along Los Angeles freeway routes, and who had had perfect driving records for longer than a year.

Next, she placed a Black Panther bumper sticker on each car. That sticker, a representation of a social value, was the independent variable. In the 1970s, the Black Panthers were a revolutionary group actively fighting racism. Heussenstamm asked the students to follow their normal driving patterns. She wanted to see whether seeming support for the Black Panthers would change how these good drivers were treated by the police patrolling the highways. The dependent variable would be the number of traffic stops/citations.

The first arrest, for an incorrect lane change, was made two hours after the experiment began. One participant was pulled over three times in three days. He quit the study. After seventeen days, the fifteen drivers had collected a total of thirty-three traffic citations. The research was halted. The funding to pay traffic fines had run out, and so had the enthusiasm of the participants (Heussenstamm, 1971).

Secondary Data Analysis

While sociologists often engage in original research studies, they also contribute knowledge to the discipline through secondary data analysis . Secondary data do not result from firsthand research collected from primary sources; they are the already completed work of other researchers or data collected by an agency or organization. Sociologists might study works written by historians, economists, teachers, or early sociologists. They might search through periodicals, newspapers, or magazines, or organizational data from any period in history.

Using available information not only saves time and money but can also add depth to a study. Sociologists often interpret findings in a new way, a way that was not part of an author’s original purpose or intention. To study how women were encouraged to act and behave in the 1960s, for example, a researcher might watch movies, television shows, and situation comedies from that period. Or to research changes in behavior and attitudes due to the emergence of television in the late 1950s and early 1960s, a sociologist would rely on new interpretations of secondary data. Decades from now, researchers will most likely conduct similar studies on the advent of mobile phones, the Internet, or social media.

Social scientists also learn by analyzing the research of a variety of agencies. Governmental departments and global groups, like the U.S. Bureau of Labor Statistics or the World Health Organization (WHO), publish studies with findings that are useful to sociologists. A public statistic like the foreclosure rate might be useful for studying the effects of a recession. A racial demographic profile might be compared with data on education funding to examine the resources accessible by different groups.

One of the advantages of secondary data like old movies or WHO statistics is that it is nonreactive research (or unobtrusive research), meaning that it does not involve direct contact with subjects and will not alter or influence people’s behaviors. Unlike studies requiring direct contact with people, using previously published data does not require entering a population and the investment and risks inherent in that research process.

Using available data does have its challenges. Public records are not always easy to access. A researcher will need to do some legwork to track them down and gain access to records. To guide the search through a vast library of materials and avoid wasting time reading unrelated sources, sociologists employ content analysis , applying a systematic approach to record and value information gleaned from secondary data as they relate to the study at hand.
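The "systematic approach" of content analysis is often implemented as an explicit coding scheme applied uniformly across sources. A minimal sketch in Python, in which the coding categories, keywords, and sample "documents" are all invented for illustration:

```python
from collections import Counter

# Hypothetical coding scheme: each category is signalled by keywords.
# Categories, keywords, and documents below are invented for illustration.
CODES = {
    "domestic_role": ["home", "housewife", "kitchen"],
    "workplace_role": ["office", "career", "job"],
}

def code_document(text, codes):
    """Count how often each category's keywords appear in one document."""
    words = text.lower().split()
    counts = Counter()
    for category, keywords in codes.items():
        counts[category] = sum(words.count(k) for k in keywords)
    return counts

documents = [
    "She left her job to keep the home and kitchen in order",
    "A career woman at the office",
]

totals = Counter()
for doc in documents:
    totals += code_document(doc, CODES)

print(totals)  # tallies per category across the whole corpus
```

A real coding scheme would be developed and tested against the research question, but the principle is the same: explicit rules applied identically to every source, which is what distinguishes content analysis from casual reading.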

Also, in some cases, there is no way to verify the accuracy of existing data. It is easy to count how many drunk drivers, for example, are pulled over by the police. But how many are not? While it’s possible to discover the percentage of teenage students who drop out of high school, it might be more challenging to determine the number who return to school or get their GED later.

Another problem arises when data are unavailable in the exact form needed or do not survey the topic from the precise angle the researcher seeks. For example, the average salaries paid to professors at a public school are public record. But these figures do not necessarily reveal how long it took each professor to reach that salary range, what their educational backgrounds are, or how long they’ve been teaching.

When conducting content analysis, it is important to consider the date of publication of an existing source and to take into account attitudes and common cultural ideals that may have influenced the research. For example, when Robert S. Lynd and Helen Merrell Lynd gathered research in the 1920s, attitudes and cultural norms were vastly different then than they are now. Beliefs about gender roles, race, education, and work have changed significantly since then. At the time, the study’s purpose was to reveal insights about small U.S. communities. Today, it is an illustration of 1920s attitudes and values.



Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/introduction-sociology-3e/pages/1-introduction
  • Authors: Tonja R. Conerly, Kathleen Holmes, Asha Lal Tamang
  • Publisher/website: OpenStax
  • Book title: Introduction to Sociology 3e
  • Publication date: Jun 3, 2021
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/introduction-sociology-3e/pages/1-introduction
  • Section URL: https://openstax.org/books/introduction-sociology-3e/pages/2-2-research-methods

© Jan 18, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.



In this textbook you will be exposed to many terms and concepts associated with research methods, particularly as they relate to the research planning decisions you must make along the way. Figure 1.3 will help you contextualize many of these terms and understand the research process. This general chart begins with two key concepts, ontology and epistemology, advances through other concepts, and concludes with three research methodological approaches: qualitative, quantitative and mixed methods.

Research does not end with making decisions about the type of methods you will use; we could argue that the work is just beginning at that point. Figure 1.3 does not represent an all-encompassing list of concepts and terms related to research methods. Keep in mind that each strategy has its own data collection and analysis approaches associated with the various methodological approaches you choose. Figure 1.3 is intended to provide a general overview of the research process. You may want to keep this figure handy as you read through the various chapters.


Figure 1.3: Research paradigms and the research process © JIBC 2019

Ontology & Epistemology

Thinking about what you know and how you know what you know involves questions of ontology and epistemology. Perhaps you have heard these concepts before in a philosophy class; they are relevant to the work of sociologists as well. As sociologists (those who undertake socially focused research), we want to understand some aspect of our social world. Usually, we are not starting with zero knowledge. In fact, we usually start with some understanding of three things: 1) what is; 2) what can be known about what is; and 3) what the best mechanism is for learning about what is (Saylor Academy, 2012). In the following sections, we will define and illustrate the terms ontology and epistemology.

Ontology is a Greek word that means the study, theory, or science of being. Ontology is concerned with “what is”, or the nature of reality (Saunders, Lewis, & Thornhill, 2009). It can involve some very large and difficult-to-answer questions, such as:

  • What is the purpose of life?
  • What, if anything, exists beyond our universe?
  • What categories does it belong to?
  • Is there such a thing as objective reality?
  • What does the verb “to be” mean?

Ontology comprises two aspects: objectivism and subjectivism. Objectivism means that social entities exist externally to the social actors who are concerned with their existence. Subjectivism means that social phenomena are created from the perceptions and actions of the social actors who are concerned with their existence (Saunders, et al., 2009). Figure 1.2 provides an example of a similar research project to be undertaken by two different students. While the projects proposed by the students are similar, they each have different research questions. Read the scenario and then answer the questions that follow.

Subjectivist and objectivist approaches (adapted from Saunders et al., 2009)

Ana is an Emergency & Security Management Studies (ESMS) student at a local college. She is just beginning her capstone research project and she plans to do research at the City of Vancouver. Her research question is: What is the role of City of Vancouver managers in the Emergency Management Department (EMD) in enabling positive community relationships? She will be collecting data related to the roles and duties of managers in enabling positive community relationships.

Robert is also an ESMS student at the same college. He, too, will be undertaking his research at the City of Vancouver. His research question is: What is the effect of the City of Vancouver’s corporate culture in enabling EMD managers to develop a positive relationship with the local community? He will be collecting data related to perceptions of corporate culture and its effect on enabling positive community-emergency management department relationships.

Before the students begin collecting data, they learn that six months ago, the long-time emergency department manager and assistant manager both retired. They have been replaced by two senior staff managers who have Bachelor’s degrees in Emergency Services Management. These new managers are considered more up-to-date and knowledgeable on emergency services management, given their specialized academic training and practical on-the-job work experience in this department. The new managers have essentially the same job duties and operate under the same procedures as the managers they replaced. When Ana and Robert approach the managers to ask them to participate in their separate studies, the new managers state that they are new on the job and probably cannot answer the research questions; they decline to participate. Ana and Robert are worried that they will need to start all over again with a new research project. They return to their supervisors to get their opinions on what they should do.

Before reading about their supervisors’ responses, answer the following questions:

  • Is Ana’s research question indicative of an objectivist or a subjectivist approach?
  • Is Robert’s research question indicative of an objectivist or a subjectivist approach?
  • Given your answer in question 1, which managers could Ana interview (new, old, or both) for her research study? Why?
  • Given your answer in question 2, which managers could Robert interview (new, old, or both) for his research study? Why?

Ana’s supervisor tells her that her research question is set up for an objectivist approach. Her supervisor tells her that in her study the social entity (the City) exists in reality external to the social actors (the managers), i.e., there is a formal management structure at the City that has largely remained unchanged since the old managers left and the new ones started. The procedures remain the same regardless of whoever occupies those positions. As such, Ana, using an objectivist approach, could state that the new managers have job descriptions which describe their duties and that they are a part of a formal structure with a hierarchy of people reporting to them and to whom they report. She could further state that this hierarchy, which is unique to this organization, also resembles hierarchies found in other similar organizations. As such, she can argue that the new managers will be able to speak about the role they play in enabling positive community relationships. Their answers would likely be no different than those of the old managers, because the management structure and the procedures remain the same. Therefore, she could go back to the new managers and ask them to participate in her research study.

Robert’s supervisor tells him that his research is set up for a subjectivist approach. In his study, the social phenomenon (the effect of corporate culture on the relationship with the community) is created from the perceptions and consequent actions of the social actors (the managers); i.e., the corporate culture at the City continually influences the process of social interaction, and these interactions influence perceptions of the relationship with the community. The relationship is in a constant state of revision. As such, Robert, using a subjectivist approach, could state that the new managers may have had few interactions with the community members to date and therefore may not be fully cognizant of how the corporate culture affects the department’s relationship with the community. While it would be important to get the new managers’ perceptions, he would also need to speak with the previous managers to get their perceptions from the time they were employed in their positions. This is because the community-department relationship is in a state of constant revision, influenced by the various managers’ perceptions of the corporate culture and its effect on their ability to form positive community relationships. Therefore, he could go back to the current managers and ask them to participate in his study, and also ask the department to contact the previous managers to see if they would be willing to participate.

As you can see, the research question of each study guides the decision as to whether the researcher should take a subjective or an objective ontological approach. This decision, in turn, guides their approach to the research study, including whom they should interview.

Epistemology

Epistemology has to do with knowledge. Rather than dealing with questions about what is, epistemology deals with questions of how we know what is.  In sociology, there are many ways to uncover knowledge. We might interview people to understand public opinion about a topic, or perhaps observe them in their natural environment. We could avoid face-to-face interaction altogether by mailing people surveys to complete on their own or by reading people’s opinions in newspaper editorials. Each method of data collection comes with its own set of epistemological assumptions about how to find things out (Saylor Academy, 2012). There are two main subsections of epistemology: positivist and interpretivist philosophies. We will examine these philosophies or paradigms in the following sections.

Research Methods, Data Collection and Ethics Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

SOC101: Introduction to Sociology (2020.A.01)

Sociological research.

Read this chapter for a review of sociological research. As you read, consider the following topics:

  • Take note of the bold terms throughout the chapter.
  • Take some time to study Figure 1 and the accompanying text, which outline the scientific process of studying sociology.
  • Take note of the differences in scientific approaches to studying sociology, including surveys, field research, participant observation, ethnographies, case studies, experiments, and secondary data analysis.
  • Take note of the code of ethics and think about how these ethical standards are vital to conducting research about human subjects.


Indian J Anaesth, v.60(9); 2016 Sep

Methodology for research II

S. Bala Bhaskar

Department of Anaesthesiology, Vijayanagar Institute of Medical Sciences, Bellary, Karnataka, India

M. Manjuladevi

Department of Anaesthesiology, St. John's Medical College, Bengaluru, Karnataka, India

Research is a systematic process which uses scientific methods to generate new knowledge that can be used to solve a query or improve on the existing system. Any research on human subjects is associated with varying degrees of risk to the participating individuals, and it is important to safeguard the welfare and rights of the participants. This review focuses on the various steps involved in methodology (in continuation with the previous section) before the data are submitted for publication.

INTRODUCTION

Research uses a systematic approach to generate new knowledge to answer questions based on the needs of patient health and practice. The investigator identifies the research question, examines the ethical implications, describes the research design and collects appropriate data,[1,2,3] which are evaluated by statistical tests before they can be published.[4] Before putting this to use in clinical practice, the relevant data are critically appraised for validity and reliability.[1] This review covers these aspects of the research methodology, in continuation with the first part by Garg et al. published in this issue of the Indian Journal of Anaesthesia (IJA).[5]

REGULATORY AND ETHICAL CONSIDERATIONS

The Indian Council of Medical Research (ICMR) is the apex body in India responsible for the formulation, coordination and promotion of biomedical research. The International Committee of Medical Journal Editors (ICMJE) makes it mandatory for clinical trials to be included in a clinical trials registry for acceptance for publication. ClinicalTrials.gov, run by the United States National Library of Medicine, was the first online registry and is widely used today. All trials to be conducted in India must be prospectively registered with the Clinical Trial Registry of India (CTRI, www.ctri.in). Good Clinical Practice (GCP) guidelines are a set of guidelines for biomedical studies which encompass the design, conduct, termination, audit, analysis, reporting and documentation of studies involving human subjects. They protect the rights of human subjects and the authenticity of biomedical data (www.cdsco.nic.in/html/GCP1.html). Table 1 lists the types of research involved and their regulatory bodies.[6]

Research involved and their regulatory bodies


The International Standard Randomised Controlled Trial Number (ISRCTN) registry is a primary clinical trial registry recognised by the World Health Organization. The ICMJE provides content validation of all submitted studies (proposed, ongoing or completed). The study is assigned a unique identification number, and records of the study in the database can be easily accessed ( www.isrctn.com ).

To conduct a clinical trial in India, Institutional Ethics Committee (IEC) approval is mandatory, and the trial must be registered with the CTRI (www.ctri.nic.in).[2,6] When ‘off-label’ use of a drug (a drug being used for a new indication, dose, formulation or route) is tested purely for academic purposes and not for commercial use, there is currently no requirement for regulatory approval.[2,6] However, the IEC has to consider the risks, benefits and ethical basis for approval of the research.

The Drugs Controller General of India (DCGI) insists on registration and approval of clinical trials through the CTRI and ensures the scientific and safe conduct of the study. Most academic medical centres have an Institutional Review Board (IRB) or IEC. These ‘internal’ ethics committees can assess and approve research proposals before they are submitted to national bodies; this approval may also proceed in parallel with DCGI approval. The IEC is responsible for the supervision and protection of the rights, safety and welfare of human subjects. During the progress of the trial, the IEC reviews safety reports, any significant violations of or deviations from the protocol, and any amendments to the study protocol or informed consent.[2,7]

If an IEC is not available in the institution, proposals can be sent to an independent ethics committee outside the institution (‘external’ ethics committees).[2] The ICMR suggests the establishment of registered Independent Ethics Committees (IndEC) without institutional affiliation, functioning as per national guidelines. Proposals can also be sent to another institution, following established protocol, including providing a ‘no objection certificate’ and allowing the external IEC the necessary access.[2] When there is a large load of research, multiple ECs, as well as subcommittees (e.g., subcommittees on adverse events, data safety monitoring, expedited review, etc.), can function in the same institution.

The IRB consists of 7–15 members, and at least five members are required to form the quorum needed to make a decision on the research [Table 2].[2]

Composition of Institution Ethics Committee


All research involving human participants should follow four basic ethical principles:[2] (a) respect for persons’ autonomy; (b) beneficence (balancing the risks against the benefits, bearing in mind the welfare of the research participant[s]); (c) non-maleficence (do no harm, or reduce exposure to greater harm); and (d) justice (equitable distribution of research subjects across all groups, for example, social, economic and demographic).

Informed consent is a process by which a subject confirms his/her willingness to participate in a clinical study.[4] It protects the individual’s freedom of choice and respects the individual’s autonomy. It ensures proper regulation of clinical trials and assures patient safety by addressing both their legal and ethical bases.[7] The process of informed consent consists of providing relevant information, ensuring its comprehension, and voluntariness.[2] The details of the clinical study are explained to the subject in simple and easily understandable language. The ‘subject/participant information sheet’ should include the research aspects of the study, the sponsor, the purpose and procedure, side effects, risks and discomforts, benefits, compensation for any study-related injury, alternatives to participation, the right to withdraw, confidentiality of records, and contact information of the investigators and the IRB.[2,6] Written informed consent is documented by the subject’s signature on the ‘informed consent form’.[1,2,3] The documents comprising the patient/participant information sheet and the informed consent form should be reviewed and approved by the IEC before enrolment of participants.

A legally authorised representative (LAR) should be involved in decision-making for vulnerable subjects who lack the ability to consent. Consent is taken from the parent/LAR for children under 7 years, and from the parent/LAR along with an assent form (oral/written) for children aged 7–18 years.[2] Audio or audio-visual recording of the informed consent process may be required in certain regulatory clinical trials.[2] After the completion/termination of the study, all records within the IEC must be archived for at least 3 years; those related to regulatory clinical trials must be archived for 5 years as per CDSCO regulation. Longer preservation may be needed as required by the sponsors/regulatory bodies.

Many finer aspects of the legal and ethical issues in research are discussed by Yip et al. in this issue of the IJA.[8]

The ethical duty of confidentiality refers to the obligation of an individual or organisation to safeguard the information entrusted to it as research data. Confidentiality is essential for the integrity of the research project and protects information from unauthorised access, use, disclosure, modification, loss or theft.[6,7]

Data related to an individual participant in any study can be disclosed only under the following circumstances:

(a) Threat to a person's life, (b) Communication with drug registration authority in cases of severe adverse reaction, (c) Communication to health authority whenever there is risk to public health, (d) In a court of law under the orders of the presiding judge and (e) As a requirement for government agencies or regulatory authorities.[ 2 ]

DATA COLLECTION

‘Data’ includes the information that is systematically collected by the investigator during the study. Primary data are those collected firsthand by the investigator for the first time. Secondary data are a compilation of information collected by someone else that has already been passed through the statistical process. A Data Monitoring Committee or Data and Safety Monitoring Board may be appointed, independent of the IEC, for interim analysis; their report forms the basis for early termination of a planned study when there is compelling evidence of beneficial effectiveness or harmful side effects, or when there are major flaws in the study.

The two main types of data are qualitative and quantitative, and most studies will have a combination of both. While quantitative data are easy to analyse and fairly reliable, qualitative data provide more depth in the description of the sample.[ 9 ]
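The distinction between the two data types can be made concrete with a single hypothetical study record containing both kinds of data; the field names and values below are invented for illustration:

```python
# One hypothetical participant record mixing both data types.
record = {
    "age": 42,                              # quantitative
    "pain_score": 4,                        # quantitative
    "comments": "dull ache after walking",  # qualitative
}

# Split fields by type: numbers are ready for statistical analysis,
# while free text needs coding before it can be analysed.
quantitative = {k: v for k, v in record.items() if isinstance(v, (int, float))}
qualitative = {k: v for k, v in record.items() if isinstance(v, str)}

print(sorted(quantitative), sorted(qualitative))
```

Most real studies carry both kinds of field side by side, which is why the planning stage must decide in advance how each will be recorded and analysed.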

Data collection methods [Figure 1]:


Methods of data collection

  • Interview: This method allows face-to-face contact with respondents, exploring the topic in depth. It allows the interviewer to explain or help clarify questions, increasing the usefulness of a response. Interviews can be of different types: structured, unstructured (informal, conversational), semi-structured, focused and standardised.[9,10,11] There are disadvantages: interviewer clarifications can lead to inconsistencies and influence the responses, and the subject may distort information through recall error, selective perception or the desire to please the interviewer.[10] Sometimes, the data may be too voluminous to record or reduce
  • Observation: This method provides direct information about the behaviour of individuals and groups, and allows the investigator to understand the situation and context. It can be ‘participant’ observation, in which the observer takes part in the situation he or she observes, or ‘nonparticipant’ observation, in which the observer watches the situation, openly or concealed, but does not participate[9,10,11]
  • Questionnaire: This is a simple and inexpensive method that does not require research assistants, and more honest responses may be obtained when anonymity is provided. Written questions are presented to be answered by the respondents. A written questionnaire can be administered in different ways, such as sending questionnaires by mail with clear instructions on how to answer the questions and asking for mailed responses; gathering all or part of the respondents in one place at one time, giving oral or written instructions, and letting the respondents fill out the questionnaires; or hand-delivering questionnaires to respondents and collecting them later.[10,11] The disadvantages of this method are observer bias and breaches of confidentiality; also, it cannot be used with illiterate subjects. As with other types of outcome measurement, questionnaires and interviews are to be assessed for validity (accuracy) and reproducibility (precision), using face validity, content validity and construct validity
  • Documents: This is an inexpensive and unobtrusive method of data collection from locally available records or documents (existing research, hospital records, databases, videotapes, etc.).[9,10,11] Its disadvantages concern accuracy, authenticity and availability (missing or omitted data). Anaesthesia information management systems used in modern practice can collect data automatically, in large volumes, which can be converted into specific, focused outcome assessments for research purposes.

Compilation of data includes the systematic arrangement of data to facilitate presentation and analysis.[12] The data collected are entered in a database, where the information about subjects and variables is stored. A simple study database can be maintained in a spreadsheet (MS Excel) or statistical software (e.g., Statistical Analysis System (SAS, NC, USA) or IBM SPSS (Statistical Package for the Social Sciences) Statistics (IBM Inc., NY, USA)). More complex databases require integrated database management software (e.g., Access (Microsoft) or FileMaker Pro (Apple Inc.)).[13] Database ‘queries’ sort and filter the data as well as calculate values based on the raw data fields.[12,13] Queries are used to monitor data entry, report on study progress and format the results for analysis. Data must be stored on secure servers so that confidentiality is maintained.[13] Backup files and off-site storage may be necessary to prevent data loss. Common methods of summarising and presenting data are tables, pie charts, bar charts, histograms, frequency and cumulative frequency curves, dot plots and x-y scatterplots.[13,14,15]
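A database "query" of the kind described above (sort, filter, and calculate values from raw data fields) can be sketched in plain Python; the study records and field names below are hypothetical:

```python
# Hypothetical study records; field names and values are invented.
records = [
    {"id": 1, "age": 34, "group": "treatment", "pain_score": 3},
    {"id": 2, "age": 51, "group": "control",   "pain_score": 6},
    {"id": 3, "age": 42, "group": "treatment", "pain_score": 4},
]

# "Query": filter to one study arm, sort by a field, derive a summary value.
treatment = [r for r in records if r["group"] == "treatment"]
treatment.sort(key=lambda r: r["age"])
mean_pain = sum(r["pain_score"] for r in treatment) / len(treatment)

print([r["id"] for r in treatment], mean_pain)  # sorted ids and mean score
```

In practice the same operations would be expressed as a spreadsheet filter or an SQL query against the study database, but the logic (select, order, summarise) is identical.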

RESEARCH TOOLS: DEVELOPMENT AND VALIDATION

A ‘research tool’ is the means of collecting information for the purpose of a study. Observation forms, interview schedules and questionnaires are all classified as research tools. The first practical step in the research process is to construct the research tool. A four-stage process is involved in developing a research tool:[9,10,11,12]

  • Concept development: The researcher should understand the basic knowledge pertaining to the study
  • Specification of concept dimensions: The researcher should be able to specify the dimensions of the concept under study
  • Selection of indicators: Once the concept and its dimensions are developed, each element of the concept is measured by indicators (e.g., the respondent’s knowledge, opinions and expectations, measured with scales and devices). Using more than one indicator increases the score and validity of the study
  • Formation of index: Dimension of a concept or different measurements of a dimension are then put into an overall index.
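The last step, formation of an index, often amounts to combining several indicator scores into one overall number. A minimal sketch, in which the indicator names, scores, and researcher-chosen weights are all hypothetical:

```python
# Hypothetical indicator scores for one respondent (each on a 1-5 scale).
indicators = {"knowledge": 4, "opinion": 3, "expectation": 5}

# Hypothetical weights chosen by the researcher; they sum to 1 so the
# index stays on the same 1-5 scale as the indicators.
weights = {"knowledge": 0.5, "opinion": 0.3, "expectation": 0.2}

# Overall index: weighted average of the indicator scores.
index = sum(indicators[k] * weights[k] for k in indicators)
print(round(index, 2))
```

The choice of weights is itself a research decision and should be justified by the concept's dimensions, not set arbitrarily.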

Errors may occur at any stage of research, from the selection of subjects to the interpretation of data and the drawing of conclusions. Two types of error can occur: random and systematic. Random error must be reduced as far as possible, and systematic error should be eliminated. Errors can arise from three sources:[16,17,18,19]

  • Investigator: Due to ignorance, incompetence and bias
  • Instrument: Due to variability, calibration problems and malfunctioning
  • Subject: Due to bias, noncompliance and biological variation in response.

Any research can be affected by factors that can invalidate the findings. A good research tool should meet the tests of validity, reliability and practicality.

Validity refers to the extent to which a test measures what we actually wish to measure. Reliability refers to accuracy and precision of a measurement procedure.

The practicality characteristic of a measuring instrument can be judged in terms of economy, convenience and interpretability.

Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to measure.

USES OF VALIDITY IN SCIENTIFIC METHODS

External validity refers to generalising the study results to other population groups with similar risk factors, settings, measurement and treatment variables.

Internal validity implies that the differences observed between the treatment groups, apart from random error, are only due to the treatments under investigation.[9]

Validity assessment can be performed in three ways:

  • Content validity is the extent to which a measuring tool provides adequate coverage of all aspects of the topic under study (e.g., a measure of quality of pain relief should include analgesia, haemodynamics, sedation, etc.). ‘Face validity’ assesses whether the measurements appear reasonable: a measure of how representative a research project is ‘at face value’, and whether it appears to be a good project
  • Construct validity refers to the degree to which a measurement conforms to theoretical constructs. Convergent validity tests whether ‘constructs’ that are expected to be related are, in fact, related. Discriminant (or divergent) validity tests whether ‘constructs’ that should have no relationship are, in fact, unrelated
  • Criterion validity assesses the degree to which a new measurement correlates with a well-accepted existing measure. Predictive validity is a strong variety of criterion validity, representing the ability of the measurement to predict an outcome.

Other types: Concurrent validity refers to the degree of correlation between two measures of the same concept administered at the same time. Consensual validity is a process by which a panel of experts judges validity.[1,16,17,18,19]
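Criterion and concurrent validity are typically reported as a correlation coefficient between the new measure and the established one. As an illustration only (the scale names and scores below are invented, not data from any real study), Pearson's r can be computed in plain Python:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: a new 0-10 pain score vs. an established measure,
# both administered to the same six patients at the same time.
new_scale = [2, 4, 5, 7, 8, 9]
established = [1.8, 4.2, 5.1, 6.8, 8.3, 8.9]

r = pearson_r(new_scale, established)
print(round(r, 3))
```

A correlation close to 1 would support the concurrent validity of the new scale; a low correlation would argue against it.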

A measuring instrument is reliable if it provides consistent results.[1,11]

The stability aspect refers to securing consistent results with repeated measurements of the same person using the same instrument. The degree of stability is determined by comparing the results of repeated measurements.

The equivalence aspect considers how much error may get introduced by different investigators or different samples of the items being studied.
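The equivalence aspect is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch, using invented classifications from two hypothetical investigators (not drawn from the article):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for
    the agreement expected by chance alone."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical investigators classifying the same ten patients
a = ["calm", "calm", "agitated", "calm", "agitated",
     "calm", "calm", "agitated", "calm", "calm"]
b = ["calm", "calm", "agitated", "calm", "calm",
     "calm", "calm", "agitated", "calm", "calm"]

kappa = cohens_kappa(a, b)
print(round(kappa, 2))  # values near 1 indicate strong agreement
```

Here the raters agree on 9 of 10 cases, but because most cases fall in one category, much of that agreement is expected by chance, so kappa is noticeably lower than the raw 90% agreement.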

PRACTICALITY

Measuring instrument practicality is tested in terms of economy, convenience and interpretability.

The economy consideration suggests that some trade-off is needed between the ideal research project and what the budget can afford.

The convenience test suggests that the measuring instrument should be easy to administer. The interpretability consideration is especially important when persons other than the designers of the test are to interpret the results.

ANALYSIS PLAN: QUALITY AND APPROPRIATENESS OF ANALYSIS

Statistics functions as a tool in designing research, analysing its data and drawing conclusions from it.[20,21] Descriptive statistics involves the development of indices from the raw data, summarised in tables, charts or numerical form. Inferential analysis applies tests of significance to the hypotheses of a research question in order to validate conclusions. An essential part of presenting any inferential result is the probability (P value), which reassures the reader that the outcome was due to the effect of the studied variable and did not occur purely by chance.[22] P < 5% is conventionally considered statistically significant. Various parametric tests (for normally distributed variables) and nonparametric tests (for variables that are not normally distributed) are used to meet the objectives of the study [Table 3].[19,20] ‘Basic Statistical Tools in Research and Data Analysis’ in this issue of IJA by Zulfiqar Ali describes these tests in detail.[23]
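When the normality assumption behind parametric tests is doubtful, a permutation test offers a distribution-free way to obtain a P value. The sketch below uses invented recovery-time data for two hypothetical treatment groups; it is an illustration of the idea, not an analysis from the article:

```python
import random

def permutation_test(group_a, group_b, n_perm=10_000, seed=1):
    """Two-sided permutation test for a difference in group means.
    Nonparametric: it makes no assumption of normality."""
    def mean(xs):
        return sum(xs) / len(xs)

    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        # Reshuffle group labels and see how often a difference at least
        # as large as the observed one arises by chance alone.
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical recovery times (minutes) under two anaesthetic regimens
control = [22, 25, 27, 24, 26, 28, 23, 25]
treated = [18, 20, 19, 21, 17, 22, 20, 19]

p = permutation_test(control, treated)
print(p)
```

Because the two invented groups barely overlap, the resulting P value falls well below the conventional 0.05 threshold, so the difference would be reported as statistically significant.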

Table 3: Tests of significance [figure: IJA-60-646-g004.jpg]

The ‘methodology’ in a research strategy outlines the steps involved in the research process: the research problem is identified; aims and objectives are formulated; the sample size is calculated; Ethics Committee approval and informed consent from the subjects are obtained; the research design is planned; and the collected data are summarised and analysed using appropriate statistical tests. The derived evidence is put into clinical practice once the reader is convinced that the clinical study is valid and reliable.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.


Photo: A bull moose feeding in central New Hampshire. Photo Credit: Remington Moll/UNH

Photo: A female cow moose looks into the camera near Pittsburg, NH. Photo Credit: Remington Moll/UNH

Figure: Representative spectrograms of moose vocalizations from a) cow, b) bull and c) calf. Photo Credit: UNH

Key Research Finding

By analyzing online videos of moose vocalizations, researchers quantified moose calls and determined significant differences in calls by sex and age. 

Key Terms

Passive acoustic monitoring: The tracking and monitoring of animals and environments through recorded sound. Kloepper has previously used passive acoustic monitoring to estimate bat colony sizes and frog populations.

Sexual dimorphism: Notable differences in characteristics between the sexes of the same species.

Drive around New Hampshire and many of the license plates will quickly show that the moose is iconic to the Granite State. As New England’s largest megafauna, moose serve as a major tourist draw for New Hampshire’s White Mountains  and their feeding patterns are also important to sustaining healthy forests and associated ecosystems . Moose populations can be affected by different forest and land management decisions and by environmental factors such as growing winter tick numbers , but tracking those impacts is difficult because moose are notoriously shy. A team of New Hampshire Agricultural Experiment Station scientists is developing methods to assess whether sound monitoring could be adapted to more cost-effectively and less invasively track moose behaviors.

Station scientist Laura Kloepper , an assistant professor of biological sciences at the UNH College of Life Sciences and Agriculture , is leading research that analyzes and quantifies moose calls, characterizing them by age and sex. In a recent study published in JASA Express Letters , Kloepper and her co-authors, including NHAES scientist Rem Moll , leverage audio from publicly available online videos to assess wild moose sounds, in their natural environment, and identify the distinct differences by age and sex. It’s a critical first step toward creating an acoustic sensor network in New Hampshire’s North Country to automatically detect and help determine moose population density and occupancy.

“Moose are such iconic wildlife for New Hampshire, so understanding how they’re using their landscape and how we can manage our forests while sharing the land with them is key to their conservation,” Kloepper said. “However, due to the moose’s wide roaming range and its low population densities, monitoring them is an ever-present challenge that can be aided by non-invasive technologies. So to accurately develop a moose acoustic sensor, we first needed to quantify a variety of moose calls — and these data were not available yet, so we crowdsourced it.”

Moose have notable differences in characteristics between the sexes, and the research team found that this dimorphism extends to their vocalizations. Using online videos filmed by hunters and recreationalists, which captured more than 670 moose vocalizations from across the United States, the team matched mouth or throat movements to sound. This allowed them to quantify moose calls as well as characterize the calls by age and sex. They found that female moose had calls with higher pitch and longer duration. Moose calves, which remain with their mother for one year and could therefore be identified by their proximity to a female moose, had the highest-pitched calls, with a duration approximately equivalent to that of a male.

The study’s development of the first bioacoustic moose vocalizations method is an important first step to creating an acoustic network that can add significant value to ongoing efforts of tracking and monitoring moose populations and, in turn, the effectiveness and impacts of forest and land management practices.

“By tracking moose, scientists can also predict how forest habitat affects moose distribution,” said Kloepper. “Specifically, we can investigate how habitat disturbance, such as from timber management, affects where moose prefer to live. And we can also investigate if moose’s habitat preference changes with the seasons or time of day.”

Kloepper’s current NH Agricultural Experiment Station work is focused on creating an automated passive acoustic detector capable of determining moose sex and maturity from sound recordings. An initial set of acoustic recorders and monitoring work was established alongside a portion of a statewide camera trap network established in 2021 for non-invasive wildlife tracking by Station scientist Moll , an assistant professor of natural resources and the environment at the UNH College of Life Sciences and Agriculture.

“From the five acoustic monitoring sites we set up, we were able to record 15 moose, including mother and calf grunts, breaths, sniffs and footsteps,” described Kloepper. “For the published paper, we supplemented these data with hunter-contributed video and audio recordings, which allowed us to better quantify the vocal characteristics of male and female moose and calves.”

Kloepper and her team will expand to 50 acoustic monitoring sites in forests in New Hampshire’s Coos County. The recorders will continuously capture sound activity in the morning and evening—times of peak moose activity.

“Although we specifically are interested in recording moose vocalizations, by recording all the sounds we can investigate how the overall acoustic environment—known as the soundscape—varies across forest type,” said Kloepper.

The bioacoustic sites will expand UNH’s passive wildlife monitoring research system, which already includes a network of 300 camera traps set up on private and public lands across New Hampshire by Moll.

“Each year, these cameras capture hundreds of thousands of wildlife images that we analyze to better understand and predict the distribution and abundance of many forest-dwelling wildlife species,” said Moll. “Work by UNH Emeritus Professor Peter Pekins showed that the primary drivers of regional moose population dynamics are the availability of young forest habitat and winter tick infestations, which can be exacerbated by climate change and locally high moose densities.”

He added, “Technologies like audio and camera stations provide critical population-level data that inform moose conservation and management in a changing world.”

This material is based on work supported by the NH Agricultural Experiment Station through joint funding from the USDA National Institute of Food and Agriculture (under Hatch award number 1024128) and the state of New Hampshire.

This work is co-authored by Alex Zager, Sonja Ahlberg, Olivia Boyan, Jocelyn Brierly, Valeria Eddington, Remington Moll and Laura Kloepper.

You can read the published article, Characteristics of wild moose (Alces alces) vocalizations , in JASA Express Letters .
