Research methods--quantitative, qualitative, and more: overview.

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and the support and resources available at UC Berkeley.

As Patten and Newhart note in the book Understanding Research Methods, "Research methods are the building blocks of the scientific enterprise. They are the 'how' for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge... Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more. This guide is an introduction; if you don't see what you need here, contact your subject librarian and/or check whether there is a library research guide that will answer your question.

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods. The library's online guide walks through this one-stop shopping collection, and some helpful links are below:

  • SAGE Research Methods
  • Little Green Books  (Quantitative Methods)
  • Little Blue Books  (Qualitative Methods)
  • Dictionaries and Encyclopedias  
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video--see methods come to life
  • Methodspace--a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools! From its site, check out pages for each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab--Supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings
  • Dryad--A simple self-service tool for researchers to use in publishing their datasets; it provides tools for the effective publication of and access to research data
  • Geospatial Innovation Facility (GIF)--Provides leadership and training across a broad array of integrated mapping technologies on campus
  • Research Data Management--A UC Berkeley guide and consulting service for research data management issues

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data, and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA)--a program for easy web-based analysis of survey data

Consultants

  • D-Lab/Data Science Discovery Consultants--Request help with your research project from peer consultants.
  • Research data management (RDM) consulting--Meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services--A service in which advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB/CPHS--Qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships)--OURS supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops.
  • Sponsored Projects--Sponsored Projects works with researchers applying for major external grants.
  • Last Updated: Apr 25, 2024 11:09 AM
  • URL: https://guides.lib.berkeley.edu/researchmethods


The method comes first

Nature Methods volume 17, page 1169 (2020)


A new method should be thoroughly tested, applied, described — and peer-reviewed — before biological discoveries generated using the method are published.

Which comes first, the method or the result? We think that most of our readers would agree that this is definitely not a ‘chicken-or-the-egg’ conundrum. It stands to reason that a new method should be carefully and thoroughly characterized and benchmarked — and its full description and these results peer-reviewed — before biological findings generated using this new method can be fully trusted.

As editors of a methods journal, we have observed many instances where this ideal chain of events has not been followed. Certainly it is not surprising that researchers who have discovered something novel and exciting using their new method would prioritize publishing these findings, especially if there is competition from other groups. Further, two groups may collaborate, one developing a method and the other applying the method to a biological question; these groups will have different priorities and may have papers ready for journal submission at different times.

Though we are aware of and sympathetic to these types of situations, we argue that publishing new biological findings generated using a novel method before the methods paper is accepted for publication in a peer-reviewed journal is detrimental to research.

In the most egregious examples, authors of a findings paper that uses an unpublished method or software tool will provide no details about the method and simply cite “manuscript in preparation.” When reading a paper that has been peer-reviewed and undergone various editorial checks at a journal, a reader should be reasonably able to trust the results. But when the results hinge on a method that has not yet been vetted through peer review and communicated via publication, how can such findings be trusted? Even more worrying, how can the biological findings be reproduced by others? We urge peer reviewers to be on the lookout for this poor practice and flag it to the journal editor handling the paper.

Preprint servers allow authors to rapidly share unpublished work to the scientific community, something that we both support and encourage here at Nature Research. However, we argue that it is insufficient to cite a preprint reporting a method as evidence that the method has been properly validated. Our colleagues at Nature Biotechnology , for example, require that methods central to new results in a submitted manuscript be accepted for publication in a peer-reviewed journal before they will publish the manuscript, a stance we applaud. As they wrote in a 2017 Editorial , “peer-reviewed journals must ensure that the integration of minimally reviewed preprints into their papers does not compromise the reproducibility of the science they publish.”

We strongly encourage researchers who want to publish two papers, one reporting a new method and the other a new finding, to prioritize writing up both. If it is not practical to publish the methods paper in a journal before submitting the findings paper, submission should at least be done concurrently. If both papers are submitted to the same journal, or to the same publisher, peer review and publication can often be coordinated. If the papers are submitted to different journals, the other paper should be provided to the editors (note that this is a requirement at Nature Methods ). This allows the editors and the reviewers to understand how the method works and also to judge whether there is substantial overlap between the papers.

Even in cases where a methods and a findings paper have been simultaneously submitted to journals, peer review outcomes can be unpredictable. We advise authors to keep their editor informed about the status of the other paper and try to ensure that the methods paper is at least provisionally accepted (if not published) before the findings paper is published.

Authors should also be aware that if they describe a method in some detail in a paper where they report new biological findings, this may prevent them from later being able to publish a dedicated methods paper in a journal (such as Nature Methods ) where methodological novelty is an important editorial criterion. If we think a method is sufficiently exciting and important for us to potentially publish a paper focused on the method itself, we will occasionally consider it. But in such cases the methods paper must stand on its own: it must describe a new tool or an optimized workflow, or provide substantial additional characterization or validation data, or describe a novel application. In other words, there must be a good reason to justify publishing a dedicated methods paper following the initial report.

There are many examples of methods, tools and resources that have remained unpublished even for years. You might ask: why bother publishing a dedicated method paper at all? Methods are key to advancing scientific progress, and it’s just as important for the method as for a novel finding, if not even more important, that the work go through a careful vetting process. At Nature Methods , we also uphold strict editorial standards regarding a method or tool’s description (including making software code and unique materials available), its characterization and benchmarking in comparison to existing approaches (including making these data available), and a demonstration of general applicability. We think that these standards help improve the reliability and reproducibility of methods we publish, allowing readers to better trust new biological findings generated by such methods, as well as making the methods themselves more useful and practical for a broader audience.


Citation: The method comes first. Nat Methods 17, 1169 (2020). https://doi.org/10.1038/s41592-020-01017-y


How to Construct a Mixed Methods Research Design

Judith Schoonenboom

1 Institut für Bildungswissenschaft, Universität Wien, Sensengasse 3a, 1090 Wien, Austria

R. Burke Johnson

2 Department of Professional Studies, University of South Alabama, UCOM 3700, 36688-0002 Mobile, AL USA

This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.


What is a mixed methods design?

This article addresses the process of selecting and constructing mixed methods research (MMR) designs. The word “design” has at least two distinct meanings in mixed methods research (Maxwell 2013 ). One meaning focuses on the process of design; in this meaning, design is often used as a verb. Someone can be engaged in designing a study (in German: “eine Studie konzipieren” or “eine Studie designen”). Another meaning is that of a product, namely the result of designing. The result of designing as a verb is a mixed methods design as a noun (in German: “das Forschungsdesign” or “Design”), as it has, for example, been described in a journal article. In mixed methods design, both meanings are relevant. To obtain a strong design as a product, one needs to carefully consider a number of rules for designing as an activity. Obeying these rules is not a guarantee of a strong design, but it does contribute to it. A mixed methods design is characterized by the combination of at least one qualitative and one quantitative research component. For the purpose of this article, we use the following definition of mixed methods research (Johnson et al. 2007 , p. 123):

Mixed methods research is the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e. g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration.

Mixed methods research (“Mixed Methods” or “MM”) is the sibling of multimethod research (“Methodenkombination”) in which either solely multiple qualitative approaches or solely multiple quantitative approaches are combined.

In a commonly used mixed methods notation system (Morse 1991 ), the components are indicated as qual and quan (or QUAL and QUAN to emphasize primacy), respectively, for qualitative and quantitative research. As discussed below, plus (+) signs refer to concurrent implementation of components (“gleichzeitige Durchführung der Teilstudien” or “paralleles Mixed Methods-Design”) and arrows (→) refer to sequential implementation (“Sequenzielle Durchführung der Teilstudien” or “sequenzielles Mixed Methods-Design”) of components. Note that each research tradition receives an equal number of letters (four) in its abbreviation for equity. In this article, this notation system is used in some depth.
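To make the notation concrete, here is a minimal Python sketch (ours, not part of the article; the class and function names are invented for illustration) that renders a design in this notation: capitals for a core component, lower case for a supplemental one, "+" for concurrent components, and "→" for sequential stages.

```python
# Illustrative sketch (ours, not from the article): rendering study designs
# in the Morse (1991) notation described above. All names are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class Component:
    tradition: str   # "QUAL" or "QUAN"
    core: bool       # core components are written in capitals, supplemental in lower case

    def label(self) -> str:
        return self.tradition.upper() if self.core else self.tradition.lower()

def render_stage(components: List[Component]) -> str:
    # "+" joins components that are implemented concurrently
    return " + ".join(c.label() for c in components)

def render_design(stages: List[List[Component]]) -> str:
    # "→" joins stages that are implemented sequentially
    return " → ".join(render_stage(stage) for stage in stages)

# A qualitatively driven sequential design:
print(render_design([[Component("QUAL", core=True)],
                     [Component("QUAN", core=False)]]))   # QUAL → quan

# A quantitatively driven concurrent design:
print(render_design([[Component("QUAN", core=True),
                      Component("QUAL", core=False)]]))   # QUAN + qual
```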

A mixed methods design as a product has several primary characteristics that should be considered during the design process. As shown in Table  1 , the following primary design “dimensions” are emphasized in this article: purpose of mixing, theoretical drive, timing, point of integration, typological use, and degree of complexity. These characteristics are discussed below. We also provide some secondary dimensions to consider when constructing a mixed methods design (Johnson and Christensen 2017 ).

Table 1: List of Primary and Secondary Design Dimensions

On the basis of these dimensions, mixed methods designs can be classified into a mixed methods typology or taxonomy. In the mixed methods literature, various typologies of mixed methods designs have been proposed (for an overview see Creswell and Plano Clark 2011 , p. 69–72).

The overall goal of mixed methods research, of combining qualitative and quantitative research components, is to expand and strengthen a study’s conclusions and, therefore, contribute to the published literature. In all studies, the use of mixed methods should contribute to answering one’s research questions.

Ultimately, mixed methods research is about heightened knowledge and validity. The design as a product should be of sufficient quality to achieve multiple validities legitimation (Johnson and Christensen 2017 ; Onwuegbuzie and Johnson 2006 ), which refers to the mixed methods research study meeting the relevant combination or set of quantitative, qualitative, and mixed methods validities in each research study.

Given this goal of answering the research question(s) with validity, a researcher can nevertheless have various reasons or purposes for wanting to strengthen the research study and its conclusions. Following is the first design dimension for one to consider when designing a study: Given the research question(s), what is the purpose of the mixed methods study?

A popular classification of purposes of mixed methods research was first introduced in 1989 by Greene, Caracelli, and Graham, based on an analysis of published mixed methods studies. This classification is still in use (Greene 2007 ). Greene et al. ( 1989 , p. 259) distinguished the following five purposes for mixing in mixed methods research:

  1. Triangulation seeks convergence, corroboration, correspondence of results from different methods;
  2. Complementarity seeks elaboration, enhancement, illustration, clarification of the results from one method with the results from the other method;
  3. Development seeks to use the results from one method to help develop or inform the other method, where development is broadly construed to include sampling and implementation, as well as measurement decisions;
  4. Initiation seeks the discovery of paradox and contradiction, new perspectives of frameworks, the recasting of questions or results from one method with questions or results from the other method;
  5. Expansion seeks to extend the breadth and range of inquiry by using different methods for different inquiry components.

In the past 28 years, this classification has been supplemented by several others. On the basis of a review of the reasons for combining qualitative and quantitative research mentioned by the authors of mixed methods studies, Bryman ( 2006 ) formulated a list of more concrete rationales for performing mixed methods research (see Appendix). Bryman’s classification breaks down Greene et al.’s ( 1989 ) categories into several aspects, and he adds a number of additional aspects, such as the following:

  (a) Credibility – refers to suggestions that employing both approaches enhances the integrity of findings.
  (b) Context – refers to cases in which the combination is justified in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey.
  (c) Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting "meat on the bones" of "dry" quantitative findings.
  (d) Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others.
  (e) Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project.
  (f) Diversity of views – this includes two slightly different rationales – namely, combining researchers' and participants' perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research. (Bryman, p. 106)

Views can be diverse (f) in various ways. Some examples of mixed methods design that include a diversity of views are:

  • Iteratively/sequentially connecting local/idiographic knowledge with national/general/nomothetic knowledge;
  • Learning from different perspectives on teams and in the field and literature;
  • Achieving multiple participation, social justice, and action;
  • Determining what works for whom and the relevance/importance of context;
  • Producing interdisciplinary substantive theory, including/comparing multiple perspectives and data regarding a phenomenon;
  • Juxtaposition-dialogue/comparison-synthesis;
  • Breaking down binaries/dualisms (some of both);
  • Explaining interaction between/among natural and human systems;
  • Explaining complexity.

The number of possible purposes for mixing is very large and is increasing; hence, it is not possible to provide an exhaustive list. Greene et al.’s ( 1989 ) purposes, Bryman’s ( 2006 ) rationales, and our examples of a diversity of views were formulated as classifications on the basis of examination of many existing research studies. They indicate how the qualitative and quantitative research components of a study relate to each other. These purposes can be used post hoc to classify research or a priori in the design of a new study. When designing a mixed methods study, it is sometimes helpful to list the purpose in the title of the study design.

The key point of this section is for the researcher to begin a study with at least one research question and then carefully consider what the purposes for mixing are. One can use mixed methods to examine different aspects of a single research question, or one can use separate but related qualitative and quantitative research questions. In all cases, the mixing of methods, methodologies, and/or paradigms will help answer the research questions and make improvements over a more basic study design. Fuller and richer information will be obtained in the mixed methods study.

Theoretical drive

In addition to a mixing purpose, a mixed methods research study might have an overall “theoretical drive” (Morse and Niehaus 2009 ). When designing a mixed methods study, it is occasionally helpful to list the theoretical drive in the title of the study design. An investigation, in Morse and Niehaus’s ( 2009 ) view, is focused primarily on either exploration-and-description or on testing-and-prediction. In the first case, the theoretical drive is called “inductive” or “qualitative”; in the second case, it is called “deductive” or “quantitative”. In the case of mixed methods, the component that corresponds to the theoretical drive is referred to as the “core” component (“Kernkomponente”), and the other component is called the “supplemental” component (“ergänzende Komponente”). In Morse’s notation system, the core component is written in capitals and the supplemental component is written in lowercase letters. For example, in a QUAL → quan design, more weight is attached to the data coming from the core qualitative component. Due to the decisive character of the core component, the core component must be able to stand on its own, and should be implemented rigorously. The supplemental component does not have to stand on its own.

Although this distinction is useful in some circumstances, we do not advise to apply it to every mixed methods design. First, Morse and Niehaus contend that the supplemental component can be done “less rigorously” but do not explain which aspects of rigor can be dropped. In addition, the idea of decreased rigor is in conflict with one key theme of the present article, namely that mixed methods designs should always meet the criterion of multiple validities legitimation (Onwuegbuzie and Johnson 2006 ).

The idea of theoretical drive as explicated by Morse and Niehaus has been criticized. For example, we view a theoretical drive as a feature not of a whole study, but of a research question, or, more precisely, of an interpretation of a research question. For example, if one study includes multiple research questions, it might include several theoretical drives (Schoonenboom 2016 ).

Another criticism of Morse and Niehaus’ conceptualization of theoretical drive is that it does not allow for equal-status mixed methods research (“Mixed Methods Forschung, bei der qualitative und quantitative Methoden die gleiche Bedeutung haben” or “gleichrangige Mixed Methods-Designs”), in which both the qualitative and quantitative component are of equal value and weight; this same criticism applies to Morgan’s ( 2014 ) set of designs. We agree with Greene ( 2015 ) that mixed methods research can be integrated at the levels of method, methodology, and paradigm. In this view, equal-status mixed methods research designs are possible, and they result when both the qualitative and the quantitative components, approaches, and thinking are of equal value, they take control over the research process in alternation, they are in constant interaction, and the outcomes they produce are integrated during and at the end of the research process. Therefore, equal-status mixed methods research (that we often advocate) is also called “interactive mixed methods research”.

Mixed methods research can have three different drives, as formulated by Johnson et al. ( 2007 , p. 123):

Qualitative dominant [or qualitatively driven] mixed methods research is the type of mixed research in which one relies on a qualitative, constructivist-poststructuralist-critical view of the research process, while concurrently recognizing that the addition of quantitative data and approaches are likely to benefit most research projects.

Quantitative dominant [or quantitatively driven] mixed methods research is the type of mixed research in which one relies on a quantitative, postpositivist view of the research process, while concurrently recognizing that the addition of qualitative data and approaches are likely to benefit most research projects. (p. 124)

The area around the center of the [qualitative-quantitative] continuum, equal status, is the home for the person that self-identifies as a mixed methods researcher. This researcher takes as his or her starting point the logic and philosophy of mixed methods research. These mixed methods researchers are likely to believe that qualitative and quantitative data and approaches will add insights as one considers most, if not all, research questions.

We leave it to the reader to decide if he or she desires to conduct a qualitatively driven study, a quantitatively driven study, or an equal-status/“interactive” study. According to the philosophies of pragmatism (Johnson and Onwuegbuzie 2004 ) and dialectical pluralism (Johnson 2017 ), interactive mixed methods research is very much a possibility. By successfully conducting an equal-status study, the pragmatist researcher shows that paradigms can be mixed or combined, and that the incompatibility thesis does not always apply to research practice. Equal status research is most easily conducted when a research team is composed of qualitative, quantitative, and mixed researchers, interacts continually, and conducts a study to address one superordinate goal.

Timing: simultaneity and dependence

Another important distinction when designing a mixed methods study relates to the timing of the two (or more) components. When designing a mixed methods study, it is usually helpful to include the word “concurrent” (“parallel”) or “sequential” (“sequenziell”) in the title of the study design; a complex design can be partially concurrent and partially sequential. Timing has two aspects: simultaneity and dependence (Guest 2013 ).

Simultaneity (“Simultanität”) forms the basis of the distinction between concurrent and sequential designs. In a  sequential design , the quantitative component precedes the qualitative component, or vice versa. In a  concurrent design , both components are executed (almost) simultaneously. In the notation of Morse ( 1991 ), concurrence is indicated by a “+” between components (e. g., QUAL + quan), while sequentiality is indicated with a “→” (QUAL → quan). Note that the use of capital letters for one component and lower case letters for another component in the same design suggest that one component is primary and the other is secondary or supplemental.

Some designs are sequential by nature. For example, in a  conversion design, qualitative categories and themes might be first obtained by collection and analysis of qualitative data, and then subsequently quantitized (Teddlie and Tashakkori 2009 ). Likewise, with Greene et al.’s ( 1989 ) initiation purpose, the initiation strand follows the unexpected results that it is supposed to explain. In other cases, the researcher has a choice. It is possible, e. g., to collect interview data and survey data of one inquiry simultaneously; in that case, the research activities would be concurrent. It is also possible to conduct the interviews after the survey data have been collected (or vice versa); in that case, research activities are performed sequentially. Similarly, a study with the purpose of expansion can be designed in which data on an effect and the intervention process are collected simultaneously, or they can be collected sequentially.

A second aspect of timing is dependence (“Abhängigkeit”) . We call two research components dependent if the implementation of the second component depends on the results of data analysis in the first component. Two research components are independent , if their implementation does not depend on the results of data analysis in the other component. Often, a researcher has a choice to perform data analysis independently or not. A researcher could analyze interview data and questionnaire data of one inquiry independently; in that case, the research activities would be independent. It is also possible to let the interview questions depend upon the outcomes of the analysis of the questionnaire data (or vice versa); in that case, research activities are performed dependently. Similarly, the empirical outcome/effect and process in a study with the purpose of expansion might be investigated independently, or the process study might take the effect/outcome as given (dependent).

In the mixed methods literature, the distinction between sequential and concurrent usually refers to the combination of concurrent/independent and sequential/dependent, and to the combination of data collection and data analysis. It is said that in a concurrent design, the data collection and data analysis of both components occurs (almost) simultaneously and independently, while in a sequential design, the data collection and data analysis of one component take place after the data collection and data analysis of the other component and depends on the outcomes of the other component.

In our opinion, simultaneity and dependence are two separate dimensions. Simultaneity indicates whether data collection is done concurrently or sequentially. Dependence indicates whether the implementation of one component depends upon the results of data analysis of the other component. As we will see in the example case studies, a concurrent design could include dependent data analysis, and a sequential design could include independent data analysis. It is conceivable that one simultaneously conducts interviews and collects questionnaire data (concurrent), while allowing the analysis focus of the interviews to depend on what emerges from the survey data (dependence).

Dependent research activities include a redirection of subsequent research inquiry. Using the outcomes of the first research component, the researcher decides what to do in the second component. Depending on the outcomes of the first research component, the researcher will do something else in the second component. If this is so, the research activities involved are said to be sequential-dependent, and any component preceded by another component should appropriately build on the previous component (see sequential validity legitimation ; Johnson and Christensen 2017 ; Onwuegbuzie and Johnson 2006 ).

It is at the purposive discretion of the researcher to determine whether a concurrent-dependent design, a concurrent-independent design, a sequential-dependent design, or a sequential-independent design is needed to answer a particular research question or set of research questions in a given situation.
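Because simultaneity and dependence are separate dimensions, they can be crossed into a 2 × 2 of timing options. The minimal sketch below is ours and purely illustrative; it simply enumerates the four combinations named above.

```python
# Illustrative sketch (ours): crossing the two timing dimensions discussed
# above, which yields the four timing options named in the text.
from enum import Enum
from itertools import product

class Simultaneity(Enum):
    CONCURRENT = "concurrent"    # components executed (almost) simultaneously
    SEQUENTIAL = "sequential"    # one component precedes the other

class Dependence(Enum):
    DEPENDENT = "dependent"      # implementation builds on the other component's analysis
    INDEPENDENT = "independent"  # implementation does not use the other component's results

for sim, dep in product(Simultaneity, Dependence):
    print(f"{sim.value}-{dep.value} design")
# concurrent-dependent design
# concurrent-independent design
# sequential-dependent design
# sequential-independent design
```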

Point of integration

Each true mixed methods study has at least one “point of integration” – called the “point of interface” by Morse and Niehaus ( 2009 ) and Guest ( 2013 ) –, at which the qualitative and quantitative components are brought together. Having one or more points of integration is the distinguishing feature of a design based on multiple components. It is at this point that the components are “mixed”, hence the label “mixed methods designs”. The term “mixing”, however, is misleading, as the components are not simply mixed, but have to be integrated very carefully.

Determining where the point of integration will be, and how the results will be integrated, is an important, if not the most important, decision in the design of mixed methods research. Morse and Niehaus ( 2009 ) identify two possible points of integration: the results point of integration and the analytical point of integration.

Most commonly, integration takes place in the results point of integration . At some point in writing down the results of the first component, the results of the second component are added and integrated. A  joint display (listing the qualitative and quantitative findings and an integrative statement) might be used to facilitate this process.

In the case of an analytical point of integration , a first analytical stage of a qualitative component is followed by a second analytical stage, in which the topics identified in the first analytical stage are quantitized. The results of the qualitative component ultimately, and before writing down the results of the analytical phase as a whole, become quantitative; qualitizing also is a possible strategy, which would be the converse of this.
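As a concrete, hypothetical illustration of such an analytical point of integration, the sketch below quantitizes qualitative codes by counting how often each theme was assigned to each participant, so that the resulting counts can enter a quantitative analysis. The participants, themes, and codes are invented.

```python
# Hypothetical illustration of quantitizing at an analytical point of
# integration: coded qualitative data are converted into counts per theme.
# All data and names are invented for illustration.
from collections import Counter

coded_segments = {                      # themes assigned to interview segments
    "P01": ["workload", "support", "workload", "autonomy"],
    "P02": ["support", "support", "autonomy"],
    "P03": ["workload", "workload", "workload"],
}

themes = sorted({t for codes in coded_segments.values() for t in codes})
print("themes:", themes)                # ['autonomy', 'support', 'workload']

for participant, codes in coded_segments.items():
    counts = Counter(codes)
    print(participant, [counts[theme] for theme in themes])
# P01 [1, 1, 2]
# P02 [1, 2, 0]
# P03 [0, 0, 3]
```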

Other authors assume more than two possible points of integration. Teddlie and Tashakkori ( 2009 ) distinguish four different stages of an investigation: the conceptualization stage, the methodological experiential stage (data collection), the analytical experiential stage (data analysis), and the inferential stage. According to these authors, mixing is possible in all four stages, and thus all four stages are potential points of integration.

However, the four possible points of integration used by Teddlie and Tashakkori ( 2009 ) are still too coarse to distinguish some types of mixing. Mixing in the experiential stage can take many different forms, for example the use of cognitive interviews to improve a questionnaire (tool development), or selecting people for an interview on the basis of the results of a questionnaire (sampling). Extending the definition by Guest ( 2013 ), we define the point of integration as “any point in a study where two or more research components are mixed or connected in some way”. Then, the point of integration in the two examples of this paragraph can be defined more accurately as “instrument development”, and “development of the sample”.
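The second example, development of the sample, is a sequential-dependent form of integration that is easy to picture in code. The sketch below is ours; the participant ids, survey scores, and selection rule are invented, and it simply selects extreme scorers from a survey for follow-up interviews.

```python
# Hypothetical illustration of "development of the sample": results of a
# quantitative survey are used to select participants for follow-up
# qualitative interviews. Scores, cut-offs, and ids are invented.

survey_scores = {"P01": 6.5, "P02": 2.1, "P03": 4.0, "P04": 1.8, "P05": 6.9}

def select_extreme_cases(scores, low=2.5, high=6.0):
    """Pick unusually low and unusually high scorers for interviewing."""
    return sorted(pid for pid, score in scores.items() if score <= low or score >= high)

interviewees = select_extreme_cases(survey_scores)
print(interviewees)   # ['P01', 'P02', 'P04', 'P05']
```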

It is at the point of integration that qualitative and quantitative components are integrated. Some primary ways that the components can be connected to each other are as follows:

(1) merging the two data sets, (2) connecting from the analysis of one set of data to the collection of a second set of data, (3) embedding of one form of data within a larger design or procedure, and (4) using a framework (theoretical or program) to bind together the data sets (Creswell and Plano Clark 2011 , p. 76).

More generally, one can consider mixing at any or all of the following research components: purposes, research questions, theoretical drive, methods, methodology, paradigm, data, analysis, and results. One can also include mixing views of different researchers, participants, or stakeholders. The creativity of the mixed methods researcher designing a study is extensive.

Substantively, it can be useful to think of integration or mixing as comparing and bringing together two (or more) components on the basis of one or more of the purposes set out in the first section of this article. For example, it is possible to use qualitative data to illustrate a quantitative effect, or to determine whether the qualitative and the quantitative component yield convergent results (triangulation). An integrated result could also consist of a combination of a quantitatively established effect and a qualitative description of the underlying process. In the case of development, integration consists of an adjustment of, for example, an instrument, model, or interpretation (often quantitative) based on qualitative assessments by members of the target group.
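One simple way to operationalize such a comparison (for triangulation, for instance) is a case-by-case, joint-display-style listing with a convergence flag. The following sketch is ours and uses invented findings.

```python
# Hypothetical illustration of a results point of integration: a very small
# joint-display-style comparison that checks, case by case, whether the
# quantitative and qualitative components converge. All data are invented.

quant_findings = {"case1": "improved", "case2": "improved", "case3": "no change"}
qual_findings  = {"case1": "improved", "case2": "worsened", "case3": "no change"}

for case in sorted(quant_findings):
    q1, q2 = quant_findings[case], qual_findings[case]
    verdict = "convergent" if q1 == q2 else "divergent"
    print(f"{case}: quantitative={q1!r}, qualitative={q2!r} -> {verdict}")
# case2 comes out divergent and would be a candidate for the divergence
# strategies discussed next.
```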

A special case is the integration of divergent results. The power of mixed methods research is its ability to deal with diversity and divergence. In the literature, we find two kinds of strategies for dealing with divergent results. A first set of strategies takes the detected divergence as the starting point for further analysis, with the aim to resolve the divergence. One possibility is to carry out further research (Cook 1985 ; Greene and Hall 2010 ). Further research is not always necessary. One can also look for a more comprehensive theory, which is able to account for both the results of the first component and the deviating results of the second component. This is a form of abduction (Erzberger and Prein 1997 ).

A fruitful starting point in trying to resolve divergence through abduction is to determine which component has resulted in a finding that is somehow expected, logical, and/or in line with existing research. The results of this research component, called the “sense” (“Lesart”), are subsequently compared to the results of the other component, called the “anti-sense” (“alternative Lesart”), which are considered dissonant, unexpected, and/or contrary to what had been found in the literature. The aim is to develop an overall explanation that fits both the sense and the anti-sense (Bazeley and Kemp 2012 ; Mendlinger and Cwikel 2008 ). Finally, a reanalysis of the data can sometimes lead to resolving divergence (Creswell and Plano Clark 2011 ).

Alternatively, one can question the existence of the encountered divergence. In this regard, Mathison ( 1988 ) recommends determining whether deviating results shown by the data can be explained by knowledge about the research and/or knowledge of the social world. Differences between results from different data sources could also be the result of properties of the methods involved, rather than reflect differences in reality (Yanchar and Williams 2006 ). In general, the conclusions of the individual components can be subjected to an inference quality audit (Teddlie and Tashakkori 2009 ), in which the researcher investigates the strength of each of the divergent conclusions. We recommend that researchers first determine whether there is “real” divergence, according to the strategies mentioned in the last paragraph. Next, an attempt can be made to resolve cases of “true” divergence, using one or more of the methods mentioned in this paragraph.

Design typology utilization

As already mentioned above, mixed methods designs can be classified into a mixed methods typology or taxonomy. A typology serves several purposes, including the following: guiding practice, legitimizing the field, generating new possibilities, and serving as a useful pedagogical tool (Teddlie and Tashakkori 2009 ). Note, however, that not all types of typologies are equally suitable for all purposes. For generating new possibilities, one will need a more exhaustive typology, while a useful pedagogical tool might be better served by a non-exhaustive overview of the most common mixed methods designs. Although some of the current MM design typologies include more designs than others, none of the current typologies is fully exhaustive. When designing a mixed methods study, it is often useful to borrow its name from an existing typology, or to construct a clear, more nuanced name of your own when your design is based on a modification of one or more of those designs.

Various typologies of mixed methods designs have been proposed. Creswell and Plano Clark’s ( 2011 ) typology of some “commonly used designs” includes six “major mixed methods designs”. Our summary of these designs runs as follows:

  • Convergent parallel design (“paralleles Design”) (the quantitative and qualitative strands of the research are performed independently, and their results are brought together in the overall interpretation),
  • Explanatory sequential design (“explanatives Design”) (a first phase of quantitative data collection and analysis is followed by the collection of qualitative data, which are used to explain the initial quantitative results),
  • Exploratory sequential design (“exploratives Design”) (a first phase of qualitative data collection and analysis is followed by the collection of quantitative data to test or generalize the initial qualitative results),
  • Embedded design (“Einbettungs-Design”) (in a traditional qualitative or quantitative design, a strand of the other type is added to enhance the overall design),
  • Transformative design (“politisch-transformatives Design”) (a transformative theoretical framework, e. g. feminism or critical race theory, shapes the interaction, priority, timing and mixing of the qualitative and quantitative strand),
  • Multiphase design (“Mehrphasen-Design”) (more than two phases or both sequential and concurrent strands are combined over a period of time within a program of study addressing an overall program objective).

Most of their designs presuppose a specific juxtaposition of the qualitative and quantitative component. Note that the last design is a complex type that is required in many mixed methods studies.

The following are our adapted definitions of Teddlie and Tashakkori’s ( 2009 ) five sets of mixed methods research designs (adapted from Teddlie and Tashakkori 2009 , p. 151):

  • Parallel mixed designs (“paralleles Mixed-Methods-Design”) – In these designs, one has two or more parallel quantitative and qualitative strands, either with some minimal time lapse or simultaneously; the strand results are integrated into meta-inferences after separate analyses are conducted; related QUAN and QUAL research questions are answered, or aspects of the same mixed research question are addressed.
  • Sequential mixed designs (“sequenzielles Mixed-Methods-Design”) – In these designs, QUAL and QUAN strands occur across chronological phases, and the procedures/questions from the later strand emerge/depend/build on the previous strand; the research questions are interrelated and sometimes evolve during the study.
  • Conversion mixed designs (“Transfer-Design” or “Konversionsdesign”) – In these parallel designs, mixing occurs when one type of data is transformed to the other type and then analyzed, and the additional findings are added to the results; this design answers related aspects of the same research question.
  • Multilevel mixed designs (“Mehrebenen-Mixed-Methods-Design”) – In these parallel or sequential designs, mixing occurs across multiple levels of analysis, as QUAN and QUAL data are analyzed and integrated to answer related aspects of the same research question or related questions.
  • Fully integrated mixed designs (“voll integriertes Mixed-Methods-Design”) – In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur. For example, rather than including integration only at the findings/results stage, or only across phases in a sequential design, mixing might occur at the conceptualization stage, the methodological stage, the analysis stage, and the inferential stage.

We recommend adding to Teddlie and Tashakkori’s typology a sixth design type, specifically, a  “hybrid” design type to include complex combinations of two or more of the other design types. We expect that many published MM designs will fall into the hybrid design type.

Morse and Niehaus ( 2009 ) listed eight mixed methods designs in their book (and suggested that authors create more complex combinations when needed). Our shorthand labels and descriptions (adapted from Morse and Niehaus 2009 , p. 25) run as follows:

  • QUAL + quan (inductive-simultaneous design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAL → quan (inductive-sequential design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAN + qual (deductive-simultaneous design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAN → qual (deductive-sequential design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAL + qual (inductive-simultaneous design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAL → qual (inductive-sequential design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAN + quan (deductive-simultaneous design, where both components are quantitative; this is a multimethod design rather than a mixed methods design)
  • QUAN → quan (deductive-sequential design, where both components are quantitative; this is a multimethod design rather than a mixed methods design).

Notice that Morse and Niehaus ( 2009 ) included four mixed methods designs (the first four designs shown above) and four multimethod designs (the second set of four designs shown above) in their typology. The reader can, therefore, see that the design notation also works quite well for multimethod research designs. Notably absent from Morse and Niehaus’s book are equal-status or interactive designs. In addition, they assume that the core component should always be performed either concurrent with or before the supplemental component.

Johnson, Christensen, and Onwuegbuzie constructed a set of mixed methods designs without these limitations. The resulting mixed methods design matrix (see Johnson and Christensen 2017 , p. 478) contains nine designs, which we can label as follows (adapted from Johnson and Christensen 2017 , p. 478):

  • QUAL + QUAN (equal-status concurrent design),
  • QUAL + quan (qualitatively driven concurrent design),
  • QUAN + qual (quantitatively driven concurrent design),
  • QUAL → QUAN (equal-status sequential design),
  • QUAN → QUAL (equal-status sequential design),
  • QUAL → quan (qualitatively driven sequential design),
  • qual → QUAN (quantitatively driven sequential design),
  • QUAN → qual (quantitatively driven sequential design), and
  • quan → QUAL (qualitatively driven sequential design).

The above set of nine designs assumed only one qualitative and one quantitative component. However, this simplistic assumption can be relaxed in practice, allowing the reader to construct more complex designs. The Morse notation system is very powerful. For example, a three-stage equal-status concurrent-sequential design can be written as QUAL + QUAN → QUAL + QUAN → QUAL + QUAN.

The key point here is that the Morse notation provides researchers with a powerful language for depicting and communicating the design constructed for a specific research study.

When designing a mixed methods study, it is sometimes helpful to include the mixing purpose (or characteristic on one of the other dimensions shown in Table  1 ) in the title of the study design (e. g., an explanatory sequential MM design, an exploratory-confirmatory MM design, a developmental MM design). Much more important, however, than a design name is for the author to provide an accurate description of what was done in the research study, so the reader will know exactly how the study was conducted. A design classification label can never replace such a description.

The common complexity of mixed methods design poses a problem to the above typologies of mixed methods research. The typologies were designed to classify whole mixed methods studies, and they are basically based on a classification of simple designs. In practice, many/most designs are complex. Complex designs are sometimes labeled “complex design”, “multiphase design”, “fully integrated design”, “hybrid design” and the like. Because complex designs occur very often in practice, the above typologies are not able to classify a large part of existing mixed methods research any further than by labeling them “complex”, which in itself is not very informative about the particular design. This problem does not fully apply to Morse’s notation system, which can be used to symbolize some more complex designs.

Something similar applies to the classification of the purposes of mixed methods research. The classifications of purposes mentioned in the “Purpose”-section, again, are basically meant for the classification of whole mixed methods studies. In practice, however, one single study often serves more than one purpose (Schoonenboom et al. 2017 ). The more purposes that are included in one study, the more difficult it becomes to select a design on the basis of the purpose of the investigation, as advised by Greene ( 2007 ). Of all purposes involved, then, which one should be the primary basis for the design? Or should the design be based upon all purposes included? And if so, how? For more information on how to articulate design complexity based on multiple purposes of mixing, see Schoonenboom et al. ( 2017 ).

It should be clear to the reader that, although much progress has been made in the area of mixed methods design typologies, the problem remains in developing a single typology that is effective in comprehensively listing a set of designs for mixed methods research. This is why we emphasize in this article the importance of learning to build on simple designs and construct one’s own design for one’s research questions. This will often result in a combination or “hybrid” design that goes beyond basic designs found in typologies, and a methodology section that provides much more information than a design name.

Typological versus interactive approaches to design

In the introduction, we made a distinction between design as a product and design as a process. Related to this, two different approaches to design can be distinguished: typological/taxonomic approaches (“systematische Ansätze”), such as those in the previous section, and interactive approaches (“interaktive Ansätze”) (the latter were called “dynamic” approaches by Creswell and Plano Clark 2011 ). Whereas typological/taxonomic approaches view designs as a sort of mold, in which the inquiry can be fit, interactive approaches (Maxwell 2013 ) view design as a process, in which a certain design-as-a-product might be the outcome of the process, but not its input.

The most frequently mentioned interactive approach to mixed methods research is the approach by Maxwell and Loomis ( 2003 ). Maxwell and Loomis distinguish the following components of a design: goals, conceptual framework, research question, methods, and validity. They argue convincingly that the most important task of the researcher is to deliver as the end product of the design process a design in which these five components fit together properly. During the design process, the researcher works alternately on the individual components, and as a result, their initial fit, if it existed, tends to get lost. The researcher should therefore regularly check during the research and continuing design process whether the components still fit together, and, if not, should adapt one or the other component to restore the fit between them. In an interactive approach, unlike the typological approach, design is viewed as an interactive process in which the components are continually compared during the research study to each other and adapted to each other.

Typological and interactive approaches to mixed methods research have been presented as mutually exclusive alternatives. In our view, however, they are not mutually exclusive. The interactive approach of Maxwell is a very powerful tool for conducting research, yet this approach is not specific to mixed methods research. Maxwell’s interactive approach emphasizes that the researcher should keep and monitor a close fit between the five components of research design. However, it does not indicate how one should combine qualitative and quantitative subcomponents within one of Maxwell’s five components (e. g., how one should combine a qualitative and a quantitative method, or a qualitative and a quantitative research question). Essential elements of the design process, such as timing and the point of integration are not covered by Maxwell’s approach. This is not a shortcoming of Maxwell’s approach, but it indicates that to support the design of mixed methods research, more is needed than Maxwell’s model currently has to offer.

Some authors state that design typologies are particularly useful for beginning researchers and interactive approaches are suited for experienced researchers (Creswell and Plano Clark 2011 ). However, like an experienced researcher, a research novice needs to align the components of his or her design properly with each other, and, like a beginning researcher, an advanced researcher should indicate how qualitative and quantitative components are combined with each other. This makes an interactive approach desirable, also for beginning researchers.

We see two merits of the typological/taxonomic approach . We agree with Greene ( 2007 ), who states that the value of the typological approach mainly lies in the different dimensions of mixed methods that result from its classifications. In this article, the primary dimensions include purpose, theoretical drive, timing, point of integration, typological vs. interactive approaches, planned vs. emergent designs, and complexity (also see secondary dimensions in Table  1 ). Unfortunately, all of these dimensions are not reflected in any single design typology reviewed here. A second merit of the typological approach is the provision of common mixed methods research designs, of common ways in which qualitative and quantitative research can be combined, as is done for example in the major designs of Creswell and Plano Clark ( 2011 ). Contrary to other authors, however, we do not consider these designs as a feature of a whole study, but rather, in line with Guest ( 2013 ), as a feature of one part of a design in which one qualitative and one quantitative component are combined. Although one study could have only one purpose, one point of integration, et cetera, we believe that combining “designs” is the rule and not the exception. Therefore, complex designs need to be constructed and modified as needed, and during the writing phase the design should be described in detail and perhaps given a creative and descriptive name.

Planned versus emergent designs

A mixed methods design can be thought out in advance, but can also arise during the course of the conduct of the study; the latter is called an “emergent” design (Creswell and Plano Clark 2011 ). Emergent designs arise, for example, when the researcher discovers during the study that one of the components is inadequate (Morse and Niehaus 2009 ). Addition of a component of the other type can sometimes remedy such an inadequacy. Some designs contain an emergent component by their nature. Initiation, for example, is the further exploration of unexpected outcomes. Unexpected outcomes are by definition not foreseen, and therefore cannot be included in the design in advance.

The question arises whether researchers should plan all these decisions beforehand, or whether they can make them during, and depending on the course of, the research process. The answer to this question is twofold. On the one hand, a researcher should decide beforehand which research components to include in the design, such that the conclusion that will be drawn will be robust. On the other hand, developments during research execution will sometimes prompt the researcher to decide to add additional components. In general, the advice is to be prepared for the unexpected. When one is able to plan for emergence, one should not refrain from doing so.

Dimension of complexity

Next, mixed methods designs are characterized by their complexity. In the literature, simple and complex designs are distinguished in various ways. A common distinction is between simple investigations with a single point of integration versus complex investigations with multiple points of integration (Guest 2013 ). When designing a mixed methods study, it can be useful to mention in the title whether the design of the study is simple or complex. The primary message of this section is as follows: It is the responsibility of the researcher to create more complex designs when needed to answer his or her research question(s) .

Teddlie and Tashakkori’s ( 2009 ) multilevel mixed designs and fully integrated mixed designs are both complex designs, but for different reasons. A multilevel mixed design is more complex ontologically, because it involves multiple levels of reality. For example, data might be collected both at the levels of schools and students, neighborhoods and households, companies and employees, communities and inhabitants, or medical practices and patients (Yin 2013 ). Integration of these data does not only involve the integration of qualitative and quantitative data, but also the integration of data originating from different sources and existing at different levels. Little if any published research has discussed the possible ways of integrating data obtained in a multilevel mixed design (see Schoonenboom 2016 ). This is an area in need of additional research.

The fully-integrated mixed design is more complex because it contains multiple points of integration. As formulated by Teddlie and Tashakkori ( 2009 , p. 151):

In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur.

Complexity, then, not only depends on the number of components, but also on the extent to which they depend on each other (e. g., “one approach affects the formulation of the other”).

Many of our design dimensions ultimately refer to different ways in which the qualitative and quantitative research components are interdependent. Different purposes of mixing ultimately differ in the way one component relates to, and depends upon, the other component. For example, these purposes include dependencies, such as “x illustrates y” and “x explains y”. Dependencies in the implementation of x and y occur to the extent that the design of y depends on the results of x (sequentiality). The theoretical drive creates dependencies, because the supplemental component y is performed and interpreted within the context and the theoretical drive of core component x. As a general rule in designing mixed methods research, one should examine and plan carefully the ways in which and the extent to which the various components depend on each other.

The dependence among components, which may or may not be present, has been summarized by Greene ( 2007 ) in the distinction between component designs, in which the components are independent of each other, and integrated designs, in which the components are interdependent. Of these two design categories, integrated designs are the more complex.

Secondary design considerations

The primary design dimensions explained above have been the focus of this article. There are a number of secondary considerations for researchers to also think about when they design their studies (Johnson and Christensen 2017 ). Now we list some secondary design issues and questions that should be thoughtfully considered during the construction of a strong mixed methods research design.

  • Phenomenon: Will the study address (a) the same part or different parts of one phenomenon, (b) different phenomena, or (c) the phenomenon/phenomena from different perspectives? Is the phenomenon (a) expected to be unique (e. g., a historical event or a particular group), (b) expected to be part of a more regular and predictable phenomenon, or (c) a complex mixture of these?
  • Social scientific theory: Will the study generate a new substantive theory, test an already constructed theory, or achieve both in a sequential arrangement? Or is the researcher not interested in substantive theory based on empirical data?
  • Ideological drive: Will the study have an explicitly articulated ideological drive (e. g., feminism, critical race paradigm, transformative paradigm)?
  • Combination of sampling methods: What specific quantitative sampling method(s) will be used? What specific qualitative sampling methods(s) will be used? How will these be combined or related?
  • Degree to which the research participants will be similar or different: For example, selecting participants or stakeholders with known differences of perspective would yield a group of participants that are quite different.
  • Degree to which the researchers on the research team will be similar or different: For example, an experiment conducted by one researcher would be high on similarity, but the use of a heterogeneous and participatory research team would include many differences.
  • Implementation setting: Will the phenomenon be studied naturalistically, experimentally, or through a combination of these?
  • Degree to which the methods are similar or different: For example, a structured interview and a questionnaire are fairly similar, but administration of a standardized test and participant observation in the field are quite different.
  • Validity criteria and strategies: What validity criteria and strategies will be used to address the defensibility of the study and the conclusions that will be drawn from it (see Chapter 11 in Johnson and Christensen 2017 )?
  • Full study: Will there be essentially one research study or more than one? How will the research report be structured?

Two case studies

The above design dimensions are now illustrated by examples. A nice collection of examples of mixed methods studies can be found in Hesse-Biber ( 2010 ), from which the following examples are taken. The description of the first case example is shown in Box 1.

Box 1

Summary of Roth ( 2006 ), research regarding the gender-wage gap within Wall Street securities firms. Adapted from Hesse-Biber ( 2010 , pp. 457–458)

Louise Marie Roth’s research, Selling Women Short: Gender and Money on Wall Street ( 2006 ), tackles gender inequality in the workplace. She was interested in understanding the gender-wage gap among highly performing Wall Street MBAs, who on the surface appeared to have the same “human capital” qualifications and were placed in high-ranking Wall Street securities firms as their first jobs. In addition, Roth wanted to understand the “structural factors” within the workplace setting that may contribute to the gender-wage gap and its persistence over time. […] Roth conducted semistructured interviews, nesting quantitative closed-ended questions into primarily qualitative in-depth interviews […] In analyzing the quantitative data from her sample, she statistically considered all those factors that might legitimately account for gendered differences such as number of hours worked, any human capital differences, and so on. Her analysis of the quantitative data revealed the presence of a significant gender gap in wages that remained unexplained after controlling for any legitimate factors that might otherwise make a difference. […] Quantitative findings showed the extent of the wage gap while providing numerical understanding of the disparity but did not provide her with an understanding of the specific processes within the workplace that might have contributed to the gender gap in wages. […] Her respondents’ lived experiences over time revealed the hidden inner structures of the workplace that consist of discriminatory organizational practices with regard to decision making in performance evaluations that are tightly tied to wage increases and promotion.

This example nicely illustrates the distinction we made between simultaneity and dependency. On the two aspects of the timing dimension, this study was a concurrent-dependent design answering a set of related research questions. The data collection in this example was conducted simultaneously, and was thus concurrent – the quantitative closed-ended questions were embedded into the qualitative in-depth interviews. In contrast, the analysis was dependent, as explained in the next paragraph.

One of the purposes of this study was explanation: The qualitative data were used to understand the processes underlying the quantitative outcomes. It is therefore an explanatory design, and might be labelled an “explanatory concurrent design”. Conceptually, explanatory designs are often dependent: The qualitative component is used to explain and clarify the outcomes of the quantitative component. In that sense, the qualitative analysis in the case study took the outcomes of the quantitative component (“the existence of the gender-wage gap” and “numerical understanding of the disparity”), and aimed at providing an explanation for that result of the quantitative data analysis , by relating it to the contextual circumstances in which the quantitative outcomes were produced. This purpose of mixing in the example corresponds to Bryman’s ( 2006 ) “contextual understanding”. On the other primary dimensions, (a) the design was ongoing over a three-year period but was not emergent, (b) the point of integration was results, and (c) the design was not complex with respect to the point of integration, as it had only one point of integration. Yet, it was complex in the sense of involving multiple levels; both the level of the individual and the organization were included. According to the approach of Johnson and Christensen ( 2017 ), this was a QUAL + quan design (that was qualitatively driven, explanatory, and concurrent). If we give this study design a name, perhaps it should focus on what was done in the study: “explaining an effect from the process by which it is produced”. Having said this, the name “explanatory concurrent design” could also be used.

The description of the second case example is shown in Box 2.

Box 2

Summary of McMahon’s ( 2007 ) explorative study of the meaning, role, and salience of rape myths within the subculture of college student athletes. Adapted from Hesse-Biber ( 2010 , pp. 461–462)

Sarah McMahon ( 2007 ) wanted to explore the subculture of college student athletes and specifically the meaning, role, and salience of rape myths within that culture. […] While she was looking for confirmation between the quantitative ([structured] survey) and qualitative (focus groups and individual interviews) findings, she entered this study skeptical of whether or not her quantitative and qualitative findings would mesh with one another. McMahon […] first administered a survey [instrument] to 205 sophomore and junior student athletes at one Northeast public university. […] The quantitative data revealed a very low acceptance of rape myths among this student population but revealed a higher acceptance of violence among men and individuals who did not know a survivor of sexual assault. In the second qualitative (QUAL) phase, “focus groups were conducted as semi-structured interviews” and facilitated by someone of the same gender as the participants (p. 360). […] She followed this up with a third qualitative component (QUAL), individual interviews, which were conducted to elaborate on themes discovered in the focus groups and determine any differences in students’ responses between situations (i. e., group setting vs. individual). The interview guide was designed specifically to address focus group topics that needed “more in-depth exploration” or clarification (p. 361). The qualitative findings from the focus groups and individual qualitative interviews revealed “subtle yet pervasive rape myths” that fell into four major themes: “the misunderstanding of consent, the belief in ‘accidental’ and fabricated rape, the contention that some women provoke rape, and the invulnerability of female athletes” (p. 363). She found that the survey’s finding of a “low acceptance of rape myths … was contradicted by the findings of the focus groups and individual interviews, which indicated the presence of subtle rape myths” (p. 362).

On the timing dimension, this is an example of a sequential-independent design. It is sequential, because the qualitative focus groups were conducted after the survey was administered. The analysis of the quantitative and qualitative data was independent: Both were analyzed independently, to see whether they yielded the same results (which they did not). This purpose, therefore, was triangulation. On the other primary dimensions, (a) the design was planned, (b) the point of integration was results, and (c) the design was not complex as it had only one point of integration, and involved only the level of the individual. The author called this a “sequential explanatory” design. We doubt, however, whether this is the most appropriate label, because the qualitative component did not provide an explanation for quantitative results that were taken as given. On the contrary, the qualitative results contradicted the quantitative results. Thus, a “sequential-independent” design, or a “sequential-triangulation” design or a “sequential-comparative” design would probably be a better name.

Notice further that the second case study had the same point of integration as the first case study. The two components were brought together in the results. Thus, although the case studies are very dissimilar in many respects, this does not become visible in their point of integration. It can therefore be helpful to determine whether their point of extension is different. A  point of extension is the point in the research process at which the second (or later) component comes into play. In the first case study, two related, but different research questions were answered, namely the quantitative question “How large is the gender-wage gap among highly performing Wall Street MBAs after controlling for any legitimate factors that might otherwise make a difference?”, and the qualitative research question “How do structural factors within the workplace setting contribute to the gender-wage gap and its persistence over time?” This case study contains one qualitative research question and one quantitative research question. Therefore, the point of extension is the research question. In the second case study, both components answered the same research question. They differed in their data collection (and subsequently in their data analysis): qualitative focus groups and individual interviews versus a quantitative questionnaire. In this case study, the point of extension was data collection. Thus, the point of extension can be used to distinguish between the two case studies.

Summary and conclusions

The purpose of this article is to help researchers understand how to design a mixed methods research study. Perhaps the simplest approach to design is to look at a single book and select one of the few designs included in that book. We believe that is only useful as a starting point. Here we have shown that one often needs to construct a research design to fit one’s unique research situation and questions.

First, we showed that there are many purposes for which qualitative and quantitative methods, methodologies, and paradigms can be mixed. The purpose must be determined in interaction with the research questions. Inclusion of a purpose in the design name can sometimes provide readers with useful information about the study design, as in, e. g., an “explanatory sequential design” or an “exploratory-confirmatory design”.

The second dimension is theoretical drive in the sense that Morse and Niehaus ( 2009 ) use this term. That is, will the study have an inductive or a deductive drive, or, we added, a combination of these. Related to this idea is whether one will conduct a qualitatively driven, a quantitatively driven, or an equal-status mixed methods study. This language is sometimes included in the design name to communicate this characteristic of the study design (e. g., a “quantitatively driven sequential mixed methods design”).

The third dimension is timing , which has two aspects: simultaneity and dependence. Simultaneity refers to whether the components are to be implemented concurrently, sequentially, or a combination of these in a multiphase design. Simultaneity is commonly used in the naming of a mixed methods design because it communicates key information. The second aspect of timing, dependence , refers to whether a later component depends on the results of an earlier component, e. g., Did phase two specifically build on phase one in the research study? The fourth design dimension is the point of integration, which is where the qualitative and quantitative components are brought together and integrated. This is an essential dimension, but it usually does not need to be incorporated into the design name.

The fifth design dimension is that of typological vs. interactive design approaches . That is, will one select a design from a typology or use a more interactive approach to construct one’s own design? There are many typologies of designs currently in the literature. Our recommendation is that readers examine multiple design typologies to better understand the design process in mixed methods research and to understand what designs have been identified as popular in the field. However, when a design that would follow from one’s research questions is not available, the researcher can and should (a) combine designs into new designs or (b) simply construct a new and unique design. One can go a long way in depicting a complex design with Morse’s ( 1991 ) notation when used to its full potential. We also recommend that researchers understand the process approach to design from Maxwell and Loomis ( 2003 ), and realize that research design is a process and it needs, oftentimes, to be flexible and interactive.

The sixth design dimension or consideration is whether a design will be fully specified during the planning of the research study or if the design (or part of the design) will be allowed to emerge during the research process, or a combination of these. The seventh design dimension is called complexity . One sort of complexity mentioned was multilevel designs, but there are many complexities that can enter designs. The key point is that good research often requires the use of complex designs to answer one’s research questions. This is not something to avoid. It is the responsibility of the researcher to learn how to construct and describe and name mixed methods research designs. Always remember that designs should follow from one’s research questions and purposes, rather than questions and purposes following from a few currently named designs.

In addition to the seven primary design dimensions or considerations, we provided a set of additional or secondary dimensions/considerations or questions to ask when constructing a mixed methods study design. Our purpose throughout this article has been to show what factors must be considered to design a high-quality mixed methods research study. The more one knows and thinks about the primary and secondary dimensions of mixed methods design, the better equipped one will be to pursue mixed methods research.

Acknowledgments

Open access funding provided by University of Vienna.

Biographies

1965, Dr., Professor of Empirical Pedagogy at University of Vienna, Austria. Research Areas: Mixed Methods Design, Philosophy of Mixed Methods Research, Innovation in Higher Education, Design and Evaluation of Intervention Studies, Educational Technology. Publications: Mixed methods in early childhood education. In: M. Fleer & B. v. Oers (Eds.), International handbook on early childhood education (Vol. 1). Dordrecht, The Netherlands: Springer 2017; The multilevel mixed intact group analysis: A mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research 10, 2016; The realist survey: How respondents’ voices can be used to test and revise correlational models. Journal of Mixed Methods Research 2015. Advance online publication.

1957, PhD, Professor of Professional Studies at University of South Alabama, Mobile, Alabama USA. Research Areas: Methods of Social Research, Program Evaluation, Quantitative, Qualitative and Mixed Methods, Philosophy of Social Science. Publications: Research methods, design and analysis. Boston, MA 2014 (with L. Christensen and L. Turner); Educational research: Quantitative, qualitative and mixed approaches. Los Angeles, CA 2017 (with L. Christensen); The Oxford handbook of multimethod and mixed methods research inquiry. New York, NY 2015 (with S. Hesse-Biber).

Bryman’s ( 2006 ) scheme of rationales for combining quantitative and qualitative research 1

  • Triangulation or greater validity – refers to the traditional view that quantitative and qualitative research might be combined to triangulate findings in order that they may be mutually corroborated. If the term was used as a synonym for integrating quantitative and qualitative research, it was not coded as triangulation.
  • Offset – refers to the suggestion that the research methods associated with both quantitative and qualitative research have their own strengths and weaknesses so that combining them allows the researcher to offset their weaknesses to draw on the strengths of both.
  • Completeness – refers to the notion that the researcher can bring together a more comprehensive account of the area of enquiry in which he or she is interested if both quantitative and qualitative research are employed.
  • Process – quantitative research provides an account of structures in social life but qualitative research provides sense of process.
  • Different research questions – this is the argument that quantitative and qualitative research can each answer different research questions but this item was coded only if authors explicitly stated that they were doing this.
  • Explanation – one is used to help explain findings generated by the other.
  • Unexpected results – refers to the suggestion that quantitative and qualitative research can be fruitfully combined when one generates surprising results that can be understood by employing the other.
  • Instrument development – refers to contexts in which qualitative research is employed to develop questionnaire and scale items – for example, so that better wording or more comprehensive closed answers can be generated.
  • Sampling – refers to situations in which one approach is used to facilitate the sampling of respondents or cases.
  • Credibility – refers to suggestions that employing both approaches enhances the integrity of findings.
  • Context – refers to cases in which the combination is rationalized in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey.
  • Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting “meat on the bones” of “dry” quantitative findings.
  • Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others.
  • Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project.
  • Diversity of views – this includes two slightly different rationales – namely, combining researchers’ and participants’ perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research.
  • Enhancement or building upon quantitative/qualitative findings – this entails a reference to making more of or augmenting either quantitative or qualitative findings by gathering data using a qualitative or quantitative research approach.
  • Other/unclear.
  • Not stated.

1 Reprinted with permission from “Integrating quantitative and qualitative research: How is it done?” by Alan Bryman ( 2006 ), Qualitative Research, 6, pp. 105–107.

Contributor Information

Judith Schoonenboom, Email: [email protected] .

R. Burke Johnson, Email: bjohnson@southalabama.edu .

  • Bazeley P, Kemp L. Mosaics, triangles, and DNA: Metaphors for integrated analysis in mixed methods research. Journal of Mixed Methods Research. 2012;6:55–72. doi: 10.1177/1558689811419514.
  • Bryman A. Integrating quantitative and qualitative research: How is it done? Qualitative Research. 2006;6:97–113. doi: 10.1177/1468794106058877.
  • Cook TD. Postpositivist critical multiplism. In: Shotland RL, Mark MM, editors. Social science and social policy. Beverly Hills: SAGE; 1985. pp. 21–62.
  • Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 2nd ed. Los Angeles: SAGE; 2011.
  • Erzberger C, Prein G. Triangulation: Validity and empirically-based hypothesis construction. Quality and Quantity. 1997;31:141–154. doi: 10.1023/A:1004249313062.
  • Greene JC. Mixed methods in social inquiry. San Francisco: Jossey-Bass; 2007.
  • Greene JC. Preserving distinctions within the multimethod and mixed methods research merger. In: Hesse-Biber S, Johnson RB, editors. The Oxford handbook of multimethod and mixed methods research inquiry. New York: Oxford University Press; 2015.
  • Greene JC, Caracelli VJ, Graham WF. Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis. 1989;11:255–274. doi: 10.3102/01623737011003255.
  • Greene JC, Hall JN. Dialectics and pragmatism. In: Tashakkori A, Teddlie C, editors. SAGE handbook of mixed methods in social & behavioral research. 2nd ed. Los Angeles: SAGE; 2010. pp. 119–167.
  • Guest G. Describing mixed methods research: An alternative to typologies. Journal of Mixed Methods Research. 2013;7:141–151. doi: 10.1177/1558689812461179.
  • Hesse-Biber S. Qualitative approaches to mixed methods practice. Qualitative Inquiry. 2010;16:455–468. doi: 10.1177/1077800410364611.
  • Johnson RB. Dialectical pluralism: A metaparadigm whose time has come. Journal of Mixed Methods Research. 2017;11:156–173. doi: 10.1177/1558689815607692.
  • Johnson RB, Christensen LB. Educational research: Quantitative, qualitative, and mixed approaches. 6th ed. Los Angeles: SAGE; 2017.
  • Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004;33(7):14–26. doi: 10.3102/0013189X033007014.
  • Johnson RB, Onwuegbuzie AJ, Turner LA. Toward a definition of mixed methods research. Journal of Mixed Methods Research. 2007;1:112–133. doi: 10.1177/1558689806298224.
  • Mathison S. Why triangulate? Educational Researcher. 1988;17:13–17. doi: 10.3102/0013189X017002013.
  • Maxwell JA. Qualitative research design: An interactive approach. 3rd ed. Los Angeles: SAGE; 2013.
  • Maxwell JA, Loomis DM. Mixed methods design: An alternative approach. In: Tashakkori A, Teddlie C, editors. Handbook of mixed methods in social & behavioral research. Thousand Oaks: SAGE; 2003. pp. 241–271.
  • McMahon S. Understanding community-specific rape myths: Exploring student athlete culture. Affilia. 2007;22:357–370. doi: 10.1177/0886109907306331.
  • Mendlinger S, Cwikel J. Spiraling between qualitative and quantitative data on women’s health behaviors: A double helix model for mixed methods. Qualitative Health Research. 2008;18:280–293. doi: 10.1177/1049732307312392.
  • Morgan DL. Integrating qualitative and quantitative methods: A pragmatic approach. Los Angeles: SAGE; 2014.
  • Morse JM. Approaches to qualitative-quantitative methodological triangulation. Nursing Research. 1991;40:120–123. doi: 10.1097/00006199-199103000-00014.
  • Morse JM, Niehaus L. Mixed method design: Principles and procedures. Walnut Creek: Left Coast Press; 2009.
  • Onwuegbuzie AJ, Johnson RB. The “validity” issue in mixed research. Research in the Schools. 2006;13:48–63.
  • Roth LM. Selling women short: Gender and money on Wall Street. Princeton: Princeton University Press; 2006.
  • Schoonenboom J. The multilevel mixed intact group analysis: A mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research. 2016;10:129–146. doi: 10.1177/1558689814536283.
  • Schoonenboom J, Johnson RB, Froehlich DE. Combining multiple purposes of mixing within a mixed methods research design. International Journal of Multiple Research Approaches. 2017, in press.
  • Teddlie CB, Tashakkori A. Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Los Angeles: SAGE; 2009.
  • Yanchar SC, Williams DD. Reconsidering the compatibility thesis and eclecticism: Five proposed guidelines for method use. Educational Researcher. 2006;35(9):3–12. doi: 10.3102/0013189X035009003.
  • Yin RK. Case study research: Design and methods. 5th ed. Los Angeles: SAGE; 2013.


Research Methods | Definition, Types, Examples

Research methods are specific procedures for collecting and analysing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs quantitative : Will your data take the form of words or numbers?
  • Primary vs secondary : Will you collect original data yourself, or will you use data that have already been collected by someone else?
  • Descriptive vs experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyse the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analysing data
  • Examples of data analysis methods
  • Frequently asked questions about methodology

Data are the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .

You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs secondary data

Primary data are any original information that you collect for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary data are information that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesise existing knowledge, analyse historical trends, or identify patterns on a large scale, secondary data might be a better choice.

Descriptive vs experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.


Your data analysis methods will depend on the type of data you collect and how you prepare them for analysis.

Data can often be analysed both quantitatively and qualitatively. For example, survey responses could be analysed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that were collected:

  • From open-ended survey and interview questions, literature reviews, case studies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that were collected either:

  • During an experiment.
  • Using probability sampling methods .

Because the data are collected and analysed in a statistically valid way, the results of quantitative analysis can be easily standardised and shared among researchers.
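
For instance, relationships between two numeric variables can be tested with a few lines of standard statistical code. The following is a minimal sketch in Python using the scipy library; all of the numbers are invented purely for illustration and do not come from any real study.

```python
# Minimal sketch: testing relationships in quantitative data.
# All numbers are invented for illustration only.
from scipy import stats

hours_studied = [2, 4, 5, 7, 8, 10, 11, 13]
exam_scores   = [52, 58, 60, 66, 70, 75, 79, 86]

# Pearson correlation measures the strength of the linear relationship;
# the p-value indicates whether it is statistically distinguishable from zero.
r, p_value = stats.pearsonr(hours_studied, exam_scores)
print(f"correlation r = {r:.2f}, p = {p_value:.4f}")

# An independent-samples t-test compares the mean scores of two groups.
group_a = [52, 58, 60, 66]
group_b = [70, 75, 79, 86]
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")
```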

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.
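
To make the idea concrete, here is a minimal sketch of drawing a simple random sample in Python using only the standard library. The population of student IDs is hypothetical; in a real project you would sample from an actual sampling frame, such as an enrolment list.

```python
# Minimal sketch: drawing a simple random sample from a population list.
# The population of student IDs is hypothetical.
import random

population = [f"student_{i:04d}" for i in range(1, 2001)]  # 2000 students

random.seed(42)                            # fixed seed so the draw is reproducible
sample = random.sample(population, k=100)  # 100 students, drawn without replacement

print(len(sample), sample[:5])
```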

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.



Pfeiffer Library

Research Methodologies

  • What are research designs?
  • What are research methodologies?

What are research methods?

  • Quantitative research methods
  • Qualitative research methods
  • Mixed method approach
  • Selecting the best research method

  • Additional Sources

Research methods are different from research methodologies because they are the ways in which you will collect the data for your research project.  The best method for your project largely depends on your topic, the type of data you will need, and the people or items from which you will be collecting data.  The sections below describe quantitative, qualitative, and mixed research methods.

  • Closed-ended questionnaires/surveys: These types of questionnaires or surveys are like "multiple choice" tests, where participants must select from a list of premade answers the one that they agree with the most.  This approach is the simplest form of quantitative research because the data is easy to combine and quantify (see the sketch after this list).
  • Structured interviews: These are a common research method in market research because the data can be quantified.  They are strictly designed for little "wiggle room" in the interview process so that the data will not be skewed.  You can conduct structured interviews in-person, online, or over the phone (Dawson, 2019).
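
Because every answer comes from a fixed set of options, combining closed-ended responses is often just a matter of counting them. The sketch below is a minimal illustration in Python; the responses are invented.

```python
# Sketch: combining closed-ended (multiple-choice) survey answers by counting them.
# Responses are invented for illustration.
from collections import Counter

responses = ["Agree", "Strongly agree", "Agree", "Neutral",
             "Disagree", "Agree", "Strongly agree", "Neutral"]

counts = Counter(responses)
total = len(responses)

# Print each answer option with its count and percentage of all responses.
for answer, n in counts.most_common():
    print(f"{answer}: {n} ({n / total:.0%})")
```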

Constructing Questionnaires

When constructing your questions for a survey or questionnaire, there are things you can do to ensure that your questions are accurate and easy to understand (Dawson, 2019):

  • Keep the questions brief and simple.
  • Eliminate any potential bias from your questions.  Make sure that they are not worded in a way that favors one perspective over another.
  • If your topic is very sensitive, you may want to ask indirect questions rather than direct ones.  This prevents participants from being intimidated and becoming unwilling to share their true responses.
  • If you are using a closed-ended question, try to offer every possible answer that a participant could give to that question.
  • Do not ask questions that assume something of the participant.  The question "How often do you exercise?" assumes that the participant exercises (when they may not), so you would want to include a question that asks if they exercise at all before asking them how often.
  • Try to keep the questionnaire as short as possible.  The longer a questionnaire takes, the more likely the participant will not complete it or will become too tired to give truthful answers.
  • Promise confidentiality to your participants at the beginning of the questionnaire.

Quantitative Research Measures

When you are considering a quantitative approach to your research, you need to identify what types of measures you will use in your study.  This will determine what type of numbers you will be using to collect your data.  There are four levels of measurement (illustrated in the sketch after this list):

  • Nominal: These are numbers where the order does not matter.  They aim to identify separate pieces of information.  One example is collecting zip codes from research participants.  The order of the numbers does not matter, but the series of numbers in each zip code indicates different information (Adamson and Prion, 2013).
  • Ordinal: Also known as rankings, because the order of these numbers matters.  This is when items are given a specific rank according to specific criteria.  A common example of an ordinal measurement is a ranking-based questionnaire, where participants are asked to rank items from least favorite to most favorite.  Another common example is a pain scale, where a patient is asked to rank their pain on a scale from 1 to 10 (Adamson and Prion, 2013).
  • Interval: This is when the data are ordered and the distance between the numbers matters to the researcher (Adamson and Prion, 2013).  The distance between each number is the same.  An example of interval data is test grades.
  • Ratio: This is when the data are ordered and have a consistent distance between numbers, but also have a "zero point."  This means that there could be a measurement of zero of whatever you are measuring in your study (Adamson and Prion, 2013).  An example of ratio data is measuring the height of something, because the "zero point" remains constant in all measurements and the height of something could also be zero.
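
A brief illustration of why the level of measurement matters when summarizing data: a mode makes sense for nominal codes, a median for ordinal ranks, a mean for interval data, and ratios only for ratio data. The sketch below uses invented values.

```python
# Sketch: which summary operations make sense at each level of measurement.
# All values are invented for illustration.
from statistics import mean, median, mode

zip_codes   = ["44883", "44820", "44883", "43211"]  # nominal: only counting/mode is meaningful
pain_scores = [2, 5, 5, 7, 9]                       # ordinal: order matters, so median/mode are safe
test_grades = [78, 85, 92, 67]                      # interval: differences are meaningful, so a mean is fine
heights_cm  = [0.0, 150.2, 175.5, 162.0]            # ratio: a true zero exists, so ratios are meaningful

print(mode(zip_codes))                # most common zip code
print(median(pain_scores))            # middle rank
print(mean(test_grades))              # average grade
print(heights_cm[2] / heights_cm[1])  # "how many times taller" only makes sense for ratio data
```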

Focus Groups

This is when a select group of people gathers to talk about a particular topic.  Focus groups can also be called discussion groups or group interviews (Dawson, 2019).  They are usually led by a moderator who helps guide the discussion and asks certain questions.  It is critical that the moderator allows everyone in the group a chance to speak so that no one dominates the discussion.  The data gathered from focus groups tend to be thoughts, opinions, and perspectives about an issue.

Advantages of Focus Groups

  • Only requires one meeting to get different types of responses.
  • Less researcher bias due to participants being able to speak openly.
  • Helps participants overcome insecurities or fears about a topic.
  • The researcher can also consider the impact of participant interaction.

Disadvantages of Focus Groups

  • Participants may feel uncomfortable speaking in front of an audience, especially if the topic is sensitive or controversial.
  • Since participation is voluntary, not every participant may contribute equally to the discussion.
  • Participants may impact what others say or think.
  • A researcher may feel intimidated by running a focus group on their own.
  • A researcher may need extra funds/resources to provide a safe space to host the focus group.
  • Because the data is collective, it may be difficult to determine a participant's individual thoughts about the research topic.

Observation

There are two ways to conduct research observations:

  • Direct Observation: The researcher observes a participant in an environment.  The researcher often takes notes or uses technology to gather data, such as a voice recorder or video camera.  The researcher does not interact or interfere with the participants.  This approach is often used in psychology and health studies (Dawson, 2019).
  • Participant Observation:  The researcher interacts directly with the participants to get a better understanding of the research topic.  This is a common research method when trying to understand another culture or community.  It is important to decide if you will conduct a covert (participants do not know they are part of the research) or overt (participants know the researcher is observing them) observation because it can be unethical in some situations (Dawson, 2019).

Open-Ended Questionnaires

These types of questionnaires are the opposite of "multiple choice" questionnaires because the answer boxes are left open for the participant to complete.  This means that participants can write short or extended answers to the questions.  Upon gathering the responses, researchers will often "quantify" the data by organizing the responses into different categories.  This can be time consuming because the researcher needs to read all responses carefully.
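
A minimal sketch of this kind of quantification is shown below: each response is assigned to a category and the categories are counted. The responses and the keyword-based coding rules are invented for illustration; in practice, categories emerge from carefully reading the responses rather than from simple keyword matching.

```python
# Sketch: turning open-ended answers into category counts.
# Responses and coding rules are invented; in practice, categories
# come from carefully reading the responses themselves.
from collections import Counter

responses = [
    "I exercise to reduce stress after work",
    "Mostly for weight loss",
    "My doctor recommended it for my health",
    "It helps me relax and sleep better",
]

coding_rules = {  # hypothetical category -> keywords
    "stress/relaxation": ["stress", "relax", "sleep"],
    "weight":            ["weight"],
    "health advice":     ["doctor", "health"],
}

def code_response(text):
    """Assign a response to the first category whose keywords appear in it."""
    text = text.lower()
    for category, keywords in coding_rules.items():
        if any(word in text for word in keywords):
            return category
    return "other"

counts = Counter(code_response(r) for r in responses)
print(counts)  # e.g. Counter({'stress/relaxation': 2, 'weight': 1, 'health advice': 1})
```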

Semi-structured Interviews

This is the most common type of interview where researchers aim to get specific information so they can compare it to other interview data.  This requires asking the same questions for each interview, but keeping their responses flexible.  This means including follow-up questions if a subject answers a certain way.  Interview schedules are commonly used to aid the interviewers, which list topics or questions that will be discussed at each interview (Dawson, 2019).

Theoretical Analysis

Often used for nonhuman research, theoretical analysis is a qualitative approach in which the researcher applies a theoretical framework to analyze something about their topic.  A theoretical framework gives the researcher a specific "lens" through which to view the topic and think about it critically.  It also serves as context to guide the entire study.  This is a popular research method for analyzing works of literature, films, and other forms of media.  You can implement more than one theoretical framework with this method, as many theories complement one another.

Common theoretical frameworks for qualitative research are (Grant and Osanloo, 2014):

  • Behavioral theory
  • Change theory
  • Cognitive theory
  • Content analysis
  • Cross-sectional analysis
  • Developmental theory
  • Feminist theory
  • Gender theory
  • Marxist theory
  • Queer theory
  • Systems theory
  • Transformational theory

Unstructured Interviews

These are in-depth interviews where the researcher tries to understand an interviewee's perspective on a situation or issue.  They are sometimes called life history interviews.  It is important not to bombard the interviewee with too many questions so they can freely disclose their thoughts (Dawson, 2019).

  • Open-ended and closed-ended questionnaires: This approach means implementing elements of both questionnaire types into your data collection.  Participants may answer some questions with premade answers and write their own answers to other questions.  The advantage of this method is that you benefit from both types of data collection and get a broader understanding of your participants.  However, you must think carefully about how you will analyze this data to arrive at a conclusion.

Other mixed method approaches that incorporate quantitative and qualitative research methods depend heavily on the research topic.  It is strongly recommended that you collaborate with your academic advisor before finalizing a mixed method approach.

How do you determine which research method would be best for your proposal?  This heavily depends on your research objective.  According to Dawson (2019), there are several questions to ask yourself when determining the best research method for your project:

  • Are you good with numbers and mathematics?
  • Would you be interested in conducting interviews with human subjects?
  • Would you enjoy creating a questionnaire for participants to complete?
  • Do you prefer written communication or face-to-face interaction?
  • What skills or experiences do you have that might help you with your research?  Do you have any experiences from past research projects that can help with this one?
  • How much time do you have to complete the research?  Some methods take longer to collect data than others.
  • What is your budget?  Do you have adequate funding to conduct the research in the method you  want?
  • How much data do you need?  Some research topics need only a small amount of data while others may need significantly larger amounts.
  • What is the purpose of your research? This can provide a good indicator as to what research method will be most appropriate.

New Market Research Methods and Techniques

While traditional market research techniques such as surveys and focus groups are still widely used, there are many new market research methods and techniques to spice things up.  As technology and socioeconomic trends change, so will our means of gaining customer insights.  As you’ll notice, many of these are really just new technologies applied to traditional methods, as opposed to radically different methodologies.  In any case, here is a sampling of some of the new market research trends and techniques popular now, in no particular order:

1. A shift from data collection to data analysis :  Today, actual customer behavior data is collected with ease, to the point where analysis (or data mining) is much more challenging than obtaining the data.  For example, Google Analytics provides webmasters with tons of information about website visitors, including languages, pages visited, screen resolutions, etc.  All of this information can be used to fine-tune a website to its audience.  Another example of “big data” data mining is Amazon’s predictive recommendations.  By carefully monitoring the products a user purchases or views and correlating that information with the purchase histories of others, Amazon is able to very effectively present product recommendations.  All of this is done through data mining, without having to ask the user “What other products might you like?”, which would be crazy.  Twitter is another great source of readily available data that can be mined (text analytics).
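
As a toy illustration of the “customers who bought X also bought Y” idea behind such recommendations (this is not Amazon’s actual algorithm, and the purchase histories below are invented), even simple co-occurrence counts can produce usable suggestions:

```python
# Toy sketch of co-occurrence-based recommendations ("bought X also bought Y").
# Purchase histories are invented; real systems are far more sophisticated.
from collections import Counter
from itertools import combinations

purchase_histories = [
    {"coffee", "mug", "filters"},
    {"coffee", "filters"},
    {"mug", "tea"},
    {"coffee", "mug"},
]

# Count how often each pair of products appears in the same purchase history.
pair_counts = Counter()
for basket in purchase_histories:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def recommend(product, top_n=2):
    """Return the products most often bought together with `product`."""
    scores = {b: n for (a, b), n in pair_counts.items() if a == product}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("coffee"))  # e.g. ['filters', 'mug']
```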

Jonathan Harris gave a great TED talk that beautifully demonstrates how readily available data can be visualized.

2. A shift from “how do you think you will behave?” (self-reporting) to “I know how you actually behaved” (observational research):  If you wanted to know what color cereal box would sell the most cereal, would you rather base your decision on a survey or an actual experiment where colors are tested?  Of course the experiment would be more valuable.  I want to know what customers actually do/want, not what they think they do/want.  It’s not that customers are trying to deceive researchers; it’s just that it’s difficult for users to predict their own future actions.  In any case, the world of new market research methods is shifting from self-reporting techniques (surveys, focus groups) to observational research methods.  The data is much more reliable.

3. Mobile market research methods: Smartphones and tablets have taken the world by storm.  These devices are becoming a preferred platform for many applications and markets, including market research.  Examples of how these devices are being used in terms of new market research techniques include:

  • Text messaging surveys and voting  (SMS Surveys) – One good example of this is a company called “ Poll Everywhere .”  They allow seminar attendees to vote and respond to poll questions via SMS (text messaging).
  • Smartphone designed surveys – Good mobile surveys are ones designed specifically for the smartphone form factor.  There are many companies working on this such as  OpinionMeter .  These surveys can be web-based, optimized for phones, or they can be native applications built specifically for iOS, Android, or Windows mobile operating systems.  In today’s environment, it’s imperative for online surveys to be usable regardless of device (laptop, tablet, or mobile).
  • Location Awareness – Advanced phone market research techniques can leverage smartphone location (GPS) information to trigger questions or simply track movement over time.  For example, you can imagine a survey question that only appears when the phone knows the user is at the gas station (see the sketch after this list).
  • Mobile Ethnography – Using information like location awareness, researchers are able to gather rich contextual data (using mobile phones) about behaviors, allowing them to really understand the habits and lifestyles of subjects.
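As a rough illustration of the location-trigger idea from the list above (the coordinates, radius, and question are invented for the example; a real app would use the platform’s geofencing APIs), the core check is just a distance test against a point of interest:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

GAS_STATION = (48.137, 11.575)  # hypothetical point of interest

def maybe_trigger_survey(lat, lon, radius_m=100.0):
    """Return a survey question only when the device is near the target location."""
    if haversine_m(lat, lon, *GAS_STATION) <= radius_m:
        return "Quick question: what brought you to this gas station today?"
    return None

print(maybe_trigger_survey(48.1371, 11.5752))  # ~20 m away -> the question fires
```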

4. Biometric Market Research Techniques: New biometric research methods that measure a subject’s physical response to stimuli (e.g., a television commercial) provide valuable data that a subject might not be able or willing to express verbally.  Examples of biometric market research methods include heart rate monitoring, respiration monitoring, skin and muscle activity, brain activity (using functional MRI) and eye tracking.  Campbell Soup has used such methods in its market research.

5. Prediction Markets:  A prediction market is like a mini stock market, where a group of people can buy and sell “predictions” of various events.  For example, one event might be “who will win the presidency?”  Participants could use their “currency” (fake or real) to buy or sell whoever they think will win.  Early on, the price of one candidate or the other might be $0.50, but as the election probability becomes more certain, a bid on one candidate will grow closer to $1.00.  At the end of an election, one candidate will be worth $1.00 and the other $0.00.  Participants can buy and sell their stake in a candidate along the way.  

The beauty of these prediction markets is that they tend to be good indications of reality.  So what does this have to do with market research?  Well, forward thinking companies are setting up these prediction markets to tap into the wisdom of their employees.  For example, a company could ask employees to bid on a prediction market that has to do with competitors, industry trends, or the success of product concepts in order to get an early read on those ideas.  If this is still foggy, check out PredictIt , a public prediction market. Consensus Point makes business to business software that has been used by companies like Best Buy.
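For the curious, here is a tiny sketch of the arithmetic behind reading those contract prices as probabilities (the candidate names and prices are invented, and real markets also carry fees and spreads that this ignores):

```python
# Last-traded prices of $1 "yes" contracts, in dollars.
prices = {"Candidate A": 0.62, "Candidate B": 0.41}

# Prices across mutually exclusive outcomes often sum to slightly more than
# $1 (the market's "overround"), so normalize to get implied probabilities.
total = sum(prices.values())
implied = {name: price / total for name, price in prices.items()}

for name, probability in implied.items():
    print(f"{name}: implied probability ~ {probability:.0%}")
```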

6. Virtual Shopping:  This involves the use of virtual store simulation to mimic a shopping experience for participants–a good way to test retail issues like product placement, store layout, packaging, etc.  Once again, the idea is to replicate a real situation for research subjects and observe behavior, as opposed to asking them what they think they will do.  Virtual reality is certainly a new market research method to keep an eye on.

7. Live Audience Response:  In conferences or lectures, presenters often have difficulty engaging with the audience.  One tool to remedy this problem is live audience response systems.  These systems involve a handheld remote control for audience members to respond to questions that appear on-screen (usually in a PowerPoint slide).  You can imagine the applications for this: professors doing on-the-fly quizzes to see if students understand the concepts, presenters asking demographic questions to better understand their audience, polling, etc.

8. Online Collaboration Tools:  Tools like Skype (video calling), instant messaging, and shared whiteboarding allow researchers to conduct a variety of “traditional” market research techniques using new technology.  These technologies are often much cheaper than physically gathering people.  They also allow researchers to gather people from broader geographies much more easily.

9. Social Media Market Research: Social media dominates the Web, so it is natural that market researchers are looking for ways to leverage this technology.  When people say “social media market research” they might mean several different things:

  • Research of social media — Simply researching the market of social media.  For example, “X% of people use Facebook and the average age of a Google+ user is X.”
  • Research using social media data — There is a lot of data that can be gleaned from social media sites.  Looking at how many times a certain news story or product is shared across sites can tell researchers a lot about what works and doesn’t work in journalism, product concepts, etc.  “Listening” to social media is like eavesdropping on a million conversations and can be a great place to pick up on trends.
  • Research using social media as part of the methodology or delivery mechanism — Many companies have a large following on social media sites and can leverage that audience to ask questions.  Often, if a customer is willing to follow/friend/subscribe/whatever to a company on a social media site, they are a big fan of that company and one of the best customers (probably a “promoter” in NPS, or net promoter score language).  What a gold mine for companies to have instant access to their highly loyal and interested customers for market research purposes.  Twitter now allows polling as a native Twitter feature.  Very cool.

10. QR Code Surveys: This overlaps with mobile phone market research.  A poster could ask a simple survey question and provide two QR codes, asking people to scan their choice.  Such an approach makes it very easy for someone to take a one-question survey without doing much more than pointing a phone.  A webmaster would then be able to gather the response data in aggregate.  Other companies are using QR codes as a simple launch point to a mobile survey.  A good example of this is Tiipz.

There you have it–an overview of new market research methods and techniques.  This article will continue to evolve and update over time as new research methodologies and technologies emerge.  I hope this was informative.  If you have other examples of new market research, or if you have anything to add, please do so in the comments below.

11. AI-Powered Research

Whether it’s AI writing surveys, powering chatbots, or analyzing results, there is no doubt that AI will play a major role in market research for the foreseeable future.


ScienceDaily

Study opens new avenue for immunotherapy drug development

Method utilizes engineered peptides to build up the body's natural response.

In a new study published today in Nature Biomedical Engineering , researchers at The University of Texas MD Anderson Cancer Center have designed a new method for developing immunotherapy drugs using engineered peptides to elicit a natural immune response inside the body.

In preclinical models of locally advanced and metastatic breast cancer, this method improved tumor control and prolonged survival, both as a monotherapy and in combination with immune checkpoint inhibitors.

"Amino acids are the building blocks of life and, when a few of them are linked together, they create a peptide. All the biological functions performed by our body are done by proteins and peptides, so our goal was to find a way to redesign these small molecules to possess the unique ability to activate our immune system," said senior author Betty Kim, M.D., Ph.D., professor of Neurosurgery.

The body's immune system is built to patrol for and eliminate infected or diseased cells, but cancer cells often exploit weaknesses in the immune system to avoid detection. The goal of immunotherapy is to bolster the body's natural ability to identify and destroy cancer cells. Current immune checkpoint inhibitors are antibodies designed to block specific immune signaling pathways.

The engineered peptide improves the immune system's ability to detect and destroy cancer cells in a unique way. Rather than using an external compound to initiate a response, or harvesting and modifying immune cells for cell therapies, the peptide serves as a messenger to activate specific signaling pathways in immune cells to boost their performance.

"These findings open a whole new avenue for developing immunotherapy drugs. By using designed polypeptides, we can potently activate immune signaling pathways to enhance anti-tumor responses. Additionally, since these are naturally derived agents, we anticipate the toxicity profile would be significantly better than with synthetic compounds," said co-corresponding author Wen Jiang, M.D., Ph.D., associate professor of Radiation Oncology.

This study was supported by the National Cancer Institute (CA241070) and the U.S. Department of Defense.


Story Source:

Materials provided by University of Texas M. D. Anderson Cancer Center.

Journal Reference :

  • DaeYong Lee, Kristin Huntoon, Yifan Wang, Minjeong Kang, Yifei Lu, Seong Dong Jeong, Todd M. Link, Thomas D. Gallup, Yaqing Qie, Xuefeng Li, Shiyan Dong, Benjamin R. Schrank, Adam J. Grippin, Abin Antony, JongHoon Ha, Mengyu Chang, Yi An, Liang Wang, Dadi Jiang, Jing Li, Albert C. Koong, John A. Tainer, Wen Jiang, Betty Y. S. Kim. Synthetic cationic helical polypeptides for the stimulation of antitumour innate immune pathways in antigen-presenting cells . Nature Biomedical Engineering , 2024; DOI: 10.1038/s41551-024-01194-7



Researchers reveal new method for calculating mechanical properties of solids using machine learning

A research team from Skoltech introduced a new method that takes advantage of machine learning for studying the properties of polycrystals, composites, and multiphase systems.  It attained accuracy nearly as high as that of quantum-mechanical methods, which are only applicable to materials with fewer than a few hundred atoms.

The new method also benefits from active learning on local atomic environments. The paper is published in the Advanced Theory and Simulations journal.

"Many industrial materials are synthesized as polycrystals or multiphase systems. They contain both a single crystal and amorphous components between single crystal grains. The large number of atoms makes it hard to calculate the properties of these systems using modern quantum-mechanical methods. Density functional theory can only be applied to materials with a few hundred atoms."

"To address the problem, we use a machine-learning approach based on Moment Tensor Potentials (MTP). These potentials have also been developed at Skoltech under the guidance of Professor Alexander Shapeev," commented Faridun Jalolov, the leading author of the study and a Skoltech Ph.D. student in the Materials Science and Engineering program.

As compared to other solutions, the authors see the potential of the new method in active learning on local atomic environments.  When calculating a large structure with many hundreds of thousands of atoms, the MTP identifies which atoms it describes unreliably, i.e., where its predictions are likely to be incorrect.  The reason for this could be a limited training dataset, which prevents all possible system configurations from being considered.

A local environment of such an atom is then "cut out," and its energy is calculated using quantum mechanics.  Afterward, the data is added back to the training set for further learning.  As the on-the-fly learning progresses, the calculations continue until they come across another configuration that needs to be included in the training process.  Other known machine-learning potentials cannot be trained on small local parts of large structures, which limits their applicability and accuracy.
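The loop described above can be sketched with a toy stand-in for the real machinery (this is not the Skoltech implementation: a cheap 1-D function plays the role of DFT, and distance to the nearest training point plays the role of the MTP extrapolation grade; only the control flow is the point):

```python
import numpy as np

def expensive_reference(x):
    """Stand-in for a quantum-mechanical (DFT) calculation on a cut-out environment."""
    return np.sin(3 * x) + 0.3 * x ** 2

train_x = np.array([0.0, 2.0])              # initial, deliberately tiny training set
train_y = expensive_reference(train_x)
candidates = np.linspace(0.0, 2.0, 201)     # configurations monitored during the run

for _ in range(50):
    # "Extrapolation grade": how far each candidate lies from the training data.
    grade = np.min(np.abs(candidates[:, None] - train_x[None, :]), axis=1)
    if grade.max() < 0.05:                  # everything is well covered -> stop
        break
    worst = candidates[np.argmax(grade)]    # configuration flagged as unreliable
    # "Cut out" the flagged configuration, label it with the expensive method,
    # and add it to the training set (a real workflow would refit the MTP here).
    train_x = np.append(train_x, worst)
    train_y = np.append(train_y, expensive_reference(worst))

print(f"training set grew to {len(train_x)} points")
```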

"As an example, we studied the mechanical properties of diamond polycrystals, which are the hardest naturally occurring materials and often used in industry—for example, when manufacturing drilling equipment for oil wells. The results show that the mechanical properties of these polycrystalline diamonds depend on the grain size—the bigger the grain, the more similar the properties are to those of a single crystal diamond," continued Jalolov.

The authors pointed out that this approach will allow for studying the mechanical properties of nonsingle crystalline materials that are typically synthesized and used in experiments, as well as conducting comprehensive studies of polycrystalline and composite materials and obtaining data as close to experimental results as possible.

"In actual use, materials that are not perfect crystals are frequently employed due to their inability for perfect crystals to meet the requirements of a specific piece of equipment fully."

"A good example of this is tungsten carbide and cobalt. By adding cobalt to tungsten carbide, the material becomes more crack-resistant, making it so valuable in applications. The new method will allow us to investigate the causes and ways of altering the mechanical properties of these multiphase systems on an atomic level," said Alexander Kvashnin, the head of the research and a professor at the Energy Transition Center.

More information: Faridun N. Jalolov et al, Mechanical Properties of Single and Polycrystalline Solids from Machine Learning, Advanced Theory and Simulations (2024). DOI: 10.1002/adts.202301171

Provided by Skolkovo Institute of Science and Technology

Schematic illustration of learning on the local atomistic environment. The region highlighted by the red circle contains atoms with the highest extrapolative grade, which are then cut from the structure and used to build the periodic configuration for further Density Functional Theory calculations of energy, forces, and stresses. Credit: Advanced Theory and Simulations (2024). DOI: 10.1002/adts.202301171

A new method for satellite-based remote sensing analysis of plant-specific biomass yield patterns for precision farming applications

  • Open access
  • Published: 28 April 2024


Ludwig Hagn (ORCID: orcid.org/0009-0003-9472-6223), Johannes Schuster, Martin Mittermayer & Kurt-Jürgen Hülsbergen

This study describes a new method for satellite-based remote sensing analysis of plant-specific biomass yield patterns for precision farming applications. The relative biomass potential (rel. BMP) serves as an indicator for multiyear stable and homogeneous yield zones. The rel. BMP is derived from satellite data corresponding to specific growth stages and the normalized difference vegetation index (NDVI) to analyze crop-specific yield patterns. The development of this methodology is based on data from arable fields of two research farms; the validation was conducted on arable fields of commercial farms in southern Germany. Close relationships (up to r > 0.9) were found between the rel. BMP of different crop types and study years, indicating stable yield patterns in arable fields. The relative BMP showed moderate correlations (up to r = 0.64) with the yields determined by the combine harvester, strong correlations with the vegetation index red edge inflection point (REIP) (up to r = 0.88, determined by a tractor-mounted sensor system) and moderate correlations with the yield determined by biomass sampling (up to r = 0.57). The study investigated the relationship between the rel. BMP and key soil parameters. There was a consistently strong correlation between multiyear rel. BMP and soil organic carbon (SOC) and total nitrogen (TN) contents (r = 0.62 to 0.73), demonstrating that the methodology effectively reflects the impact of these key soil properties on crop yield. The approach is well suited for deriving yield zones, with extensive application potential in agriculture.


Introduction

Spatial variability in crop yields on arable fields.

Arable fields are characterized by a more or less pronounced spatial variability in crop yields (Schuster et al., 2023 ). One of the main causes of spatial variability in plant growth and biomass production is heterogeneous soil properties (Godwin et al., 2003 ; Hatfield, 2000 ; Mittermayer et al., 2021 , 2022 ), such as soil texture, soil organic carbon content (SOC), total nitrogen content (TN), macro- and micronutrient content, and available water capacity (López-Lozano et al., 2010 ; Servadio et al., 2017 ). The spatial variation in soil properties is influenced by pedogenesis, topography and soil erosion (Gregory et al., 2015 ; Raimondi et al., 2010 ). Cultivation practices can also contribute substantially to yield variability (Ngoune & Shelton, 2020 ), e.g., through soil compaction by heavy machinery (Horn & Fleige, 2003 ; Shaheb et al., 2021 ) or management practices (fertilizer and pesticide application, weed control). Natural field boundaries through adjacent forest or tree strips and agroforestry systems also influence yield variability within fields (Karlson et al., 2023 ; Pardon et al., 2018 ). In addition, yield variability is influenced by weather conditions. In years with drought stress, yield zones are more pronounced than in years with better rainfall distribution (Heil et al., 2023 ; Maestrini & Basso, 2018 ; Martinez-Feria & Basso, 2020 ). Some factors influencing yield variability are stable over a long-term period (e.g., soil texture), whereas others change from year to year.

Consideration of spatial yield variation is an important factor for precision agriculture and the key to optimizing crop production through more efficient use of inputs (fertilizers and pesticides) (Gebbers & Adamchuk, 2010 ; Mulla, 2013 ).

Although some farmers are currently considering the use of precision farming applications with variable, yield-dependent application rates, the use of these digital technologies rarely exceeds 20% of farms, as farmers are not yet convinced of the benefits of these methods (Lowenberg‐DeBoer & Erickson, 2019; Gabriel & Gandorfer, 2022).  Considering that uniform nitrogen fertilization of arable fields is still a common practice, spatial variability in yields may lead to nitrogen oversupply in low-yield zones, which can cause high nitrogen losses, e.g., nitrate leaching (Mittermayer et al., 2021; Schuster et al., 2022).  In general, the yield potential should be accounted for to optimally adapt crop management measures, such as fertilization and plant protection, which requires yield analysis methods that are as precise, cost-effective and widely available as possible.

Research needs

To delineate yield zones for precision farming applications, knowledge of the spatial variation of crop yield is crucial. In agricultural practice, digital combine harvester yield sensing systems are most commonly used for the determination of yield variability within fields, although raw data from combine harvesters have a large error potential (Bachmaier, 2010 ; Fulton et al., 2018 ; Kharel et al., 2019 ). Ground truth data on spatial variation in grain yield may be collected using nondigital methods, such as georeferenced plant sampling or plot harvesting with a plot combine harvester (Mittermayer et al., 2021 ; Spicker, 2017 ; Stettmer et al., 2022b ). These methods are expensive and labor intensive and can therefore only be used for scientific studies.

Several studies have been conducted on spatial yield estimation based on multispectral measurements by remote and proximal sensing (Aranguren et al., 2020 ; Barmeier et al., 2017 ; Maidl et al., 2019 ). Based on reflectance data, vegetation indices (VI), such as the normalized difference vegetation index (NDVI), were found to be closely related to grain yield, plant biomass and nitrogen uptake at specific growth stages (Cabrera-Bosquet et al., 2011 ; Prabhakara et al., 2015 ). Various methods for spatial yield estimation have been developed using artificial intelligence (AI) (Ruan et al., 2022 ; van Klompenburg et al., 2020 ).

However, most methodical approaches for digital spatial yield analysis are not fully transparent and comprehensible because the algorithms used are not described in sufficient detail. In some cases, commercial suppliers use very complex soil process and plant growth models that are difficult for users to understand. Moreover, important agronomic parameters (e.g., crop type, growth stages and weather conditions) are neglected in some satellite-based yield estimation systems (Li et al., 2007 ). Furthermore, in some cases, the methods have not been sufficiently validated; thus, the accuracy of the delineated zones is questionable.

Independent validations showed that in some cases substantial deviations of the satellite-derived yields from measured yields can occur (yield differences of several Mg ha−1) (Mittermayer et al., 2021; Stettmer et al., 2022b).  Therefore, yield estimates with such large deviations from actual yields should not be used as a basis for management decisions.

Most studies analyzing spatial yield variability focus on plant-based variables (e.g., grain yield, biomass, nitrogen uptake), while soil-related causes are neglected. Due to high costs, high-resolution soil mapping is not often conducted, and thus, important influencing factors that are partly responsible for the variability in yield and biomass formation of crops are not accounted for (Feng et al., 2022 ; Juhos et al., 2015 ).

Although there are numerous scientific studies on satellite-based yield analysis, as well as commercial service providers selling yield maps to farmers, there is a need to further develop satellite-based yield analysis methods, improve their accuracy and validate them under differentiated management conditions.

This study describes a new approach to generate relative biomass potential maps (rel. BMP). In this context, relative biomass potential means the determination of multiyear stable yield zones within fields at a high spatial resolution, without information on the absolute yield level in the yield zones. For many applications, knowledge of relative yield differences is sufficient; estimating absolute yields requires more input data and more complicated models.

Data (time series of several observation dates) from Sentinel-2 satellites of the European Space Agency (ESA), the vegetation index NDVI and agronomic knowledge were used to determine the relative biomass potential.

Over a twenty-year research period at the Technical University of Munich, multispectral proximal sensor systems have been systematically used to determine which vegetation indices, and at which growth stages, correlate best with yield, nitrogen uptake, and biomass formation (Mistele & Schmidhalter, 2010; Mistele et al., 2004; Schmidhalter et al., 2003a; Spicker, 2017; Sticksel et al., 2004).  Analysis of the relationships between vegetation indices and agronomic parameters has shown the importance of considering the correct growth stages of plant stands as influencing factors (Erdle et al., 2011; Prey & Schmidhalter, 2019; Schmidhalter et al., 2003b).  Therefore, one aim of this study was to test whether the knowledge gained from previous proximal sensor-based studies, i.e., the relationships found between vegetation indices and agronomic parameters, can be transferred to satellite-based yield estimation methods.  Satellite data from observation dates corresponding to specific growth stages were used to analyze crop-specific yield patterns.  For winter wheat, for example, the growth stages jointing (GS 32), booting (GS 39) and flowering (GS 65) were considered (AHDB, 2023; Zadoks et al., 1974).  To achieve a high accuracy of the biomass potential estimates, data from many observations and different crop types in the crop rotation were analyzed and combined.  In this study, the crop species winter wheat (Triticum aestivum L.), winter barley (Hordeum vulgare L.), canola (Brassica napus L.), corn (Zea mays L.) and soybeans (Glycine max L. Merr.) were analyzed.  Due to yearly changing weather conditions, site-specific yield patterns may vary from year to year, and yield maps can be year specific.  Therefore, the rel. BMP maps were analyzed for years with different weather conditions (dry, wet, normal conditions), and whether a multiyear data analysis increases accuracy and validity was investigated.

An analysis of the relationship between yield patterns and the spatial variability of soil properties (e.g., SOC, TN, phosphorus (P), potassium (K), pH and soil texture) was also part of this study.  In addition, multispectral reflectance measurements were conducted with a tractor-mounted sensor system to obtain high-quality validation data using existing and validated yield algorithms, independent of the satellite data.

Based on the current state of knowledge, the following hypotheses were formulated:

1. Consideration of the crop type and crop-specific growth stages increases the accuracy in deriving yield zones compared to methods that do not include this information.

2. The yield patterns of the respective crops are similar if the correct crop-specific growth stages are selected.

3. When deriving yield zones, satellite data almost reach the same accuracy as data from tractor-mounted sensor systems.

4. The spatial distribution of crop-specific relative biomass yield potential is closely related to the spatial distribution of ground truth data (e.g., biomass yield, soil properties influencing yield).

Materials and methods

Site and weather conditions.

The investigations were conducted on arable fields at three different sites. The methodology for compiling biomass potential maps was derived from fields at two research stations (Roggenstein (site A) and Dürnast (site B)) of the Technical University of Munich (Bavaria, Germany) and was validated on arable fields of farms in the Burghausen region (80 km east of Munich 48° 7′ 51″ N 12°44′ 5″ E, 450 m a.s.l., site C) under different management conditions. Research station Dürnast is located 30 km north of Munich (48° 24′ 3″ N 11°38′ 42″ E, 425 m a.s.l.) and research station Roggenstein is located 20 km west of Munich (48° 10′ 47″ N 11° 18′50″ E, 480 m a.s.l.).

Due to the high availability of yield data (determined with different systems) and many years of precision farming experience, the research stations were used to derive the relative BMP algorithm. Sites A and B are located in the tertiary hill country of southern Germany. The major soil types are cambisols (medium to deep brown soil of medium quality) (FAO, 2014 ). The 30-year mean annual precipitation was 888 mm (site A) and 813 mm (site B), and the temperature was 8.9 °C (Online Resource 1, Online Resource 2). Site C is located on the Alzplatte, which is characterized by a smooth, hilly landscape. The 30-year mean annual precipitation was 849 mm, and the mean annual temperature was 8.9 °C (Online Resource 3).

To derive and validate the satellite-based method for analyzing relative yield potential, multiyear data (2018–2022) were used to account for different weather conditions in the study regions. A dry and warm spring and a hot summer characterized 2018 and 2019. At all sites, the mean precipitation in 2018 and 2019 was 10 to 20% below average. In 2020, the spring was dry at all sites, while the summer was within the normal range with some heavy rainfall. The year 2021 was very wet with heavy rainfall. The temperatures were average. In 2022, the spring was dry, and the summer months were warmer with less precipitation (12 to 25%) than the 30-year average.

Farm management of the investigated fields

The investigated fields were cultivated in conventional arable farming systems. All fields were at least four hectares in size (above average size in the regions). The crop rotation of each field is shown in Table  1 . In fields A1 (site A) and B1 (site B), only mineral fertilizer was applied. In fields C1 (site C) and C2 (site C), organic fertilizers have been applied for several years. The major crops of all farm sites were winter wheat, winter barley, corn, canola and soybeans (Online Resource 4).

Remote sensing data

The algorithm was developed using Sentinel-2 MSI Level-2A (MAJA tiles) satellite data provided by the German Aerospace Center (DLR).  The Sentinel-2 mission from the Copernicus program of the European Space Agency (ESA) provides satellite images with 13 spectral bands, four of them (red (665 nm), green (560 nm), blue (490 nm), and VNIR (842 nm)) at a 10 × 10 m resolution.  The combined revisit time of the two satellites of the Sentinel-2 mission is 5 days (Sentinel Online, 2023).  The satellite data were preprocessed using the time-series-based MAJA processor, a multitemporal atmospheric correction and cloud screening algorithm developed by the DLR (German Aerospace Center, 2019; Hagolle et al., 2017).

Description of the relative biomass potential map algorithm

To develop the algorithm, satellite data from 2018 to 2022 were retrospectively acquired from DLR.  To display within-field spatial variation, a 5-year time series of Sentinel-2 images was processed.  As the vegetation index NDVI has already been demonstrated to estimate aboveground plant biomass (Kross et al., 2015; Perry et al., 2022), NDVI values were used as an indicator of the biomass growth potential on cropland.  An essential aspect of the algorithm is knowledge of the cultivated crop types and the dates of the characteristic development stages.  This information was available in detail both for the fields used to derive the algorithm at the research stations and for the validation fields of the farmers.  An overview of the workflow for determining the rel. BMP map according to Hagn et al. (2023) is shown in Fig. 1.  The work steps are described as follows:

Figure 1: Workflow of the rel. BMP (%) map algorithm, presented using the example of winter wheat.

Step (1): Selecting the satellite scenes according to the grown crops

Depending on the crops grown in the field, satellite scenes at characteristic growth stages are selected (Online Resource 5).  The growth stages of jointing (GS 30–32), booting (GS 39) and flowering (GS 65) of winter wheat and winter barley correlate well with yield and biomass growth (Barmeier et al., 2017; Erdle et al., 2011; Maidl et al., 2019; Mistele & Schmidhalter, 2008a; Prey & Schmidhalter, 2019; Spicker, 2017).  However, for corn and soybeans sown in split rows, growth stages have to be selected at which row closure has already been reached: for corn the growth stages GS 39 and GS 65 (Mistele & Schmidhalter, 2008b), and for soybeans the growth stages V5 (fifth trifoliate), R2 (full bloom) and R5 (beginning of seed filling) (Crusiol et al., 2016; Farias et al., 2023).  For canola, the growth stages budding (GS 50), beginning of flowering (GS 60), where no yellow coloration occurs yet, and podding (GS 70), where the yellow coloration of the flowers has already faded again, are used (Spicker, 2017).
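A minimal way to encode Step (1) is a lookup from crop to its target growth stages; the stage codes below are taken from the text, while the dictionary layout and function name are our own illustration, not part of the published method:

```python
# Growth stages at which satellite scenes are selected, per crop (from the text).
TARGET_STAGES = {
    "winter_wheat":  ["GS 30-32", "GS 39", "GS 65"],
    "winter_barley": ["GS 30-32", "GS 39", "GS 65"],
    "canola":        ["GS 50", "GS 60", "GS 70"],
    "corn":          ["GS 39", "GS 65"],
    "soybean":       ["V5", "R2", "R5"],
}

def scenes_for_crop(crop, scene_date_by_stage):
    """Pick the Sentinel-2 acquisition date recorded for each target stage of a crop."""
    return [scene_date_by_stage[stage]
            for stage in TARGET_STAGES[crop]
            if stage in scene_date_by_stage]

# Example: acquisition dates observed for a winter wheat field (illustrative values).
print(scenes_for_crop("winter_wheat",
                      {"GS 30-32": "2018-04-25", "GS 39": "2018-05-18", "GS 65": "2018-06-10"}))
```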

Step (2): Clipping of Sentinel-2A data to the field and checking for clouds

The satellite scenes are clipped to the field. Then, for each satellite scene date, the NIR and red bands are used to check whether the area to be analyzed was covered by clouds. If so, the image is rejected.

Step (3): Calculation of the NDVI

The vegetation index NDVI is calculated as (NIR − red) / (NIR + red) for each 10 × 10 m raster cell of each satellite scene selected according to the growth stages, and the mean NDVI of the whole field is calculated for each satellite scene.  Steps (1) to (3) are repeated for all growth stages of the cultivated crop mentioned in Step (1).

Step (4): Calculation of the relative NDVI (rel. NDVI)

The rel. NDVI (%) is calculated by dividing the NDVI of each 10 × 10 m raster cell of a satellite scene by the mean NDVI of the entire field for that scene and multiplying the result by 100.  This means that, in total, three rel. NDVI maps are generated per year for each crop of the crop rotation (except for corn, for which two maps are generated, corresponding to its two selected growth stages).
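A minimal sketch of Steps (3) and (4) for one cloud-free, field-clipped scene, assuming `red` and `nir` are 2-D arrays of reflectances on the 10 × 10 m field grid with NaN outside the field boundary (the array and function names are ours, not from the paper):

```python
import numpy as np

def relative_ndvi(red, nir):
    """Steps (3)-(4): per-cell NDVI, then normalization by the field-mean NDVI (in %)."""
    ndvi = (nir - red) / (nir + red)   # Step (3): NDVI per 10 x 10 m raster cell
    field_mean = np.nanmean(ndvi)      # mean NDVI of the whole field for this scene
    return ndvi / field_mean * 100.0   # Step (4): rel. NDVI in percent

# Toy reflectance values for a 2 x 2 field grid (NaN marks a cell outside the field).
red = np.array([[0.05, 0.08], [0.06, np.nan]])
nir = np.array([[0.45, 0.35], [0.40, np.nan]])
print(relative_ndvi(red, nir))
```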

Step (5): Calculation of the singleyear rel. BMP (%)

The singleyear rel. BMP is determined by calculating the arithmetic mean, per raster cell, of the rel. NDVI maps generated in Step (4).  In the case of wheat, the rel. NDVI values of the respective grid cells of the rel. NDVI maps of GS 32, GS 39 and GS 65 are summed and divided by their number.  Steps (4) and (5) are repeated for all crops of the crop rotation, or at least for a period of five years, resulting in five singleyear rel. BMP maps.

Step (6): Calculation of the multiyear rel. BMP

The multiyear rel. BMP (%) is determined by calculating the arithmetic mean of all singleyear rel. BMP maps.  For a five-year crop rotation, five singleyear rel. BMP maps are created in Step (5); the cell-wise sum of these maps is therefore divided by five.
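Continuing the sketch from Step (4), Steps (5) and (6) are plain cell-wise averaging; again the variable names are illustrative and assume each map is a 2-D array on the same 10 × 10 m grid:

```python
import numpy as np

def singleyear_rel_bmp(rel_ndvi_maps):
    """Step (5): cell-wise mean over the rel. NDVI maps of one crop/year."""
    return np.nanmean(np.stack(rel_ndvi_maps), axis=0)

def multiyear_rel_bmp(rel_ndvi_maps_by_year):
    """Step (6): cell-wise mean over the singleyear rel. BMP maps of the rotation."""
    yearly_maps = [singleyear_rel_bmp(maps) for maps in rel_ndvi_maps_by_year.values()]
    return np.nanmean(np.stack(yearly_maps), axis=0)

# Toy example: two years, each with two 2 x 2 rel. NDVI maps (values in %).
maps_by_year = {
    2018: [np.array([[98.0, 104.0], [95.0, 103.0]]), np.array([[97.0, 105.0], [96.0, 102.0]])],
    2020: [np.array([[99.0, 102.0], [94.0, 105.0]]), np.array([[98.0, 103.0], [95.0, 104.0]])],
}
print(multiyear_rel_bmp(maps_by_year))
```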

Step (7): Meaning and interpretation of the rel. BMP map

The map of relative biomass potential reflects the multiyear site-specific yield potential of a crop rotation. By accounting for different crops and observation dates, greater precision should be achieved than with a single analysis of a specific crop. The primary objective of the relative BMP maps is the identification of different yield zones within arable fields to identify high- and low-yield zones for site-specific management.

Methods of determining grain yield

At field A1 and field B1, the yield was measured by combine harvesters with an integrated yield sensing system (Claas Lexion 5500 and New Holland CX 790).  The grain yield was determined from three main components: the GPS position with real-time kinematic correction, the harvested area, and the grain moisture and grain yield measured by means of a moisture sensor and a volume flow measurement sensor (Noack, 2006).

At fields C1 and C2 the yield was derived based on the spectral reflectance measurements from the tractor-mounted sensor system in combination with a yield algorithm according to Maidl et al. ( 2019 ).

As a nondigital method for grain yield determination, 50 georeferenced biomass samples were taken before grain harvest at field A1. Eight 2 m long winter wheat rows were cut close to the ground with hand shears. The samples were threshed with a stationary combine thresher (Wintersteiger, 2023 ). After drying the grain at 60 °C, the dry matter (DM) content and the yield (t ha −1 ) at 86% DM were determined.

Methods of determining spatial reflectance data by proximal sensing

Spectral reflectance measurements during the flowering of winter wheat were carried out with a tractor-mounted multispectral sensor system (TEC5 2010) in 2018, 2021 and 2022.  The system is equipped with two multispectral reflectance sensors (360 nm to 900 nm).  Reflectance measurements are conducted approximately every second.  Natural variations in solar radiation are accounted for in the data output through a reflectance reference module.  Based on the reflectance measurements, the vegetation index REIP (red edge inflection point) was calculated (Guyot et al., 1988).  Since various studies have shown that REIP is closely related to the aboveground biomass and N content of winter wheat (Mistele & Schmidhalter, 2008a; Prey & Schmidhalter, 2019), the REIP index based on data from tractor-mounted sensing was used as an indicator of the spatial variability in plant biomass.  In addition, there are reliable yield algorithms based on REIP (Maidl et al., 2019) that have been validated in several studies (Mittermayer et al., 2021, 2022; Schuster et al., 2022, 2023); thus, the REIP map was used to verify the rel. BMP map from satellite-sensing data.

Methods of determining soil properties

Georeferenced soil samples (SOC, TN, P CAL, K CAL, pH and texture) were collected from the investigated fields after the grain harvest between 2018 and 2022.  Eight soil samples at a depth of 0.3 m were taken within a maximum radius of 0.5 m around a georeferenced sampling point and combined into a composite sample.  The distribution pattern of the georeferenced points was ‘systematically random’ (Thompson, 2002).  SOC and TN were analyzed with a C/N analyzer (DIN ISO 10694, 1996), K CAL and P CAL were determined by the CAL method (VDLUFA, 2012), and pH was measured with the CaCl2 method (VDLUFA, 2016).  The soil texture was determined by the feel method.

Statistical and geostatistical analysis

The geostatistical data analysis was performed using R (RStudio Version 2022.12.0).  The spatial resolution of the collected data varied greatly; thus, all data had to be transferred to the same raster grid and the same resolution of 10 × 10 m.  Georeferenced soil samples, combine harvester data and sensor data were interpolated using ordinary kriging (Oliver & Webster, 2015).  To reduce distortions of the output due to errors in the reflectance datasets, satellite imagery, combine harvester data and sensor data, outliers beyond twice the standard deviation were removed from the datasets.  Before conducting ordinary kriging, a semivariogram was created for each dataset.  A semivariogram describes the variance in the data by distance class and shows the spatial relationship of the variable with increasing separation distance (spatial autocorrelation effect).  A model was fitted according to the distribution of the data variance across the distance classes of the semivariogram.  Depending on the fitted model, the data are weighted in the kriging neighborhood to predict values between sample points (Oliver & Webster, 2015).  Ordinary kriging was performed for all datasets using the same target raster grid, which was based on the field boundaries of the fields.

Using the same target raster grid enabled a correlation analysis of all data points of the available datasets.  Based on the Pearson correlation coefficient (r), the relationship between the digital variables and soil parameters was analyzed.  The R libraries ‘tiff’, ‘rgdal’, ‘rgeos’, ‘gstat’, and ‘raster’ were used for spatial analyses and for loading vector or raster files.  The correlation coefficients were classified as very strong (r > 0.9), strong (0.9 > r > 0.7), moderate (0.7 > r > 0.5), weak (0.5 > r > 0.3), or very weak (r < 0.3) according to Mittermayer et al. (2021).
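The study's geostatistics were done in R with gstat; purely as an illustration of the described workflow (2-sigma outlier removal, ordinary kriging onto a common grid, Pearson correlation between layers), a rough Python analogue using pykrige and scipy might look like this, with random stand-in sample data:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Stand-in georeferenced samples (e.g., SOC): coordinates in meters and values.
x, y = rng.uniform(0, 200, 60), rng.uniform(0, 200, 60)
z = 2.0 + 0.01 * x + rng.normal(0.0, 0.2, 60)

# Remove outliers beyond twice the standard deviation.
keep = np.abs(z - z.mean()) <= 2 * z.std()
x, y, z = x[keep], y[keep], z[keep]

# Ordinary kriging onto a common 10 m target grid (spherical variogram model).
grid_x = np.arange(0.0, 200.0, 10.0)
grid_y = np.arange(0.0, 200.0, 10.0)
ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
z_grid, _ = ok.execute("grid", grid_x, grid_y)

# With all layers on the same grid, cell-wise Pearson correlations can be computed,
# e.g. between rel. BMP and SOC (here a perturbed copy stands in for the second layer).
other_layer = np.asarray(z_grid) + rng.normal(0.0, 0.05, z_grid.shape)
r, _ = pearsonr(np.asarray(z_grid).ravel(), other_layer.ravel())
print(f"Pearson r = {r:.2f}")
```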

Results of field A1

Spatial variation in the yield pattern of winter wheat.

On field A1, it was investigated (a) whether the relative biomass potential of winter wheat determined via the NDVI shows similar distribution patterns in two cultivation years (2018 and 2020), (b) whether two-year relative biomass potential maps (2018 and 2020) have a higher accuracy and informative value than singleyear maps, (c) how close the relationships between the relative biomass potential and measured or digitally determined yields are and (d) how close the relationships between the relative biomass potential and soil parameters are.

The relative BMP maps of winter wheat in 2018 (Fig.  2 a) and 2020 (Fig.  2 b) showed similar yield patterns, although the weather in 2018 was clearly different from that in 2020, especially the temperature and rainfall distribution in spring (Online Resource 1). Accordingly, the relative BMP map of 2018 and 2020 (Fig.  2 c) also showed a similar pattern of high- and low-yield zones.

Figure 2: Interpolated maps of the spatial distribution (in a 10 × 10 m grid) of the relative biomass potential of winter wheat (a–c), the winter wheat yield determined with the combine harvester (d) and by biomass sampling (e), the REIP determined by spectral reflectance measurements (f) and the soil parameters soil organic carbon (SOC) (g), total nitrogen (TN) (h) and sand content (i) at field A1.

The yield patterns derived with other digital methods (combine yield sensing system, Fig. 2d; tractor-mounted sensor system, Fig. 2f) and the ground truth data (biomass samples, Fig. 2e) matched well with the pattern of the relative BMP maps.

The biomass potential according to the rel. BMP map of winter wheat (2018) ranged from 87.8% to 110.6% (Table  2 ). The absolute grain yield measured with the combine harvester yield sensing system (2018) averaged 8.6 t ha −1 and varied from 5.4 t ha −1 (62.8%) to 11.3 t ha −1 (131.4%). The absolute grain yield determined with biomass samples (2018) was higher than the yield measured at the combine, with an average of 9.9 t ha −1 , varying from 6.3 t ha −1 (63.6%) to 12.9 t ha −1 (130.3%). In purely mathematical terms, this means that a change by one percent relative BMP corresponds to an absolute yield increase or decrease by approximately 3%.
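As a rough check on that figure (our own reading of the ranges reported above, not a calculation given in the paper), dividing the spread of the relative combine yields by the spread of the rel. BMP gives:

```latex
\frac{131.4\% - 62.8\%}{110.6\% - 87.8\%} \;=\; \frac{68.6}{22.8} \;\approx\; 3.0
```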

The soil parameters SOC, TN and texture (sand content) also showed considerable variability and clearly visible zones within the field (Fig. 2g–i and Table 3).  While SOC and TN showed an almost equal distribution within the field, the sand fraction showed an inverse relationship.  In areas with high sand content, SOC and TN contents were low, and conversely, in areas with low sand content, SOC and TN contents were high.  The relative BMP maps showed a similar distribution pattern as the soil parameters SOC and TN.  Accordingly, the yield potential is higher where the SOC and TN contents are higher, while the yield potential is lower where the sand content is higher.

Correlations between the plant and soil parameters

The rel. BMP of 2018 and 2020 were strongly correlated (r = 0.77), while the combination of 2018 and 2020 was very strongly correlated with the individual rel. BMPs (2018, 2020) (r = 0.95 and 0.93) (Table 4).  The yield measured by the combine harvester yield sensing system was moderately correlated with the rel. BMP (r = 0.57 to 0.61).  The correlations of the biomass yield (r = 0.43 to 0.53) and the tractor-mounted reflectance measurements (r = 0.47 to 0.64) with the rel. BMP were weak to moderate.  The relationship with the rel. BMP was greater in 2018 and in both years combined (2018 & 2020).  The correlations between the relative BMP and the soil parameters SOC (r = 0.37 to 0.46), TN (r = 0.39 to 0.47) and AWC (r = 0.47 to 0.49) were closer than those to the other measured soil parameters; there was a negative relationship to the sand content (r = −0.29 to −0.34).

Results of field B1

Spatial variation in the yield pattern of different crop types at different growth stages.

In field B1, the pattern of the rel. BMPs of winter wheat (2018), winter barley (2019) and canola (2020) was compared at different growth stages (Fig.  3 ). The rel. BMP had a similar pattern at the different development stages of the analyzed crops. However, there were more or less clear differences between the relative BMP patterns of winter wheat, winter barley and canola. With the exception of canola at GS 70 (Fig.  3 i), there were nonetheless stable areas in the field that had higher or lower biomass potential across crop types (Fig.  3 a–h), even though weather conditions varied considerably from 2018 to 2020 (Online resource 2).

Figure 3: Interpolated maps of the spatial distribution (in a 10 × 10 m grid) of the relative biomass potential of winter wheat (a–c), winter barley (d–f) and canola (g–i) at crop-specific growth stages at field B1.

The variation in the rel. BMP (Online Resource 8) was initially greater for winter wheat at GS 32 and GS 39 (Fig.  3 a and b) as well as for canola at GS50 and GS 60 (Fig.  3 g and h) and then decreased at GS 65 and GS 70 (Fig.  3 c and i). The variation in the relative biomass potential of winter barley was consistent throughout the different growth stages (Fig.  3 d–f).

The biomass potential according to the rel. BMP of winter wheat (2018) ranged from 72.8 to 122.5% in growth stage GS 39 and from 88.5 to 108.4% in GS 65; thus, the variability is stage dependent. The absolute grain yield measured with the combine harvester yield sensing system (2018) averaged 10 t ha −1 and ranged from 6.2 t ha −1 (62.0%) to 13.6 t ha −1 (136.0%). A change in relative BMP by one percent is equivalent to a change in grain yield by approximately 1.5% (GS 39) to over 3% (GS 65).

For canola, the relative BMP at GS 50 ranged between 62 and 136%. At GS 60, the variation decreases (84% to 116%). After flowering (GS 70), the variation in the rel. BMP was even lower (95% to 106%).

The soil properties in field B1 varied greatly (Online Resource 9), e.g., SOC content from 1.2 to 2.1% DM, TN content from 0.11 to 0.23% DM, sand content from 16.9 to 35.2%, P CAL content from 3.4 to 12.4 mg (100 g−1), and AWC from 15.9 to 22.0%.  Thus, the study area was characterized by substantial differences in yield-relevant soil properties, particularly regarding nutrient supply and water retention capacity.  In contrast, there were only slight differences in elevation (471 m to 487 m) across the area.

The correlations between the rel. BMP at different growth stages of the crops were strong to very strong (Table  5 ). The strongest correlations were found for winter wheat (r = 0.94 to 0.97). The correlations between the growth stages of winter barley were strong (r = 0.74 to 0.90), and for canola, they were moderate to strong (r = 0.58 to 0.79). Between the different crops, the correlations ranged from r = 0.20 to 0.77 and were predominantly moderate. Correlations between the rel. BMP of winter wheat and winter barley were strongest at GS 39 (r = 0.65) and GS 65 (r = 0.68). The rel. BMP of canola correlated best at GS 50 with winter wheat at GS 39 (r = 0.64) and with winter barley at GS 65 (r = 0.77).

The highest correlation of the relative BMP of winter wheat with the grain yield of winter wheat measured by the combine harvester yield measurement system was found at GS 39 (r = 0.64). Surprisingly, the correlation between the winter wheat yield measured in 2018 and the rel. BMP of the winter barley and canola was in some cases closer than to the rel. BMP of winter wheat (e.g., r = 0.79 for canola, GS 60). This also indicates that the yield zones are very stable over the years.

The spectral reflectance measurements of the tractor-mounted sensor system correlated strongly with the rel. BMP of winter wheat during GS 32 and GS 39 (r = 0.86). This demonstrates that the satellite-based rel. BMP estimated based on the NDVI leads to a similar yield differentiation as the REIP determined by the tractor-mounted sensor system. The correlations of the rel. BMP with the soil parameters were weak to very weak. The highest correlations between soil parameters (SOC, TN, silt) and the rel. BMP of winter wheat were found at GS 65 (r = 0.40, r = 0.42 and r = 0.54). No relationships were found between elevation and rel. BMP.

Results of field C1

Spatial variation in the yield pattern of different crop types.

The fields C1 and C2, which were used for model validation under practical conditions, had somewhat less measurement data than the fields (A1 and B1) of the research stations.  Nevertheless, in principle, the same analyses could be carried out, and correlations could be investigated.  The rel. BMP maps of the individual crops from 2018 to 2022 (Fig. 4a–e) showed a similar yield pattern, although weather conditions were different in these years, including droughts, wet conditions and heavy rainfall (Online Resource 3).  The winter wheat patterns of 2018 and 2022 matched well with the rel. BMP pattern of canola 2019 (Fig. 4a, b and e).  The patterns of winter barley and corn were broadly similar to those of the other crops, albeit with some differences (Fig. 4c and d).  However, the main parts of the high- and low-yield zones coincided for all crops.  The map of multiyear rel. BMP (Fig. 4f) showed the integrated results of the rel. BMP of all crops grown in the crop rotation.

Figure 4: Interpolated maps of the spatial distribution (in a 10 × 10 m grid) of the relative biomass potential of winter wheat 2018 (a), canola 2019 (b), winter barley 2020 (c), corn 2021 (d), winter wheat 2022 (e) and of the multiyear relative biomass potential 2018–2022 (f), and of the soil parameters soil organic carbon (SOC) (g), total nitrogen (TN) (h) and pH (i) on field C1.

All rel. BMP of the crops showed a similar spatial variability, with minima of 91.8 to 94.5% and maxima of 105.2 to 106.9% (Online Resource 10). The rel. BMP of winter wheat (2022) and the yield derived from the tractor-mounted sensor system had a similar variation and ranged from 94.0 to 105.4% (rel. BMP) and from 8.9 t ha −1 (94.2%) to 10 t ha −1 (106.3%) (grain yield). Therefore, a 1% change in rel. BMP corresponds to an absolute change in yield by 1.07%.

The spatial distribution of the rel. BMP of all crops was consistent with the spatial distribution of the soil parameters SOC and TN, indicating higher yield potential in zones with higher SOC and TN contents (Fig. 4g and h).  pH had a slightly different distribution pattern (Fig. 4i).

There was great variation in the soil parameters. The maps of SOC and TN had very similar distribution patterns. The values are shown as follows: SOC content 1.1–2.1% DM, TN content 0.12–0.22% DM, P CAL content 3.1–31.7 mg (100 g −1 ), K CAL content 7.0–25.3 mg (100 g −1 ) and pH 6.0–7.2 (Online Resource 11).

The correlations between the rel. BMPs of the different crops were moderate to strong (Table  6 ). The highest correlation between years and crops was found between the rel. BMP of winter wheat 2018 and BMP of winter wheat 2022 (r = 0.86). The highest correlations between the rel. BMP of the crops were found between winter wheat and canola (r = 0.77 and 0.84), and the lowest correlations were found between corn and winter barley (r = 0.61) as well as between canola and winter barley (r = 0.60). The rel. BMP over all crops correlated strongly to very strongly with the individual rel. BMP of the crops (r = 0.80 to 0.92).

The correlations between the rel. BMP and the yield derived from the tractor-mounted sensor were moderate (r = 0.54 to r = 0.66). The highest correlation (r = 0.66) was found between REIP and the multiyear rel. BMP.

The rel. BMP maps correlated weakly to moderately with the soil parameters. The correlations with SOC and TN were highest with corn (r = 0.71 and 0.66), winter barley (r = 0.68 and 0.65) and over all crops (r = 0.66 and 0.62). The relationships between the rel. BMP of winter wheat (2018 & 2022) and SOC and TN were similar (r = 0.49 to 0.53). P CAL correlated moderately over all crops with the rel. BMP (r = 0.50 to 0.64). The highest correlation with P was found with the rel. BMP over all crops (2018–2022). The correlations with K CAL (r = 0.19 to 0.44) and pH (r = − 0.05 to 0.38) were very weak to weak.

Results of site C2

Field C2 was highly variable in terms of rel. BMP (Fig.  5 ). A comparison of the distribution pattern of the yield potential of the individual crops revealed some differences (Fig.  5 a–e). The rel. BMP maps of corn (2018), soybeans (2019), winter wheat (2020) and corn (2022) matched well in most parts of the field (Fig.  5 a–c and e). However, the map of rel. BMP of winter barley (2021) showed a different distribution pattern (Fig.  5 d). The multiyear rel. BMP indicated a stable yield pattern (Fig.  5 f).

Figure 5: Interpolated maps of the spatial distribution (in a 10 × 10 m grid) of the relative biomass potential of winter wheat 2018 (a), soybeans 2019 (b), winter wheat 2020 (c), winter barley 2021 (d), corn 2022 (e) and of the multiyear relative biomass potential 2018–2022 (f), and of the soil parameters soil organic carbon (SOC) (g), total nitrogen (TN) (h) and pH (i) on field C2.

The variation in the relative yield potential of corn (2018) and soybeans was low (6.98% and 7.73% variation) and ranged from 96.1 to 103.0% and from 95.7 to 103.5%, respectively (Online Resource 12). In contrast, the relative yield potential of winter barley (2021) and corn (2022) varied by a far greater margin (15.08% and 18.25%) and ranged from 90.9% to 106.0% and 91.3% to 109.6%, respectively. The variability in the relative BMP of winter wheat (2020) was not as high (93.5% to 105.5%) as the variability in the yield derived from the tractor-mounted sensor system (90.3% to 108.8%). An approximately one percent change in relative BMP corresponds to a 1.5% change in absolute yield.

The soil parameters SOC, TN and pH also showed considerable heterogeneity and distinct zones within the field (Fig.  5 g–i). SOC and TN were almost evenly distributed, while pH had a different distribution pattern. The comparison of the rel. BMP maps, the multiyear rel. BMP maps and soil maps (SOC and TN) in particular, indicated that the relative yield potential is higher in areas where the SOC and TN contents are higher. The soil parameters varied considerably: SOC content 1.4–1.8% DM, TN content 0.13–0.19% DM, P CAL content 2.8–5.9 mg (100 g −1 ), K CAL content 3.2–14.1 mg (100 g −1 ) and pH 6.0–6.9 (Online Resource 13).

The correlations between the crop-specific rel. BMP maps ranged from very weak to moderate (r = 0.10 to 0.53) (Table 7).  Winter wheat (2020) and winter barley (2021) correlated almost equally with the other crops (r = 0.40 to 0.53).  However, there was no match between corn (2022) and winter barley (2021) (r = 0.10).  Corn (2018) and corn (2022) were also weakly correlated (r = 0.21).  The rel. BMP map of corn (2022) only matched well with winter wheat (2020).  The multiyear rel. BMP map correlated moderately to strongly with the individual crop-specific maps, while soybeans (2019) and winter wheat (2020) had the highest correlations with the multiyear map (r = 0.75 and 0.80).

The yield derived from the tractor-mounted sensor correlated strongly with the rel. BMP of winter wheat (2020) (r = 0.73) and the multiyear rel. BMP (r = 0.81) and correlated moderately with the rel. BMP of soybeans (2019) and corn (2022).

The rel. BMP maps, with the exception of winter barley (2021), were well correlated with the soil parameters SOC and TN. The correlations ranged from r = 0.50 to 0.73. The highest agreement with the maps of SOC and TN was observed for the multiyear rel. BMP. P CAL , K CAL and pH were only weakly correlated with rel. BMP maps.

Discussion of methods

This study describes a new method for satellite-based remote sensing of plant-specific biomass yield patterns to determine yield zones, based on multiple observation dates, crop-specific evaluation and consideration of the relevant development stages. The aim was to achieve a high level of accuracy with a simple, cost-effective and, in principle, large-scale application. The relative biomass potential is an indicator of multiyear stable and homogeneous yield zones. Areas with a higher relative biomass potential indicate a higher yield potential within the field; however, the method does not provide information about the absolute yield level. For absolute yield estimation, more input data and more complex models are needed (van Klompenburg et al., 2020). However, for many applications in agriculture, knowledge of relative yield differences is sufficient, e.g., georeferenced soil sampling, measures to improve soil properties in areas with low rel. BMP, variable-rate application of organic fertilizers, variable-rate seeding, variable-rate irrigation, and detection of unproductive areas (Bökle et al., 2023; Schuster et al., 2023).
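
As a rough illustration of the relative character of this approach, and not of the authors' exact implementation, the following Python sketch expresses the temporal mean NDVI of each grid cell as a percentage of the field mean (single-year relative BMP) and then averages the single-year maps into a multiyear map. Array shapes, the set of usable acquisition dates and all values are hypothetical.

```python
import numpy as np

def relative_bmp(ndvi_by_date):
    """Single-year relative biomass potential per grid cell.

    ndvi_by_date : (n_dates, n_cells) NDVI values from acquisitions at suitable,
                   crop-specific growth stages (unsuitable dates already excluded).
    Returns values in %, where 100 % corresponds to the field mean.
    """
    cell_mean = np.nanmean(ndvi_by_date, axis=0)        # temporal mean per cell
    return 100.0 * cell_mean / np.nanmean(cell_mean)    # normalise to field mean

def multiyear_relative_bmp(yearly_rel_bmp):
    """Multiyear relative biomass potential: mean of the single-year maps."""
    return np.nanmean(np.stack(yearly_rel_bmp), axis=0)

# Hypothetical use: three crop years on the same 10 x 10 m grid (600 cells each)
rng = np.random.default_rng(2)
years = [relative_bmp(rng.uniform(0.4, 0.9, size=(4, 600))) for _ in range(3)]
rel_bmp_map = multiyear_relative_bmp(years)             # stable yield-pattern indicator
```

The division by the field mean is what makes the result a relative quantity: any effect that scales all cells equally (for example, a uniformly better year) cancels out, which is why the method indicates relative rather than absolute yield potential.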

The relative BMP maps serve as a suitable alternative to established methods based on sensor measurements or direct yield measurements for the delineation of yield zones. For instance, tractor-mounted multispectral sensor systems rely on sophisticated technology and methodology as well as advanced, sufficiently validated algorithms (Mittermayer et al., 2021, 2022; Schuster et al., 2022, 2023; Stettmer et al., 2022b); nonetheless, they are expensive and therefore not widely used in practice, they require expert knowledge, and they are not suitable for large-scale applications.

Yield mapping with a yield sensing system on the combine harvester is the most common method of delineating yield patterns and yield zones in modern agriculture (Chung et al., 2016). Many modern combine harvesters are equipped with yield sensing systems; however, these have a high potential for error, especially if calibration is omitted (Bachmaier, 2010; Fulton et al., 2018; Kharel et al., 2019). Yield recording with combine harvesters can provide accurate values, but these values cannot be relied upon, especially when the data are of unknown origin (Mittermayer et al., 2021, 2022; Stettmer et al., 2022b).

Yield measurement using a plot combine harvester and georeferenced biomass sampling is not applicable in agricultural practice and is only suitable for scientific applications. In order to derive high-quality yield maps, a sufficient number of samples must be obtained. Sample distribution as well as sample density may affect the performance of the spatial interpolation (Li & Heap, 2011 ); thus, the quality of the interpolated map may be insufficient if the number of samples is too low. Since the biomass samples are manually cut, they are subject to errors due to plot selection or cutting area (Petersen et al., 2005 ). Therefore, the biomass samples are also not the "true" values. Previous studies have shown that yields determined with biomass samples were almost always higher than yields determined with other digital and nondigital methods (Mittermayer et al., 2021 , 2022 ; Stettmer et al., 2022a ).

To conclude, the only suitable data sources for plant-based delineation of yield zones are either sensor-based or satellite-based systems. The large-scale availability of satellite data is favorable, provided that the data quality is as high as that of sensor data and that no information is lost. There are also satellite-based models (e.g., PROMET) and commercial applications that can be used to estimate the absolute yields of crops such as winter wheat (Bach & Mauser, 2018; Hank et al., 2015). Independent validations have shown that this approach is well suited for the delineation of yield zones; in terms of absolute yield estimation, however, significant deviations can occur (Mittermayer et al., 2021; Schuster et al., 2023; Stettmer et al., 2022b).

Discussion of results

The results at field A1 show that the rel. BMP method provides readily interpretable and reliable data. The single-year relative BMP matched well with the yield patterns of winter wheat derived from the combine harvester and from the ground truth data (tractor-mounted sensor system and biomass samples) (Fig.  2 ).

The results of site B show few variations in the rel. BMP pattern during the growth period of the individual crops (Fig.  3 ); however, only suitable growth stages are shown. For instance, very early growth stages with low biomass development were unsuitable, as was the flowering stage in canola, which does not allow any differentiation (not shown in the figure). Therefore, the accuracy of the rel. BMP (which is based on the vegetation index NDVI) depends mainly on the choice of the correct growth stages according to the crop type. If incorrect growth stages are chosen, e.g., stages that are too early, the spatial distribution of the yield pattern will not be represented correctly, as the reflection of bare soil will lead to disturbances due to the sensitivity of the NDVI to bare soil (Mistele, 2006; Rondeaux et al., 1996). In particular, for crops grown in wide rows, such as corn or soybeans, a growth stage at which row closure has been reached must be selected. For winter wheat and winter barley, the flowering stage is one of the most suitable growth stages (Maidl et al., 2019; Prücklmaier, 2020; Spicker, 2017). This is not the case for canola: the yellow flowers lead to distortions in the yield pattern, so growth stages before flowering are more suitable (Spicker, 2017). For soybeans, the growth stages V5 (fifth trifoliate), R2 (full bloom) and R5 (beginning of seed filling) were selected, and the resulting yield patterns compared well with those of the other crops. According to Andrade et al. (2022), the NDVI derived from Sentinel-2 and Landsat 8 at growth stages V5 and R2 is promising for predicting soybean grain yield.
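
To make the growth-stage argument concrete: the NDVI is computed as (NIR − red)/(NIR + red), which for Sentinel-2 is commonly derived from bands B8 (NIR) and B4 (red). The sketch below filters acquisitions to crop-specific windows of suitable growth stages before computing NDVI; the date windows, dictionary and scene structure are illustrative assumptions and do not reproduce the authors' actual stage identification.

```python
from datetime import date
import numpy as np

# Hypothetical crop-specific windows of suitable growth stages for one season
SUITABLE_WINDOWS = {
    "winter_wheat": (date(2020, 5, 15), date(2020, 6, 30)),  # around flowering
    "soybean":      (date(2019, 6, 20), date(2019, 8, 31)),  # ~V5 to R5, after row closure
    "canola":       (date(2021, 4, 1),  date(2021, 4, 25)),  # before flowering
}

def ndvi(b8_nir, b4_red):
    """NDVI from Sentinel-2 bands B8 (NIR) and B4 (red)."""
    b8_nir, b4_red = np.asarray(b8_nir, float), np.asarray(b4_red, float)
    return (b8_nir - b4_red) / (b8_nir + b4_red + 1e-9)

def usable_scenes(scenes, crop):
    """Keep only acquisitions inside the crop-specific growth-stage window."""
    start, end = SUITABLE_WINDOWS[crop]
    return [s for s in scenes if start <= s["date"] <= end]

# 'scenes' would be a list of dicts such as {"date": date(...), "B8": array, "B4": array}
```

In practice the window boundaries would come from the crop-specific growth-stage identification described in the methods section, not from fixed calendar dates.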

For the delineation of yield zones, multiyear data are crucial since weather conditions are variable from year to year. The yield level in wet years can deviate from the yield level in dry years, especially if weather extremes (drought, excess water) occur (Eck et al., 2020 ; Gammans et al., 2017 ; Sjulgård et al., 2023 ), meaning that the yield pattern also varies. In dry years, the effects of soil properties (e.g., available water capacity and texture) on yield are more pronounced than in wet years, particularly at sites with limited water availability (Godwin & Miller, 2003 ; Lawes et al., 2009 ; Taylor et al., 2003 ). The relative BMP approach is a multiyear approach, so the variability in weather conditions is accounted for (Figs. 3 – 5 ).

SOC and TN are long-term stable soil parameters (Wiesmeier et al., 2019) and are among the main causes of soil-related yield variability (Mittermayer et al., 2021, 2022; Schuster et al., 2022, 2023). As the results show, the multiyear rel. BMP maps were consistently well correlated with SOC and TN, indicating that this approach reflects the effects of soil properties on yield well (Tables 6 and 7).

Conclusion, outlook and further research

The new methodology for determining the relative BMP described in this paper can contribute to the wider application of precision farming technologies and is an important step towards the delineation of yield zones. The main yield zones (low-yield and high-yield zones) are well mapped by this method. These zones are determined mainly by soil parameters (e.g., soil texture, available water capacity) and are therefore rather stable in the long term; depending on the amount of precipitation, they are sometimes more and sometimes less pronounced. Crop-specific patterns can vary from year to year, as the crops have different nutrient and growth requirements, and year-specific differences in weed and disease pressure can influence the yield pattern derived from remote sensing. Nonetheless, the rel. BMP approach can help to identify areas with low yield potential so that they can be managed accordingly. This can include adapting inputs to the low yield potential (e.g., seeds, mineral and organic fertilizers) or extensification by converting arable land into permanent grassland or biotopes as a contribution to environmental protection and nature conservation (Kvítek et al., 2009; Münier et al., 2004). Conversely, the rel. BMP approach can be used to reliably delineate high-yield zones. Corresponding to the higher yield potential, these areas have higher nutrient uptake by the plant stands, which must be compensated for by appropriate fertilization to avoid a decline in soil fertility (Hartemink, 2006; Sun et al., 2020).

Further applications can be developed if the rel. BMP approach is developed further to determine both relative and absolute yield potentials. In this study, the crops grown and the respective development stages were known. For general scalability of the method to larger areas and regions, an algorithm that recognizes crop types and the crop-specific development stages must be implemented (Goldberg et al., 2021). We will address these questions in further research. The examination of the hypotheses led to the following results:

Hypothesis 1:

The consideration of the crop type and crop-specific growth stages increases the accuracy in deriving yield zones compared to methods that do not include this information.

The results of the application of the rel. BMP method show that the choice of the correct growth stages has a decisive influence on the accuracy of the derivation of yield zones.

Hypothesis 2:

Similar yield patterns were observed among the crops, not only between winter cereals with similar nutrient and growth requirements, but also for soybeans (field C1 and partly field C2). In field C2, the distribution pattern of corn (2022) differed from that of the other crops. The correlations between the rel. BMP of the individual crops were moderate to strong (r = 0.80–0.92) in field C1. In field C2, the correlations were very weak to moderate (r = 0.1–0.51).

Hypothesis 3:

The correlations between the rel. BMP and the REIP derived from the tractor-mounted sensor were (a) moderate to strong (r = 0.56–0.86), (b) the yield patterns were similar (Fig.  2 ), and (c) the rel. BMP and the REIP showed similar correlations with the key soil factors SOC and TN (r = 0.46; 0.51 at field A1, r = 0.51; 0.76 at field C1, and r = 0.50; 0.56 at field C2).

Hypothesis 4:

The multiyear rel. BMP was moderately to strongly correlated with SOC (r = 0.62; 0.68) and TN (r = 0.64; 0.73) (sites C1 and C2).

Thus, Hypotheses 1, 3 and 4 are confirmed. Hypothesis 2 can only be accepted with reservations.

Further research

A new methodology was successfully tested in this study. However, further validation and optimization of the BMP algorithm is needed under completely different soil, climate and farming conditions, e.g., on large fields (> 50 ha) with extreme heterogeneity due to soil genesis. In addition, further crop-specific validation must be carried out at different yield levels, e.g., under the conditions of organic farming.

In addition, an algorithm for estimating absolute yields will be developed by using AI methods and possibly other indices or other satellite-based spectra.

Data availability

Not applicable.

Code availability

References

AHDB. (2023). The growth stages of cereals. Agriculture and Horticulture Development Board. Retrieved Nov 20, 2023, from https://ahdb.org.uk/knowledge-library/the-growth-stages-of-cereals

Andrade, T. G., De Andrade Junior, A. S., Souza, M. O., Lopes, J. W. B., & Vieira, P. F. D. M. J. (2022). Soybean yield prediction using remote sensing in southwestern Piauí State, Brazil. Revista Caatinga, 35 (1), 105–116. https://doi.org/10.1590/1983-21252022v35n111rc

Aranguren, M., Castellón, A., & Aizpurua, A. (2020). Wheat yield estimation with NDVI values using a proximal sensing tool. Remote Sensing, 12 (17), 2749. https://doi.org/10.3390/rs12172749

Bach, H., & Mauser, W. (2018). Sustainable agriculture and smart farming. In P. P. Mathieu & C. Aubrecht (Eds.), Earth observation open science and innovation (pp. 261–269). Springer International Publishing.

Bachmaier, M. (2010). Yield mapping based on moving butterfly neighborhoods and the optimization of their length and width by comparing with yield data from a combine harvester. In M. A. Rosen & R. Perryman (Eds.), Proceedings of the 5th IASME/WSEAS International Conference on Energy & Environment (pp. 76–82). UK.

Barmeier, G., Hofer, K., & Schmidhalter, U. (2017). Mid-season prediction of grain yield and protein content of spring barley cultivars using high-throughput spectral sensing. European Journal of Agronomy, 90 , 108–116. https://doi.org/10.1016/j.eja.2017.07.005

Bökle, S., Karampoiki, M., Paraforos, D. S., & Griepentrog, H. W. (2023). Using an open source and resilient technology framework to generate and execute prescription maps for site-specific manure application. Smart Agricultural Technology, 5 , 100272. https://doi.org/10.1016/j.atech.2023.100272

Cabrera-Bosquet, L., Molero, G., Stellacci, A., Bort, J., Nogués, S., & Araus, J. (2011). NDVI as a potential tool for predicting biomass, plant nitrogen content and growth in wheat genotypes subjected to different water and nitrogen conditions. Cereal Research Communications, 39 (1), 147–159. https://doi.org/10.1556/crc.39.2011.1.15

Chung, S. O., Choi, M. C., Lee, K. H., Kim, Y. J., Hong, S. J., & Li, M. (2016). Sensing technologies for grain crop yield monitoring systems: A review. Journal of Biosystems Engineering, 41 (4), 408–417. https://doi.org/10.5307/jbe.2016.41.4.408

Crusiol, L. G. T., Carvalho, J. D. F. C., Sibaldelli, R. N. R., Neiverth, W., do Rio, A., Ferreira, L. C., Procópio, S. D. O., Mertz-Henning, L. M., Nepomuceno, A. L., Neumaier, N., & Farias, J. R. B. (2016). NDVI variation according to the time of measurement, sampling size, positioning of sensor and water regime in different soybean cultivars. Precision Agriculture, 18 (4), 470–490. https://doi.org/10.1007/s11119-016-9465-6

DIN ISO 10694. (1996). Bestimmung von organischem Kohlenstoff und Gesamtkohlenstoff nach trockener Verbrennung (Elementaranalyse) [Determination of organic and total carbon after dry combustion (elemental analysis)].

Eck, M. A., Murray, A. R., Ward, A. R., & Konrad, C. E. (2020). Influence of growing season temperature and precipitation anomalies on crop yield in the southeastern United States. Agricultural and Forest Meteorology, 291 , 108053. https://doi.org/10.1016/j.agrformet.2020.108053

Erdle, K., Mistele, B., & Schmidhalter, U. (2011). Comparison of active and passive spectral sensors in discriminating biomass parameters and nitrogen status in wheat cultivars. Field Crops Research, 124 (1), 74–84. https://doi.org/10.1016/j.fcr.2011.06.007

FAO. (2014). World reference base for soil resources 2014: International soil classification system for naming soils and creating legends for soil maps. World soil resources reports. Food and Agriculture Organization of the United Nations.

Farias, G. D., Bremm, C., Bredemeier, C., de Lima Menezes, J., Alves, L. A., Tiecher, T., Martins, A. P., Fioravanço, G. P., da Silva, G. P., & de Faccio Carvalho, P. C. (2023). Normalized Difference Vegetation Index (NDVI) for soybean biomass and nutrient uptake estimation in response to production systems and fertilization strategies. Frontiers in Sustainable Food Systems, 6 , 959681. https://doi.org/10.3389/fsufs.2022.959681

Feng, P., Wang, B., Harrison, M. T., Wang, J., Liu, K., Huang, M., Liu, D. L., Yu, Q., & Hu, K. (2022). Soil properties resulting in superior maize yields upon climate warming. Agronomy for Sustainable Development, 42 (5), 1–13. https://doi.org/10.1007/s13593-022-00818-z

Fulton, J., Hawkins, E., Taylor, R., & Franzen, A. (2018). Yield monitoring and mapping. In D. K. Shannon, D. E. Clay, & N. R. Kitchen (Eds.), Precision agriculture basics (pp. 63–77). American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America.

Gabriel, A., & Gandorfer, M. (2022). Adoption of digital technologies in agriculture—An inventory in a european small-scale farming region. Precision Agriculture, 24 (1), 68–91. https://doi.org/10.1007/s11119-022-09931-1

Gammans, M., Mérel, P., & Ortiz-Bobea, A. (2017). Negative impacts of climate change on cereal yields: Statistical evidence from France. Environmental Research Letters, 12 (5), 054007. https://doi.org/10.1088/1748-9326/aa6b0c

Gebbers, R., & Adamchuk, V. I. (2010). Precision agriculture and food security. Science, 327 (5967), 828–831. https://doi.org/10.1126/science.1183899

German Aerospace Center. (2019). Sentinel-2 MSI - Level 2A (MAJA Tiles) . DLR. Retrieved Sep 25, 2023, from https://doi.org/10.15489/ifczsszkcp63

Godwin, R. J., & Miller, P. C. H. (2003). A review of the technologies for mapping within-field variability. Biosystems Engineering, 84 (4), 393–407. https://doi.org/10.1016/s1537-5110(02)00283-0

Godwin, R. J., Wood, G. A., Taylor, J. C., Knight, S. M., & Welsh, J. P. (2003). Precision farming of cereal crops: A review of a six year experiment to develop management guidelines. Biosystems Engineering, 84 (4), 375–391. https://doi.org/10.1016/s1537-5110(03)00031-x

Goldberg, K., Herrmann, I., Hochberg, U., & Rozenstein, O. (2021). Generating up-to-date crop maps optimized for sentinel-2 imagery in Israel. Remote Sensing, 13 (17), 3488. https://doi.org/10.3390/rs13173488

Gregory, A. S., Ritz, K., McGrath, S. P., Quinton, J. N., Goulding, K. W. T., Jones, R. J. A., Harris, J. A., Bol, R., Wallace, P., Pilgrim, E. S., & Whitmore, A. P. (2015). A review of the impacts of degradation threats on soil properties in the UK. Soil Use and Management, 31 (Suppl 1), 1–15. https://doi.org/10.1111/sum.12212

Guyot, G., Baret, F., & Major, D. J. (1988). High spectral resolution: Determination of spectral shifts between the red and infrared. International Archives of Photogrammetry and Remote Sensing, 11, 750–760.

Hagn, L., Mittermayer, M., Schuster, J., Hu, Y., & Hülsbergen, K. J. (2023). Identifying key soil factors influencing spatial and temporal variability of cereal crops estimated using time-series of satellite-sensing data. In J. V. Stafford (Ed.), Precision agriculture ’23. 14th European Conference on Precision Agriculture (pp. 903–908). Wageningen Academic Publishers.

Hagolle, O., Huc, M., Desjardins, C., Auer, S., & Richter, R. (2017). Maja algorithm theoretical basis document . Zenodo. Retrieved September 25, 2023, from https://zenodo.org/record/1209633

Hank, T., Bach, H., & Mauser, W. (2015). Using a remote sensing-supported hydro-agroecological model for field-scale simulation of heterogeneous crop growth and yield: Application for wheat in Central Europe. Remote Sensing, 7 (4), 3934–3965. https://doi.org/10.3390/rs70403934

Hartemink, A. E. (2006). Soil fertility decline: Definitions and assessment. In R. Lal (Ed.), Encyclopedia of soil science (pp. 1618–1621). Taylor & Francis.

Hatfield, J. L. (2000). Precision agriculture and environmental quality; challenges for research and education . National Soil Tilth Laboratory, Agricultural Research Service, USDA.

Heil, K., Klöpfer, C., Hülsbergen, K.-J., & Schmidhalter, U. (2023). Description of meteorological indices presented based on long-term yields of winter wheat in Southern Germany. Agriculture, 13 (10), 1904. https://doi.org/10.3390/agriculture13101904

Horn, R., & Fleige, H. (2003). A method for assessing the impact of load on mechanical stability and on physical properties of soils. Soil and Tillage Research, 73 (1–2), 89–99. https://doi.org/10.1016/s0167-1987(03)00102-8

Juhos, K., Szabó, S., & Ladányi, M. (2015). Influence of soil properties on crop yield: A multivariate statistical approach. International Agrophysics, 29 (4), 433–440. https://doi.org/10.1515/intag-2015-0049

Karlson, M., Bolin, D., Bazié, H. R., Ouedraogo, A. S., Soro, B., Sanou, J., Bayala, J., & Ostwald, M. (2023). Exploring the landscape scale influences of tree cover on crop yield in an agroforestry parkland using satellite data and spatial statistics. Journal of Arid Environments, 218 , 105051. https://doi.org/10.1016/j.jaridenv.2023.105051

Kharel, T. P., Swink, S. N., Maresma, A., Youngerman, C., Kharel, D., Czymmek, K. J., & Ketterings, Q. M. (2019). Yield monitor data cleaning is essential for accurate corn grain and silage yield determination. Agronomy Journal, 111 (2), 509–516. https://doi.org/10.2134/agronj2018.05.0317

Kross, A., McNairn, H., Lapen, D., Sunohara, M., & Champagne, C. (2015). Assessment of RapidEye vegetation indices for estimation of leaf area index and biomass in corn and soybean crops. International Journal of Applied Earth Observation and Geoinformation, 34 , 235–248. https://doi.org/10.1016/j.jag.2014.08.002

Kvítek, T., Žlábek, P., Bystřický, V., Fučík, P., Lexa, M., Gergel, J., Novák, P., & Ondr, P. (2009). Changes of nitrate concentrations in surface waters influenced by land use in the crystalline complex of the Czech Republic. Physics and Chemistry of the Earth, Parts a/b/c, 34 (8–9), 541–551. https://doi.org/10.1016/j.pce.2008.07.003

Lawes, R. A., Oliver, Y. M., & Robertson, M. J. (2009). Integrating the effects of climate and plant available soil water holding capacity on wheat yield. Field Crops Research, 113 (3), 297–305. https://doi.org/10.1016/j.fcr.2009.06.008

Li, J., & Heap, A. D. (2011). A review of comparative studies of spatial interpolation methods in environmental sciences: Performance and impact factors. Ecological Informatics, 6 (3–4), 228–241. https://doi.org/10.1016/j.ecoinf.2010.12.003

Li, Y., Shi, Z., Li, F., & Li, H.-Y. (2007). Delineation of site-specific management zones using fuzzy clustering analysis in a coastal saline land. Computers and Electronics in Agriculture, 56 (2), 174–186. https://doi.org/10.1016/j.compag.2007.01.013

López-Lozano, R., Casterad, M. A., & Herrero, J. (2010). Site-specific management units in a commercial maize plot delineated using very high resolution remote sensing and soil properties mapping. Computers and Electronics in Agriculture, 73 (2), 219–229. https://doi.org/10.1016/j.compag.2010.04.011

Lowenberg-DeBoer, J., & Erickson, B. (2019). Setting the record straight on precision agriculture adoption. Agronomy Journal, 111 (4), 1552–1569. https://doi.org/10.2134/agronj2018.12.0779

Maestrini, B., & Basso, B. (2018). Drivers of within-field spatial and temporal variability of crop yield across the US Midwest. Scientific Reports, 8 (1), 14833. https://doi.org/10.1038/s41598-018-32779-3

Maidl, F. X., Spicker, A. B., Weng, A., & Hülsbergen, K. J. (2019). Ableitung des teilflächenspezifischen Kornertrags von Getreide aus Reflexionsdaten [Derivation of the site-specific grain yield from reflection data]. In M. Aurich (Ed.), Informatik in der Land-, Forst- und Ernährungswirtschaft. Digitalisierung für landwirtschaftliche Betriebe in kleinstrukturierten Regionen - ein Widerspruch in sich? (pp. 131–134). Gesellschaft für Informatik.

Martinez-Feria, R. A., & Basso, B. (2020). Unstable crop yields reveal opportunities for site-specific adaptations to climate variability. Scientific Reports, 10 (1), 2885. https://doi.org/10.1038/s41598-020-59494-2

Mistele, B., Gutser, R., & Schmidhalter, U. (2004). Validation of field-scaled spectral measurements of the nitrogen status of winter wheat. In R. Khosla (Ed.), 7th International Conference on Precision Agriculture and Other Precision Resources Management (pp. 629–639), Minneapolis.

Mistele, B. (2006). Tractor based spectral reflectance measurements using an oligo view optic to detect biomass, nitrogen content and nitrogen uptake of wheat and maize and the nitrogen nutrition index of wheat . [Dissertation, Technische Universität München]. Freising-Weihenstephan.

Mistele, B., & Schmidhalter, U. (2008a). Estimating the nitrogen nutrition index using spectral canopy reflectance measurements. European Journal of Agronomy, 29 (4), 184–190. https://doi.org/10.1016/j.eja.2008.05.007

Mistele, B., & Schmidhalter, U. (2008b). Spectral measurements of the total aerial N and biomass dry weight in maize using a quadrilateral-view optic. Field Crops Research, 106 (1), 94–103. https://doi.org/10.1016/j.fcr.2007.11.002

Mistele, B., & Schmidhalter, U. (2010). Tractor-based quadrilateral spectral reflectance measurements to detect biomass and total aerial nitrogen in winter wheat. Agronomy Journal, 102 (2), 499–506. https://doi.org/10.2134/agronj2009.0282

Mittermayer, M., Gilg, A., Maidl, F.-X., Nätscher, L., & Hülsbergen, K.-J. (2021). Site-specific nitrogen balances based on spatially variable soil and plant properties. Precision Agriculture, 22 (5), 1416–1436. https://doi.org/10.1007/s11119-021-09789-9

Mittermayer, M., Maidl, F.-X., Nätscher, L., & Hülsbergen, K.-J. (2022). Analysis of site-specific N balances in heterogeneous croplands using digital methods. European Journal of Agronomy, 133 , 126442. https://doi.org/10.1016/j.eja.2021.126442

Mulla, D. J. (2013). Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosystems Engineering, 114 (4), 358–371. https://doi.org/10.1016/j.biosystemseng.2012.08.009

Münier, B., Birr-Pedersen, K., & Schou, J. S. (2004). Combined ecological and economic modelling in agricultural land use scenarios. Ecological Modelling, 174 (1–2), 5–18. https://doi.org/10.1016/j.ecolmodel.2003.12.040

Ngoune, L. T., & Shelton, C. M. (2020). Factors affecting yield of crops. In Amanullah (Ed.), Agronomy. IntechOpen.

Noack, P. O. (2006). Entwicklung fahrspurbasierter Algorithmen zur Korrektur von Ertragsdaten im Precision Farming [Development of track-based algorithms for the correction of yield data in precision farming]. Retrieved 5 November, 2020, from https://www.tec.wzw.tum.de/downloads/diss/2006_noack.pdf

Oliver, M. A., & Webster, R. (2015). Basic steps in geostatistics: The variogram and kriging . Springer International Publishing.

Sentinel Online. (2023, August 9). Spatial resolutions - Sentinel-2 MSI . Retrieved 9 August, 2023, from https://sentinels.copernicus.eu/web/sentinel/user-guides/sentinel-2-msi/resolutions/spatial

Pardon, P., Reubens, B., Mertens, J., Verheyen, K., De Frenne, P., De Smet, G., Van Waes, C., & Reheul, D. (2018). Effects of temperate agroforestry on yield and quality of different arable intercrops. Agricultural Systems, 166 , 135–151. https://doi.org/10.1016/j.agsy.2018.08.008

Perry, E., Sheffield, K., Crawford, D., Akpa, S., Clancy, A., & Clark, R. (2022). Spatial and temporal biomass and growth for grain crops using NDVI time series. Remote Sensing, 14 (13), 3071. https://doi.org/10.3390/rs14133071

Petersen, L., Minkkinen, P., & Esbensen, K. H. (2005). Representative sampling for reliable data analysis: Theory of Sampling. Chemometrics and Intelligent Laboratory Systems, 77 (1–2), 261–277. https://doi.org/10.1016/j.chemolab.2004.09.013

Prabhakara, K., Hively, W. D., & McCarty, G. W. (2015). Evaluating the relationship between biomass, percent groundcover and remote sensing indices across six winter cover crop fields in Maryland, United States. International Journal of Applied Earth Observation and Geoinformation, 39 , 88–102. https://doi.org/10.1016/j.jag.2015.03.002

Prey, L., & Schmidhalter, U. (2019). Simulation of satellite reflectance data using high-frequency ground based hyperspectral canopy measurements for in-season estimation of grain yield and grain nitrogen status in winter wheat. ISPRS Journal of Photogrammetry and Remote Sensing, 149 , 176–187. https://doi.org/10.1016/j.isprsjprs.2019.01.023

Prücklmaier, J. X. (2020). Feldexperimentelle Analysen zur Ertragsbildung und Stickstoffeffizienz bei organisch-mineralischer Düngung auf heterogenen Standorten und Möglichkeiten zur Effizienzsteigerung durch computer- und sensorgestützte Düngesysteme [Field-experimental analyses of yield formation and nitrogen efficiency under organic-mineral fertilization on heterogeneous sites and options for increasing efficiency through computer- and sensor-based fertilization systems]. [Dissertation, Technische Universität München]. Freising-Weihenstephan.

Raimondi, S., Perrone, E., & Barbera, V. (2010). Pedogenesis and variability in soil properties in a floodplain of a semiarid environment in southwestern Sicily (Italy). Soil Science, 175 (12), 614–623. https://doi.org/10.1097/ss.0b013e3181fe2ec8

Rondeaux, G., Steven, M., & Baret, F. (1996). Optimization of soil-adjusted vegetation indices. Remote Sensing of Environment, 55 (2), 95–107. https://doi.org/10.1016/0034-4257(95)00186-7

Ruan, G., Li, X., Yuan, F., Cammarano, D., Ata-Ui-Karim, S. T., Liu, X., Tian, Y., Zhu, Y., Cao, W., & Cao, Q. (2022). Improving wheat yield prediction integrating proximal sensing and weather data with machine learning. Computers and Electronics in Agriculture, 195 , 106852. https://doi.org/10.1016/j.compag.2022.106852

Schmidhalter, U., Jungert, S., Bredemeier, S., Gutser, R., Manhart, R., Mistele, B., & Gerl, G. (2003a). Field spectroscopic measurements to characterize nitrogen status and dry matter production of winter wheat. In J. V. Stafford & A. Werner (Eds.), Precision agriculture '03. 4th European Conference on Precision Agriculture (pp. 615–619). Wageningen Academic Publishers.

Schmidhalter, U., Jungert, S., Bredemeier, S., Gutser, R., Manhart, R., Mistele, B., & Gerl, G. (2003b). Field-scale validation of a tractor based multispectral crop scanner to determine biomass and nitrogen uptake of winter wheat. In J. V. Stafford & A. Werner (Eds.), Precision agriculture '03. 4th European Conference on Precision Agriculture (pp. 615–619). Wageningen Academic Publishers.

Schuster, J., Hagn, L., Mittermayer, M., Maidl, F.-X., & Hülsbergen, K.-J. (2023). Using remote and proximal sensing in organic agriculture to assess yield and environmental performance. Agronomy, 13 (7), 1868. https://doi.org/10.3390/agronomy13071868

Schuster, J., Mittermayer, M., Maidl, F.-X., Nätscher, L., & Hülsbergen, K.-J. (2022). Spatial variability of soil properties, nitrogen balance and nitrate leaching using digital methods on heterogeneous arable fields in southern Germany. Precision Agriculture, 24 (2), 647–676. https://doi.org/10.1007/s11119-022-09967-3

Servadio, P., Bergonzoli, S., & Verotti, M. (2017). Delineation of management zones based on soil mechanical-chemical properties to apply variable rates of inputs throughout a field (VRA). Engineering in Agriculture, Environment and Food, 10 (1), 20–30. https://doi.org/10.1016/j.eaef.2016.07.001

Shaheb, M. R., Venkatesh, R., & Shearer, S. A. (2021). A review on the effect of soil compaction and its management for sustainable crop production. Journal of Biosystems Engineering, 46 (4), 417–439. https://doi.org/10.1007/s42853-021-00117-7

Sjulgård, H., Keller, T., Garland, G., & Colombi, T. (2023). Relationships between weather and yield anomalies vary with crop type and latitude in Sweden. Agricultural Systems, 211 , 103757. https://doi.org/10.1016/j.agsy.2023.103757

Spicker, A. B. (2017). Entwicklung von Verfahren der teilflächenspezifischen Stickstoffdüngung zu Wintergerste (Hordeum vulgare L.) und Winterraps (Brassica napus L.) auf Grundlage reflexionsoptischer Messungen (Development of sensor-based nitrogen fertilization systems for oilseed rape (Brassica napus L.) and winter barley (Hordeum vulgare L.)). [Dissertation, Technische Universität München]. Freising-Weihenstephan.

Stettmer, M., Maidl, F.-X., Schwarzensteiner, J., Hülsbergen, K.-J., & Bernhardt, H. (2022a). Analysis of nitrogen uptake in winter wheat using sensor and satellite data for site-specific fertilization. Agronomy, 12 (6), 1455. https://doi.org/10.3390/agronomy12061455

Stettmer, M., Mittermayer, M., Maidl, F.-X., Schwarzensteiner, J., Hülsbergen, K.-J., & Bernhardt, H. (2022b). Three methods of site-specific yield mapping as a data source for the delineation of management zones in winter wheat. Agriculture, 12 (8), 1128. https://doi.org/10.3390/agriculture12081128

Sticksel, E., Huber, G., Liebler, J., Schächtl, J., & Maidl, F. X. (2004). The effect of diurnal variations of canopy reflectance on the assessment of biomass formation in wheat. In D. J. Mulla (Ed.), Proceedings of the 7th International Conference on Precision Agriculture and Other Precision Resources Management (pp. 509–520). Hyatt Regency.

Sun, J., Li, W., Li, C., Chang, W., Zhang, S., Zeng, Y., Zeng, C., & Peng, M. (2020). Effect of different rates of nitrogen fertilization on crop yield, soil properties and leaf physiological attributes in banana under subtropical regions of China. Frontiers in Plant Science, 11 , 613760. https://doi.org/10.3389/fpls.2020.613760

Taylor, J. C., Wood, G. A., Earl, R., & Godwin, R. J. (2003). Soil factors and their influence on within-field crop variability, part II: Spatial analysis and determination of management zones. Biosystems Engineering, 84 (4), 441–453. https://doi.org/10.1016/s1537-5110(03)00005-9

Thompson, S. K. (2002). On sampling and experiments. Environmetrics, 13 (5–6), 429–436. https://doi.org/10.1002/env.532

van Klompenburg, T., Kassahun, A., & Catal, C. (2020). Crop yield prediction using machine learning: A systematic literature review. Computers and Electronics in Agriculture, 177 , 105709. https://doi.org/10.1016/j.compag.2020.105709

VDLUFA. (2012). Methodenbuch I Verband deutscher landwirtschaftlicher Untersuchungs- und Forschungsanstalten (VDLUFA); Methode A 6.2.1.1 Bestimmung von Phosphor und Kalium im Calcium-Acetat-Lactat-Auszug [Method A 6.2.1.1: Determination of phosphorus and potassium in calcium acetate lactate extract]. In VDLUFA-Methodenbuch (Ed.), Handbuch der Landwirtschaftlichen Versuchs- und Untersuchungsmethodik: Direkte Bestimmung von organischem Kohlenstoff durch Verbrennung bei 550 °C und Gasanalyse. VDLUFA-Verlag.

VDLUFA. (2016). Methodenbuch I Verband deutscher landwirtschaftlicher Untersuchungs- und Forschungsanstalten (VDLUFA); Methode A 5.1.1 Bestimmung des pH-Wertes [Method A 5.1.1: Determination of the pH value]. In VDLUFA-Methodenbuch (Ed.), Handbuch der Landwirtschaftlichen Versuchs- und Untersuchungsmethodik (VDLUFA-Methodenbuch). VDLUFA-Verlag.

Wiesmeier, M., Urbanski, L., Hobley, E., Lang, B., von Lützow, M., Marin-Spiotta, E., van Wesemael, B., Rabot, E., Ließ, M., Garcia-Franco, N., Wollschläger, U., Vogel, H.-J., & Kögel-Knabner, I. (2019). Soil organic carbon storage as a key function of soils - A review of drivers and indicators at various scales. Geoderma, 333 , 149–162. https://doi.org/10.1016/j.geoderma.2018.07.026

Wintersteiger, A. G. (2023). Classic ST . Retrieved 18 August, 2023, from https://www.wintersteiger.com/us/Plant-Breeding-and-Research/Products/Product-range/Stationary-thresher/39-Classic-ST

Zadoks, J. C., Chang, T. T., & Konzak, C. F. (1974). A decimal code for the growth stages of cereals. Weed Research, 14 (6), 415–421. https://doi.org/10.1111/j.1365-3180.1974.tb01084.x

Funding

Open Access funding enabled and organized by Projekt DEAL. This study was funded by the Bavarian State Ministry of Food, Agriculture and Forestry (Bayerisches Staatsministerium für Ernährung, Landwirtschaft und Forsten).

Author information

Authors and Affiliations

Chair of Organic Agriculture and Agronomy, Technische Universität München, Liesel-Beckmann-Straße 2, 85354 Freising, Germany

Ludwig Hagn, Johannes Schuster, Martin Mittermayer & Kurt-Jürgen Hülsbergen

Contributions

Conceptualization, L.H. and K.-J.H.; methodology, L.H., J.S., M.M., and K.-J.H.; validation, L.H. and J.S.; formal analysis, L.H., M.M. and J.S.; investigation, L.H. and J.S.; data curation, M.M., J.S. and L.H.; writing—original draft preparation, L.H.; writing—review and editing, L.H., M.M., K.-J.H. and J.S.; supervision, K.-J.H.; project administration, K.-J.H.; funding acquisition, K.-J.H. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Ludwig Hagn .

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 599 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Hagn, L., Schuster, J., Mittermayer, M. et al. A new method for satellite-based remote sensing analysis of plant-specific biomass yield patterns for precision farming applications. Precision Agric (2024). https://doi.org/10.1007/s11119-024-10144-x

Accepted : 07 April 2024

Published : 28 April 2024

DOI : https://doi.org/10.1007/s11119-024-10144-x

Keywords

  • Satellite data
  • Remote sensing
  • Multispectral sensor
  • Model validation
  • Yield zones
  • Yield potential

Research – Types, Methods and Examples

What is Research

Definition:

Research refers to the process of investigating a particular topic or question in order to discover new information, develop new insights, or confirm or refute existing knowledge. It involves a systematic and rigorous approach to collecting, analyzing, and interpreting data, and requires careful planning and attention to detail.

History of Research

The history of research can be traced back to ancient times when early humans observed and experimented with the natural world around them. Over time, research evolved and became more systematic as people sought to better understand the world and solve problems.

In ancient civilizations such as those in Greece, Egypt, and China, scholars pursued knowledge through observation, experimentation, and the development of theories. They explored various fields, including medicine, astronomy, and mathematics.

During the Middle Ages, research was often conducted by religious scholars who sought to reconcile scientific discoveries with their faith. The Renaissance brought about a renewed interest in science and the scientific method, and the Enlightenment period marked a major shift towards empirical observation and experimentation as the primary means of acquiring knowledge.

The 19th and 20th centuries saw significant advancements in research, with the development of new scientific disciplines and fields such as psychology, sociology, and computer science. Advances in technology and communication also greatly facilitated research efforts.

Today, research is conducted in a wide range of fields and is a critical component of many industries, including healthcare, technology, and academia. The process of research continues to evolve as new methods and technologies emerge, but the fundamental principles of observation, experimentation, and hypothesis testing remain at its core.

Types of Research

Types of Research are as follows:

  • Applied Research : This type of research aims to solve practical problems or answer specific questions, often in a real-world context.
  • Basic Research : This type of research aims to increase our understanding of a phenomenon or process, often without immediate practical applications.
  • Experimental Research : This type of research involves manipulating one or more variables to determine their effects on another variable, while controlling all other variables.
  • Descriptive Research : This type of research aims to describe and measure phenomena or characteristics, without attempting to manipulate or control any variables.
  • Correlational Research: This type of research examines the relationships between two or more variables, without manipulating any variables.
  • Qualitative Research : This type of research focuses on exploring and understanding the meaning and experience of individuals or groups, often through methods such as interviews, focus groups, and observation.
  • Quantitative Research : This type of research uses numerical data and statistical analysis to draw conclusions about phenomena or populations.
  • Action Research: This type of research is often used in education, healthcare, and other fields, and involves collaborating with practitioners or participants to identify and solve problems in real-world settings.
  • Mixed Methods Research : This type of research combines both quantitative and qualitative research methods to gain a more comprehensive understanding of a phenomenon or problem.
  • Case Study Research: This type of research involves in-depth examination of a specific individual, group, or situation, often using multiple data sources.
  • Longitudinal Research: This type of research follows a group of individuals over an extended period of time, often to study changes in behavior, attitudes, or health outcomes.
  • Cross-Sectional Research : This type of research examines a population at a single point in time, often to study differences or similarities among individuals or groups.
  • Survey Research: This type of research uses questionnaires or interviews to gather information from a sample of individuals about their attitudes, beliefs, behaviors, or experiences.
  • Ethnographic Research : This type of research involves immersion in a cultural group or community to understand their way of life, beliefs, values, and practices.
  • Historical Research : This type of research investigates events or phenomena from the past using primary sources, such as archival records, newspapers, and diaries.
  • Content Analysis Research : This type of research involves analyzing written, spoken, or visual material to identify patterns, themes, or messages.
  • Participatory Research : This type of research involves collaboration between researchers and participants throughout the research process, often to promote empowerment, social justice, or community development.
  • Comparative Research: This type of research compares two or more groups or phenomena to identify similarities and differences, often across different countries or cultures.
  • Exploratory Research : This type of research is used to gain a preliminary understanding of a topic or phenomenon, often in the absence of prior research or theories.
  • Explanatory Research: This type of research aims to identify the causes or reasons behind a particular phenomenon, often through the testing of theories or hypotheses.
  • Evaluative Research: This type of research assesses the effectiveness or impact of an intervention, program, or policy, often through the use of outcome measures.
  • Simulation Research : This type of research involves creating a model or simulation of a phenomenon or process, often to predict outcomes or test theories.

Data Collection Methods

  • Surveys : Surveys are used to collect data from a sample of individuals using questionnaires or interviews. Surveys can be conducted face-to-face, by phone, mail, email, or online.
  • Experiments : Experiments involve manipulating one or more variables to measure their effects on another variable, while controlling for other factors. Experiments can be conducted in a laboratory or in a natural setting.
  • Case studies : Case studies involve in-depth analysis of a single case, such as an individual, group, organization, or event. Case studies can use a variety of data collection methods, including interviews, observation, and document analysis.
  • Observational research : Observational research involves observing and recording the behavior of individuals or groups in a natural setting. Observational research can be conducted covertly or overtly.
  • Content analysis : Content analysis involves analyzing written, spoken, or visual material to identify patterns, themes, or messages. Content analysis can be used to study media, social media, or other forms of communication.
  • Ethnography : Ethnography involves immersion in a cultural group or community to understand their way of life, beliefs, values, and practices. Ethnographic research can use a range of data collection methods, including observation, interviews, and document analysis.
  • Secondary data analysis : Secondary data analysis involves using existing data from sources such as government agencies, research institutions, or commercial organizations. Secondary data can be used to answer research questions, without collecting new data.
  • Focus groups: Focus groups involve gathering a small group of people together to discuss a topic or issue. The discussions are usually guided by a moderator who asks questions and encourages discussion.
  • Interviews : Interviews involve one-on-one conversations between a researcher and a participant. Interviews can be structured, semi-structured, or unstructured, and can be conducted in person, by phone, or online.
  • Document analysis : Document analysis involves collecting and analyzing written documents, such as reports, memos, and emails. Document analysis can be used to study organizational communication, policy documents, and other forms of written material.

Data Analysis Methods

Data Analysis Methods in Research are as follows:

  • Descriptive statistics : Descriptive statistics involve summarizing and describing the characteristics of a dataset, such as mean, median, mode, standard deviation, and frequency distributions (see the code sketch after this list, which illustrates both descriptive and inferential statistics).
  • Inferential statistics: Inferential statistics involve making inferences or predictions about a population based on a sample of data, using methods such as hypothesis testing, confidence intervals, and regression analysis.
  • Qualitative analysis: Qualitative analysis involves analyzing non-numerical data, such as text, images, or audio, to identify patterns, themes, or meanings. Qualitative analysis can be used to study subjective experiences, social norms, and cultural practices.
  • Content analysis: Content analysis involves analyzing written, spoken, or visual material to identify patterns, themes, or messages. Content analysis can be used to study media, social media, or other forms of communication.
  • Grounded theory: Grounded theory involves developing a theory or model based on empirical data, using methods such as constant comparison, memo writing, and theoretical sampling.
  • Discourse analysis : Discourse analysis involves analyzing language use, including the structure, function, and meaning of words and phrases, to understand how language reflects and shapes social relationships and power dynamics.
  • Network analysis: Network analysis involves analyzing the structure and dynamics of social networks, including the relationships between individuals and groups, to understand social processes and outcomes.
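
As a small, self-contained illustration of the first two methods in the list above (not tied to any particular study), the following Python sketch computes basic descriptive statistics for a hypothetical sample of exam scores and then makes a simple inference about the population mean via an approximate 95% confidence interval:

```python
from math import sqrt
from statistics import mean, median, mode, stdev

# Hypothetical sample: exam scores from a survey of 12 students
scores = [72, 85, 90, 66, 85, 78, 92, 70, 85, 88, 74, 81]

# Descriptive statistics: summarise the sample itself
print("mean   :", round(mean(scores), 1))
print("median :", median(scores))
print("mode   :", mode(scores))
print("std dev:", round(stdev(scores), 1))

# Inferential statistics: estimate the population mean with an approximate 95 %
# confidence interval (normal approximation with z = 1.96; for a sample of 12,
# a t-quantile would give a slightly wider interval)
n = len(scores)
se = stdev(scores) / sqrt(n)                 # standard error of the mean
ci = (mean(scores) - 1.96 * se, mean(scores) + 1.96 * se)
print("95 % CI for the population mean:", tuple(round(x, 1) for x in ci))
```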

Research Methodology

Research methodology refers to the overall approach and strategy used to conduct a research study. It involves the systematic planning, design, and execution of research to answer specific research questions or test hypotheses. The main components of research methodology include:

  • Research design : Research design refers to the overall plan and structure of the study, including the type of study (e.g., observational, experimental), the sampling strategy, and the data collection and analysis methods.
  • Sampling strategy: Sampling strategy refers to the method used to select a representative sample of participants or units from the population of interest. The choice of sampling strategy will depend on the research question and the nature of the population being studied.
  • Data collection methods : Data collection methods refer to the techniques used to collect data from study participants or sources, such as surveys, interviews, observations, or secondary data sources.
  • Data analysis methods: Data analysis methods refer to the techniques used to analyze and interpret the data collected in the study, such as descriptive statistics, inferential statistics, qualitative analysis, or content analysis.
  • Ethical considerations: Ethical considerations refer to the principles and guidelines that govern the treatment of human participants or the use of sensitive data in the research study.
  • Validity and reliability : Validity and reliability refer to the extent to which the study measures what it is intended to measure and the degree to which the study produces consistent and accurate results.

Applications of Research

Research has a wide range of applications across various fields and industries. Some of the key applications of research include:

  • Advancing scientific knowledge : Research plays a critical role in advancing our understanding of the world around us. Through research, scientists are able to discover new knowledge, uncover patterns and relationships, and develop new theories and models.
  • Improving healthcare: Research is instrumental in advancing medical knowledge and developing new treatments and therapies. Clinical trials and studies help to identify the effectiveness and safety of new drugs and medical devices, while basic research helps to uncover the underlying causes of diseases and conditions.
  • Enhancing education: Research helps to improve the quality of education by identifying effective teaching methods, developing new educational tools and technologies, and assessing the impact of various educational interventions.
  • Driving innovation: Research is a key driver of innovation, helping to develop new products, services, and technologies. By conducting research, businesses and organizations can identify new market opportunities, gain a competitive advantage, and improve their operations.
  • Informing public policy : Research plays an important role in informing public policy decisions. Policy makers rely on research to develop evidence-based policies that address societal challenges, such as healthcare, education, and environmental issues.
  • Understanding human behavior : Research helps us to better understand human behavior, including social, cognitive, and emotional processes. This understanding can be applied in a variety of settings, such as marketing, organizational management, and public policy.

Importance of Research

Research plays a crucial role in advancing human knowledge and understanding in various fields of study. It is the foundation upon which new discoveries, innovations, and technologies are built. Here are some of the key reasons why research is essential:

  • Advancing knowledge: Research helps to expand our understanding of the world around us, including the natural world, social structures, and human behavior.
  • Problem-solving: Research can help to identify problems, develop solutions, and assess the effectiveness of interventions in various fields, including medicine, engineering, and social sciences.
  • Innovation : Research is the driving force behind the development of new technologies, products, and processes. It helps to identify new possibilities and opportunities for improvement.
  • Evidence-based decision making: Research provides the evidence needed to make informed decisions in various fields, including policy making, business, and healthcare.
  • Education and training : Research provides the foundation for education and training in various fields, helping to prepare individuals for careers and advancing their knowledge.
  • Economic growth: Research can drive economic growth by facilitating the development of new technologies and innovations, creating new markets and job opportunities.

When to use Research

Research is typically used when seeking to answer questions or solve problems that require a systematic approach to gathering and analyzing information. Here are some examples of when research may be appropriate:

  • To explore a new area of knowledge : Research can be used to investigate a new area of knowledge and gain a better understanding of a topic.
  • To identify problems and find solutions: Research can be used to identify problems and develop solutions to address them.
  • To evaluate the effectiveness of programs or interventions : Research can be used to evaluate the effectiveness of programs or interventions in various fields, such as healthcare, education, and social services.
  • To inform policy decisions: Research can be used to provide evidence to inform policy decisions in areas such as economics, politics, and environmental issues.
  • To develop new products or technologies : Research can be used to develop new products or technologies and improve existing ones.
  • To understand human behavior : Research can be used to better understand human behavior and social structures, such as in psychology, sociology, and anthropology.

Characteristics of Research

The following are some of the characteristics of research:

  • Purpose : Research is conducted to address a specific problem or question and to generate new knowledge or insights.
  • Systematic : Research is conducted in a systematic and organized manner, following a set of procedures and guidelines.
  • Empirical : Research is based on evidence and data, rather than personal opinion or intuition.
  • Objective: Research is conducted with an objective and impartial perspective, avoiding biases and personal beliefs.
  • Rigorous : Research involves a rigorous and critical examination of the evidence and data, using reliable and valid methods of data collection and analysis.
  • Logical : Research is based on logical and rational thinking, following a well-defined and logical structure.
  • Generalizable : Research findings are often generalized to broader populations or contexts, based on a representative sample of the population.
  • Replicable : Research is conducted in a way that allows others to replicate the study and obtain similar results.
  • Ethical : Research is conducted in an ethical manner, following established ethical guidelines and principles, to ensure the protection of participants’ rights and well-being.
  • Cumulative : Research builds on previous studies and contributes to the overall body of knowledge in a particular field.

Advantages of Research

Research has several advantages, including:

  • Generates new knowledge: Research is conducted to generate new knowledge and understanding of a particular topic or phenomenon, which can be used to inform policy, practice, and decision-making.
  • Provides evidence-based solutions : Research provides evidence-based solutions to problems and issues, which can be used to develop effective interventions and strategies.
  • Improves quality : Research can improve the quality of products, services, and programs by identifying areas for improvement and developing solutions to address them.
  • Enhances credibility : Research enhances the credibility of an organization or individual by providing evidence to support claims and assertions.
  • Enables innovation: Research can lead to innovation by identifying new ideas, approaches, and technologies.
  • Informs decision-making : Research provides information that can inform decision-making, helping individuals and organizations make more informed and effective choices.
  • Facilitates progress: Research can facilitate progress by identifying challenges and opportunities and developing solutions to address them.
  • Enhances understanding: Research can enhance understanding of complex issues and phenomena, helping individuals and organizations navigate challenges and opportunities more effectively.
  • Promotes accountability : Research promotes accountability by providing a basis for evaluating the effectiveness of policies, programs, and interventions.
  • Fosters collaboration: Research can foster collaboration by bringing together individuals and organizations with diverse perspectives and expertise to address complex issues and problems.

Limitations of Research

Some limitations of research are as follows:

  • Cost : Research can be expensive, particularly when large-scale studies are required. This can limit the number of studies that can be conducted and the amount of data that can be collected.
  • Time : Research can be time-consuming, particularly when longitudinal studies are required. This can limit the speed at which research findings can be generated and disseminated.
  • Sample size: The size of the sample used in research can limit the generalizability of the findings to larger populations.
  • Bias : Research can be affected by bias, both in the design and implementation of the study, as well as in the analysis and interpretation of the data.
  • Ethics : Research can present ethical challenges, particularly when human or animal subjects are involved. This can limit the types of research that can be conducted and the methods that can be used.
  • Data quality: The quality of the data collected in research can be affected by a range of factors, including the reliability and validity of the measures used, as well as the accuracy of the data entry and analysis.
  • Subjectivity : Research can be subjective, particularly when qualitative methods are used. This can limit the objectivity and reliability of the findings.
  • Accessibility : Research findings may not be accessible to all stakeholders, particularly those who are not part of the academic or research community.
  • Interpretation : Research findings can be open to interpretation, particularly when the data is complex or contradictory. This can limit the ability of researchers to draw firm conclusions.
  • Unforeseen events : Unexpected events, such as changes in the environment or the emergence of new technologies, can limit the relevance and applicability of research findings.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


New methodology to measure microplastics in EU’s drinking water

A JRC-developed methodology will support the Drinking Water Directive in the important domain of monitoring microplastics in tap water across the EU.

Every day, our body needs about one and a half litres of water to function properly.

But what if the water we drink contains very small particles of plastics, so-called ‘microplastics’?

While it is now generally accepted that microplastics may be present in our drinking water, links to human health effects remain uncertain – due in large part to our poor understanding of their presence and distribution in our water supplies. Fortunately, in cases such as these, European drinking water legislation provides us with legal tools to introduce Union-wide measuring and monitoring of new, emerging pollutants.

Monitoring the presence of microplastics in our drinking water is a crucial step forward for protecting human health and our environment.

However, measuring microplastics remains challenging because they vary vastly in size, shape, composition and chemical identity, complicating efforts to assess their presence accurately.

Harmonised approach to measuring microplastics 

To better harmonise the measurement process, the JRC has designed a methodology which represents a uniform approach to sampling, analysis and data reporting.

This will contribute to the generation of more consistent and inter-comparable data, which is a first and important step towards the eventual establishment of exposure levels in European drinking water.

To define the methodology, JRC scientists first reviewed the scientific knowledge base on the nature, distribution and quantities of microplastics. The findings are published in the report 'Analytical methods to measure microplastics in drinking water'.

It showed that the levels of microplastics reported in drinking water are generally lower than a few tens of particles per litre, with more recent studies undertaken in Europe showing lower, or much lower levels (0.0000-0.6 particles per litre).

Microplastics in drinking water in Europe and beyond

[Graph: microplastics concentrations reported in drinking water across countries]

Such information proved to be a key indicator for the analytical methods to be used and the need to sample large water volumes (>50 litres).

The most frequently found polymers in drinking water appeared to be polyethylene, polyethylene terephthalate, polyester other than PET, and polypropylene. 

The “how to” in the JRC methodology

The JRC methodology initially defines which materials have to be addressed, the relevant size ranges, shapes and the unit of measurement.

For the sampling, at least 1000 litres are required to quantify microplastics.

Samples are passed through filters of two mesh sizes (100 micron and 20 micron) to capture the solids in two size ranges.

These samples are then analysed via one of two possible methods – either infrared (IR) microscopy or Raman microscopy. These techniques allow identification of the polymer type, its size, and whether it is a particle or a fibre – all information that may in the future be relevant to understanding the nature and extent of our exposure to microplastics.

Essential data from the analysis are recorded for comprehensive reporting.
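To make the normalisation step concrete, here is a minimal sketch of how the recorded attributes could be tabulated and converted to particles per litre. The class, field names and example numbers are illustrative assumptions, not the official JRC reporting format; only the minimum 1000 litre sampling volume and the recorded attributes (polymer type, size, shape, filter fraction) come from the methodology described above.

```python
# Minimal sketch, not the official JRC reporting format: the class, field names,
# and example numbers are illustrative assumptions. Only the >= 1000 litre sampling
# volume and the recorded attributes come from the methodology described above.
from dataclasses import dataclass
from typing import List


@dataclass
class MicroplasticRecord:
    polymer: str          # e.g. "polyethylene", identified by IR or Raman microscopy
    size_um: float        # particle size in micrometres
    shape: str            # "particle" or "fibre"
    filter_mesh_um: int   # which filter caught it: 100 or 20 micron


def particles_per_litre(records: List[MicroplasticRecord], sampled_volume_l: float) -> float:
    """Normalise the total particle count to the sampled water volume."""
    if sampled_volume_l < 1000:
        raise ValueError("The methodology requires sampling at least 1000 litres.")
    return len(records) / sampled_volume_l


# Example: 120 identified particles in a 1000 litre sample -> 0.12 particles per litre
sample = [MicroplasticRecord("polyethylene", 45.0, "particle", 20)] * 120
print(f"{particles_per_litre(sample, 1000):.2f} particles per litre")
```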

Policy context

The European Commission is driving the development of legislation needed to tackle the potential threat of microplastics to people’s health and the environment.

Among the initiatives, the recast Drinking Water Directive, the EU's main law on drinking water, covers both the access to and the quality of water intended for human consumption to protect human health.

Under the recast Drinking Water Directive, the Commission is empowered to establish a methodology to measure microplastics in drinking water. The methodology developed by the JRC is embedded in the Commission Delegated Decision adopted on 11 March 2024.      

The Commission has established a first watch list addressing substances and compounds of concern for water intended for human consumption. The watch list indicates a guidance value for each substance and compound and where necessary a possible method of analysis.

The Commission has established the methodology with a view to include microplastics on this watch list. Member States will then have to put in place monitoring requirements, using the JRC methodology set out in the Commission Delegated Decision.

Related links

Report: Analytical methods to measure microplastics in drinking water

Towards better water quality, quantity management and more sustainable use of seas


McAfee Labs

Redline Stealer: A Novel Approach

Apr 17, 2024

Authored by Mohansundaram M and Neil Tyagi

Infection Chain

  • GitHub was abused to host the malware file under Microsoft’s official account in the vcpkg repository: https[:]//github[.]com/microsoft/vcpkg/files/14125503/Cheat.Lab.2.7.2.zip

  • McAfee Web Advisor blocks access to this malicious download
  • Cheat.Lab.2.7.2.zip is a zip file with hash 5e37b3289054d5e774c02a6ec4915a60156d715f3a02aaceb7256cc3ebdc6610 (a quick verification sketch follows this list).
  • The zip file contains an MSI installer.
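As referenced above, here is a hedged sketch for checking a downloaded archive against the quoted hash. It assumes the 64-character value is a SHA-256 digest (the original post only says "hash"), and the file path is just an example.

```python
# Hedged sketch: check a downloaded archive against the hash quoted above.
# Assumption: the 64-character value is a SHA-256 digest (the post only says "hash").
import hashlib
import sys

IOC_SHA256 = "5e37b3289054d5e774c02a6ec4915a60156d715f3a02aaceb7256cc3ebdc6610"


def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    path = sys.argv[1]          # e.g. a quarantined copy of Cheat.Lab.2.7.2.zip
    if sha256_of(path) == IOC_SHA256:
        print(f"{path}: matches the RedLine-related IOC hash")
    else:
        print(f"{path}: does not match the IOC hash")
```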

  • Compiler.exe and lua51.dll are binaries from the Lua project; however, they have been slightly modified by the threat actor to serve their purpose. They are used here together with readme.txt (which contains the Lua bytecode) to compile and execute the payload at runtime.

  • The magic number 1B 4C 4A 02 (ESC ‘L’ ‘J’ followed by a version byte) corresponds to LuaJIT bytecode; LuaJIT implements the Lua 5.1 language. A minimal header check is sketched after this list.
  • readme.txt contains this Lua bytecode. The approach has the advantage of obfuscating malicious strings and avoiding easily recognizable scripts like wscript, JScript, or PowerShell, thereby enhancing stealth and evasion for the threat actor.
  • Upon execution, the MSI installer displays a user interface.
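As noted above, a minimal header check is sketched here, assuming plain Python and a local copy of the suspect file; the target file name is only a placeholder.

```python
# Minimal sketch: flag files that start with the LuaJIT bytecode header (1B 4C 4A ...).
# The target file name is only a placeholder.
import sys

LUAJIT_MAGIC = b"\x1bLJ"   # 0x1B 'L' 'J'; the next byte is the bytecode dump version


def looks_like_luajit_bytecode(path: str) -> bool:
    with open(path, "rb") as f:
        header = f.read(4)
    return header[:3] == LUAJIT_MAGIC


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "readme.txt"
    if looks_like_luajit_bytecode(target):
        print(f"{target}: LuaJIT bytecode header found")
    else:
        print(f"{target}: no LuaJIT bytecode header")
```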

  • During installation, a text message is displayed urging the user to spread the malware by installing it onto a friend’s computer to get the full application version.

  • During installation, we can observe three files being written to disk under C:\Program Files\Cheat Lab Inc\Cheat Lab\.

  • The three files are placed inside this newly created path.

  • During installation, msiexec.exe creates a scheduled task to execute compiler.exe with readme.txt as an argument.
  • Apart from the above persistence technique, this malware uses a second, fallback technique to ensure execution.

  • Note that compiler.exe has been renamed to NzUw.exe.
  • The malware then drops a file, ErrorHandler.cmd, at C:\Windows\Setup\Scripts\
  • ErrorHandler.cmd executes compiler.exe, under its new name NzUw.exe, with the Lua bytecode as a parameter.

  • Executing ErrorHandler.cmd relies on a LOLBin in the system32 folder; to trigger it, the malware creates another scheduled task.

  • A new task is created with Windows Setup, which will launch C:\Windows\system32\oobe\Setup.exe without any argument.

Source: Add a Custom Script to Windows Setup | Microsoft Learn

  • c:\WINDOWS\system32\oobe\Setup.exe expects an argument. When none is provided, it throws an error, which leads to the execution of ErrorHandler.cmd; ErrorHandler.cmd runs compiler.exe (as NzUw.exe), which loads the malicious Lua code.

We can confirm that c:\WINDOWS\system32\oobe\Setup.exe launches cmd.exe with the ErrorHandler.cmd script as an argument, which in turn runs NzUw.exe (compiler.exe).
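For defenders, here is a hedged hunting sketch for this fallback persistence chain. It assumes a Windows host with Python installed, uses only the built-in schtasks utility, and takes its indicators (ErrorHandler.cmd, oobe\Setup.exe, compiler.exe, NzUw.exe) from the analysis above; it only reports findings and does not remediate anything.

```python
# Hedged hunting sketch for the fallback persistence chain described above.
# Assumptions: a Windows host with Python installed; indicators come from this analysis.
# It only reports findings; it does not remove or change anything.
import os
import subprocess

ERROR_HANDLER = r"C:\Windows\Setup\Scripts\ErrorHandler.cmd"
SUSPICIOUS_MARKERS = ("oobe\\Setup.exe", "compiler.exe", "NzUw.exe")


def check_error_handler() -> None:
    if os.path.exists(ERROR_HANDLER):
        print(f"[!] {ERROR_HANDLER} exists -- review its contents.")
    else:
        print("[+] No ErrorHandler.cmd fallback script present.")


def list_suspicious_tasks() -> None:
    # schtasks is a built-in Windows utility; we simply scan its verbose output.
    result = subprocess.run(
        ["schtasks", "/query", "/fo", "LIST", "/v"],
        capture_output=True, text=True,
    )
    for block in result.stdout.split("\n\n"):
        if any(marker in block for marker in SUSPICIOUS_MARKERS):
            print("[!] Scheduled task referencing a suspicious binary:\n", block)


if __name__ == "__main__":
    check_error_handler()
    list_suspicious_tasks()
```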

  • We can see that JSON was written to disk.

C2 Communication and stealer activity

  • A base64 encoded string is returned.

  • An HTTP PUT request is sent to the threat actor’s server with the URL /loader/screen.

  • Further inspection of the packet shows that it carries a bitmap image file.
  • The name of the file is Screen.bmp.
  • Also note the unique user agent used in this PUT request: Winter. A simple log filter for this traffic pattern is sketched below.
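A minimal sketch of such a filter follows, assuming a plain-text proxy or web log in which the HTTP method, URL path, and user agent appear on the same line; the log format and file path are assumptions, while the /loader/screen path and the Winter user agent come from the analysis above.

```python
# Minimal sketch of a log filter for the C2 traffic pattern described above.
# Assumption: a plain-text proxy/web log with method, path, and user agent on one line.
import sys


def suspicious(line: str) -> bool:
    return "PUT" in line and "/loader/screen" in line and "Winter" in line


if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if suspicious(line):
                print(f"line {lineno}: possible RedLine exfiltration -> {line.strip()}")
```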

  • After dumping the bitmap image from Wireshark to disk and opening it as a .bmp (bitmap image) file, we can view its contents.
  • It is a screenshot that was sent to the threat actor’s server.

Analysis of the bytecode file

  • It is challenging to obtain a true decompilation of the bytecode file.
  • Several open-source decompilers were tried, each producing a slightly different Lua script.
  • The recovered script did not compile and threw errors.

  • The script file was sanitized based on those errors so that it could be compiled.
  • Debugging process

  • One table (var_0_19) is populated by passing data values to two functions.
  • In the console output, we can see base64-encoded values being stored in var_0_19.
  • These base64 strings decode to further encoded data rather than plain strings.

  • All data in var_0_19 is assigned to var_0_26.

  • The same technique populates a second table (var_0_20).

  • A decryption loop iterates over var_0_26 element by element and decrypts each entry.
  • This loop is very long and contains many junk lines.

  • We can see decrypted strings such as “Tamper Detected!” in var_0_26.

Loading LuaJIT bytecode:

Before loading the LuaJIT bytecode, a new Lua state is created. Each Lua state maintains its own global environment, stack, and set of loaded libraries, providing isolation between different instances of Lua code.
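To illustrate that isolation, here is a small sketch using the third-party lupa package, which embeds LuaJIT behind a Python API. The package choice and variable names are assumptions for illustration only; the malware itself embeds LuaJIT natively via the modified compiler.exe and lua51.dll.

```python
# Small illustration of Lua state isolation using the third-party lupa package,
# which embeds LuaJIT behind a Python API (illustration only; the malware itself
# embeds LuaJIT natively).
from lupa import LuaRuntime

state_a = LuaRuntime()   # each LuaRuntime wraps its own lua_State
state_b = LuaRuntime()

state_a.execute("secret = 'only visible in state A'")
print(state_a.eval("secret"))   # -> only visible in state A
print(state_b.eval("secret"))   # -> None: state B has a separate global environment
```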

  • In this blog, we saw the various techniques threat actors use to infiltrate user systems and exfiltrate their data.
  • Microsoft has since removed these files from the repositories.

Indicators of Compromise

[Table: indicators of compromise]


