
Data Analysis: Types, Methods & Techniques (a Complete List)

(Updated Version)

While the term sounds intimidating, “data analysis” is nothing more than making sense of information in a table. It consists of filtering, sorting, grouping, and manipulating data tables with basic algebra and statistics.

In fact, you don’t need experience to understand the basics. You have already worked with data extensively in your life, and “analysis” is nothing more than a fancy word for good sense and basic logic.

Over time, people have intuitively categorized the best logical practices for treating data. These categories are what we call today types, methods, and techniques.

This article provides a comprehensive list of types, methods, and techniques, and explains the difference between them.

For a practical intro to data analysis (including types, methods, & techniques), check out our Intro to Data Analysis eBook for free.

Descriptive, Diagnostic, Predictive, & Prescriptive Analysis

If you Google “types of data analysis,” the first few results will explore descriptive, diagnostic, predictive, and prescriptive analysis. Why? Because these names are easy to understand and are used a lot in “the real world.”

Descriptive analysis is an informational method, diagnostic analysis explains “why” a phenomenon occurs, predictive analysis seeks to forecast the result of an action, and prescriptive analysis identifies solutions to a specific problem.

That said, these are only four branches of a larger analytical tree.

Good data analysts know how to position these four types within other analytical methods and tactics, allowing them to leverage the strengths and weaknesses of each to unearth the most valuable insights.

Let’s explore the full analytical tree to understand how to appropriately assess and apply these four traditional types.

Tree diagram of Data Analysis Types, Methods, and Techniques

Here’s a picture to visualize the structure and hierarchy of data analysis types, methods, and techniques.

If it’s too small you can view the picture in a new tab. Open it to follow along!


Note: basic descriptive statistics such as mean, median, and mode, as well as standard deviation, are not shown because most people are already familiar with them. In the diagram, they would fall under the “descriptive” analysis type.

Tree Diagram Explained

The highest-level classification of data analysis is quantitative vs. qualitative. Quantitative implies numbers, while qualitative implies information other than numbers.

Quantitative data analysis then splits into mathematical analysis and artificial intelligence (AI) analysis. Mathematical types then branch into descriptive, diagnostic, predictive, and prescriptive.

Methods falling under mathematical analysis include clustering, classification, forecasting, and optimization. Qualitative data analysis methods include content analysis, narrative analysis, discourse analysis, framework analysis, and grounded theory.

Moreover, mathematical techniques include regression, Naïve Bayes, simple exponential smoothing, cohorts, factors, linear discriminants, and more, whereas techniques falling under the AI type include artificial neural networks, decision trees, evolutionary programming, and fuzzy logic. Techniques under qualitative analysis include text analysis, coding, idea pattern analysis, and word frequency.

It’s a lot to remember! Don’t worry, once you understand the relationship and motive behind all these terms, it’ll be like riding a bike.

We’ll move down the list from top to bottom, and I encourage you to open the tree diagram above in a new tab so you can follow along.

But first, let’s just address the elephant in the room: what’s the difference between methods and techniques anyway?

Difference between methods and techniques

Though often used interchangeably, methods and techniques are not the same. By definition, methods are the processes you follow, while techniques are the practical actions you apply within those processes.

For example, consider driving. Methods include staying in your lane, stopping at a red light, and parking in a spot. Techniques include turning the steering wheel, braking, and pushing the gas pedal.

Data sets: observations and fields

It’s important to understand the basic structure of data tables to comprehend the rest of the article. A data set consists of one far-left column containing observations, then a series of columns containing the fields (aka “traits” or “characteristics”) that describe each observation. For example, imagine we want a data table for fruit. It might look like this:
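
```
Fruit    Color    Weight (g)   Taste
Apple    Red      150          Sweet
Banana   Yellow   120          Sweet
Lemon    Yellow   100          Sour
```

(The values here are illustrative: each row is an observation, and each column after the first is a field describing it.)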

Now let’s turn to types, methods, and techniques. Each heading below consists of a description, relative importance, the nature of data it explores, and the motivation for using it.

Quantitative Analysis

  • It accounts for more than 50% of all data analysis and is by far the most widespread and well-known type of data analysis.
  • As you have seen, it holds descriptive, diagnostic, predictive, and prescriptive methods, which in turn hold some of the most important techniques available today, such as clustering and forecasting.
  • It can be broken down into mathematical and AI analysis.
  • Importance: Very high. Quantitative analysis is a must for anyone interested in becoming or improving as a data analyst.
  • Nature of Data: data treated under quantitative analysis is, quite simply, quantitative. It encompasses all numeric data.
  • Motive: to extract insights. (Note: we’re at the top of the tree; this gets more insightful as we move down.)

Qualitative Analysis

  • It accounts for less than 30% of all data analysis and is common in social sciences.
  • It can refer to the simple recognition of qualitative elements, which is not analytic in any way, but most often refers to methods that assign numeric values to non-numeric data for analysis.
  • Because of this, some argue that it’s ultimately a quantitative type.
  • Importance: Medium. In general, knowing qualitative data analysis is not common or even necessary for corporate roles. However, for researchers working in social sciences, its importance is very high.
  • Nature of Data: data treated under qualitative analysis is non-numeric. However, as part of the analysis, analysts turn non-numeric data into numbers, at which point many argue it is no longer qualitative analysis.
  • Motive: to extract insights. (This will be more important as we move down the tree.)

Mathematical Analysis

  • Description: mathematical data analysis is a subtype of quantitative data analysis that designates methods and techniques based on statistics, algebra, and logical reasoning to extract insights. It stands in opposition to artificial intelligence analysis.
  • Importance: Very High. The most widespread methods and techniques fall under mathematical analysis. In fact, it’s so common that many people use “quantitative” and “mathematical” analysis interchangeably.
  • Nature of Data: numeric. By definition, all data under mathematical analysis are numbers.
  • Motive: to extract measurable insights that can be acted upon.

Artificial Intelligence & Machine Learning Analysis

  • Description: artificial intelligence and machine learning analyses designate techniques based on the titular skills. They are not traditionally mathematical, but they are quantitative since they use numbers. Applications of AI & ML analysis techniques are still developing and are not yet mainstream across the field.
  • Importance: Medium. As of today (September 2020), you don’t need to be fluent in AI & ML data analysis to be a great analyst. BUT, if it’s a field that interests you, learn it. Many believe that in 10 years’ time its importance will be very high.
  • Nature of Data: numeric.
  • Motive: to create calculations that build on themselves in order to extract insights without direct input from a human.

Descriptive Analysis

  • Description: descriptive analysis is a subtype of mathematical data analysis that uses methods and techniques to provide information about the size, dispersion, groupings, and behavior of data sets. This may sound complicated, but just think about mean, median, and mode: all three are types of descriptive analysis. They provide information about the data set. We’ll look at specific techniques below (and a quick sketch follows this list).
  • Importance: Very high. Descriptive analysis is among the most commonly used data analyses in both corporations and research today.
  • Nature of Data: the nature of data under descriptive statistics is sets. A set is simply a collection of numbers that behaves in predictable ways. Data reflects real life, and there are patterns everywhere to be found. Descriptive analysis describes those patterns.
  • Motive: the motive behind descriptive analysis is to understand how numbers in a set group together, how far apart they are from each other, and how often they occur. As with most statistical analysis, the more data points there are, the easier it is to describe the set.
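
To make this concrete, here’s a minimal sketch using only Python’s standard library (the values are made up for illustration):

```python
# A minimal descriptive-statistics sketch using Python's standard library.
import statistics

values = [4, 8, 8, 15, 20]  # a hypothetical data set

print(statistics.mean(values))    # 11   -> central tendency
print(statistics.median(values))  # 8    -> middle value
print(statistics.mode(values))    # 8    -> most frequent value
print(statistics.stdev(values))   # ~6.4 -> sample dispersion
```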

Diagnostic Analysis

  • Description: diagnostic analysis answers the question “why did it happen?” It is an advanced type of mathematical data analysis that manipulates multiple techniques, but does not own any single one. Analysts engage in diagnostic analysis when they try to explain why.
  • Importance: Very high. Diagnostics are probably the most important type of data analysis for people who don’t do analysis because they’re valuable to anyone who’s curious. They’re most common in corporations, as managers often only want to know the “why.”
  • Nature of Data : data under diagnostic analysis are data sets. These sets in themselves are not enough under diagnostic analysis. Instead, the analyst must know what’s behind the numbers in order to explain “why.” That’s what makes diagnostics so challenging yet so valuable.
  • Motive: the motive behind diagnostics is to diagnose — to understand why.

Predictive Analysis

  • Description: predictive analysis uses past data to project future data. It’s very often one of the first kinds of analysis new researchers and corporate analysts use because it is intuitive. It is a subtype of the mathematical type of data analysis, and its three notable techniques are regression, moving average, and exponential smoothing.
  • Importance: Very high. Predictive analysis is critical for any data analyst working in a corporate environment. Companies always want to know what the future will hold — especially for their revenue.
  • Nature of Data: Because past and future imply time, predictive data always includes an element of time. Whether it’s minutes, hours, days, months, or years, we call this time series data. In fact, this data is so important that I’ll mention it twice so you don’t forget: predictive analysis uses time series data.
  • Motive: the motive for investigating time series data with predictive analysis is to predict the future in the most analytical way possible.

Prescriptive Analysis

  • Description: prescriptive analysis is a subtype of mathematical analysis that answers the question “what will happen if we do X?” It’s largely underestimated in the data analysis world because it requires diagnostic and descriptive analyses to be done before it even starts. More than simple predictive analysis, prescriptive analysis builds entire data models to show how a simple change could impact the whole (a toy what-if sketch follows this list).
  • Importance: High. Prescriptive analysis is most common under the finance function in many companies. Financial analysts use it to build models of the financial statements that show how results change given alternative inputs.
  • Nature of Data: the nature of data in prescriptive analysis is data sets. These data sets contain patterns that respond differently to various inputs. Data that is useful for prescriptive analysis contains correlations between different variables. It’s through these correlations that we establish patterns and prescribe action on this basis. This analysis cannot be performed on data that exists in a vacuum — it must be viewed against the backdrop of the tangibles behind it.
  • Motive: the motive for prescriptive analysis is to establish, with an acceptable degree of certainty, what results we can expect given a certain action. As you might expect, this necessitates that the analyst or researcher be aware of the world behind the data, not just the data itself.
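
In that spirit, here’s a toy what-if sketch in Python. The model and every number in it are invented for illustration; a real prescriptive model would be far richer:

```python
# A toy what-if sketch: change one input and observe the modeled outcome.
def profit(units, price, unit_cost, fixed_cost):
    """A deliberately simple profit model (hypothetical)."""
    return units * (price - unit_cost) - fixed_cost

baseline = profit(units=1000, price=20, unit_cost=12, fixed_cost=5000)
# "What will happen if we cut the price but sell more units?"
scenario = profit(units=1100, price=19, unit_cost=12, fixed_cost=5000)

print(baseline, scenario)  # 3000 vs 2700 -> the price cut doesn't pay off here
```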

Clustering Method

  • Description: the clustering method groups data points together based on their relative closeness to further explore and treat them based on these groupings. There are two ways to group clusters: intuitively and statistically (e.g., with k-means).
  • Importance: Very high. Though most corporate roles group clusters intuitively based on management criteria, a solid understanding of how to group them mathematically is an excellent descriptive and diagnostic approach to allow for prescriptive analysis thereafter.
  • Nature of Data : the nature of data useful for clustering is sets with 1 or more data fields. While most people are used to looking at only two dimensions (x and y), clustering becomes more accurate the more fields there are.
  • Motive: the motive for clustering is to understand how data sets group and to explore them further based on those groups.
  • For an example set of clusterable data, see the k-means sketch under the k-means technique below.


Classification Method

  • Description: the classification method aims to separate and group data points based on common characteristics. This can be done intuitively or statistically.
  • Importance: High. While simple on the surface, classification can become quite complex. It’s very valuable in corporate and research environments, but can feel like it’s not worth the work. A good analyst can execute it quickly to deliver results.
  • Nature of Data: the nature of data useful for classification is data sets. As we will see, it can be used on qualitative data as well as quantitative. This method requires knowledge of the substance behind the data, not just the numbers themselves.
  • Motive: the motive for classification is to group data not based on mathematical relationships (which would be clustering), but by predetermined outputs. This is why it’s less useful for diagnostic analysis, and more useful for prescriptive analysis.

Forecasting Method

  • Description: the forecasting method uses past time series data to forecast the future.
  • Importance: Very high. Forecasting falls under predictive analysis and is arguably the most common and most important method in the corporate world. It is less useful in research, which prefers to understand the known rather than speculate about the future.
  • Nature of Data: data useful for forecasting is time series data, which, as we’ve noted, always includes a variable of time.
  • Motive: the motive for the forecasting method is the same as that of predictive analysis: to confidently estimate future values.

Optimization Method

  • Description: the optimization method maximizes or minimizes values in a set given a set of criteria. It is arguably most common in prescriptive analysis. In mathematical terms, it is maximizing or minimizing a function given certain constraints (a small sketch follows this list).
  • Importance: Very high. The idea of optimization applies to more analysis types than any other method. In fact, some argue that it is the fundamental driver behind data analysis. You would use it everywhere in research and in a corporation.
  • Nature of Data: the nature of optimizable data is a data set of at least two points.
  • Motive: the motive behind optimization is to achieve the best result possible given certain conditions.
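
As a rough sketch, assuming SciPy is available (the cost function is invented for illustration):

```python
# A minimal optimization sketch: minimize a cost function under a bound.
from scipy.optimize import minimize_scalar

# Hypothetical cost curve for a production level x constrained to [0, 10].
cost = lambda x: x**2 - 8*x + 20

result = minimize_scalar(cost, bounds=(0, 10), method="bounded")
print(result.x)    # ~4.0 -> the production level that minimizes cost
print(result.fun)  # ~4.0 -> the minimum cost itself
```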

Content Analysis Method

  • Description: content analysis is a method of qualitative analysis that quantifies textual data to track themes across a document. It’s most common in academic fields and in social sciences, where written content is the subject of inquiry.
  • Importance: High. In a corporate setting, content analysis as such is less common. If anything, Naïve Bayes (a technique we’ll look at below) is the closest corporations come to analyzing text. However, it is of the utmost importance for researchers. If you’re a researcher, check out this article on content analysis.
  • Nature of Data: data useful for content analysis is textual data.
  • Motive: the motive behind content analysis is to understand themes expressed in a large text.

Narrative Analysis Method

  • Description: narrative analysis is a method of qualitative analysis that quantifies stories to trace themes in them. It differs from content analysis because it focuses on stories rather than research documents, and the techniques used differ slightly from those in content analysis (the nuances are outside the scope of this article).
  • Importance: Low. Unless you are highly specialized in working with stories, narrative analysis is rare.
  • Nature of Data: the nature of the data useful for the narrative analysis method is narrative text.
  • Motive: the motive for narrative analysis is to uncover hidden patterns in narrative text.

Discourse Analysis Method

  • Description: the discourse analysis method falls under qualitative analysis and uses thematic coding to trace patterns in real-life discourse. That said, real-life discourse is oral, so it must first be transcribed into text.
  • Importance: Low. Unless you are focused on understanding real-world idea sharing in a research setting, this kind of analysis is less common than the others on this list.
  • Nature of Data: the nature of data useful in discourse analysis is first audio files, then transcriptions of those audio files.
  • Motive: the motive behind discourse analysis is to trace patterns of real-world discussions. (As a spooky sidenote, have you ever felt like your phone microphone was listening to you and making reading suggestions? If it was, the method was discourse analysis.)

Framework Analysis Method

  • Description: the framework analysis method falls under qualitative analysis and uses similar thematic coding techniques to content analysis. However, where content analysis aims to discover themes, framework analysis starts with a framework and only considers elements that fall in its purview.
  • Importance: Low. As with the other textual analysis methods, framework analysis is less common in corporate settings. Even in the world of research, only some use it. Strangely, it’s very common for legislative and political research.
  • Nature of Data: the nature of data useful for framework analysis is textual.
  • Motive: the motive behind framework analysis is to understand what themes and parts of a text match your search criteria.

Grounded Theory Method

  • Description: the grounded theory method falls under qualitative analysis and uses thematic coding to build theories around those themes.
  • Importance: Low. Like other qualitative analysis techniques, grounded theory is less common in the corporate world. Even among researchers, you would be hard pressed to find many using it. Though powerful, it’s simply too rare to spend time learning.
  • Nature of Data: the nature of data useful in the grounded theory method is textual.
  • Motive: the motive of grounded theory method is to establish a series of theories based on themes uncovered from a text.

Clustering Technique: K-Means

  • Description: k-means is a clustering technique in which data points are grouped in clusters that have the closest means. Though often treated as simple statistics rather than AI or ML, it is a form of unsupervised learning that re-evaluates clusters as data points are added. Clustering techniques can be used in diagnostic, descriptive, & prescriptive data analyses. (A minimal sketch follows this list.)
  • Importance: Very high. If you only take 3 things from this article, k-means clustering should be part of it. It is useful in any situation where n observations have multiple characteristics and we want to put them in groups.
  • Nature of Data: the nature of data is at least one characteristic per observation, but the more the merrier.
  • Motive: the motive for clustering techniques such as k-means is to group observations together and either understand or react to them.
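
Here’s a minimal sketch, assuming scikit-learn is installed (the customer data is invented):

```python
# A minimal k-means sketch using scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical observations: [age, annual spend] per customer.
data = np.array([
    [25, 1200], [27, 1350],   # younger, low spend
    [45, 4300], [47, 4100],   # middle-aged, high spend
    [62, 800],  [60, 950],    # older, low spend
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print(kmeans.labels_)           # cluster assignment for each observation
print(kmeans.cluster_centers_)  # the mean (centroid) of each cluster
```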

Regression Technique

  • Description: simple and multivariable regressions use one independent variable, or a combination of several, to estimate their relationship with a single dependent variable through fitted constants. Regressions are almost synonymous with correlation today. (A minimal sketch follows this list.)
  • Importance: Very high. Along with clustering, if you only take 3 things from this article, regression techniques should be part of it. They’re everywhere in corporate and research fields alike.
  • Nature of Data: the nature of data used in regressions is data sets with “n” number of observations and as many variables as are reasonable. It’s important, however, to distinguish between time series data and regression data. You cannot use regressions on time series data without accounting for time. The easier way is to use techniques under the forecasting method.
  • Motive: The motive behind regression techniques is to understand correlations between independent variable(s) and a dependent one.
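
A minimal sketch, again assuming scikit-learn (all numbers are invented):

```python
# A minimal multivariable-regression sketch using scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical independent variables: [ad spend, store visits].
X = np.array([[10, 200], [15, 250], [20, 340], [25, 390], [30, 480]])
y = np.array([110, 145, 200, 240, 290])  # dependent variable: sales

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # the fitted constants
print(model.predict([[22, 360]]))     # estimated sales for new inputs
```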

Naïve Bayes Technique

  • Description: Naïve Bayes is a classification technique that uses simple probability to classify items based on previous classifications. In plain English, the formula reads “the chance that a thing with trait x belongs to class c equals the chance of seeing trait x given class c, multiplied by the overall chance of class c, divided by the overall chance of trait x.” As a formula, it’s P(c|x) = P(x|c) * P(c) / P(x). (A worked example follows this list.)
  • Importance: High. Naïve Bayes is a very common, simple classification technique because it’s effective with large data sets and can be applied to any instance in which there is a class. Google, for example, might use it to group webpages for certain search engine queries.
  • Nature of Data: the nature of data for Naïve Bayes is at least one class and at least two traits in a data set.
  • Motive: the motive behind Naïve Bayes is to classify observations based on previous data. It’s thus considered part of predictive analysis.
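
Here’s the formula worked through in Python with invented counts (a hypothetical spam-filter scenario):

```python
# A worked Naive Bayes example: P(c|x) = P(x|c) * P(c) / P(x).
# Hypothetical counts: 40 of 100 emails are spam (class c); 30 of those 40
# contain the word "free" (trait x); 50 of all 100 emails contain "free".
p_c = 40 / 100         # P(c):   overall chance of spam
p_x_given_c = 30 / 40  # P(x|c): chance of "free" given spam
p_x = 50 / 100         # P(x):   overall chance of "free"

p_c_given_x = p_x_given_c * p_c / p_x
print(p_c_given_x)  # 0.6 -> an email containing "free" is 60% likely spam
```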

Cohorts Technique

  • Description: cohorts technique is a type of clustering method used in behavioral sciences to separate users by common traits. As with clustering, it can be done intuitively or mathematically, the latter of which would simply be k-means.
  • Importance: Very high. Where it resembles k-means, the cohort technique is more of a high-level counterpart. In fact, most people are familiar with it as a part of Google Analytics. It’s most common in marketing departments in corporations, rather than in research.
  • Nature of Data: the nature of cohort data is data sets in which users are the observation and other fields are used as defining traits for each cohort.
  • Motive: the motive for cohort analysis techniques is to group similar users and analyze how you retain them and how they churn. (A small sketch follows this list.)
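
A small sketch of cohort grouping, assuming pandas is available (the user data is made up):

```python
# A minimal cohort sketch: group users by signup month and count activity.
import pandas as pd

events = pd.DataFrame({
    "user":         ["a", "a", "b", "b", "c"],
    "signup":       ["2020-01", "2020-01", "2020-02", "2020-02", "2020-01"],
    "active_month": ["2020-01", "2020-02", "2020-02", "2020-03", "2020-01"],
})

# Distinct active users per signup cohort per month -> a retention table.
retention = events.groupby(["signup", "active_month"])["user"].nunique().unstack()
print(retention)
```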

Factor Technique

  • Description: the factor analysis technique is a way of grouping many traits into a single factor to expedite analysis. For example, factors can be used as traits for Naïve Bayes classifications instead of more general fields. (A minimal sketch follows this list.)
  • Importance: High. While not commonly employed in corporations, factor analysis is hugely valuable. Good data analysts use it to simplify their projects and communicate them more clearly.
  • Nature of Data: the nature of data useful in factor analysis techniques is data sets with a large number of fields per observation.
  • Motive: the motive for using factor analysis techniques is to reduce the number of fields in order to more quickly analyze and communicate findings.
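
A minimal sketch, assuming scikit-learn (the survey data is hypothetical):

```python
# A minimal factor-analysis sketch: compress 4 correlated fields into 2 factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey data: 6 respondents x 4 correlated traits.
X = np.array([
    [5, 4, 120, 130], [4, 4, 115, 125], [2, 1, 60, 55],
    [1, 2, 50, 65],   [5, 5, 125, 140], [2, 2, 70, 60],
])

fa = FactorAnalysis(n_components=2, random_state=0)
factors = fa.fit_transform(X)
print(factors.shape)  # (6, 2) -> each respondent now described by 2 factors
```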

Linear Discriminants Technique

  • Description: linear discriminant analysis techniques are similar to regressions in that they use one or more independent variables to determine a dependent variable; however, the linear discriminant technique falls under a classifier method since it uses traits as independent variables and class as a dependent variable. In this way, it becomes a classifying method AND a predictive method. (A minimal sketch follows this list.)
  • Importance: High. Though the analyst world speaks of and uses linear discriminants less commonly, it’s a highly valuable technique to keep in mind as you progress in data analysis.
  • Nature of Data: the nature of data useful for the linear discriminant technique is data sets with many fields.
  • Motive: the motive for using linear discriminants is to classify observations that would otherwise be too complex for simple techniques like Naïve Bayes.
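
A minimal sketch, assuming scikit-learn (traits and classes are invented):

```python
# A minimal linear-discriminant sketch using scikit-learn's LDA.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical traits: [purchases per month, avg basket value];
# class: 0 = casual buyer, 1 = loyal customer.
X = [[1, 20], [2, 22], [8, 90], [9, 95], [2, 30], [8, 85]]
y = [0, 0, 1, 1, 0, 1]

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[5, 60]]))  # predicted class for a new observation
```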

Exponential Smoothing Technique

  • Description: exponential smoothing is a technique falling under the forecasting method that uses a smoothing factor on prior data in order to predict future values. It can be linear or adjusted for seasonality. The basic principle behind exponential smoothing is to put a larger percent weight (a value between 0 and 1 called alpha) on more recent values in a series and a smaller percent weight on less recent values. The formula is: smoothed value = alpha * (current period value) + (1 − alpha) * (previous smoothed value). (A minimal sketch follows this list.)
  • Importance: High. Most analysts still use the moving average technique (covered next) for forecasting because it’s easy to understand, though it is less accurate than exponential smoothing. Good analysts will have exponential smoothing techniques in their pocket to increase the value of their forecasts.
  • Nature of Data: the nature of data useful for exponential smoothing is time series data. Time series data has time as part of its fields.
  • Motive: the motive for exponential smoothing is to forecast future values with a smoothing variable.
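
The formula translated directly into Python (the sales series is made up):

```python
# A minimal simple-exponential-smoothing sketch.
def exponential_smoothing(series, alpha):
    """s_t = alpha * x_t + (1 - alpha) * s_(t-1), seeded with the first value."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

sales = [100, 110, 104, 120, 125]  # a hypothetical time series
print(exponential_smoothing(sales, alpha=0.5))
# The last smoothed value serves as the forecast for the next period.
```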

Moving Average Technique

  • Description: the moving average technique falls under the forecasting method and uses an average of recent values to predict future ones. For example, to predict rainfall in April, you would take the average of rainfall from January to March. It’s simple, yet highly effective.
  • Importance: Very high. While I’m personally not a huge fan of moving averages due to their simplistic nature and lack of consideration for seasonality, they’re the most common forecasting technique and therefore very important.
  • Nature of Data: the nature of data useful for moving averages is time series data .
  • Motive: the motive for moving averages is to predict future values in a simple, easy-to-communicate way. (A minimal sketch follows.)
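
Here’s the April-rainfall example from above as a minimal sketch (with invented numbers):

```python
# A minimal moving-average forecast: the mean of the last `window` values.
rainfall = [80, 95, 70]  # hypothetical rainfall for January-March, in mm

def moving_average_forecast(series, window=3):
    return sum(series[-window:]) / window

print(moving_average_forecast(rainfall))  # ~81.7 -> forecast for April
```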

Neural Networks Technique

  • Description: neural networks are a highly complex artificial intelligence technique that replicate a human’s neural analysis through a series of hyper-rapid computations and comparisons that evolve in real time. This technique is so complex that an analyst must use computer programs to perform it.
  • Importance: Medium. While the potential for neural networks is theoretically unlimited, it’s still little understood and therefore uncommon. You do not need to know it by any means in order to be a data analyst.
  • Nature of Data: the nature of data useful for neural networks is data sets of astronomical size, meaning hundreds of thousands of fields and the same number of rows at a minimum.
  • Motive: the motive for neural networks is to understand wildly complex phenomena and data in order to act on them thereafter. (A toy sketch follows this list.)
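
For a sense of the mechanics only, here’s a toy sketch using scikit-learn’s small multilayer perceptron. Real neural-network work uses far larger data sets and dedicated frameworks:

```python
# A toy neural-network sketch: learn an XOR-style pattern.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # toy inputs
y = [0, 1, 1, 0]                      # labels no single straight line separates

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1)
net.fit(X, y)
print(net.predict(X))  # the network's learned assignments
```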

Decision Tree Technique

  • Description: the decision tree technique uses artificial intelligence algorithms to rapidly calculate possible decision pathways and their outcomes on a real-time basis. It’s so complex that computer programs are needed to perform it.
  • Importance: Medium. As with neural networks, decision trees with AI are too little understood and are therefore uncommon in corporate and research settings alike.
  • Nature of Data: the nature of data useful for the decision tree technique is hierarchical data sets that show multiple optional fields for each preceding field.
  • Motive: the motive for decision tree techniques is to compute the optimal choices to make in order to achieve a desired result. (A simplified sketch follows this list.)
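
A simplified sketch using scikit-learn’s decision-tree classifier (the data is invented; real applications are far larger):

```python
# A simplified decision-tree sketch: learn choice rules from past outcomes.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical observations: [price, quality score] -> bought (1) or not (0).
X = [[10, 8], [25, 9], [40, 5], [15, 7], [35, 4]]
y = [1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(tree.predict([[20, 6]]))  # predicted outcome for a new observation
```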

Evolutionary Programming Technique

  • Description: the evolutionary programming technique uses a series of neural networks, sees how well each one fits a desired outcome, and selects only the best to test and retest. It’s called evolutionary because it resembles the process of natural selection by weeding out weaker options.
  • Importance: Medium. As with the other AI techniques, evolutionary programming just isn’t well-understood enough to be usable in many cases. Its complexity also makes it hard to explain in corporate settings and difficult to defend in research settings.
  • Nature of Data: the nature of data in evolutionary programming is data sets of neural networks, or data sets of data sets.
  • Motive: the motive for using evolutionary programming is similar to decision trees: understanding the best possible option from complex data.

Fuzzy Logic Technique

  • Description: fuzzy logic is a type of computing based on “approximate truths” rather than simple truths such as “true” and “false.” It is essentially two tiers of classification. For example, to say whether “apples are good,” you first need to classify that “good is x, y, z.” Only then can you say apples are good. Another way to see it: fuzzy logic helps a computer grade truth the way humans do, as in “definitely true, probably true, maybe true, probably false, definitely false.” (A toy sketch follows this list.)
  • Importance: Medium. Like the other AI techniques, fuzzy logic is uncommon in both research and corporate settings, which means it’s less important in today’s world.
  • Nature of Data: the nature of fuzzy logic data is huge data tables that include other huge data tables with a hierarchy including multiple subfields for each preceding field.
  • Motive: the motive of fuzzy logic is to replicate human truth valuations in a computer in order to model human decisions based on past data. The obvious possible application is marketing.
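
A toy sketch of the idea in plain Python (the membership function is invented):

```python
# A toy fuzzy-logic sketch: graded truth instead of a hard True/False.
def goodness(sweetness):
    """Map a sweetness score (0-10) to a degree of 'good' between 0 and 1."""
    return max(0.0, min(1.0, sweetness / 10))

apple_sweetness = 7.5
degree = goodness(apple_sweetness)
print(f"'Apples are good' is true to degree {degree}")  # 0.75: 'probably true'
```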

Text Analysis Technique

  • Description: text analysis techniques fall under the qualitative data analysis type and use text to extract insights.
  • Importance: Medium. Text analysis techniques, like the qualitative analysis type as a whole, are most valuable for researchers.
  • Nature of Data: the nature of data useful in text analysis is words.
  • Motive: the motive for text analysis is to trace themes in a text across sets of very long documents, such as books.

Coding Technique

  • Description: the coding technique is used in textual analysis to turn ideas into uniform phrases and analyze the number of times and the ways in which those ideas appear. For this reason, some consider it a quantitative technique as well. You can learn more about coding and the other qualitative techniques here.
  • Importance: Very high. If you’re a researcher working in social sciences, coding is THE analysis technique, and for good reason. It’s a great way to add rigor to analysis. That said, it’s less common in corporate settings.
  • Nature of Data: the nature of data useful for coding is long text documents.
  • Motive: the motive for coding is to make tracing ideas on paper more than an exercise of the mind by quantifying it and understanding it through descriptive methods.

Idea Pattern Technique

  • Description: the idea pattern analysis technique fits into coding as the second step of the process. Once themes and ideas are coded, simple descriptive analysis tests may be run. Some people even cluster the ideas!
  • Importance: Very high. If you’re a researcher, idea pattern analysis is as important as the coding itself.
  • Nature of Data: the nature of data useful for idea pattern analysis is already coded themes.
  • Motive: the motive for the idea pattern technique is to trace ideas in otherwise unmanageably large documents.

Word Frequency Technique

  • Description: word frequency is a qualitative technique that stands in opposition to coding and uses an inductive approach to locate specific words in a document in order to understand its relevance. Word frequency is essentially the descriptive analysis of qualitative data because it uses stats like mean, median, and mode to gather insights.
  • Importance: High. As with the other qualitative approaches, word frequency is very important in social science research, but less so in corporate settings.
  • Nature of Data: the nature of data useful for word frequency is long, informative documents.
  • Motive: the motive for word frequency is to locate target words to determine the relevance of a document in question. (A minimal sketch follows this list.)
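
A minimal sketch using only Python’s standard library:

```python
# A minimal word-frequency sketch.
from collections import Counter

text = "data analysis turns raw data into insights about data"
counts = Counter(text.lower().split())
print(counts.most_common(3))  # [('data', 3), ('analysis', 1), ('turns', 1)]
```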

Types of data analysis in research

Types of data analysis in research methodology include every item discussed in this article. As a list, they are:

  • Quantitative
  • Qualitative
  • Mathematical
  • Machine Learning and AI
  • Descriptive
  • Diagnostic
  • Predictive
  • Prescriptive
  • Clustering
  • Classification
  • Forecasting
  • Optimization
  • Content analysis
  • Narrative analysis
  • Discourse analysis
  • Framework analysis
  • Grounded theory
  • Artificial Neural Networks
  • Decision Trees
  • Evolutionary Programming
  • Fuzzy Logic
  • Text analysis
  • Coding
  • Idea Pattern Analysis
  • Word Frequency Analysis
  • K-means
  • Regression
  • Naïve Bayes
  • Cohorts
  • Factor analysis
  • Exponential smoothing
  • Moving average
  • Linear discriminant

Types of data analysis in qualitative research

As a list, the types of data analysis in qualitative research are the following methods:

  • Content analysis
  • Narrative analysis
  • Discourse analysis
  • Framework analysis
  • Grounded theory

Types of data analysis in quantitative research

As a list, the types of data analysis in quantitative research are:

  • Mathematical (descriptive, diagnostic, predictive, and prescriptive)
  • Machine learning and AI

Data analysis methods

As a list, data analysis methods are:

  • Clustering (quantitative)
  • Classification (quantitative)
  • Forecasting (quantitative)
  • Optimization (quantitative)
  • Content (qualitative)
  • Narrative (qualitative)
  • Discourse (qualitative)
  • Framework (qualitative)
  • Grounded theory (qualitative)

Quantitative data analysis methods

As a list, quantitative data analysis methods are:

  • Clustering
  • Classification
  • Forecasting
  • Optimization

Tabular View of Data Analysis Types, Methods, and Techniques

In table form (condensed from the tree diagram above):

  • Quantitative (mathematical): methods are clustering, classification, forecasting, and optimization; techniques include k-means, regression, Naïve Bayes, cohorts, factors, linear discriminants, exponential smoothing, and moving average.
  • Quantitative (AI & ML): techniques include artificial neural networks, decision trees, evolutionary programming, and fuzzy logic.
  • Qualitative: methods are content, narrative, discourse, framework, and grounded theory analysis; techniques include text analysis, coding, idea pattern analysis, and word frequency.

About the author.

Noah is the founder & Editor-in-Chief at AnalystAnswers. He is a transatlantic professional and entrepreneur with 5+ years of corporate finance and data analytics experience, as well as 3+ years in consumer financial products and business software. He started AnalystAnswers to provide aspiring professionals with accessible explanations of otherwise dense finance and data concepts. Noah believes everyone can benefit from an analytical mindset in a growing digital world. When he's not busy at work, Noah likes to explore new European cities, exercise, and spend time with friends and family.


Your Modern Business Guide To Data Analysis Methods And Techniques


Table of Contents

1) What Is Data Analysis?

2) Why Is Data Analysis Important?

3) What Is The Data Analysis Process?

4) Types Of Data Analysis Methods

5) Top Data Analysis Techniques To Apply

6) Quality Criteria For Data Analysis

7) Data Analysis Limitations & Barriers

8) Data Analysis Skills

9) Data Analysis In The Big Data Environment

In our data-rich age, understanding how to analyze and extract true meaning from our business’s digital insights is one of the primary drivers of success.

Despite the colossal volume of data we create every day, a mere 0.5% is actually analyzed and used for data discovery, improvement, and intelligence. While that may not seem like much, considering the amount of digital information we have at our fingertips, half a percent still accounts for a vast amount of data.

With so much data and so little time, knowing how to collect, curate, organize, and make sense of all of this potentially business-boosting information can be a minefield – but online data analysis is the solution.

In science, data analysis uses a more complex approach with advanced techniques to explore and experiment with data. On the other hand, in a business context, data is used to make data-driven decisions that will enable the company to improve its overall performance. In this post, we will cover the analysis of data from an organizational point of view while still going through the scientific and statistical foundations that are fundamental to understanding the basics of data analysis. 

To put all of that into perspective, we will answer a host of important analytical questions, explore analytical methods and techniques, and demonstrate how to perform analysis in the real world with a 17-step blueprint for success.

What Is Data Analysis?

Data analysis is the process of collecting, modeling, and analyzing data using various statistical and logical methods and techniques. Businesses rely on analytics processes and tools to extract insights that support strategic and operational decision-making.

All these various methods are largely based on two core areas: quantitative and qualitative research.


Gaining a better understanding of different techniques and methods in quantitative research as well as qualitative insights will give your analyzing efforts a more clearly defined direction, so it’s worth taking the time to allow this particular knowledge to sink in. Additionally, you will be able to create a comprehensive analytical report that will skyrocket your analysis.

Apart from qualitative and quantitative categories, there are also other types of data that you should be aware of before diving into complex data analysis processes. These categories include: 

  • Big data: Refers to massive data sets that need to be analyzed using advanced software to reveal patterns and trends. It is considered to be one of the best analytical assets as it provides larger volumes of data at a faster rate. 
  • Metadata: Putting it simply, metadata is data that provides insights about other data. It summarizes key information about specific data that makes it easier to find and reuse for later purposes. 
  • Real time data: As its name suggests, real time data is presented as soon as it is acquired. From an organizational perspective, this is the most valuable data as it can help you make important decisions based on the latest developments. Our guide on real time analytics will tell you more about the topic. 
  • Machine data: This is more complex data that is generated solely by a machine such as phones, computers, or even websites and embedded systems, without previous human interaction.

Why Is Data Analysis Important?

Before we go into detail about the categories of analysis along with its methods and techniques, you must understand the potential that analyzing data can bring to your organization.

  • Informed decision-making : From a management perspective, you can benefit from analyzing your data as it helps you make decisions based on facts and not simple intuition. For instance, you can understand where to invest your capital, detect growth opportunities, predict your income, or tackle uncommon situations before they become problems. Through this, you can extract relevant insights from all areas in your organization, and with the help of dashboard software , present the data in a professional and interactive way to different stakeholders.
  • Reduce costs : Another great benefit is to reduce costs. With the help of advanced technologies such as predictive analytics, businesses can spot improvement opportunities, trends, and patterns in their data and plan their strategies accordingly. In time, this will help you save money and resources on implementing the wrong strategies. And not just that, by predicting different scenarios such as sales and demand you can also anticipate production and supply. 
  • Target customers better : Customers are arguably the most crucial element in any business. By using analytics to get a 360° vision of all aspects related to your customers, you can understand which channels they use to communicate with you, their demographics, interests, habits, purchasing behaviors, and more. In the long run, it will drive success to your marketing strategies, allow you to identify new potential customers, and avoid wasting resources on targeting the wrong people or sending the wrong message. You can also track customer satisfaction by analyzing your client’s reviews or your customer service department’s performance.

What Is The Data Analysis Process?

Data analysis process graphic

When we talk about analyzing data, there is an order to follow to extract the needed conclusions. The analysis process consists of 5 key stages. We will cover each of them in more detail later in the post, but to provide the context needed to understand what is coming next, here is a rundown of the 5 essential steps of data analysis. 

  • Identify: Before you get your hands dirty with data, you first need to identify why you need it in the first place. The identification is the stage in which you establish the questions you will need to answer. For example, what is the customer's perception of our brand? Or what type of packaging is more engaging to our potential customers? Once the questions are outlined you are ready for the next step. 
  • Collect: As its name suggests, this is the stage where you start collecting the needed data. Here, you define which sources of data you will use and how you will use them. The collection of data can come in different forms such as internal or external sources, surveys, interviews, questionnaires, and focus groups, among others.  An important note here is that the way you collect the data will be different in a quantitative and qualitative scenario. 
  • Clean: Once you have the necessary data, it is time to clean it and leave it ready for analysis. Not all the data you collect will be useful; when collecting big amounts of data in different formats, it is very likely that you will find yourself with duplicate or badly formatted data. To avoid this, before you start working with your data, you need to make sure to erase any white spaces, duplicate records, or formatting errors. This way you avoid hurting your analysis with bad-quality data. 
  • Analyze : With the help of various techniques such as statistical analysis, regressions, neural networks, text analysis, and more, you can start analyzing and manipulating your data to extract relevant conclusions. At this stage, you find trends, correlations, variations, and patterns that can help you answer the questions you first thought of in the identify stage. Various technologies in the market assist researchers and average users with the management of their data. Some of them include business intelligence and visualization software, predictive analytics, and data mining, among others. 
  • Interpret: Last but not least you have one of the most important steps: it is time to interpret your results. This stage is where the researcher comes up with courses of action based on the findings. For example, here you would understand if your clients prefer packaging that is red or green, plastic or paper, etc. Additionally, at this stage, you can also find some limitations and work on them. 

Now that you have a basic understanding of the key data analysis steps, let’s look at the top 17 essential methods.

17 Essential Types Of Data Analysis Methods

Before diving into the 17 essential types of methods, it is important that we quickly go over the main analysis categories. Moving from descriptive up to prescriptive analysis, the complexity and effort of data evaluation increase, but so does the added value for the company.

a) Descriptive analysis - What happened.

The descriptive analysis method is the starting point for any analytic reflection, and it aims to answer the question of what happened? It does this by ordering, manipulating, and interpreting raw data from various sources to turn it into valuable insights for your organization.

Performing descriptive analysis is essential, as it enables us to present our insights in a meaningful way. Although it is relevant to mention that this analysis on its own will not allow you to predict future outcomes or tell you the answer to questions like why something happened, it will leave your data organized and ready to conduct further investigations.

b) Exploratory analysis - How to explore data relationships.

As its name suggests, the main aim of the exploratory analysis is to explore. Prior to it, there is still no notion of the relationship between the data and the variables. Once the data is investigated, exploratory analysis helps you to find connections and generate hypotheses and solutions for specific problems. A typical area of ​​application for it is data mining.

c) Diagnostic analysis - Why it happened.

Diagnostic data analytics empowers analysts and executives by helping them gain a firm contextual understanding of why something happened. If you know why something happened as well as how it happened, you will be able to pinpoint the exact ways of tackling the issue or challenge.

Designed to provide direct and actionable answers to specific questions, this is one of the world’s most important methods in research, and it serves other key organizational functions as well, such as retail analytics.

d) Predictive analysis - What will happen.

The predictive method allows you to look into the future to answer the question: what will happen? In order to do this, it uses the results of the previously mentioned descriptive, exploratory, and diagnostic analysis, in addition to machine learning (ML) and artificial intelligence (AI). Through this, you can uncover future trends, potential problems or inefficiencies, connections, and causalities in your data.

With predictive analysis, you can unfold and develop initiatives that will not only enhance your various operational processes but also help you gain an all-important edge over the competition. If you understand why a trend, pattern, or event happened through data, you will be able to develop an informed projection of how things may unfold in particular areas of the business.

e) Prescriptive analysis - How will it happen.

Another of the most effective types of analysis methods in research, prescriptive data techniques cross over from predictive analysis in that they revolve around using patterns or trends to develop responsive, practical business strategies.

By drilling down into prescriptive analysis, you will play an active role in the data consumption process by taking well-arranged sets of visual data and using it as a powerful fix to emerging issues in a number of key areas, including marketing, sales, customer experience, HR, fulfillment, finance, logistics analytics , and others.

Top 17 data analysis methods

As mentioned at the beginning of the post, data analysis methods can be divided into two big categories: quantitative and qualitative. Each of these categories holds a powerful analytical value that changes depending on the scenario and type of data you are working with. Below, we will discuss 17 methods that are divided into qualitative and quantitative approaches. 

Without further ado, here are the 17 essential types of data analysis methods with some use cases in the business world: 

A. Quantitative Methods 

To put it simply, quantitative analysis refers to all methods that use numerical data, or data that can be turned into numbers (e.g. category variables like gender, age, etc.), to extract valuable insights. It is used to draw conclusions about relationships and differences and to test hypotheses. Below we discuss some of the key quantitative methods. 

1. Cluster analysis

The action of grouping a set of data elements in a way that said elements are more similar (in a particular sense) to each other than to those in other groups – hence the term ‘cluster.’ Since there is no target variable when clustering, the method is often used to find hidden patterns in the data. The approach is also used to provide additional context to a trend or dataset.

Let's look at it from an organizational perspective. In a perfect world, marketers would be able to analyze each customer separately and give them the best-personalized service, but let's face it, with a large customer base, it is practically impossible to do that. That's where clustering comes in. By grouping customers into clusters based on demographics, purchasing behaviors, monetary value, or any other factor that might be relevant for your company, you will be able to immediately optimize your efforts and give your customers the best experience based on their needs.

2. Cohort analysis

This type of data analysis approach uses historical data to examine and compare a determined segment of users' behavior, which can then be grouped with others with similar characteristics. By using this methodology, it's possible to gain a wealth of insight into consumer needs or a firm understanding of a broader target group.

Cohort analysis can be really useful for performing analysis in marketing as it will allow you to understand the impact of your campaigns on specific groups of customers. To exemplify, imagine you send an email campaign encouraging customers to sign up for your site. For this, you create two versions of the campaign with different designs, CTAs, and ad content. Later on, you can use cohort analysis to track the performance of the campaign for a longer period of time and understand which type of content is driving your customers to sign up, repurchase, or engage in other ways.  

A useful tool to start performing cohort analysis method is Google Analytics. You can learn more about the benefits and limitations of using cohorts in GA in this useful guide . In the bottom image, you see an example of how you visualize a cohort in this tool. The segments (devices traffic) are divided into date cohorts (usage of devices) and then analyzed week by week to extract insights into performance.

Cohort analysis chart example from google analytics

3. Regression analysis

Regression uses historical data to understand how a dependent variable's value is affected when one (linear regression) or more independent variables (multiple regression) change or stay the same. By understanding each variable's relationship and how it developed in the past, you can anticipate possible outcomes and make better decisions in the future.

Let's break it down with an example. Imagine you did a regression analysis of your sales in 2019 and discovered that variables like product quality, store design, customer service, marketing campaigns, and sales channels affected the overall result. Now you want to use regression to analyze which of these variables changed or if any new ones appeared during 2020. For example, you couldn’t sell as much in your physical store due to COVID lockdowns. Therefore, your sales could’ve either dropped in general or increased in your online channels. Through this, you can understand which independent variables affected the overall performance of your dependent variable, annual sales.

If you want to go deeper into this type of analysis, check out this article and learn more about how you can benefit from regression.

4. Neural networks

The neural network forms the basis for the intelligent algorithms of machine learning. It is a form of analytics that attempts, with minimal intervention, to understand how the human brain would generate insights and predict values. Neural networks learn from each and every data transaction, meaning that they evolve and advance over time.

A typical area of application for neural networks is predictive analytics. There are BI reporting tools that have this feature implemented within them, such as the Predictive Analytics Tool from datapine. This tool enables users to quickly and easily generate all kinds of predictions. All you have to do is select the data to be processed based on your KPIs, and the software automatically calculates forecasts based on historical and current data. Thanks to its user-friendly interface, anyone in your organization can manage it; there’s no need to be an advanced scientist. 

Here is an example of how you can use the predictive analysis tool from datapine:

Example on how to use predictive analytics tool from datapine


5. Factor analysis

Factor analysis, also called “dimension reduction,” is a type of data analysis used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. The aim here is to uncover independent latent variables, an ideal method for streamlining specific segments.

A good way to understand this data analysis method is a customer evaluation of a product. The initial assessment is based on different variables like color, shape, wearability, current trends, materials, comfort, the place where they bought the product, and frequency of usage. Like this, the list can be endless, depending on what you want to track. In this case, factor analysis comes into the picture by summarizing all of these variables into homogenous groups, for example, by grouping the variables color, materials, quality, and trends into a broader latent variable of design.

If you want to start analyzing data using factor analysis we recommend you take a look at this practical guide from UCLA.

6. Data mining

A method of data analysis that is the umbrella term for engineering metrics and insights for additional value, direction, and context. By using exploratory statistical evaluation, data mining aims to identify dependencies, relations, patterns, and trends to generate advanced knowledge.  When considering how to analyze data, adopting a data mining mindset is essential to success - as such, it’s an area that is worth exploring in greater detail.

An excellent use case of data mining is datapine intelligent data alerts . With the help of artificial intelligence and machine learning, they provide automated signals based on particular commands or occurrences within a dataset. For example, if you’re monitoring supply chain KPIs , you could set an intelligent alarm to trigger when invalid or low-quality data appears. By doing so, you will be able to drill down deep into the issue and fix it swiftly and effectively.

In the following picture, you can see how the intelligent alarms from datapine work. By setting up ranges on daily orders, sessions, and revenues, the alarms will notify you if the goal was not completed or if it exceeded expectations.

Example on how to use intelligent alerts from datapine

7. Time series analysis

As its name suggests, time series analysis is used to analyze a set of data points collected over a specified period of time. Although analysts use this method to monitor the data points in a specific interval of time rather than just monitoring them intermittently, time series analysis is not used merely to collect data over time. Instead, it allows researchers to understand whether variables changed during the study, how the different variables depend on one another, and how the end result was reached. 

In a business context, this method is used to understand the causes of different trends and patterns to extract valuable insights. Another way of using this method is with the help of time series forecasting. Powered by predictive technologies, businesses can analyze various data sets over a period of time and forecast different future events. 

A great use case to put time series analysis into perspective is seasonality effects on sales. By using time series forecasting to analyze sales data of a specific product over time, you can understand if sales rise over a specific period of time (e.g. swimwear during summertime, or candy during Halloween). These insights allow you to predict demand and prepare production accordingly.  
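
As a small sketch of that use case, assuming pandas is available (the swimwear sales figures are invented):

```python
# A minimal time-series sketch: smooth monthly sales to reveal seasonality.
import pandas as pd

sales = pd.Series(
    [120, 135, 150, 310, 320, 140],  # hypothetical monthly swimwear sales
    index=pd.date_range("2020-01-31", periods=6, freq="M"),
)

print(sales.rolling(window=3).mean())  # a rolling mean exposes the summer spike
```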

8. Decision Trees 

The decision tree analysis aims to act as a support tool to make smart and strategic decisions. By visually displaying potential outcomes, consequences, and costs in a tree-like model, researchers and company users can easily evaluate all factors involved and choose the best course of action. Decision trees are helpful to analyze quantitative data and they allow for an improved decision-making process by helping you spot improvement opportunities, reduce costs, and enhance operational efficiency and production.

But how does a decision tree actually work? This method works like a flowchart that starts with the main decision that you need to make and branches out based on the different outcomes and consequences of each decision. Each outcome will outline its own consequences, costs, and gains and, at the end of the analysis, you can compare each of them and make the smartest decision. 

Businesses can use them to understand which project is more cost-effective and will bring more earnings in the long run. For example, imagine you need to decide if you want to update your software app or build a new app entirely.  Here you would compare the total costs, the time needed to be invested, potential revenue, and any other factor that might affect your decision.  In the end, you would be able to see which of these two options is more realistic and attainable for your company or research.
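
As a rough illustration of the update-vs-rebuild example above, here is a minimal Python sketch that compares two options by expected value; every probability and payoff in it is a made-up assumption.

```python
# Toy decision-tree (expected value) comparison of two options; all figures are
# hypothetical. Each branch is a (probability, net payoff) pair.
options = {
    "update existing app": [(0.7, 80_000), (0.3, -20_000)],    # likely modest win
    "build new app":       [(0.5, 200_000), (0.5, -120_000)],  # bigger risk/reward
}

for name, branches in options.items():
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9  # probabilities must sum to 1
    ev = sum(p * payoff for p, payoff in branches)
    print(f"{name}: expected value = {ev:,.0f}")

# Pick the option with the highest expected value.
best = max(options, key=lambda k: sum(p * v for p, v in options[k]))
print("Best option:", best)
```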

9. Conjoint analysis 

Last but not least, we have conjoint analysis. This approach is usually applied in surveys to understand how individuals value different attributes of a product or service, and it is one of the most effective methods for extracting consumer preferences. When it comes to purchasing, some clients might be more price-focused, others more features-focused, and others might prioritize sustainability. Whatever your customers' preferences are, you can find them with conjoint analysis. Through this, companies can define pricing strategies, packaging options, subscription packages, and more.

A great example of conjoint analysis is in marketing and sales. For instance, a cupcake brand might use conjoint analysis and find that its clients prefer gluten-free options and cupcakes with healthier toppings over super sugary ones. Thus, the cupcake brand can turn these insights into advertisements and promotions to increase sales of this particular type of product. And not just that, conjoint analysis can also help businesses segment their customers based on their interests. This allows them to send different messaging that will bring value to each of the segments. 
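
For a flavor of how conjoint results are estimated, here is a minimal Python sketch that recovers part-worth utilities from dummy-coded product profiles with ordinary least squares. The cupcake attributes and ratings are invented for illustration; real conjoint studies use far richer experimental designs.

```python
import numpy as np

# Hypothetical conjoint-style ratings: each row is one product profile.
# Columns: intercept, gluten_free (1/0), healthy_topping (1/0).
X = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
])
ratings = np.array([4.0, 6.5, 5.5, 8.0])  # average respondent ratings (made up)

# Least-squares estimate of the part-worth utilities.
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(f"baseline={coefs[0]:.2f}, gluten_free={coefs[1]:+.2f}, healthy_topping={coefs[2]:+.2f}")
```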

10. Correspondence Analysis

Also known as reciprocal averaging, correspondence analysis is a method used to analyze the relationship between categorical variables presented within a contingency table. A contingency table is a table that displays two (simple correspondence analysis) or more (multiple correspondence analysis) categorical variables across rows and columns that show the distribution of the data, which is usually answers to a survey or questionnaire on a specific topic. 

This method starts by calculating an “expected value” for each cell, obtained by multiplying the cell’s row total by its column total and dividing by the grand total of the table. The “expected value” is then subtracted from the observed value, resulting in a “residual” that allows you to draw conclusions about relationships and distribution. The results of this analysis are then displayed on a map that represents the relationships between the different values: the closer two values are on the map, the stronger the relationship. Let’s put it into perspective with an example.

Imagine you are carrying out a market research analysis about outdoor clothing brands and how they are perceived by the public. For this analysis, you ask a group of people to match each brand with a certain attribute which can be durability, innovation, quality materials, etc. When calculating the residual numbers, you can see that brand A has a positive residual for innovation but a negative one for durability. This means that brand A is not positioned as a durable brand in the market, something that competitors could take advantage of. 
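
The expected-value and residual calculation described above is easy to sketch in Python. The brand-by-attribute counts below are hypothetical; a full correspondence analysis would additionally standardize the residuals and decompose them to build the map.

```python
import numpy as np

# Hypothetical brand-by-attribute contingency table (survey counts).
#           durability  innovation  quality
observed = np.array([
    [10, 40, 25],   # brand A
    [35, 15, 30],   # brand B
])

row_tot = observed.sum(axis=1, keepdims=True)
col_tot = observed.sum(axis=0, keepdims=True)
grand   = observed.sum()

expected  = row_tot @ col_tot / grand   # expected counts under independence
residuals = observed - expected         # positive = over-represented pairing
print(residuals.round(1))
```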

11. Multidimensional Scaling (MDS)

MDS is a method used to observe the similarities or disparities between objects, which can be colors, brands, people, geographical coordinates, and more. The objects are plotted on an “MDS map” that positions similar objects together and disparate ones far apart. The (dis)similarities between objects are represented along one or more dimensions measured on a numerical scale. For example, if you want to know how people feel about the COVID-19 vaccine, you can use 1 for “don’t believe in the vaccine at all” and 10 for “firmly believe in the vaccine”, with 2 through 9 capturing responses in between. When analyzing an MDS map, the only thing that matters is the distance between the objects; the orientation of the dimensions is arbitrary and has no meaning at all.

Multidimensional scaling is a valuable technique for market research, especially when it comes to evaluating product or brand positioning. For instance, if a cupcake brand wants to know how it is positioned compared to competitors, it can define two or three dimensions such as taste, ingredients, or shopping experience, and run a multidimensional scaling analysis to find improvement opportunities as well as areas in which competitors are currently leading.

Another business example is in procurement when deciding on different suppliers. Decision makers can generate an MDS map to see how the different prices, delivery times, technical services, and more of the different suppliers differ and pick the one that suits their needs the best. 

A final example comes from a research paper titled "An Improved Study of Multilevel Semantic Network Visualization for Analyzing Sentiment Word of Movie Review Data". The researchers used a two-dimensional MDS map to display the distances and relationships between different sentiments in movie reviews. They plotted 36 sentiment words by emotional distance, as we can see in the image below, where the words "outraged" and "sweet" sit on opposite sides of the map, marking the distance between the two emotions very clearly.

Example of multidimensional scaling analysis

Aside from being a valuable technique for analyzing dissimilarities, MDS also serves as a dimension-reduction technique for high-dimensional data.
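
Here is a minimal sketch of building an MDS map in Python, assuming scikit-learn is available. The supplier dissimilarity matrix is a made-up example; remember that only the relative distances in the output are meaningful.

```python
import numpy as np
from sklearn.manifold import MDS  # assumes scikit-learn is installed

# Hypothetical pairwise dissimilarities between four suppliers
# (symmetric matrix with a zero diagonal).
labels = ["supplier A", "supplier B", "supplier C", "supplier D"]
D = np.array([
    [0.0, 2.0, 6.0, 5.0],
    [2.0, 0.0, 5.0, 6.0],
    [6.0, 5.0, 0.0, 1.5],
    [5.0, 6.0, 1.5, 0.0],
])

# Project onto a 2-D "MDS map"; similar suppliers land close together.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
for name, (x, y) in zip(labels, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```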

B. Qualitative Methods

Qualitative data analysis methods examine non-numerical data gathered through techniques such as interviews, focus groups, questionnaires, and more. As opposed to quantitative methods, qualitative data is more subjective, and it is highly valuable for analyzing areas such as customer retention and product development.

12. Text analysis

Text analysis, also known in the industry as text mining, works by taking large sets of textual data and arranging them in a way that makes it easier to manage. By working through this cleansing process in stringent detail, you will be able to extract the data that is truly relevant to your organization and use it to develop actionable insights that will propel you forward.

Modern software accelerates the application of text analytics. Thanks to the combination of machine learning and intelligent algorithms, you can perform advanced analytical processes such as sentiment analysis. This technique allows you to understand the intentions and emotions behind a text, determining, for example, whether it is positive, negative, or neutral, and then assigning it a score based on factors and categories that are relevant to your brand. Sentiment analysis is often used to monitor brand and product reputation and to understand how successful your customer experience is. To learn more about the topic check out this insightful article.

By analyzing data from various word-based sources, including product reviews, articles, social media communications, and survey responses, you will gain invaluable insights into your audience, as well as their needs, preferences, and pain points. This will allow you to create campaigns, services, and communications that meet your prospects’ needs on a personal level, growing your audience while boosting customer retention. There are various other “sub-methods” that are an extension of text analysis. Each of them serves a more specific purpose and we will look at them in detail next. 
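
As a toy illustration of the sentiment scoring mentioned above, here is a minimal lexicon-based sketch in Python. The word lists are invented for demonstration; production sentiment analysis relies on curated lexicons or machine learning models.

```python
import re
from collections import Counter

# Minimal lexicon-based sentiment sketch; these word lists are illustrative
# assumptions, not a production lexicon.
POSITIVE = {"great", "love", "excellent", "helpful", "fast"}
NEGATIVE = {"slow", "broken", "terrible", "refund", "bug"}

def sentiment_score(text):
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    pos = sum(words[w] for w in POSITIVE)
    neg = sum(words[w] for w in NEGATIVE)
    return pos - neg  # >0 positive, <0 negative, 0 neutral

print(sentiment_score("Love the new dashboard, support was fast and helpful"))  # 3
print(sentiment_score("The app is slow and the export is broken"))              # -2
```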

13. Content Analysis

This is a straightforward and very popular method that examines the presence and frequency of certain words, concepts, and subjects in different content formats such as text, image, audio, or video. For example, the number of times the name of a celebrity is mentioned on social media or online tabloids. It does this by coding text data that is later categorized and tabulated in a way that can provide valuable insights, making it the perfect mix of quantitative and qualitative analysis.

There are two types of content analysis. The first is conceptual analysis, which focuses on explicit data, for instance, the number of times a concept or word is mentioned in a piece of content. The second is relational analysis, which focuses on the relationships between different concepts or words and how they are connected within a specific context.

Content analysis is often used by marketers to measure brand reputation and customer behavior, for example by analyzing customer reviews. It can also be used to analyze customer interviews and find directions for new product development. It is also important to note that, to extract the maximum potential from this method, you need a clearly defined research question.
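
A minimal sketch of conceptual content analysis in Python might simply count how often predefined concepts appear in each document and tabulate the results; the concepts and reviews below are illustrative assumptions.

```python
import re

# Conceptual content analysis sketch: count how often predefined concepts
# appear in each document. Concepts and reviews are made-up examples.
concepts = {
    "price":    ["price", "expensive", "cheap", "cost"],
    "delivery": ["delivery", "shipping", "arrived"],
}
reviews = [
    "Great price, but delivery took two weeks.",
    "Shipping was fast, and it arrived intact. A bit expensive though.",
]

for i, review in enumerate(reviews, start=1):
    tokens = re.findall(r"[a-z]+", review.lower())
    counts = {c: sum(tokens.count(w) for w in words) for c, words in concepts.items()}
    print(f"review {i}: {counts}")
```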

14. Thematic Analysis

Very similar to content analysis, thematic analysis also helps identify and interpret patterns in qualitative data, with the main difference being that content analysis can also be applied quantitatively. The thematic method analyzes large pieces of text data, such as focus group transcripts or interviews, and groups them into themes or categories that come up frequently within the text. It is a great method for figuring out people's views and opinions about a certain topic. For example, if you are a brand that cares about sustainability, you can survey your customers to analyze their views and opinions about sustainability and how they apply it to their lives. You can also analyze customer service call transcripts to find common issues and improve your service.

Thematic analysis is a very subjective technique that relies on the researcher's judgment. Therefore, to reduce bias, it follows six steps: familiarization, coding, generating themes, reviewing themes, defining and naming themes, and writing up. It is also important to note that, because it is a flexible approach, the data can be interpreted in multiple ways, and it can be hard to decide which data to emphasize.

15. Narrative Analysis 

A bit more complex in nature than the two previous ones, narrative analysis is used to explore the meaning behind the stories that people tell and most importantly, how they tell them. By looking into the words that people use to describe a situation you can extract valuable conclusions about their perspective on a specific topic. Common sources for narrative data include autobiographies, family stories, opinion pieces, and testimonials, among others. 

From a business perspective, narrative analysis can be useful to analyze customer behaviors and feelings towards a specific product, service, feature, or others. It provides unique and deep insights that can be extremely valuable. However, it has some drawbacks.  

The biggest weakness of this method is that the sample sizes are usually very small due to the complexity and time-consuming nature of the collection of narrative data. Plus, the way a subject tells a story will be significantly influenced by his or her specific experiences, making it very hard to replicate in a subsequent study. 

16. Discourse Analysis

Discourse analysis is used to understand the meaning behind any type of written, verbal, or symbolic discourse based on its political, social, or cultural context. It mixes the analysis of languages and situations together. This means that the way the content is constructed and the meaning behind it is significantly influenced by the culture and society it takes place in. For example, if you are analyzing political speeches you need to consider different context elements such as the politician's background, the current political context of the country, the audience to which the speech is directed, and so on. 

From a business point of view, discourse analysis is a great market research tool. It allows marketers to understand how the norms and ideas of the specific market work and how their customers relate to those ideas. It can be very useful to build a brand mission or develop a unique tone of voice. 

17. Grounded Theory Analysis

Traditionally, researchers decide on a method and a hypothesis and then collect data to test that hypothesis. Grounded theory, by contrast, does not require an initial research question or hypothesis, as its value lies in the generation of new theories. With the grounded theory method, you go into the analysis process with an open mind and explore the data to generate new theories through tests and revisions. In fact, you don't have to finish collecting the data before starting to analyze it; researchers usually begin finding valuable insights while they are still gathering the data.

All of these elements make grounded theory a very valuable method, as theories are fully backed by data instead of initial assumptions. It is a great technique for analyzing poorly researched topics or finding the causes behind specific company outcomes. For example, product managers and marketers might use grounded theory to find the causes of high customer churn, looking into customer surveys and reviews to develop new theories about those causes.

How To Analyze Data? Top 17 Data Analysis Techniques To Apply

17 top data analysis techniques by datapine

Now that we’ve answered the questions of what data analysis is and why it is important, and covered the different data analysis types, it’s time to dig deeper into how to perform your analysis by working through these 17 essential techniques.

1. Collaborate your needs

Before you begin analyzing or drilling down into any techniques, it’s crucial to sit down collaboratively with all key stakeholders within your organization, decide on your primary campaign or strategic goals, and gain a fundamental understanding of the types of insights that will best benefit your progress or provide you with the level of vision you need to evolve your organization.

2. Establish your questions

Once you’ve outlined your core objectives, you should consider which questions will need answering to help you achieve your mission. This is one of the most important techniques as it will shape the very foundations of your success.

To make sure your data works for you, you have to ask the right data analysis questions.

3. Data democratization

After giving your data analytics methodology some real direction, and knowing which questions need answering to extract optimum value from the information available to your organization, you should continue with democratization.

Data democratization is an action that aims to connect data from various sources efficiently and quickly so that anyone in your organization can access it at any given moment. You can extract data in text, images, videos, numbers, or any other format, and then perform cross-database analysis to achieve more advanced insights to share with the rest of the company interactively.

Once you have decided on your most valuable sources, you need to take all of this into a structured format to start collecting your insights. For this purpose, datapine offers an easy all-in-one data connectors feature to integrate all your internal and external sources and manage them at your will. Additionally, datapine’s end-to-end solution automatically updates your data, allowing you to save time and focus on performing the right analysis to grow your company.

data connectors from datapine

4. Think of governance 

When collecting data in a business or research context, you always need to think about security and privacy. With data breaches becoming a topic of concern for businesses, the need to protect your clients' or subjects' sensitive information becomes critical.

To ensure that all this is taken care of, you need to think of a data governance strategy. According to Gartner , this concept refers to “ the specification of decision rights and an accountability framework to ensure the appropriate behavior in the valuation, creation, consumption, and control of data and analytics .” In simpler words, data governance is a collection of processes, roles, and policies, that ensure the efficient use of data while still achieving the main company goals. It ensures that clear roles are in place for who can access the information and how they can access it. In time, this not only ensures that sensitive information is protected but also allows for an efficient analysis as a whole. 

5. Clean your data

After harvesting data from so many sources, you will be left with a vast amount of information that can be overwhelming to deal with. At the same time, you may be faced with incorrect data that can mislead your analysis. The smartest thing you can do to avoid dealing with this later is to clean the data. This step is fundamental before visualizing the data, as it ensures that the insights you extract from it are correct.

There are many things to look for in the cleaning process. The most important is to eliminate duplicate observations, which usually appear when using multiple internal and external sources of information. You can also add any missing codes, fix empty fields, and eliminate incorrectly formatted data.

Another common form of cleaning involves text data. As we mentioned earlier, most companies today analyze customer reviews, social media comments, questionnaires, and several other text inputs. For algorithms to detect patterns, text data needs to be revised to remove invalid characters and syntax or spelling errors.
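
Here is a minimal cleaning sketch using pandas, assuming the library is installed; the column names and cleaning rules are illustrative assumptions rather than a universal recipe.

```python
import pandas as pd

# Minimal cleaning sketch with pandas; columns and rules are assumptions.
raw = pd.DataFrame({
    "customer": ["  Anna ", "BEN", "Ben", None],
    "orders":   ["3", "5", "5", "two"],
})

clean = (
    raw.assign(
        customer=raw["customer"].str.strip().str.title(),      # normalize text
        orders=pd.to_numeric(raw["orders"], errors="coerce"),  # invalid -> NaN
    )
    .dropna(subset=["customer"])  # drop rows missing a key field
    .drop_duplicates()            # remove duplicate observations
)
print(clean)
```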

Most importantly, the aim of cleaning is to prevent you from arriving at false conclusions that can damage your company in the long run. By using clean data, you will also help BI solutions to interact better with your information and create better reports for your organization.

6. Set your KPIs

Once you’ve set your sources, cleaned your data, and established clear-cut questions you want your insights to answer, you need to set a host of key performance indicators (KPIs) that will help you track, measure, and shape your progress in a number of key areas.

KPIs are critical to both qualitative and quantitative analysis research. This is one of the primary methods of data analysis you certainly shouldn’t overlook.

To help you set the best possible KPIs for your initiatives and activities, here is an example of a relevant logistics KPI : transportation-related costs. If you want to see more go explore our collection of key performance indicator examples .

Transportation costs logistics KPIs

7. Omit useless data

Having bestowed your data analysis tools and techniques with true purpose and defined your mission, you should explore the raw data you’ve collected from all sources and use your KPIs as a reference for chopping out any information you deem to be useless.

Trimming the informational fat is one of the most crucial methods of analysis as it will allow you to focus your analytical efforts and squeeze every drop of value from the remaining ‘lean’ information.

Any stats, facts, figures, or metrics that don’t align with your business goals or fit with your KPI management strategies should be eliminated from the equation.

8. Build a data management roadmap

While, at this point, this particular step is optional (you will have already gained a wealth of insight and formed a fairly sound strategy by now), creating a data management roadmap will help your data analysis methods and techniques succeed on a more sustainable basis. These roadmaps, if developed properly, can also be tweaked and scaled over time.

Invest ample time in developing a roadmap that will help you store, manage, and handle your data internally, and you will make your analysis techniques all the more fluid and functional – one of the most powerful types of data analysis methods available today.

9. Integrate technology

There are many ways to analyze data, but one of the most vital aspects of analytical success in a business context is integrating the right decision support software and technology.

Robust analysis platforms will not only allow you to pull critical data from your most valuable sources and work with dynamic KPIs that offer actionable insights; they will also present everything in a digestible, visual, interactive format from one central, live dashboard . A data methodology you can count on.

By integrating the right technology within your data analysis methodology, you’ll avoid fragmenting your insights, saving you time and effort while allowing you to enjoy the maximum value from your business’s most valuable insights.

For a look at the power of software for the purpose of analysis and to enhance your methods of analyzing, glance over our selection of dashboard examples .

10. Answer your questions

By considering each of the above efforts, working with the right technology, and fostering a cohesive internal culture where everyone buys into the different ways to analyze data as well as the power of digital intelligence, you will swiftly start to answer your most burning business questions. Arguably, the best way to make your data concepts accessible across the organization is through data visualization.

11. Visualize your data

Online data visualization is a powerful tool as it lets you tell a story with your metrics, allowing users across the organization to extract meaningful insights that aid business evolution – and it covers all the different ways to analyze data.

The purpose of analyzing is to make your entire organization more informed and intelligent, and with the right platform or dashboard, this is simpler than you think, as demonstrated by our marketing dashboard .

An executive dashboard example showcasing high-level marketing KPIs such as cost per lead, MQL, SQL, and cost per customer.

This visual, dynamic, and interactive online dashboard is a data analysis example designed to give Chief Marketing Officers (CMO) an overview of relevant metrics to help them understand if they achieved their monthly goals.

In detail, this example generated with a modern dashboard creator displays interactive charts for monthly revenues, costs, net income, and net income per customer; all of them are compared with the previous month so that you can understand how the data fluctuated. In addition, it shows a detailed summary of the number of users, customers, SQLs, and MQLs per month to visualize the whole picture and extract relevant insights or trends for your marketing reports .

The CMO dashboard is perfect for c-level management as it can help them monitor the strategic outcome of their marketing efforts and make data-driven decisions that can benefit the company exponentially.
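
As a tiny, tool-agnostic illustration of the month-over-month comparison described above, here is a sketch using matplotlib (assuming it is installed); all figures are made up.

```python
import matplotlib.pyplot as plt  # assumes matplotlib is installed

# Sketch of a simple month-over-month comparison chart; figures are made up.
metrics  = ["Revenue", "Costs", "Net income"]
previous = [120_000, 80_000, 40_000]
current  = [135_000, 82_000, 53_000]

x = range(len(metrics))
plt.bar([i - 0.2 for i in x], previous, width=0.4, label="Previous month")
plt.bar([i + 0.2 for i in x], current,  width=0.4, label="Current month")
plt.xticks(list(x), metrics)
plt.ylabel("USD")
plt.title("Monthly performance vs. previous month")
plt.legend()
plt.show()
```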

12. Be careful with the interpretation

We already dedicated an entire post to data interpretation, as it is a fundamental part of the data analysis process. Interpretation gives meaning to the analytical information and aims to draw concise conclusions from the analysis results. Since companies usually deal with data from many different sources, the interpretation stage needs to be carried out carefully and properly to avoid misinterpretations.

To help you through the process, here we list three common practices that you need to avoid at all costs when looking at your data:

  • Correlation vs. causation: The human brain is wired to find patterns. This tendency leads to one of the most common mistakes in interpretation: confusing correlation with causation. Although the two can exist simultaneously, it is not correct to assume that because two things happened together, one caused the other. A piece of advice to avoid this mistake: never trust intuition alone; trust the data. If there is no objective evidence of causation, then always stick to correlation.
  • Confirmation bias: This phenomenon describes the tendency to select and interpret only the data necessary to prove one hypothesis, often ignoring the elements that might disprove it. Even if it's not done on purpose, confirmation bias can represent a real problem, as excluding relevant information can lead to false conclusions and, therefore, bad business decisions. To avoid it, always try to disprove your hypothesis instead of proving it, share your analysis with other team members, and avoid drawing any conclusions before the entire analytical project is finalized.
  • Statistical significance: In short, statistical significance helps analysts understand whether a result is actually meaningful or whether it happened because of a sampling error or pure chance. The level of statistical significance needed may depend on the sample size and the industry being analyzed. In any case, ignoring the significance of a result when it might influence decision-making can be a huge mistake. A minimal significance check is sketched just after this list.
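
Here is the minimal significance check referred to above, assuming SciPy is installed; the two samples of daily conversions are hypothetical.

```python
from scipy import stats  # assumes SciPy is installed

# Hypothetical A/B test: daily conversions under two page designs.
variant_a = [12, 15, 14, 10, 13, 16, 12, 14]
variant_b = [15, 18, 17, 16, 14, 19, 17, 18]

# Two-sample t-test: is the difference in means plausibly due to chance?
t_stat, p_value = stats.ttest_ind(variant_a, variant_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # 0.05 is a common, not universal, threshold
    print("Difference is statistically significant at the 5% level.")
else:
    print("Difference could plausibly be due to sampling noise.")
```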

13. Build a narrative

Now, we’re going to look at how you can bring all of these elements together in a way that will benefit your business - starting with a little something called data storytelling.

The human brain responds incredibly well to strong stories or narratives. Once you’ve cleansed, shaped, and visualized your most invaluable data using various BI dashboard tools , you should strive to tell a story - one with a clear-cut beginning, middle, and end.

By doing so, you will make your analytical efforts more accessible, digestible, and universal, empowering more people within your organization to use your discoveries to their actionable advantage.

14. Consider autonomous technology

Autonomous technologies, such as artificial intelligence (AI) and machine learning (ML), play a significant role in the advancement of understanding how to analyze data more effectively.

Gartner predicts that by the end of this year, 80% of emerging technologies will be developed with AI foundations. This is a testament to the ever-growing power and value of autonomous technologies.

At the moment, these technologies are revolutionizing the analysis industry. Some examples that we mentioned earlier are neural networks, intelligent alarms, and sentiment analysis.

15. Share the load

If you work with the right tools and dashboards, you will be able to present your metrics in a digestible, value-driven format, allowing almost everyone in the organization to connect with and use relevant data to their advantage.

Modern dashboards consolidate data from various sources, providing access to a wealth of insights in one centralized location, no matter if you need to monitor recruitment metrics or generate reports that need to be sent across numerous departments. Moreover, these cutting-edge tools offer access to dashboards from a multitude of devices, meaning that everyone within the business can connect with practical insights remotely - and share the load.

Once everyone is able to work with a data-driven mindset, you will catalyze the success of your business in ways you never thought possible. And when it comes to knowing how to analyze data, this kind of collaborative approach is essential.

16. Data analysis tools

In order to perform high-quality analysis of data, it is fundamental to use tools and software that will ensure the best results. Here we leave you a small summary of four fundamental categories of data analysis tools for your organization.

  • Business Intelligence: BI tools allow you to process significant amounts of data from several sources in any format. Through this, you can not only analyze and monitor your data to extract relevant insights but also create interactive reports and dashboards to visualize your KPIs and use them for your company's good. datapine is an amazing online BI software focused on delivering powerful online analysis features that are accessible to beginner and advanced users alike. In this way, it offers a full-service solution that includes cutting-edge analysis of data, KPI visualization, live dashboards, reporting, and artificial intelligence technologies to predict trends and minimize risk.
  • Statistical analysis: These tools are usually designed for scientists, statisticians, market researchers, and mathematicians, as they allow them to perform complex statistical analyses with methods like regression analysis, predictive analysis, and statistical modeling. A good tool to perform this type of analysis is R-Studio as it offers a powerful data modeling and hypothesis testing feature that can cover both academic and general data analysis. This tool is one of the favorite ones in the industry, due to its capability for data cleaning, data reduction, and performing advanced analysis with several statistical methods. Another relevant tool to mention is SPSS from IBM. The software offers advanced statistical analysis for users of all skill levels. Thanks to a vast library of machine learning algorithms, text analysis, and a hypothesis testing approach it can help your company find relevant insights to drive better decisions. SPSS also works as a cloud service that enables you to run it anywhere.
  • SQL Consoles: SQL is a programming language often used to handle structured data in relational databases. Tools like these are popular among data scientists as they are extremely effective at unlocking the value of these databases. Undoubtedly, one of the most widely used SQL tools on the market is MySQL Workbench . This tool offers several features such as a visual tool for database modeling and monitoring, complete SQL optimization, administration tools, and visual performance dashboards to keep track of KPIs. A small example of querying a database with SQL appears just after this list.
  • Data Visualization: These tools represent your data through charts, graphs, and maps that allow you to find patterns and trends in it. datapine's already mentioned BI platform also offers a wealth of powerful online data visualization tools with several benefits, including: delivering compelling data-driven presentations to share with your entire company, the ability to see your data online from any device wherever you are, an interactive dashboard design feature that enables you to showcase your results in an interactive and understandable way, and online self-service reports that several people can work on simultaneously to enhance team productivity.
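
Here is the small SQL sketch referred to in the list above. It uses Python's built-in sqlite3 module so it runs without any external database; the table and rows are made up.

```python
import sqlite3  # standard library; no external database needed for this sketch

# Tiny illustration of querying structured data with SQL (table is made up).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EU", 120.0), (2, "US", 90.5), (3, "EU", 60.0)],
)

# Aggregate revenue per region, highest first.
for region, total in con.execute(
    "SELECT region, SUM(amount) AS total FROM orders GROUP BY region ORDER BY total DESC"
):
    print(region, total)
con.close()
```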

17. Refine your process constantly 

Last is a step that might seem obvious to some people, but it can be easily ignored if you think you are done. Once you have extracted the needed results, you should always take a retrospective look at your project and think about what you can improve. As you saw throughout this long list of techniques, data analysis is a complex process that requires constant refinement. For this reason, you should always go one step further and keep improving. 

Quality Criteria For Data Analysis

So far we’ve covered a list of methods and techniques that should help you perform efficient data analysis. But how do you measure the quality and validity of your results? This is done with the help of scientific quality criteria. Here we go into a more theoretical area that is critical to understanding the fundamentals of statistical analysis in science. However, you should also be aware of these criteria in a business context, as they will allow you to assess the quality of your results in the correct way. Let’s dig in.

  • Internal validity: The results of a survey are internally valid if they measure what they are supposed to measure and thus provide credible results. In other words, internal validity measures the trustworthiness of the results and how they can be affected by factors such as the research design, operational definitions, how the variables are measured, and more. For instance, imagine you are conducting an interview asking people if they brush their teeth twice a day. While most of them will answer yes, you may notice that their answers simply correspond to what is socially acceptable, which is to brush your teeth at least twice a day. In this case, you can’t be 100% sure whether respondents actually brush their teeth twice a day or just say that they do; therefore, the internal validity of this interview is very low.
  • External validity: Essentially, external validity refers to the extent to which the results of your research can be applied to a broader context. It basically aims to prove that the findings of a study can be applied in the real world. If the research can be applied to other settings, individuals, and times, then the external validity is high. 
  • Reliability : If your research is reliable, it means that it can be reproduced. If your measurement were repeated under the same conditions, it would produce similar results. This means that your measuring instrument consistently produces reliable results. For example, imagine a doctor building a symptoms questionnaire to detect a specific disease in a patient. Then, various other doctors use this questionnaire but end up diagnosing the same patient with a different condition. This means the questionnaire is not reliable in detecting the initial disease. Another important note here is that in order for your research to be reliable, it also needs to be objective. If the results of a study are the same, independent of who assesses them or interprets them, the study can be considered reliable. Let’s see the objectivity criteria in more detail now. 
  • Objectivity: In data science, objectivity means that the researcher needs to stay fully objective during the analysis. The results of a study need to be determined by objective criteria and not by the beliefs, personality, or values of the researcher. Objectivity needs to be ensured when gathering the data; for example, when interviewing individuals, the questions need to be asked in a way that doesn't influence the results. Paired with this, objectivity also needs to be considered when interpreting the data. If different researchers reach the same conclusions, then the study is objective. For this last point, you can set predefined criteria for interpreting the results to ensure all researchers follow the same steps.

The discussed quality criteria cover mostly potential influences in a quantitative context. Analysis in qualitative research has by default additional subjective influences that must be controlled in a different way. Therefore, there are other quality criteria for this kind of research such as credibility, transferability, dependability, and confirmability. You can see each of them more in detail on this resource . 

Data Analysis Limitations & Barriers

Analyzing data is not an easy task. As you’ve seen throughout this post, there are many steps and techniques that you need to apply in order to extract useful information from your research. While a well-performed analysis can bring various benefits to your organization it doesn't come without limitations. In this section, we will discuss some of the main barriers you might encounter when conducting an analysis. Let’s see them more in detail. 

  • Lack of clear goals: No matter how good your data or analysis might be, if you don’t have clear goals or a hypothesis, the process might be worthless. While we mentioned some methods that don’t require a predefined hypothesis, it is always better to enter the analytical process with clear guidelines about what you expect to get out of it, especially in a business context in which data is used to support important strategic decisions.
  • Objectivity: Arguably one of the biggest barriers when it comes to data analysis in research is to stay objective. When trying to prove a hypothesis, researchers might find themselves, intentionally or unintentionally, directing the results toward an outcome that they want. To avoid this, always question your assumptions and avoid confusing facts with opinions. You can also show your findings to a research partner or external person to confirm that your results are objective. 
  • Data representation: A fundamental part of the analytical procedure is the way you represent your data. You can use various graphs and charts to represent your findings, but not all of them will work for all purposes. Choosing the wrong visual can not only damage your analysis but also mislead your audience; therefore, it is important to understand when to use each type of visual depending on your analytical goals. Our complete guide on the types of graphs and charts lists 20 different visuals with examples of when to use them.
  • Flawed correlation : Misleading statistics can significantly damage your research. We’ve already pointed out a few interpretation issues previously in the post, but it is an important barrier that we can't avoid addressing here as well. Flawed correlations occur when two variables appear related to each other but they are not. Confusing correlations with causation can lead to a wrong interpretation of results which can lead to building wrong strategies and loss of resources, therefore, it is very important to identify the different interpretation mistakes and avoid them. 
  • Sample size: A very common barrier to a reliable and efficient analysis process is the sample size. For the results to be trustworthy, the sample size should be representative of what you are analyzing. For example, imagine you have a company of 1000 employees and you ask the question “do you like working here?” to 50 employees, of whom 48 say yes, which means 96%. Now, imagine you ask the same question to all 1000 employees and 960 say yes, which also means 96%. Saying that 96% of employees like working in the company when the sample size was only 50 is not a representative or trustworthy conclusion. The results are far more reliable when you survey a larger sample. A quick margin-of-error sketch follows this list.
  • Privacy concerns: In some cases, data collection can be subjected to privacy regulations. Businesses gather all kinds of information from their customers from purchasing behaviors to addresses and phone numbers. If this falls into the wrong hands due to a breach, it can affect the security and confidentiality of your clients. To avoid this issue, you need to collect only the data that is needed for your research and, if you are using sensitive facts, make it anonymous so customers are protected. The misuse of customer data can severely damage a business's reputation, so it is important to keep an eye on privacy. 
  • Lack of communication between teams : When it comes to performing data analysis on a business level, it is very likely that each department and team will have different goals and strategies. However, they are all working for the same common goal of helping the business run smoothly and keep growing. When teams are not connected and communicating with each other, it can directly affect the way general strategies are built. To avoid these issues, tools such as data dashboards enable teams to stay connected through data in a visually appealing way. 
  • Innumeracy : Businesses are working with data more and more every day. While there are many BI tools available to perform effective analysis, data literacy is still a constant barrier. Not all employees know how to apply analysis techniques or extract insights from them. To prevent this from happening, you can implement different training opportunities that will prepare every relevant user to deal with data. 
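
Here is the quick margin-of-error sketch referred to in the sample size point above. It uses the normal approximation, which is only a rough guide at proportions this close to 1, but it makes the point about sample size.

```python
import math

# Normal-approximation margin of error for a proportion, mirroring the 96%
# "do you like working here?" example above. Rough at proportions this close
# to 1, but it illustrates how sample size drives reliability.
def margin_of_error(p, n, z=1.96):  # z = 1.96 corresponds to ~95% confidence
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 1000):
    moe = margin_of_error(0.96, n)
    print(f"n={n}: 96% +/- {moe * 100:.1f} percentage points")
# n=50 gives roughly +/-5.4 points; n=1000 roughly +/-1.2 points.
```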

Key Data Analysis Skills

As you've learned throughout this lengthy guide, analyzing data is a complex task that requires a lot of knowledge and skill. That said, thanks to the rise of self-service tools, the process is far more accessible and agile than it once was. Regardless, there are still some key skills that are valuable when working with data; we list the most important ones below.

  • Critical and statistical thinking: To successfully analyze data you need to be creative and think outside the box. That might sound like a strange statement considering that data is often tied to facts; however, a great level of critical thinking is required to uncover connections, come up with a valuable hypothesis, and extract conclusions that go a step beyond the surface. This, of course, needs to be complemented by statistical thinking and an understanding of numbers.
  • Data cleaning: Anyone who has ever worked with data will tell you that the cleaning and preparation process accounts for around 80% of a data analyst's work, so the skill is fundamental. Moreover, failing to clean the data adequately can significantly damage the analysis, which can lead to poor decision-making in a business scenario. While there are multiple tools that automate the cleaning process and reduce the possibility of human error, it is still a valuable skill to master.
  • Data visualization: Visuals make the information easier to understand and analyze, not only for professional users but especially for non-technical ones. Having the necessary skills to not only choose the right chart type but know when to apply it correctly is key. This also means being able to design visually compelling charts that make the data exploration process more efficient. 
  • SQL: The Structured Query Language or SQL is a programming language used to communicate with databases. It is fundamental knowledge as it enables you to update, manipulate, and organize data from relational databases which are the most common databases used by companies. It is fairly easy to learn and one of the most valuable skills when it comes to data analysis. 
  • Communication skills: This is a skill that is especially valuable in a business environment. Being able to clearly communicate analytical outcomes to colleagues is incredibly important, especially when the information you are trying to convey is complex for non-technical people. This applies to in-person communication as well as written format, for example, when generating a dashboard or report. While this might be considered a “soft” skill compared to the other ones we mentioned, it should not be ignored as you most likely will need to share analytical findings with others no matter the context. 

Data Analysis In The Big Data Environment

Big data is invaluable to today’s businesses, and by using different methods for data analysis, it’s possible to view your data in a way that can help you turn insight into positive action.

To inspire your efforts and put the importance of big data into context, here are some insights that you should know:

  • By 2026, the big data industry is expected to be worth approximately $273.4 billion.
  • 94% of enterprises say that analyzing data is important for their growth and digital transformation. 
  • Companies that exploit the full potential of their data can increase their operating margins by 60% .
  • We have already discussed the benefits of artificial intelligence throughout this article. The industry's financial impact is expected to grow to $40 billion by 2025.

Data analysis concepts may come in many forms, but fundamentally, any solid methodology will help to make your business more streamlined, cohesive, insightful, and successful than ever before.

Key Takeaways From Data Analysis 

As we reach the end of our data analysis journey, we leave a small summary of the main methods and techniques to perform excellent analysis and grow your business.

17 Essential Types of Data Analysis Methods:

  • Cluster analysis
  • Cohort analysis
  • Regression analysis
  • Factor analysis
  • Neural Networks
  • Data Mining
  • Text analysis
  • Time series analysis
  • Decision trees
  • Conjoint analysis 
  • Correspondence Analysis
  • Multidimensional Scaling 
  • Content analysis 
  • Thematic analysis
  • Narrative analysis 
  • Grounded theory analysis
  • Discourse analysis 

Top 17 Data Analysis Techniques:

  • Collaborate your needs
  • Establish your questions
  • Data democratization
  • Think of data governance 
  • Clean your data
  • Set your KPIs
  • Omit useless data
  • Build a data management roadmap
  • Integrate technology
  • Answer your questions
  • Visualize your data
  • Interpretation of data
  • Consider autonomous technology
  • Build a narrative
  • Share the load
  • Data Analysis tools
  • Refine your process constantly 

We’ve pondered the data analysis definition and drilled down into the practical applications of data-centric analytics, and one thing is clear: by taking measures to arrange your data and make your metrics work for you, it’s possible to transform raw information into action - the kind that will push your business to the next level.

Yes, good data analytics techniques result in enhanced business intelligence (BI). To help you understand this notion in more detail, read our exploration of business intelligence reporting .

And, if you’re ready to perform your own analysis, drill down into your facts and figures while interacting with your data on astonishing visuals, you can try our software for a free, 14-day trial .


Research Methods Guide: Data Analysis


Tools for Analyzing Survey Data

  • R (open source)
  • Stata 
  • DataCracker (free up to 100 responses per survey)
  • SurveyMonkey (free up to 100 responses per survey)

Tools for Analyzing Interview Data

  • AQUAD (open source)
  • NVivo 

Data Analysis and Presentation Techniques that Apply to both Survey and Interview Research

  • Create documentation of the data and the process of data collection.
  • Analyze the data rather than just describing it - use it to tell a story that focuses on answering the research question.
  • Use charts or tables to help the reader understand the data and then highlight the most interesting findings.
  • Don’t get bogged down in the detail - tell the reader about the main themes as they relate to the research question, rather than reporting everything that survey respondents or interviewees said.
  • State that ‘most people said …’ or ‘few people felt …’ rather than giving the number of people who said a particular thing.
  • Use brief quotes where these illustrate a particular point really well.
  • Respect confidentiality - you could attribute a quote to 'a faculty member', ‘a student’, or 'a customer' rather than ‘Dr. Nicholls.'

Survey Data Analysis

  • If you used an online survey, the software will automatically collate the data – you will just need to download the data, for example as a spreadsheet.
  • If you used a paper questionnaire, you will need to manually transfer the responses from the questionnaires into a spreadsheet.  Put each question number as a column heading, and use one row for each person’s answers.  Then assign each possible answer a number or ‘code’.
  • When all the data is present and correct, calculate how many people selected each response.
  • Once you have calculated how many people selected each response, you can set up tables and/or graphs to display the data; a minimal tabulation sketch follows this list.
  • In addition to descriptive statistics that characterize findings from your survey, you can use statistical and analytical reporting techniques if needed.
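
Here is the minimal tabulation sketch referred to above, assuming pandas is available; the coded responses are made-up examples.

```python
import pandas as pd  # assumes pandas is installed

# Sketch of tabulating coded survey responses; the codes are made-up examples.
responses = pd.DataFrame({
    "q1_satisfaction": [1, 2, 2, 3, 2, 1, 3, 2],  # 1=low, 2=medium, 3=high
})

counts = responses["q1_satisfaction"].value_counts().sort_index()
percent = (counts / len(responses) * 100).round(1)
print(pd.DataFrame({"count": counts, "percent": percent}))
```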

Interview Data Analysis

  • Data Reduction and Organization: Try not to feel overwhelmed by the quantity of information collected from interviews - a one-hour interview can generate 20 to 25 pages of single-spaced text. Once you start organizing your fieldwork notes around themes, you can easily identify which parts of your data to use for further analysis. Useful prompts for this include:
  • What were the main issues or themes that struck you in this contact / interviewee?
  • Was there anything else that struck you as salient, interesting, illuminating or important in this contact / interviewee?
  • What information did you get (or fail to get) on each of the target questions you had for this contact / interviewee?
  • Connection of the data: You can connect data around themes and concepts - then you can show how one concept may influence another.
  • Examination of Relationships: Examining relationships is the centerpiece of the analytic process, because it allows you to move from simple description of the people and settings to explanations of why things happened as they did with those people in that setting.

Data Analysis in Quantitative Research


Quantitative data analysis serves as part of an essential process of evidence-making in health and social sciences. It can be adopted for any type of research question and design, whether descriptive, explanatory, or causal. However, compared with its qualitative counterpart, quantitative data analysis has less flexibility. Conducting quantitative data analysis requires prerequisite statistical knowledge and skills. It also requires rigor in the choice of an appropriate analysis model and in the interpretation of the analysis outcomes. Basically, the choice of appropriate analysis techniques is determined by the type of research question and the nature of the data. In addition, different analysis techniques require different assumptions about the data. This chapter provides introductory guidance to assist readers with informed decision-making in choosing the correct analysis models. To this end, it begins with a discussion of the levels of measurement: nominal, ordinal, and scale. Some commonly used analysis techniques in univariate, bivariate, and multivariate data analysis are then presented with practical examples. Example analysis outcomes are produced using SPSS (Statistical Package for the Social Sciences).

Cite this entry:

Jung, Y.M. (2019). Data Analysis in Quantitative Research. In: Liamputtong, P. (eds) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_109


Grad Coach

Qualitative Data Analysis Methods 101:

The “big 6” methods + examples.

By: Kerryn Warren (PhD) | Reviewed By: Eunice Rautenbach (D.Tech) | May 2020 (Updated April 2023)

Qualitative data analysis methods. Wow, that’s a mouthful. 

If you’re new to the world of research, qualitative data analysis can look rather intimidating. So much bulky terminology and so many abstract, fluffy concepts. It certainly can be a minefield!

Don’t worry – in this post, we’ll unpack the most popular analysis methods , one at a time, so that you can approach your analysis with confidence and competence – whether that’s for a dissertation, thesis or really any kind of research project.

Qualitative data analysis methods

What (exactly) is qualitative data analysis?

To understand qualitative data analysis, we need to first understand qualitative data – so let’s step back and ask the question, “what exactly is qualitative data?”.

Qualitative data refers to pretty much any data that’s “not numbers” . In other words, it’s not the stuff you measure using a fixed scale or complex equipment, nor do you analyse it using complex statistics or mathematics.

So, if it’s not numbers, what is it?

Words, you guessed? Well… sometimes, yes. Qualitative data can, and often does, take the form of interview transcripts, documents and open-ended survey responses – but it can also involve the interpretation of images and videos. In other words, qualitative data isn’t just limited to text.

So, how’s that different from quantitative data, you ask?

Simply put, qualitative research focuses on words, descriptions, concepts or ideas – while quantitative research focuses on numbers and statistics . Qualitative research investigates the “softer side” of things to explore and describe , while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them. If you’re keen to learn more about the differences between qual and quant, we’ve got a detailed post over here .

qualitative data analysis vs quantitative data analysis

So, qualitative analysis is easier than quantitative, right?

Not quite. In many ways, qualitative data can be challenging and time-consuming to analyse and interpret. At the end of your data collection phase (which itself takes a lot of time), you’ll likely have many pages of text-based data or hours upon hours of audio to work through. You might also have subtle nuances of interactions or discussions that have danced around in your mind, or that you scribbled down in messy field notes. All of this needs to work its way into your analysis.

Making sense of all of this is no small task and you shouldn’t underestimate it. Long story short – qualitative analysis can be a lot of work! Of course, quantitative analysis is no piece of cake either, but it’s important to recognise that qualitative analysis still requires a significant investment in terms of time and effort.


In this post, we’ll explore qualitative data analysis by looking at some of the most common analysis methods we encounter. We’re not going to cover every possible qualitative method and we’re not going to go into heavy detail – we’re just going to give you the big picture. That said, we will of course include links to loads of extra resources so that you can learn more about whichever analysis method interests you.

Without further delay, let’s get into it.

The “Big 6” Qualitative Analysis Methods 

There are many different types of qualitative data analysis, all of which serve different purposes and have unique strengths and weaknesses. We’ll start by outlining the analysis methods and then we’ll dive into the details for each.

The 6 most popular methods (or at least the ones we see at Grad Coach) are:

  • Content analysis
  • Narrative analysis
  • Discourse analysis
  • Thematic analysis
  • Grounded theory (GT)
  • Interpretive phenomenological analysis (IPA)

Let’s take a look at each of them…

QDA Method #1: Qualitative Content Analysis

Content analysis is possibly the most common and straightforward QDA method. At the simplest level, content analysis is used to evaluate patterns within a piece of content (for example, words, phrases or images) or across multiple pieces of content or sources of communication. For example, a collection of newspaper articles or political speeches.

With content analysis, you could, for instance, identify the frequency with which an idea is shared or spoken about – like the number of times a Kardashian is mentioned on Twitter. Or you could identify patterns of deeper underlying interpretations – for instance, by identifying phrases or words in tourist pamphlets that highlight India as an ancient country.

Because content analysis can be used in such a wide variety of ways, it’s important to go into your analysis with a very specific question and goal, or you’ll get lost in the fog. With content analysis, you’ll group large amounts of text into codes, summarise these into categories, and possibly even tabulate the data to calculate the frequency of certain concepts or variables. Because of this, content analysis provides a small splash of quantitative thinking within a qualitative method.
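If you’d like to see what that “small splash of quantitative thinking” looks like in practice, here’s a minimal sketch in Python. The mini-corpus, the coding frame and the keywords are all hypothetical stand-ins – in a real project, a human coder would assign codes far more carefully than simple keyword matching:

```python
from collections import Counter
import re

# Hypothetical mini-corpus of social media posts
documents = [
    "Did you see what Kim posted today? Wild.",
    "Honestly, Kim and Kourtney are everywhere right now.",
    "No Kardashian content on my feed today, thankfully.",
]

# A toy coding frame: each code maps to keywords that signal it
codes = {
    "kardashian_mention": ["kim", "kourtney", "kardashian"],
}

# Tabulate how often each code appears across the corpus
tally = Counter()
for doc in documents:
    words = re.findall(r"[a-z']+", doc.lower())
    for code, keywords in codes.items():
        tally[code] += sum(words.count(k) for k in keywords)

print(tally)  # Counter({'kardashian_mention': 4})
```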

Naturally, while content analysis is widely useful, it’s not without its drawbacks. One of the main issues with content analysis is that it can be very time-consuming, as it requires lots of reading and re-reading of the texts. Also, because of its multidimensional focus on both qualitative and quantitative aspects, it is sometimes accused of losing important nuances in communication.

Content analysis also tends to concentrate on a very specific timeline and doesn’t take into account what happened before or after that timeline. This isn’t necessarily a bad thing though – just something to be aware of. So, keep these factors in mind if you’re considering content analysis. Every analysis method has its limitations, so don’t be put off by these – just be aware of them! If you’re interested in learning more about content analysis, the video below provides a good starting point.

QDA Method #2: Narrative Analysis 

As the name suggests, narrative analysis is all about listening to people telling stories and analysing what that means. Since stories serve a functional purpose of helping us make sense of the world, we can gain insights into the ways that people deal with and make sense of reality by analysing their stories and the ways they’re told.

You could, for example, use narrative analysis to explore whether how something is being said is important. For instance, the narrative of a prisoner trying to justify their crime could provide insight into their view of the world and the justice system. Similarly, analysing the ways entrepreneurs talk about the struggles in their careers or cancer patients telling stories of hope could provide powerful insights into their mindsets and perspectives. Simply put, narrative analysis is about paying attention to the stories that people tell – and more importantly, the way they tell them.

Of course, the narrative approach has its weaknesses, too. Sample sizes are generally quite small due to the time-consuming process of capturing narratives. Because of this, along with the multitude of social and lifestyle factors which can influence a subject, narrative analysis can be quite difficult to reproduce in subsequent research. This means that it’s difficult to test the findings of some of this research.

Similarly, researcher bias can have a strong influence on the results here, so you need to be particularly careful about the potential biases you can bring into your analysis when using this method. Nevertheless, narrative analysis is still a very useful qualitative analysis method – just keep these limitations in mind and be careful not to draw broad conclusions. If you’re keen to learn more about narrative analysis, the video below provides a great introduction to this qualitative analysis method.

QDA Method #3: Discourse Analysis 

Discourse is simply a fancy word for written or spoken language or debate. So, discourse analysis is all about analysing language within its social context. In other words, analysing language – such as a conversation, a speech, etc – within the culture and society it takes place in. For example, you could analyse how a janitor speaks to a CEO, or how politicians speak about terrorism.

To truly understand these conversations or speeches, the culture and history of those involved in the communication are important factors to consider. For example, a janitor might speak more casually with a CEO in a company that emphasises equality among workers. Similarly, a politician might speak more about terrorism if there was a recent terrorist incident in the country.

So, as you can see, by using discourse analysis, you can identify how culture, history or power dynamics (to name a few) have an effect on the way concepts are spoken about. So, if your research aims and objectives involve understanding culture or power dynamics, discourse analysis can be a powerful method.

Because there are many social influences in terms of how we speak to each other, the potential use of discourse analysis is vast. Of course, this also means it’s important to have a very specific research question (or questions) in mind when analysing your data and looking for patterns and themes, or you might land up going down a winding rabbit hole.

Discourse analysis can also be very time-consuming as you need to sample the data to the point of saturation – in other words, until no new information and insights emerge. But this is, of course, part of what makes discourse analysis such a powerful technique. So, keep these factors in mind when considering this QDA method. Again, if you’re keen to learn more, the video below presents a good starting point.

QDA Method #4: Thematic Analysis

Thematic analysis looks at patterns of meaning in a data set – for example, a set of interviews or focus group transcripts. But what exactly does that… mean? Well, a thematic analysis takes bodies of data (which are often quite large) and groups them according to similarities – in other words, themes . These themes help us make sense of the content and derive meaning from it.

Let’s take a look at an example.

With thematic analysis, you could analyse 100 online reviews of a popular sushi restaurant to find out what patrons think about the place. By reviewing the data, you would then identify the themes that crop up repeatedly within the data – for example, “fresh ingredients” or “friendly wait staff”.
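As a rough illustration of the tallying stage, here’s a toy sketch in Python. The reviews and the theme-to-keyword mapping are entirely hypothetical – identifying themes is an interpretive, human-led process, and keyword matching is only a crude stand-in for it:

```python
# Hypothetical themes spotted while reading sushi restaurant reviews
themes = {
    "fresh ingredients": ["fresh", "freshness"],
    "friendly wait staff": ["friendly", "welcoming", "attentive"],
}

reviews = [
    "The fish was incredibly fresh and the staff so friendly!",
    "Friendly service, but the rice was bland.",
    "Loved the freshness of every roll.",
]

# Count how many reviews touch on each theme (review-level, not word-level)
theme_counts = {
    theme: sum(any(kw in review.lower() for kw in keywords) for review in reviews)
    for theme, keywords in themes.items()
}

print(theme_counts)  # {'fresh ingredients': 2, 'friendly wait staff': 2}
```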

So, as you can see, thematic analysis can be pretty useful for finding out about people’s experiences, views, and opinions. Therefore, if your research aims and objectives involve understanding people’s experience or view of something, thematic analysis can be a great choice.

Since thematic analysis is a bit of an exploratory process, it’s not unusual for your research questions to develop, or even change as you progress through the analysis. While this is somewhat natural in exploratory research, it can also be seen as a disadvantage as it means that data needs to be re-reviewed each time a research question is adjusted. In other words, thematic analysis can be quite time-consuming – but for a good reason. So, keep this in mind if you choose to use thematic analysis for your project and budget extra time for unexpected adjustments.

Thematic analysis takes bodies of data and groups them according to similarities (themes), which help us make sense of the content.

QDA Method #5: Grounded theory (GT) 

Grounded theory is a powerful qualitative analysis method where the intention is to create a new theory (or theories) using the data at hand, through a series of “tests” and “revisions”. Strictly speaking, GT is more a research design type than an analysis method, but we’ve included it here as it’s often referred to as a method.

What’s most important with grounded theory is that you go into the analysis with an open mind and let the data speak for itself – rather than dragging existing hypotheses or theories into your analysis. In other words, your analysis must develop from the ground up (hence the name). 

Let’s look at an example of GT in action.

Assume you’re interested in developing a theory about what factors influence students to watch a YouTube video about qualitative analysis. Using grounded theory, you’d start with this general overarching question about the given population (i.e., graduate students). First, you’d approach a small sample – for example, five graduate students in a department at a university. Ideally, this sample would be reasonably representative of the broader population. You’d interview these students to identify what factors lead them to watch the video.

After analysing the interview data, a general pattern could emerge. For example, you might notice that graduate students are more likely to watch a video about qualitative methods if they are just starting on their dissertation journey, or if they have an upcoming test about research methods.

From here, you’ll look for another small sample – for example, five more graduate students in a different department – and see whether this pattern holds true for them. If not, you’ll look for commonalities and adapt your theory accordingly. As this process continues, the theory would develop. As we mentioned earlier, what’s important with grounded theory is that the theory develops from the data – not from some preconceived idea.

So, what are the drawbacks of grounded theory? Well, some argue that there’s a tricky circularity to grounded theory. For it to work, in principle, you should know as little as possible regarding the research question and population, so that you reduce the bias in your interpretation. However, in many circumstances, it’s also thought to be unwise to approach a research question without knowledge of the current literature. In other words, it’s a bit of a “chicken or the egg” situation.

Regardless, grounded theory remains a popular (and powerful) option. Naturally, it’s a very useful method when you’re researching a topic that is completely new or has very little existing research about it, as it allows you to start from scratch and work your way from the ground up.

Grounded theory is used to create a new theory (or theories) by using the data at hand, as opposed to existing theories and frameworks.

QDA Method #6: Interpretive Phenomenological Analysis (IPA)

Interpretive. Phenomenological. Analysis. IPA. Try saying that three times fast…

Let’s just stick with IPA, okay?

IPA is designed to help you understand the personal experiences of a subject (for example, a person or group of people) concerning a major life event, an experience or a situation. This event or experience is the “phenomenon” that makes up the “P” in IPA. Such phenomena may range from relatively common events – such as motherhood, or being involved in a car accident – to those which are extremely rare – for example, someone’s personal experience in a refugee camp. So, IPA is a great choice if your research involves analysing people’s personal experiences of something that happened to them.

It’s important to remember that IPA is subject-centred. In other words, it’s focused on the experiencer. This means that, while you’ll likely use a coding system to identify commonalities, it’s important not to lose the depth of experience or meaning by trying to reduce everything to codes. Also, keep in mind that since your sample size will generally be very small with IPA, you often won’t be able to draw broad conclusions about the generalisability of your findings. But that’s okay as long as it aligns with your research aims and objectives.

Another thing to be aware of with IPA is personal bias. While researcher bias can creep into all forms of research, self-awareness is critically important with IPA, as it can have a major impact on the results. For example, a researcher who was a victim of a crime himself could insert his own feelings of frustration and anger into the way he interprets the experience of someone who was kidnapped. So, if you’re going to undertake IPA, you need to be very self-aware or you could muddy the analysis.

IPA can help you understand the personal experiences of a person or group concerning a major life event, an experience or a situation.

How to choose the right analysis method

In light of all of the qualitative analysis methods we’ve covered so far, you’re probably asking yourself the question, “How do I choose the right one?”

Much like all the other methodological decisions you’ll need to make, selecting the right qualitative analysis method largely depends on your research aims, objectives and questions. In other words, the best tool for the job depends on what you’re trying to build. For example:

  • Perhaps your research aims to analyse the use of words and what they reveal about the intention of the storyteller and the cultural context of the time.
  • Perhaps your research aims to develop an understanding of the unique personal experiences of people who have experienced a certain event, or
  • Perhaps your research aims to develop insight regarding the influence of a certain culture on its members.

As you can probably see, each of these research aims is distinctly different, and therefore different analysis methods would be suitable for each one. For example, narrative analysis would likely be a good option for the first aim, while grounded theory wouldn’t be as relevant.

It’s also important to remember that each method has its own set of strengths, weaknesses and general limitations. No single analysis method is perfect. So, depending on the nature of your research, it may make sense to adopt more than one method (this is called triangulation). Keep in mind though that this will of course be quite time-consuming.

As we’ve seen, all of the qualitative analysis methods we’ve discussed make use of coding and theme-generating techniques, but the intent and approach of each analysis method differ quite substantially. So, it’s very important to come into your research with a clear intention before you decide which analysis method (or methods) to use.

Start by reviewing your research aims, objectives and research questions to assess what exactly you’re trying to find out – then select a qualitative analysis method that fits. Never pick a method just because you like it or have experience using it – your analysis method (or methods) must align with your broader research aims and objectives.

No single analysis method is perfect, so it can often make sense to adopt more than one method (this is called triangulation).

Let’s recap on QDA methods…

In this post, we looked at six popular qualitative data analysis methods:

  • First, we looked at content analysis, a straightforward method that blends a little bit of quant into a primarily qualitative analysis.
  • Then we looked at narrative analysis, which is about analysing how stories are told.
  • Next up was discourse analysis – which is about analysing conversations and interactions.
  • Then we moved on to thematic analysis – which is about identifying themes and patterns.
  • From there, we explored grounded theory – which is about starting from scratch with a specific question and using the data alone to build a theory in response to that question.
  • And finally, we looked at IPA – which is about understanding people’s unique experiences of a phenomenon.

Of course, these aren’t the only options when it comes to qualitative data analysis, but they’re a great starting point if you’re dipping your toes into qualitative research for the first time.

If you’re still feeling a bit confused, consider our private coaching service, where we hold your hand through the research process to help you develop your best work.



The 7 Most Useful Data Analysis Methods and Techniques


Data analytics is the process of analyzing raw data to draw out meaningful insights. These insights are then used to determine the best course of action.

When is the best time to roll out that marketing campaign? Is the current team structure as effective as it could be? Which customer segments are most likely to purchase your new product?

Ultimately, data analytics is a crucial driver of any successful business strategy. But how do data analysts actually turn raw data into something useful? There are a range of methods and techniques that data analysts use depending on the type of data in question and the kinds of insights they want to uncover.

You can get a hands-on introduction to data analytics in this free short course.

In this post, we’ll explore some of the most useful data analysis techniques. By the end, you’ll have a much clearer idea of how you can transform meaningless data into business intelligence. We’ll cover:

  • What is data analysis and why is it important?
  • What is the difference between qualitative and quantitative data?
  • Regression analysis
  • Monte Carlo simulation
  • Factor analysis
  • Cohort analysis
  • Cluster analysis
  • Time series analysis
  • Sentiment analysis
  • The data analysis process
  • The best tools for data analysis
  •  Key takeaways

The first six methods listed are used for quantitative data, while the last technique applies to qualitative data. We briefly explain the difference between quantitative and qualitative data in section two, but if you want to skip straight to a particular analysis technique, just use the clickable menu.

1. What is data analysis and why is it important?

Data analysis is, put simply, the process of discovering useful information by evaluating data. This is done through a process of inspecting, cleaning, transforming, and modeling data using analytical and statistical tools, which we will explore in detail further along in this article.

Why is data analysis important? Analyzing data effectively helps organizations make business decisions. Nowadays, data is collected by businesses constantly: through surveys, online tracking, online marketing analytics, collected subscription and registration data (think newsletters), and social media monitoring, among other methods.

These data will appear as different structures, including—but not limited to—the following:

Big data

The concept of big data—data that is so large, fast, or complex, that it is difficult or impossible to process using traditional methods—gained momentum in the early 2000s. Then, Doug Laney, an industry analyst, articulated what is now known as the mainstream definition of big data as the three Vs: volume, velocity, and variety.

  • Volume: As mentioned earlier, organizations are collecting data constantly. In the not-too-distant past it would have been a real issue to store, but nowadays storage is cheap and takes up little space.
  • Velocity: Received data needs to be handled in a timely manner. With the growth of the Internet of Things, this can mean these data are coming in constantly, and at an unprecedented speed.
  • Variety: The data being collected and stored by organizations comes in many forms, ranging from structured data—that is, more traditional, numerical data—to unstructured data—think emails, videos, audio, and so on. We’ll cover structured and unstructured data a little further on.

Metadata

This is a form of data that provides information about other data, such as an image. In everyday life you’ll find this by, for example, right-clicking on a file in a folder and selecting “Get Info”, which will show you information such as file size and kind, date of creation, and so on.

Real-time data

This is data that is presented as soon as it is acquired. A good example of this is a stock market ticker, which provides information on the most active stocks in real time.

Machine data

This is data that is produced wholly by machines, without human instruction. An example of this could be call logs automatically generated by your smartphone.

Quantitative and qualitative data

Quantitative data—otherwise known as structured data—may appear as a “traditional” database—that is, with rows and columns. Qualitative data—otherwise known as unstructured data—are the other types of data that don’t fit into rows and columns, which can include text, images, videos and more. We’ll discuss this further in the next section.

2. What is the difference between quantitative and qualitative data?

How you analyze your data depends on the type of data you’re dealing with—quantitative or qualitative. So what’s the difference?

Quantitative data is anything measurable, comprising specific quantities and numbers. Some examples of quantitative data include sales figures, email click-through rates, number of website visitors, and percentage revenue increase. Quantitative data analysis techniques focus on the statistical, mathematical, or numerical analysis of (usually large) datasets. This includes the manipulation of statistical data using computational techniques and algorithms. Quantitative analysis techniques are often used to explain certain phenomena or to make predictions.

Qualitative data cannot be measured objectively, and is therefore open to more subjective interpretation. Some examples of qualitative data include comments left in response to a survey question, things people have said during interviews, tweets and other social media posts, and the text included in product reviews. With qualitative data analysis, the focus is on making sense of unstructured data (such as written text, or transcripts of spoken conversations). Often, qualitative analysis will organize the data into themes—a process which, fortunately, can be automated.

Data analysts work with both quantitative and qualitative data, so it’s important to be familiar with a variety of analysis methods. Let’s take a look at some of the most useful techniques now.

3. Data analysis techniques

Now that we’re familiar with some of the different types of data, let’s focus on the topic at hand: different methods for analyzing data.

a. Regression analysis

Regression analysis is used to estimate the relationship between a set of variables. When conducting any type of regression analysis, you’re looking to see if there’s a correlation between a dependent variable (that’s the variable or outcome you want to measure or predict) and any number of independent variables (factors which may have an impact on the dependent variable). The aim of regression analysis is to estimate how one or more variables might impact the dependent variable, in order to identify trends and patterns. This is especially useful for making predictions and forecasting future trends.

Let’s imagine you work for an ecommerce company and you want to examine the relationship between: (a) how much money is spent on social media marketing, and (b) sales revenue. In this case, sales revenue is your dependent variable—it’s the factor you’re most interested in predicting and boosting. Social media spend is your independent variable; you want to determine whether or not it has an impact on sales and, ultimately, whether it’s worth increasing, decreasing, or keeping the same. Using regression analysis, you’d be able to see if there’s a relationship between the two variables. A positive correlation would imply that the more you spend on social media marketing, the more sales revenue you make. No correlation at all might suggest that social media marketing has no bearing on your sales. Understanding the relationship between these two variables would help you to make informed decisions about the social media budget going forward.

However, it’s important to note that, on their own, regressions can only be used to determine whether or not there is a relationship between a set of variables—they don’t tell you anything about cause and effect. So, while a positive correlation between social media spend and sales revenue may suggest that one impacts the other, it’s impossible to draw definitive conclusions based on this analysis alone.

There are many different types of regression analysis, and the model you use depends on the type of data you have for the dependent variable. For example, your dependent variable might be continuous (i.e. something that can be measured on a continuous scale, such as sales revenue in USD), in which case you’d use a different type of regression analysis than if your dependent variable was categorical in nature (i.e. comprising values that can be categorised into a number of distinct groups based on a certain characteristic, such as customer location by continent). You can learn more about different types of dependent variables and how to choose the right regression analysis in this guide.
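To make this concrete, here’s a minimal sketch of a simple linear regression in Python using SciPy, with entirely made-up monthly figures for the ecommerce example above. A real analysis would check assumptions (linearity, residuals, sample size) before trusting the output:

```python
import numpy as np
from scipy import stats

# Hypothetical monthly data: social media spend (USD) vs sales revenue (USD)
spend = np.array([1000, 2000, 3000, 4000, 5000, 6000])
revenue = np.array([20000, 24000, 31000, 38000, 41000, 49000])

result = stats.linregress(spend, revenue)

print(f"slope: {result.slope:.2f} (revenue per $1 of spend)")
print(f"r-squared: {result.rvalue ** 2:.3f}")

# Project revenue for a proposed $7,000 spend (correlation, not causation!)
print(f"predicted revenue at $7k spend: {result.intercept + result.slope * 7000:,.0f}")
```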

Regression analysis in action: Investigating the relationship between clothing brand Benetton’s advertising expenditure and sales

b. Monte Carlo simulation

When making decisions or taking certain actions, there are a range of different possible outcomes. If you take the bus, you might get stuck in traffic. If you walk, you might get caught in the rain or bump into your chatty neighbor, potentially delaying your journey. In everyday life, we tend to briefly weigh up the pros and cons before deciding which action to take; however, when the stakes are high, it’s essential to calculate, as thoroughly and accurately as possible, all the potential risks and rewards.

Monte Carlo simulation, otherwise known as the Monte Carlo method, is a computerized technique used to generate models of possible outcomes and their probability distributions. It essentially considers a range of possible outcomes and then calculates how likely it is that each particular outcome will be realized. The Monte Carlo method is used by data analysts to conduct advanced risk analysis, allowing them to better forecast what might happen in the future and make decisions accordingly.

So how does Monte Carlo simulation work, and what can it tell us? To run a Monte Carlo simulation, you’ll start with a mathematical model of your data—such as a spreadsheet. Within your spreadsheet, you’ll have one or several outputs that you’re interested in; profit, for example, or number of sales. You’ll also have a number of inputs; these are variables that may impact your output variable. If you’re looking at profit, relevant inputs might include the number of sales, total marketing spend, and employee salaries. If you knew the exact, definitive values of all your input variables, you’d quite easily be able to calculate what profit you’d be left with at the end.

However, when these values are uncertain, a Monte Carlo simulation enables you to calculate all the possible options and their probabilities. What will your profit be if you make 100,000 sales and hire five new employees on a salary of $50,000 each? What is the likelihood of this outcome? What will your profit be if you only make 12,000 sales and hire five new employees? And so on. It does this by replacing all uncertain values with functions which generate random samples from distributions determined by you, and then running a series of calculations and recalculations to produce models of all the possible outcomes and their probability distributions. The Monte Carlo method is one of the most popular techniques for calculating the effect of unpredictable variables on a specific output variable, making it ideal for risk analysis.
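Here’s a compact sketch of that idea in Python with NumPy. The distributions and parameters below are invented for illustration; in a real risk analysis you would estimate them from data or expert judgement:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Hypothetical uncertain inputs, each replaced by a sampling distribution
units_sold  = rng.normal(50_000, 12_000, n_trials)     # demand: mean 50k, sd 12k
unit_cost   = rng.normal(8.0, 1.5, n_trials)           # cost per unit
fixed_costs = rng.uniform(400_000, 600_000, n_trials)  # salaries, rent, etc.
price       = 20.0                                     # fixed sale price per unit

# Recalculate the output (profit) once per trial
profit = units_sold * (price - unit_cost) - fixed_costs

print(f"mean profit: ${profit.mean():,.0f}")
print(f"5th to 95th percentile: ${np.percentile(profit, 5):,.0f} to ${np.percentile(profit, 95):,.0f}")
print(f"probability of a loss: {(profit < 0).mean():.1%}")
```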

Monte Carlo simulation in action: A case study using Monte Carlo simulation for risk analysis

c. Factor analysis

Factor analysis is a technique used to reduce a large number of variables to a smaller number of factors. It works on the basis that multiple separate, observable variables correlate with each other because they are all associated with an underlying construct. This is useful not only because it condenses large datasets into smaller, more manageable samples, but also because it helps to uncover hidden patterns. This allows you to explore concepts that cannot be easily measured or observed—such as wealth, happiness, fitness, or, for a more business-relevant example, customer loyalty and satisfaction.

Let’s imagine you want to get to know your customers better, so you send out a rather long survey comprising one hundred questions. Some of the questions relate to how they feel about your company and product; for example, “Would you recommend us to a friend?” and “How would you rate the overall customer experience?” Other questions ask things like “What is your yearly household income?” and “How much are you willing to spend on skincare each month?”

Once your survey has been sent out and completed by lots of customers, you end up with a large dataset that essentially tells you one hundred different things about each customer (assuming each customer gives one hundred responses). Instead of looking at each of these responses (or variables) individually, you can use factor analysis to group them into factors that belong together—in other words, to relate them to a single underlying construct. In this example, factor analysis works by finding survey items that are strongly correlated. This is known as covariance.

So, if there’s a strong positive correlation between household income and how much they’re willing to spend on skincare each month (i.e. as one increases, so does the other), these items may be grouped together. Together with other variables (survey responses), you may find that they can be reduced to a single factor such as “consumer purchasing power”. Likewise, if a customer experience rating of 10/10 correlates strongly with “yes” responses regarding how likely they are to recommend your product to a friend, these items may be reduced to a single factor such as “customer satisfaction”.

In the end, you have a smaller number of factors rather than hundreds of individual variables. These factors are then taken forward for further analysis, allowing you to learn more about your customers (or any other area you’re interested in exploring).
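As a small illustration, here’s a sketch using scikit-learn’s FactorAnalysis on simulated survey data. The two hidden constructs and the four survey items are invented so that the expected structure is known in advance:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200  # hypothetical respondents

# Two hidden constructs driving the answers
purchasing_power = rng.normal(5, 2, n)
satisfaction = rng.normal(7, 1.5, n)

# Four observed survey items, each a noisy reflection of one construct
responses = np.column_stack([
    purchasing_power + rng.normal(0, 0.5, n),  # household income item
    purchasing_power + rng.normal(0, 0.5, n),  # skincare budget item
    satisfaction + rng.normal(0, 0.5, n),      # customer experience rating
    satisfaction + rng.normal(0, 0.5, n),      # would-recommend item
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(responses)

# Loadings: items 1-2 should load on one factor, items 3-4 on the other
print(np.round(fa.components_, 2))
```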

Factor analysis in action: Using factor analysis to explore customer behavior patterns in Tehran

d. Cohort analysis

Cohort analysis is a data analytics technique that groups users based on a shared characteristic, such as the date they signed up for a service or the product they purchased. Once users are grouped into cohorts, analysts can track their behavior over time to identify trends and patterns.

So what does this mean and why is it useful? Let’s break down the above definition further. A cohort is a group of people who share a common characteristic (or action) during a given time period. Students who enrolled at university in 2020 may be referred to as the 2020 cohort. Customers who purchased something from your online store via the app in the month of December may also be considered a cohort.

With cohort analysis, you’re dividing your customers or users into groups and looking at how these groups behave over time. So, rather than looking at a single, isolated snapshot of all your customers at a given moment in time (with each customer at a different point in their journey), you’re examining your customers’ behavior in the context of the customer lifecycle. As a result, you can start to identify patterns of behavior at various points in the customer journey—say, from their first ever visit to your website, through to email newsletter sign-up, to their first purchase, and so on. As such, cohort analysis is dynamic, allowing you to uncover valuable insights about the customer lifecycle.

This is useful because it allows companies to tailor their service to specific customer segments (or cohorts). Let’s imagine you run a 50% discount campaign in order to attract potential new customers to your website. Once you’ve attracted a group of new customers (a cohort), you’ll want to track whether they actually buy anything and, if they do, whether or not (and how frequently) they make a repeat purchase. With these insights, you’ll start to gain a much better understanding of when this particular cohort might benefit from another discount offer or retargeting ads on social media, for example. Ultimately, cohort analysis allows companies to optimize their service offerings (and marketing) to provide a more targeted, personalized experience. You can learn more about how to run cohort analysis using Google Analytics.
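Here’s a minimal pandas sketch of the mechanics: assign each customer to the cohort of their first purchase month, then count how many are still active at each subsequent month offset. The order log is hypothetical:

```python
import pandas as pd

# Hypothetical order log: one row per purchase
orders = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "c", "d", "d"],
    "date": pd.to_datetime([
        "2024-01-05", "2024-02-11", "2024-01-20", "2024-03-02",
        "2024-02-14", "2024-02-28", "2024-03-15",
    ]),
})
orders["order_month"] = orders["date"].dt.to_period("M")

# Each customer's cohort is the month of their first purchase
orders["cohort"] = orders.groupby("customer")["order_month"].transform("min")
orders["months_since"] = (orders["order_month"] - orders["cohort"]).apply(lambda d: d.n)

# Distinct active customers per cohort, per month offset
retention = (orders.groupby(["cohort", "months_since"])["customer"]
                   .nunique()
                   .unstack(fill_value=0))
print(retention)
```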

Cohort analysis in action: How Ticketmaster used cohort analysis to boost revenue

e. Cluster analysis

Cluster analysis is an exploratory technique that seeks to identify structures within a dataset. The goal of cluster analysis is to sort different data points into groups (or clusters) that are internally homogeneous and externally heterogeneous. This means that data points within a cluster are similar to each other, and dissimilar to data points in another cluster. Clustering is used to gain insight into how data is distributed in a given dataset, or as a preprocessing step for other algorithms.

There are many real-world applications of cluster analysis. In marketing, cluster analysis is commonly used to group a large customer base into distinct segments, allowing for a more targeted approach to advertising and communication. Insurance firms might use cluster analysis to investigate why certain locations are associated with a high number of insurance claims. Another common application is in geology, where experts will use cluster analysis to evaluate which cities are at greatest risk of earthquakes (and thus try to mitigate the risk with protective measures).

It’s important to note that, while cluster analysis may reveal structures within your data, it won’t explain why those structures exist. With that in mind, cluster analysis is a useful starting point for understanding your data and informing further analysis. Clustering algorithms are also used in machine learning—you can learn more about clustering in machine learning in our guide.
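For a feel of how this works in code, here’s a short scikit-learn sketch that segments synthetic customers with k-means, one of the most common clustering algorithms. The two customer groups are deliberately baked into the fake data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic customers: columns are [annual spend, visits per month]
customers = np.vstack([
    rng.normal([200, 2], [40, 0.5], (50, 2)),   # occasional, low-spend group
    rng.normal([1500, 10], [200, 2], (50, 2)),  # frequent, high-spend group
])

# Scale features so spend doesn't dominate the distance calculation
X = StandardScaler().fit_transform(customers)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The first 50 and last 50 customers should land in different clusters
print(kmeans.labels_[:5], kmeans.labels_[-5:])
```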

Cluster analysis in action: Using cluster analysis for customer segmentation—a telecoms case study example

f. Time series analysis

Time series analysis is a statistical technique used to identify trends and cycles over time. Time series data is a sequence of data points which measure the same variable at different points in time (for example, weekly sales figures or monthly email sign-ups). By looking at time-related trends, analysts are able to forecast how the variable of interest may fluctuate in the future.

When conducting time series analysis, the main patterns you’ll be looking out for in your data are:

  • Trends: Stable, linear increases or decreases over an extended time period.
  • Seasonality: Predictable fluctuations in the data due to seasonal factors over a short period of time. For example, you might see a peak in swimwear sales in summer around the same time every year.
  • Cyclic patterns: Unpredictable cycles where the data fluctuates. Cyclical trends are not due to seasonality, but rather, may occur as a result of economic or industry-related conditions.

As you can imagine, the ability to make informed predictions about the future has immense value for business. Time series analysis and forecasting is used across a variety of industries, most commonly for stock market analysis, economic forecasting, and sales forecasting. There are different types of time series models depending on the data you’re using and the outcomes you want to predict. These models are typically classified into three broad types: the autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models. For an in-depth look at time series analysis, refer to our guide.
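Here’s a small sketch of trend and seasonality decomposition using statsmodels, applied to synthetic monthly sales built from a known trend, an annual cycle, and noise:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(7)

# Synthetic monthly sales: upward trend + annual cycle + random noise
months = pd.date_range("2020-01-01", periods=48, freq="MS")
trend = np.linspace(100, 180, 48)
seasonality = 20 * np.sin(2 * np.pi * months.month / 12)
sales = pd.Series(trend + seasonality + rng.normal(0, 5, 48), index=months)

# Split the series into trend, seasonal, and residual components
result = seasonal_decompose(sales, model="additive", period=12)
print(result.seasonal.head(12))      # the repeating annual pattern
print(result.trend.dropna().head())  # the smoothed underlying trend
```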

Time series analysis in action: Developing a time series model to predict jute yarn demand in Bangladesh

g. Sentiment analysis

When you think of data, your mind probably automatically goes to numbers and spreadsheets.

Many companies overlook the value of qualitative data, but in reality, there are untold insights to be gained from what people (especially customers) write and say about you. So how do you go about analyzing textual data?

One highly useful qualitative technique is sentiment analysis, a technique which belongs to the broader category of text analysis—the (usually automated) process of sorting and understanding textual data.

With sentiment analysis, the goal is to interpret and classify the emotions conveyed within textual data. From a business perspective, this allows you to ascertain how your customers feel about various aspects of your brand, product, or service.

There are several different types of sentiment analysis models, each with a slightly different focus. The three main types include:

Fine-grained sentiment analysis

If you want to focus on opinion polarity (i.e. positive, neutral, or negative) in depth, fine-grained sentiment analysis will allow you to do so.

For example, if you wanted to interpret star ratings given by customers, you might use fine-grained sentiment analysis to categorize the various ratings along a scale ranging from very positive to very negative.

Emotion detection

This model often uses complex machine learning algorithms to pick out various emotions from your textual data.

You might use an emotion detection model to identify words associated with happiness, anger, frustration, and excitement, giving you insight into how your customers feel when writing about you or your product on, say, a product review site.

Aspect-based sentiment analysis

This type of analysis allows you to identify what specific aspects the emotions or opinions relate to, such as a certain product feature or a new ad campaign.

If a customer writes that they “find the new Instagram advert so annoying”, your model should detect not only a negative sentiment, but also the object towards which it’s directed.

In a nutshell, sentiment analysis uses various Natural Language Processing (NLP) algorithms and systems which are trained to associate certain inputs (for example, certain words) with certain outputs.

For example, the input “annoying” would be recognized and tagged as “negative”. Sentiment analysis is crucial to understanding how your customers feel about you and your products, for identifying areas for improvement, and even for averting PR disasters in real-time!
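As a toy illustration of that input-to-output mapping, here’s a tiny lexicon-based scorer in Python. Real sentiment systems rely on trained NLP models rather than a five-word dictionary; this sketch only shows the basic idea:

```python
# A toy sentiment lexicon (real systems use trained NLP models)
lexicon = {"annoying": -1, "terrible": -1, "love": 1, "great": 1, "fine": 0}

def classify(text: str) -> str:
    words = text.lower().replace(",", " ").split()
    score = sum(lexicon.get(word, 0) for word in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = [
    "I find the new Instagram advert so annoying",
    "Love the product, great support team",
]
for review in reviews:
    print(f"{classify(review):>8} | {review}")
```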

Sentiment analysis in action: 5 Real-world sentiment analysis case studies

4. The data analysis process

In order to gain meaningful insights from data, data analysts will perform a rigorous step-by-step process. We go over this in detail in our step-by-step guide to the data analysis process—but, to briefly summarize, the data analysis process generally consists of the following phases:

Defining the question

The first step for any data analyst will be to define the objective of the analysis, sometimes called a ‘problem statement’. Essentially, you’re asking a question with regards to a business problem you’re trying to solve. Once you’ve defined this, you’ll then need to determine which data sources will help you answer this question.

Collecting the data

Now that you’ve defined your objective, the next step will be to set up a strategy for collecting and aggregating the appropriate data. Will you be using quantitative (numeric) or qualitative (descriptive) data? Will these data come from first-party, second-party, or third-party sources?

Learn more: Quantitative vs. Qualitative Data: What’s the Difference? 

Cleaning the data

Unfortunately, your collected data isn’t automatically ready for analysis—you’ll have to clean it first. As a data analyst, this phase of the process will take up the most time. During the data cleaning process (sketched in code after the list below), you will likely be:

  • Removing major errors, duplicates, and outliers
  • Removing unwanted data points
  • Structuring the data—that is, fixing typos, layout issues, etc.
  • Filling in major gaps in data
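Here’s what a few of those steps might look like in pandas, on a deliberately messy, hypothetical export:

```python
import pandas as pd

# Hypothetical raw export with typical problems
df = pd.DataFrame({
    "customer": ["Ann", "Ann", "bob ", None, "Cara"],
    "spend": [120.0, 120.0, 80.0, 55.0, 9999.0],  # 9999 looks like an outlier
})

df = df.drop_duplicates()                                # remove duplicate rows
df = df.dropna(subset=["customer"])                      # drop rows missing key fields
df["customer"] = df["customer"].str.strip().str.title()  # fix stray whitespace and casing
df = df[df["spend"] < df["spend"].quantile(0.99)]        # crude outlier filter
print(df)
```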

Analyzing the data

Now that we’ve finished cleaning the data, it’s time to analyze it! Many analysis methods have already been described in this article, and it’s up to you to decide which one will best suit the assigned objective. It may fall under one of the following categories:

  • Descriptive analysis , which identifies what has already happened
  • Diagnostic analysis , which focuses on understanding why something has happened
  • Predictive analysis , which identifies future trends based on historical data
  • Prescriptive analysis , which allows you to make recommendations for the future

Visualizing and sharing your findings

We’re almost at the end of the road! Analyses have been made, insights have been gleaned—all that remains to be done is to share this information with others. This is usually done with a data visualization tool, such as Google Charts or Tableau.

Learn more: 13 of the Most Common Types of Data Visualization

To sum up the process, Will’s explained it all excellently in the following video:

5. The best tools for data analysis

As you can imagine, every phase of the data analysis process requires the data analyst to have a variety of tools under their belt that assist in gaining valuable insights from data. We cover these tools in greater detail in this article, but, in summary, here’s our best-of-the-best list, with links to each product:

The top tools for data analysts

  • Microsoft Excel
  • Jupyter Notebook
  • Apache Spark
  • Microsoft Power BI

6. Key takeaways and further reading

As you can see, there are many different data analysis techniques at your disposal. In order to turn your raw data into actionable insights, it’s important to consider what kind of data you have (is it qualitative or quantitative?) as well as the kinds of insights that will be useful within the given context. In this post, we’ve introduced seven of the most useful data analysis techniques—but there are many more out there to be discovered!

So what now? If you haven’t already, we recommend reading the case studies for each analysis technique discussed in this post (you’ll find a link at the end of each section). For a more hands-on introduction to the kinds of methods and techniques that data analysts use, try out this free introductory data analytics short course. In the meantime, you might also want to read the following:

  • The Best Online Data Analytics Courses for 2024
  • What Is Time Series Data and How Is It Analyzed?
  • What is Spatial Analysis?
QuestionPro

Qualitative Data Analysis: What is it, Methods + Examples

Explore qualitative data analysis with diverse methods and real-world examples. Uncover the nuances of human experiences with this guide.

In a world rich with information and narrative, understanding the deeper layers of human experiences requires a unique vision that goes beyond numbers and figures. This is where the power of qualitative data analysis comes to light.

In this blog, we’ll learn about qualitative data analysis, explore its methods, and provide real-life examples showcasing its power in uncovering insights.

What is Qualitative Data Analysis?

Qualitative data analysis is a systematic process of examining non-numerical data to extract meaning, patterns, and insights.

In contrast to quantitative analysis, which focuses on numbers and statistical metrics, qualitative analysis focuses on the qualitative aspects of data, such as text, images, audio, and videos. It seeks to understand every aspect of human experiences, perceptions, and behaviors by examining the data’s richness.

Companies frequently conduct this analysis on customer feedback. You can collect qualitative data from reviews, complaints, chat messages, interactions with support centers, customer interviews, case notes, or even social media comments. This kind of data holds the key to understanding customer sentiments and preferences in a way that goes beyond mere numbers.

Importance of Qualitative Data Analysis

Qualitative data analysis plays a crucial role in your research and decision-making process across various disciplines. Let’s explore some key reasons that underline the significance of this analysis:

In-Depth Understanding

It enables you to explore complex and nuanced aspects of a phenomenon, delving into the ‘how’ and ‘why’ questions. This method provides you with a deeper understanding of human behavior, experiences, and contexts that quantitative approaches might not capture fully.

Contextual Insight

You can use this analysis to give context to numerical data. It will help you understand the circumstances and conditions that influence participants’ thoughts, feelings, and actions. This contextual insight becomes essential for generating comprehensive explanations.

Theory Development

You can generate or refine hypotheses via qualitative data analysis. As you analyze the data attentively, you can form hypotheses, concepts, and frameworks that will drive your future research and contribute to theoretical advances.

Participant Perspectives

When performing qualitative research, you can highlight participant voices and opinions. This approach is especially useful for understanding marginalized or underrepresented people, as it allows them to communicate their experiences and points of view.

Exploratory Research

The analysis is frequently used at the exploratory stage of your project. It assists you in identifying important variables, developing research questions, and designing quantitative studies that will follow.

Types of Qualitative Data

When conducting qualitative research, you can use several qualitative data collection methods, and you will come across many sorts of qualitative data, each of which can provide you with unique insights into your study topic. These data types add new views and angles to your understanding and analysis.

Interviews and Focus Groups

Interviews and focus groups will be among your key methods for gathering qualitative data. Interviews are one-on-one talks in which participants can freely share their thoughts, experiences, and opinions.

Focus groups, on the other hand, are discussions in which members interact with one another, resulting in dynamic exchanges of ideas. Both methods provide rich qualitative data and direct access to participant perspectives.

Observations and Field Notes

Observations and field notes are another useful sort of qualitative data. You can immerse yourself in the research environment through direct observation, carefully documenting behaviors, interactions, and contextual factors.

These observations will be recorded in your field notes, providing a complete picture of the environment and the behaviors you’re researching. This data type is especially important for comprehending behavior in their natural setting.

Textual and Visual Data

Textual and visual data include a wide range of resources that can be qualitatively analyzed. Documents, written narratives, and transcripts from various sources, such as interviews or speeches, are examples of textual data.

Photographs, films, and even artwork provide a visual layer to your research. These forms of data allow you to investigate what is spoken and the underlying emotions, details, and symbols expressed by language or pictures.

When to Choose Qualitative Data Analysis over Quantitative Data Analysis

As you begin your research journey, understanding why qualitative data analysis is important will guide your approach to making sense of complex events. Analyzing qualitative data provides insights that complement quantitative methodologies, giving you a broader understanding of your study topic.

It is critical to know when to use qualitative analysis instead of quantitative procedures. Qualitative data analysis is usually preferable when:

  • Complexity Reigns: When your research questions involve deep human experiences, motivations, or emotions, qualitative research excels at revealing these complexities.
  • Exploration is Key: Qualitative analysis is ideal for exploratory research. It will assist you in understanding a new or poorly understood topic before formulating quantitative hypotheses.
  • Context Matters: If you want to understand how context affects behaviors or results, qualitative data analysis provides the depth needed to grasp these relationships.
  • Unanticipated Findings: When your study provides surprising new viewpoints or ideas, qualitative analysis helps you to delve deeply into these emerging themes.
  • Subjective Interpretation is Vital: When it comes to understanding people’s subjective experiences and interpretations, qualitative data analysis is the way to go.

You can make informed decisions regarding the right approach for your research objectives if you understand the importance of qualitative analysis and recognize the situations where it shines.

Qualitative Data Analysis Methods and Examples

Exploring various qualitative data analysis methods will give you a versatile toolkit for making sense of your research findings. Once the data has been collected, you can choose from several analysis methods based on your research objectives and the type of data you’ve collected.

There are five main methods for analyzing qualitative data. Each method takes a distinct approach to identifying patterns, themes, and insights within your qualitative data. They are:

Method 1: Content Analysis

Content analysis is a methodical technique for analyzing textual or visual data in a structured manner. In this method, you will categorize qualitative data by splitting it into manageable units and manually coding each one.

As you go, you’ll notice recurring codes and patterns that allow you to draw conclusions about the content. This method is very beneficial for detecting common ideas, concepts, or themes in your data without losing the context.

Steps to Do Content Analysis

Follow these steps when conducting content analysis:

  • Collect and Immerse: Begin by collecting the necessary textual or visual data. Immerse yourself in this data to fully understand its content, context, and complexities.
  • Assign Codes and Categories: Assign codes to relevant data sections that systematically represent major ideas or themes. Arrange comparable codes into groups that cover the major themes.
  • Analyze and Interpret: Develop a structured framework from the categories and codes. Then, evaluate the data in the context of your research question, investigate relationships between categories, discover patterns, and draw meaning from these connections.

Benefits & Challenges

There are various advantages to using content analysis:

  • Structured Approach: It offers a systematic approach to dealing with large data sets and ensures consistency throughout the research.
  • Objective Insights: This method promotes objectivity, which helps to reduce potential biases in your study.
  • Pattern Discovery: Content analysis can help uncover hidden trends, themes, and patterns that are not always obvious.
  • Versatility: You can apply content analysis to various data formats, including text, internet content, images, etc.

However, keep in mind the challenges that arise:

  • Subjectivity: Even with the best efforts, some bias may remain in coding and interpretation.
  • Complexity: Analyzing huge data sets requires time and great attention to detail.
  • Contextual Nuances: Content analysis may not capture all of the contextual richness that qualitative data analysis highlights.

Example of Content Analysis

Suppose you’re conducting market research and looking at customer feedback on a product. As you collect relevant data and analyze feedback, you’ll see repeating codes like “price,” “quality,” “customer service,” and “features.” These codes are organized into categories such as “positive reviews,” “negative reviews,” and “suggestions for improvement.”

According to your findings, themes such as “price” and “customer service” stand out and show that pricing and customer service greatly impact customer satisfaction. This example highlights the power of content analysis for obtaining significant insights from large textual data collections.
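
To make the tallying step concrete, here is a minimal Python sketch of the counting behind this example. The feedback snippets and keyword codebook are invented for illustration; real content analysis relies on careful manual coding, with keyword matching serving only as a first pass.

```python
from collections import Counter

# Hypothetical customer feedback snippets; in practice these would be the
# collected responses you immersed yourself in during the first step.
feedback = [
    "Great quality but the price is too high",
    "Customer service was slow to respond",
    "Love the features, fair price",
]

# Minimal codebook mapping codes to indicator keywords. Manual coding is
# far richer than keyword matching; this only illustrates the counting.
codebook = {
    "price": ["price", "cost"],
    "quality": ["quality"],
    "customer service": ["service", "support"],
    "features": ["feature"],
}

counts = Counter()
for text in feedback:
    lowered = text.lower()
    for code, keywords in codebook.items():
        if any(k in lowered for k in keywords):
            counts[code] += 1

# Codes sorted by frequency highlight which themes dominate the feedback.
print(counts.most_common())
```

Sorting codes by frequency is what surfaces themes such as “price” and “customer service” as the dominant drivers in the example above.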

Method 2: Thematic Analysis

Thematic analysis is a well-structured procedure for identifying and analyzing recurring themes in your data. As you become more engaged in the data, you’ll generate codes or short labels representing key concepts. These codes are then organized into themes, providing a consistent framework for organizing and comprehending the substance of the data.

The analysis allows you to organize complex narratives and perspectives into meaningful categories, which will allow you to identify connections and patterns that may not be visible at first.

Steps to Do Thematic Analysis

Follow these steps when conducting a thematic analysis:

  • Code: Start by thoroughly examining the data and assigning initial codes that identify notable segments.
  • Group: Combine related codes to construct initial themes.
  • Analyze and Report: Analyze the data within each theme to derive relevant insights. Organize the themes into a coherent structure and report your findings, along with data extracts that represent each theme.

Thematic analysis has various benefits:

  • Structured Exploration: It is a method for identifying patterns and themes in complex qualitative data.
  • Comprehensive Understanding: Thematic analysis promotes an in-depth understanding of the complexities and meanings in the data.
  • Application Flexibility: The method can be adapted to various research situations and data types.

However, challenges may arise, such as:

  • Interpretive Nature: Interpretation is central to thematic analysis, so managing researcher bias is critical.
  • Time-consuming: The analysis can be time-consuming, especially with large data sets.
  • Subjectivity: The selection of codes and themes can be subjective.

Example of Thematic Analysis

Assume you’re conducting a thematic analysis on job satisfaction interviews. Following your immersion in the data, you assign initial codes such as “work-life balance,” “career growth,” and “colleague relationships.” As you organize these codes, you’ll notice themes develop, such as “Factors Influencing Job Satisfaction” and “Impact on Work Engagement.”

Further investigation reveals the tales and experiences included within these themes and provides insights into how various elements influence job satisfaction. This example demonstrates how thematic analysis can reveal meaningful patterns and insights in qualitative data.
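
If you track your coding in a simple data structure, the roll-up from codes to themes can be automated. The sketch below assumes the manual coding has already happened; the interview labels and code-to-theme assignments are hypothetical.

```python
from collections import defaultdict

# Hypothetical code-to-theme assignments from the job satisfaction
# example; both the interview labels and the mapping are invented.
code_to_theme = {
    "work-life balance": "Factors Influencing Job Satisfaction",
    "career growth": "Factors Influencing Job Satisfaction",
    "colleague relationships": "Impact on Work Engagement",
}

# Codes assigned to each transcript during initial (manual) coding.
coded_interviews = {
    "interview_01": ["work-life balance", "career growth"],
    "interview_02": ["colleague relationships", "work-life balance"],
}

# Roll codes up into themes to see which interviews support each theme.
theme_support = defaultdict(set)
for interview, codes in coded_interviews.items():
    for code in codes:
        theme_support[code_to_theme[code]].add(interview)

for theme, interviews in sorted(theme_support.items()):
    print(f"{theme}: {sorted(interviews)}")
```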

Method 3: Narrative Analysis

Narrative analysis focuses on the stories that people share. You’ll examine the narratives in your data, looking at how stories are constructed and the meanings they convey. This method is excellent for learning how people make sense of their experiences through storytelling.

Steps to Do Narrative Analysis

The following steps are involved in narrative analysis:

  • Gather and Analyze: Start by collecting narratives, such as first-person stories, interviews, or written accounts. Analyze the stories, focusing on plot, feelings, and characters.
  • Find Themes: Look for recurring themes or patterns across narratives. Consider the similarities and differences between these themes and personal experiences.
  • Interpret and Extract Insights: Contextualize the narratives within their larger setting. Acknowledge the subjective nature of each narrative and analyze the narrator’s voice and style. Extract insights by diving into the emotions, motivations, and implications communicated by the stories.

There are various advantages to narrative analysis:

  • Deep Exploration: It lets you look deeply into people’s personal experiences and perspectives.
  • Human-Centered: This method prioritizes the human perspective, allowing individuals to express themselves.

However, difficulties may arise, such as:

  • Interpretive Complexity: Analyzing narratives requires dealing with the complexities of meaning and interpretation.
  • Time-consuming: Because of the richness and complexity of narratives, working with them can be time-consuming.

Example of Narrative Analysis

Assume you’re conducting narrative analysis on refugee interviews. As you read the stories, you’ll notice common themes of resilience, loss, and hope. The narratives provide insight into the obstacles that refugees face, their strengths, and the dreams that guide them.

The analysis can provide a deeper insight into the refugees’ experiences and the broader social context they navigate by examining the narratives’ emotional subtleties and underlying meanings. This example highlights how narrative analysis can reveal important insights into human stories.

Method 4: Grounded Theory Analysis

Grounded theory analysis is an iterative and systematic approach that allows you to create theories directly from data without being limited by pre-existing hypotheses. With an open mind, you collect data and generate early codes and labels that capture essential ideas or concepts within the data.

As you progress, you refine these codes and increasingly connect them, eventually developing a theory based on the data. Grounded theory analysis is a dynamic process for developing new insights and hypotheses based on details in your data.

Steps to Do Grounded Theory Analysis

Grounded theory analysis requires the following steps:

  • Initial Coding: First, immerse yourself in the data, producing initial codes that represent major concepts or patterns.
  • Categorize and Connect: Using axial coding, organize the initial codes, which establish relationships and connections between topics.
  • Build the Theory: Focus on creating a core category that connects the codes and themes. Regularly refine the theory by comparing and integrating new data, ensuring that it evolves organically from the data.

Grounded theory analysis has various benefits:

  • Theory Generation: It provides a one-of-a-kind opportunity to generate hypotheses straight from data and promotes new insights.
  • In-depth Understanding: The analysis allows you to deeply analyze the data and reveal complex relationships and patterns.
  • Flexible Process: This method is customizable and ongoing, which allows you to enhance your research as you collect additional data.

However, challenges might arise with:

  • Time and Resources: Because grounded theory analysis is a continuous process, it requires a large commitment of time and resources.
  • Theoretical Development: Creating a grounded theory involves a thorough understanding of qualitative data analysis software and theoretical concepts.
  • Interpretation of Complexity: Interpreting and incorporating a newly developed theory into existing literature can be intellectually hard.

Example of Grounded Theory Analysis

Assume you’re performing a grounded theory analysis on workplace collaboration interviews. As you open code the data, you will discover notions such as “communication barriers,” “team dynamics,” and “leadership roles.” Axial coding demonstrates links between these notions, emphasizing the significance of efficient communication in developing collaboration.

You then create the core category, “Integrated Communication Strategies,” through selective coding, which unifies the emerging themes.

This theory-driven category serves as the framework for understanding how numerous aspects contribute to effective team collaboration. This example shows how grounded theory analysis allows you to generate a theory directly from the inherent nature of the data.

Method 5: Discourse Analysis

Discourse analysis focuses on language and communication. You’ll look at how language produces meaning and how it reflects power relations, identities, and cultural influences. This method examines not only what is said but how it is said: the words, phrasing, and larger context of communication.

The analysis is particularly valuable when investigating power dynamics, identities, and cultural influences encoded in language. By evaluating the language used in your data, you can identify underlying assumptions, cultural standards, and how individuals negotiate meaning through communication.

Steps to Do Discourse Analysis

Conducting discourse analysis entails the following steps:

  • Select Discourse: For analysis, choose language-based data such as texts, speeches, or media content.
  • Analyze Language: Immerse yourself in the discourse, examining language choices, metaphors, and underlying assumptions.
  • Discover Patterns: Recognize the discourse’s recurring themes, ideologies, and power dynamics. To fully understand the effects of these patterns, put them in their larger context.

There are various advantages of using discourse analysis:

  • Understanding Language: It provides an extensive understanding of how language builds meaning and influences perceptions.
  • Uncovering Power Dynamics: The analysis reveals how power dynamics appear via language.
  • Cultural Insights: This method identifies cultural norms, beliefs, and ideologies stored in communication.

However, the following challenges may arise:

  • Complexity of Interpretation: Language analysis involves navigating multiple levels of nuance and interpretation.
  • Subjectivity: Interpretation can be subjective, so controlling researcher bias is important.
  • Time-Intensive: Discourse analysis can take a lot of time because it requires careful, close study of language.

Example of Discourse Analysis

Consider doing discourse analysis on media coverage of a political event. You notice repeating linguistic patterns in news articles that depict the event as a conflict between opposing parties. Through deconstruction, you can expose how this framing supports particular ideologies and power relations.

You can illustrate how language choices influence public perceptions and contribute to building the narrative around the event by analyzing the speech within the broader political and social context. This example shows how discourse analysis can reveal hidden power dynamics and cultural influences on communication.

How to Do Qualitative Data Analysis with the QuestionPro Research Suite

QuestionPro is a popular survey and research platform that offers tools for collecting and analyzing qualitative and quantitative data. Follow these general steps for conducting qualitative data analysis using the QuestionPro Research Suite:

  • Collect Qualitative Data: Set up your survey to capture qualitative responses. It might involve open-ended questions, text boxes, or comment sections where participants can provide detailed responses.
  • Export Qualitative Responses: Export the responses once you’ve collected qualitative data through your survey. QuestionPro typically allows you to export survey data in various formats, such as Excel or CSV.
  • Prepare Data for Analysis: Review the exported data and clean it if necessary. Remove irrelevant or duplicate entries to ensure your data is ready for analysis.
  • Code and Categorize Responses: Segment and label data, letting new patterns emerge naturally, then develop categories through axial coding to structure the analysis (see the sketch after this list).
  • Identify Themes: Analyze the coded responses to identify recurring themes, patterns, and insights. Look for similarities and differences in participants’ responses.
  • Generate Reports and Visualizations: Utilize the reporting features of QuestionPro to create visualizations, charts, and graphs that help communicate the themes and findings from your qualitative research.
  • Interpret and Draw Conclusions: Interpret the themes and patterns you’ve identified in the qualitative data. Consider how these findings answer your research questions or provide insights into your study topic.
  • Integrate with Quantitative Data (if applicable): If you’re also conducting quantitative research using QuestionPro, consider integrating your qualitative findings with quantitative results to provide a more comprehensive understanding.
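
As a rough illustration of the export, coding, and theme-identification steps above, here is a hedged Python sketch. The file name survey_export.csv and the column open_ended_response are assumptions, not QuestionPro’s actual export schema, and keyword-based coding is only an aid before manual review.

```python
import pandas as pd

# Hypothetical export: a CSV of survey responses downloaded from the
# platform. File and column names are assumptions for this sketch.
responses = pd.read_csv("survey_export.csv")["open_ended_response"].dropna()

# A minimal keyword-based codebook. Real qualitative coding is done by
# hand (or in dedicated QDA software); keyword matching only surfaces
# candidate codes for manual review.
codebook = {
    "price": ["price", "cost", "expensive"],
    "quality": ["quality", "durable", "broke"],
    "customer service": ["support", "service", "helpful"],
}

def assign_codes(text):
    """Return every code whose keywords appear in the response."""
    lowered = str(text).lower()
    return [code for code, words in codebook.items()
            if any(w in lowered for w in words)]

coded = responses.apply(assign_codes)

# Code frequencies across all responses: a starting point for spotting
# recurring themes before deeper interpretation.
print(coded.explode().value_counts())
```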

Qualitative data analysis is vital in uncovering various human experiences, views, and stories. If you’re ready to transform your research journey and apply the power of qualitative analysis, now is the moment to do it. Book a demo with QuestionPro today and begin your journey of exploration.


Research-Methodology

Data Analysis

The methodology chapter of your dissertation should include a discussion of your data analysis methods. You need to explain briefly how you will analyze the primary data you collect using the methods described in this chapter.

There are differences between qualitative data analysis and quantitative data analysis . In qualitative research using interviews, focus groups, experiments, and the like, data analysis involves identifying common patterns within the responses and critically analyzing them in order to achieve the research aims and objectives.

Data analysis for quantitative studies, on the other hand, involves critical analysis and interpretation of figures and numbers, and attempts to find the rationale behind the main findings. Comparing primary research findings to the findings of the literature review is critically important for both types of studies, qualitative and quantitative.

In the absence of primary data collection, data analysis methods can involve discussing common patterns, as well as controversies, within secondary data directly related to the research area.


John Dudovskiy


U.S. Food and Drug Administration


Real-World Data Analysis of Adverse Events Attributable to Large Joint Arthroplasty Implants

2023 FDA Science Forum

Background:

Although various adverse events have reportedly been associated with metal implants, their clinical manifestations and biological underpinnings remain unclear. We employed a comprehensive analysis using real-world data (RWD) from electronic health records (EHR) to explore arthroplasty implant-related adverse outcomes with respect to device/patient characteristics.

This research aims to: 1) outline the scope, frequency, and underlying nature of clinically relevant adverse outcomes potentially attributable to arthroplasty implants; 2) explore pre-implantation risk factors and post-implantation complications likely associated with arthroplasty implant reactivity; and 3) develop device-oriented RWD analysis/visualization algorithms.

This research focused on large joint arthroplasty and utilized an EHR dataset of ~27,000 patients who had an arthroplasty encounter (2016-2019), collated by Loopback Analytics LLC for FDA. Cohorts with hip, knee, or shoulder arthroplasty were established using standardized ICD-10 codes. Comorbidity analysis with respect to the implantation time was performed in subjects with Revision and known arthroplasty-related Adverse Outcomes (AO+Rev) versus those without these outcomes (Control). Inter-cohort differences were assessed using the chi-square test with odds ratios, relative risk ratios, and multivariate regression. Time-to-event analyses using the Kaplan-Meier approach, log-rank test, and Cox proportional hazards regression were applied to evaluate inter-cohort differences in pre-selected conditions representing potential implant-related immune/inflammatory responses. LASSO regression modelling was conducted as an unsupervised assessment of diagnoses that may predict AO+Rev. The co-occurrence of and correlations between diagnosis pairs were assessed and visualized by network analysis; a comorbidity score was introduced to quantify the correlations pertaining to diagnoses that may represent arthroplasty implant reactivity. Hierarchical clustering and correlation heatmaps were applied to visualize the intergroup differences in AO+Rev vs. Control comorbidity patterns and relationships between diagnoses of interest.
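
For readers less familiar with these techniques, the sketch below shows the general shape of such an analysis in Python with scipy and lifelines, using invented counts and event times; it is not the study’s actual algorithm or dataset.

```python
import numpy as np
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical 2x2 comorbidity table: rows are cohorts (AO+Rev, Control),
# columns are diagnosis present/absent. All counts are invented.
table = np.array([[120, 880],    # AO+Rev: with / without diagnosis
                  [150, 2850]])  # Control: with / without diagnosis

chi2, p, dof, _ = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi-square={chi2:.1f}, p={p:.3g}, OR={odds_ratio:.2f}")

# Hypothetical time-to-event data: months from implantation to the
# diagnosis of interest, with an event indicator (True = diagnosed).
rng = np.random.default_rng(0)
t_ao, e_ao = rng.exponential(24, 200), rng.random(200) < 0.6
t_ctl, e_ctl = rng.exponential(36, 200), rng.random(200) < 0.4

# Kaplan-Meier estimate of time to diagnosis in the AO+Rev cohort.
kmf = KaplanMeierFitter()
kmf.fit(t_ao, event_observed=e_ao, label="AO+Rev")
print(f"median time to event: {kmf.median_survival_time_:.1f} months")

# Log-rank test for an inter-cohort difference in time to diagnosis.
res = logrank_test(t_ao, t_ctl, event_observed_A=e_ao, event_observed_B=e_ctl)
print(f"log-rank p={res.p_value:.3g}")
```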

Compared to Controls, the AO+Rev cohort showed distinct likelihoods of different diagnoses that potentially represent arthroplasty-related underlying patient conditions (pre-implantation) or underrecognized complications (post-implantation), including some allergic and immune/inflammatory conditions. Different RWD analysis/visualization approaches with the respective results will be illustrated.

Conclusion:

The developed RWD algorithm can be applied to provide insights into the risk factors and complications pertaining to various arthroplasty implants, thereby leading to a more predictive evaluation of implant safety in the real-world setting.



Published on 18.4.2024 in Vol 11 (2024)

Time-Varying Network Models for the Temporal Dynamics of Depressive Symptomatology in Patients With Depressive Disorders: Secondary Analysis of Longitudinal Observational Data

Authors of this article:


Björn Sebastian Siepe1, MSc; Christian Sander2,3, PhD; Martin Schultze4, Prof Dr; Andreas Kliem5, PhD; Sascha Ludwig6, MSc; Ulrich Hegerl7,8, Prof Dr; Hanna Reich2,8, PhD

1 Psychological Methods Lab, Department of Psychology, University of Marburg, Marburg, Germany

2 German Depression Foundation, Leipzig, Germany

3 Department of Psychiatry and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany

4 Department of Psychology, Goethe University, Frankfurt, Germany

5 Adesso SE, Dortmund, Germany

6 Institute for Applied Informatics, University Leipzig, Leipzig, Germany

7 Department for Psychiatry, Psychosomatics and Psychotherapy, Goethe University, Frankfurt, Germany

8 Depression Research Center of the German Depression Foundation, Department for Psychiatry, Psychosomatics and Psychotherapy, Goethe University, Frankfurt, Germany

Corresponding Author:

  • Björn Sebastian Siepe , MSc
  • Psychological Methods Lab
  • Department of Psychology
  • University of Marburg
  • Gutenbergstraße 18
  • Marburg , 35032
  • Phone: 49 6421 28 23616
  • Email: [email protected]


Published on 17.4.2024 in Vol 26 (2024)

Digital Interventions for Recreational Cannabis Use Among Young Adults: Systematic Review, Meta-Analysis, and Behavior Change Technique Analysis of Randomized Controlled Studies

Authors of this article:


  • José Côté1,2,3, RN, PhD;
  • Gabrielle Chicoine3,4, RN, PhD;
  • Billy Vinette1,3, RN, MSN;
  • Patricia Auger2,3, MSc;
  • Geneviève Rouleau3,5,6, RN, PhD;
  • Guillaume Fontaine7,8,9, RN, PhD;
  • Didier Jutras-Aswad2,10, MSc, MD

1 Faculty of Nursing, Université de Montréal, Montreal, QC, Canada

2 Research Centre of the Centre Hospitalier de l’Université de Montréal, Montreal, QC, Canada

3 Research Chair in Innovative Nursing Practices, Montreal, QC, Canada

4 Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Toronto, ON, Canada

5 Department of Nursing, Université du Québec en Outaouais, Saint-Jérôme, QC, Canada

6 Women's College Hospital Institute for Health System Solutions and Virtual Care, Women's College Hospital, Toronto, ON, Canada

7 Ingram School of Nursing, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada

8 Centre for Clinical Epidemiology, Lady Davis Institute for Medical Research, Sir Mortimer B. Davis Jewish General Hospital, Montreal, QC, Canada

9 Kirby Institute, University of New South Wales, Sydney, Australia

10 Department of Psychiatry and Addictology, Faculty of Medicine, Université de Montréal, Montreal, QC, Canada

Corresponding Author:

José Côté, RN, PhD

Research Centre of the Centre Hospitalier de l’Université de Montréal

850 Saint-Denis

Montreal, QC, H2X 0A9

Phone: 1 514 890 8000

Email: [email protected]

Background: The high prevalence of cannabis use among young adults poses substantial global health concerns due to the associated acute and long-term health and psychosocial risks. Digital modalities, including websites, digital platforms, and mobile apps, have emerged as promising tools to enhance the accessibility and availability of evidence-based cannabis use interventions for young adults. However, existing reviews do not consider young adults specifically, combine cannabis-related outcomes with those of many other substances in their meta-analytical results, and do not solely target interventions for cannabis use.

Objective: We aimed to evaluate the effectiveness and active ingredients of digital interventions designed specifically for cannabis use among young adults living in the community.

Methods: We conducted a systematic search of 7 databases for empirical studies published between database inception and February 13, 2023, assessing the following outcomes: cannabis use (frequency, quantity, or both) and cannabis-related negative consequences. The reference lists of included studies were consulted, and forward citation searching was also conducted. We included randomized studies assessing web- or mobile-based interventions that included a comparator or control group. Studies were excluded if they targeted other substance use (eg, alcohol), did not report cannabis use separately as an outcome, did not include young adults (aged 16-35 y), had unpublished data, were delivered via teleconference through mobile phones and computers or in a hospital-based setting, or involved people with mental health disorders or substance use disorders or dependence. Data were independently extracted by 2 reviewers using a pilot-tested extraction form. Authors were contacted to clarify study details and obtain additional data. The characteristics of the included studies, study participants, digital interventions, and their comparators were summarized. Meta-analysis results were combined using a random-effects model and pooled as standardized mean differences.

Results: Of 6606 unique records, 19 (0.29%) were included (n=6710 participants). Half (9/19, 47%) of these articles reported an intervention effect on cannabis use frequency. The digital interventions included in the review were mostly web-based. A total of 184 behavior change techniques were identified across the interventions (range 5-19), and feedback on behavior was the most frequently used (17/19, 89%). Digital interventions for young adults reduced cannabis use frequency at the 3-month follow-up compared to control conditions (including passive and active controls) by −6.79 days of use in the previous month (95% CI −9.59 to −4.00; P <.001).

Conclusions: Our results indicate the potential of digital interventions to reduce cannabis use in young adults but raise important questions about what optimal exposure dose could be more effective, both in terms of intervention duration and frequency. Further high-quality research is still needed to investigate the effects of digital interventions on cannabis use among young adults.

Trial Registration: PROSPERO CRD42020196959; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=196959

Introduction

Cannabis use among young adults is recognized as a public health concern.

Young adulthood (typically the ages of 18-30 y) is a critical developmental stage characterized by a peak prevalence of substance use [ 1 , 2 ]. Worldwide, cannabis is a substance frequently used for nonmedical purposes due in part to its high availability in some regions and enhanced product variety and potency [ 3 , 4 ]. The prevalence of cannabis use (CU) among young adults is high [ 5 , 6 ], and its rates have risen in recent decades [ 7 ]. In North America and Oceania, the estimated past-year prevalence of CU is ≥25% among young adults [ 8 , 9 ].

While the vast majority of cannabis users do not experience severe problems from their use [ 4 ], the high prevalence of CU among young adults poses substantial global health concerns due to the associated acute and long-term health and psychosocial risks [ 10 , 11 ]. These include impairment of cognitive function, memory, and psychomotor skills during acute intoxication; increased engagement in behaviors with a potential for injury and fatality (eg, driving under the influence); socioeconomic problems; and diminished social functioning [ 4 , 12 - 14 ]. Importantly, an extensive body of literature reveals that subgroups engaging in higher-risk use, such as intensive or repeated use, are more prone to severe and chronic consequences, including physical ailments (eg, respiratory illness and reproductive dysfunction), mental health disorders (eg, psychosis, depression, and suicidal ideation or attempts), and the potential development of CU disorder [ 4 , 15 - 17 ].

Interventions to Reduce Public Health Impact of Young Adult CU

Given the increased prevalence of lifetime and daily CU among young adults and the potential negative impact of higher-risk CU, various prevention and intervention programs have been implemented to help users reduce or cease their CU. These programs primarily target young adults regardless of their CU status [ 2 , 18 ]. In this context, many health care organizations and international expert panels have developed evidence-based lower-risk CU guidelines to promote safer CU and intervention options to help reduce risks of adverse health outcomes from nonmedical CU [ 4 , 16 , 17 , 19 ]. Lower-risk guidance-oriented interventions for CU are based on concepts of health promotion [ 20 - 22 ] and health behavior change [ 23 - 26 ] and on other similar harm reduction interventions implemented in other areas of population health (eg, lower-risk drinking guidelines, supervised consumption sites and services, and sexual health) [ 27 , 28 ]. These interventions primarily aim to raise awareness of negative mental, physical, and social cannabis-related consequences to modify individual-level behavior-related risk factors.

Meta-analyses have shown that face-to-face prevention and treatment interventions are generally effective in reducing CU in young adults [ 18 , 29 - 32 ]. However, as the proportion of professional help seeking for CU concerns among young adults remains low (approximately 15%) [ 33 , 34 ], alternative strategies that consider the limited capacities and access-related barriers of traditional face-to-face prevention and treatment facilities are needed. Digital interventions, including websites, digital platforms, and mobile apps, have emerged as promising tools to enhance the accessibility and availability of evidence-based programs for young adult cannabis users. These interventions address barriers such as long-distance travel, concerns about confidentiality, stigma associated with seeking treatment, and the cost of traditional treatments [ 35 - 37 ]. By overcoming these barriers, digital interventions have the potential to have a stronger public health impact [ 18 , 38 ].

State of Knowledge of Digital Interventions for CU and Young Adults

The literature regarding digital interventions for substance use has grown rapidly in the past decade, as evidenced by several systematic reviews and meta-analyses of randomized controlled trial (RCT) studies on the efficacy or effectiveness of these interventions in preventing or reducing harmful substance use [ 2 , 39 - 41 ]. However, these reviews do not focus on young adults specifically. In addition, they combine CU-related outcomes with those of many other substances in their meta-analytical results. Finally, they do not target CU interventions exclusively.

In total, 4 systematic reviews and meta-analyses of digital interventions for CU among young people have reported mixed results [ 42 - 45 ]. In their systematic review (10 studies of 5 prevention and 5 treatment interventions up to 2012), Tait et al [ 44 ] concluded that digital interventions effectively reduced CU among adolescents and adults at the posttreatment time point. Olmos et al [ 43 ] reached a similar conclusion in their meta-analysis of 9 RCT studies (2 prevention and 7 treatment interventions). In their review, Hoch et al [ 42 ] reported evidence of small effects at the 3-month follow-up based on 4 RCTs of brief motivational interventions and cognitive behavioral therapy (CBT) delivered on the web. In another systematic review and meta-analysis, Beneria et al [ 45 ] found that web-based CU interventions did not significantly reduce consumption. However, these authors indicated that the programs tested varied significantly across the studies considered and that statistical heterogeneity was attributable to the inclusion of studies of programs targeting more than one substance (eg, alcohol and cannabis) and both adolescents and young adults. Beneria et al [ 45 ] recommend that future work “establish the effectiveness of the newer generation of interventions as well as the key ingredients” of effective digital interventions addressing CU by young people. This is of particular importance because behavior change interventions tend to be complex as they consist of multiple interactive components [ 46 ].

Behavior change interventions refer to “coordinated sets of activities designed to change specified behavior patterns” [ 47 ]. Their interacting active ingredients can be conceptualized as behavior change techniques (BCTs) [ 48 ]. BCTs are specific and irreducible. Each BCT has its own individual label and definition, which can be used when designing and reporting complex interventions and as a nomenclature system when coding interventions for their content [ 47 ]. The Behavior Change Technique Taxonomy version 1 (BCTTv1) [ 48 , 49 ] was developed to provide a shared, standardized terminology for characterizing complex behavior change interventions and their active ingredients. Several systematic reviews with meta-regressions that used the BCTTv1 have found interventions with certain BCTs to be more effective than those without [ 50 - 53 ]. A better understanding of the BCTs used in digital interventions for young adult cannabis users would help not only to establish the key ingredients of such interventions but also develop and evaluate effective interventions.

In the absence of any systematic review of the effectiveness and active ingredients of digital interventions designed specifically for CU among community-living young adults, we set out to achieve the following:

  • conduct a comprehensive review of digital interventions for preventing, reducing, or ceasing CU among community-living young adults,
  • describe the active ingredients (ie, BCTs) in these interventions from the perspective of behavior change science, and
  • analyze the effectiveness of these interventions on CU outcomes.

Protocol Registration

We followed the Cochrane Handbook for Systematic Reviews of Interventions [ 54 ] in designing this systematic review and meta-analysis and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines in reporting our findings (see Multimedia Appendix 1 [ 55 ] for the complete PRISMA checklist). This review was registered in PROSPERO (CRD42020196959).

Search Strategy

The search strategy was designed by a health information specialist together with the research team and peer reviewed by another senior information specialist before execution using Peer Review of Electronic Search Strategies for systematic reviews [ 56 ]. The search strategy revolved around three concepts:

  • CU (eg, “cannabis,” “marijuana,” and “hashish”)
  • Digital interventions (eg, “telehealth,” “website,” “mobile applications,” and “computer”)
  • Young adults (eg, “emerging adults” and “students”)

The strategy was initially implemented on March 18, 2020, and again on October 13, 2021, and February 13, 2023. The full, detailed search strategies for each database are presented in Multimedia Appendix 2 .

Information Sources

We searched 7 electronic databases of published literature: CINAHL Complete, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Embase, MEDLINE, PubMed, and PsycINFO. No publication date filters or language restrictions were applied. A combination of free-text keywords and Medical Subject Headings was tailored to the conventions of each database for optimal electronic searching. The research team also manually screened the reference lists of the included articles and the bibliographies of existing systematic reviews [ 18 , 31 , 42 - 45 ] to identify additional relevant studies (snowballing). Finally, a forward citation tracking procedure (ie, searching for articles that cited the included studies) was carried out in Google Scholar.

Inclusion Criteria

The population, intervention, comparison, outcome, and study design process is presented in Multimedia Appendix 3 . The inclusion criteria were as follows: (1) original research articles published in peer-reviewed journals; (2) use of an experimental study design (eg, RCT, cluster RCT, or pilot RCT); (3) studies evaluating the effectiveness (or efficacy) of digital interventions designed specifically to prevent, reduce, or cease CU as well as promote CU self-management or address cannabis-related harm and having CU as an outcome measure; (4) studies targeting young adults, including active and nonactive cannabis users; (5) controls consisting of cannabis users and nonusers not under substance use treatment, assigned to comparator, waitlist, or delayed-treatment groups that were offered another type of intervention (eg, pharmacotherapy or psychosocial) different from the one being investigated, or participants assessed only for CU; and (6) quantitative CU outcomes (frequency and quantity) or cannabis abstinence. Given the availability of numerous CU screening and assessment tools with adequate psychometric properties and the absence of a gold standard in this regard [ 57 ], any instrument capturing aspects of CU was considered. CU outcome measures could be subjective (eg, self-reported number of CU days or joints in the previous 3 months) or objective (eg, drug screening test). CU had to be measured before the intervention (baseline) and at least once after.

Digital CU interventions were defined as web- or mobile-based interventions that included one or more activities (eg, self-directed or interactive psychoeducation or therapy, personalized feedback, peer-to-peer contact, and patient-to-expert communication) aimed at changing CU [ 58 ]. Mobile-based interventions were defined as interventions delivered via mobile phone through SMS text message, multimedia messaging service (ie, SMS text messages that include multimedia content, such as pictures, videos, or emojis), or mobile apps, whereas web-based interventions (eg, websites and digital platforms) were defined as interventions designed to be accessed on the web (ie, the internet), mainly via computers. Interventions could include self-directed and web-based interventions with human support. We defined young adults as aged 16 to 35 years and included students and nonstudents. While young adulthood is typically defined as covering the ages of 18 to 30 years [ 59 ], we broadened the range given that the age of majority and legal age to purchase cannabis differs across countries and jurisdictions. This was also in line with the age range targeted by several digital CU interventions (college or university students or emerging adults aged 15-24 years) [ 31 , 45 ]. Given the language expertise of the research team members and the available resources, only English- and French-language articles were retained.

Exclusion Criteria

Knowledge synthesis articles, study protocols, and discussion papers or editorials were excluded, as were articles with cross-sectional, cohort, case study or report, pretest-posttest, quasi-experimental, or qualitative designs. Mixed methods designs were included only if the quantitative component was an RCT. We excluded studies if (1) use of substances other than cannabis (eg, alcohol, opioids, or stimulants) was the focus of the digital intervention (though studies that included polysubstance users were retained if CU was assessed and reported separately); (2) CU was not reported separately as an outcome or only attitudes or beliefs regarding, knowledge of, intention to reduce, or readiness or motivation to change CU was measured; and (3) the data reported were unpublished (eg, conferences and dissertations). Studies of traditional face-to-face therapy delivered via teleconference on mobile phones and computers or in a hospital-based setting and informational campaigns (eg, web-based poster presentations or pamphlets) were excluded as well. Studies with samples with a maximum age of <15 years and a minimum age of >35 years were also excluded. Finally, we excluded studies that focused exclusively on people with a mental health disorder or substance use disorder or dependence or on adolescents owing to the particular health care needs of these populations, which may differ from those of young adults [ 1 ].

Data Collection

Selection of Studies

Duplicates were removed from the literature search results in EndNote (version X9.3.3; Clarivate Analytics) using the Bramer method for deduplication of database search results for systematic reviews [ 60 ]. The remaining records were uploaded to Covidence (Veritas Health Innovation), a web-based systematic review management system. A reviewer guide was developed that included screening questions and a detailed description of each inclusion and exclusion criterion based on PICO (population, intervention, comparator, and outcome), and a calibration exercise was performed before each stage of the selection process to maximize consistency between reviewers. Titles and abstracts of studies flagged for possible inclusion were screened first by 2 independent reviewers (GC, BV, PA, and GR; 2 per article) against the eligibility criteria (stage 1). Articles deemed eligible for full-text review were then retrieved and screened for inclusion (stage 2). Full texts were assessed in detail against the eligibility criteria again by 2 reviewers independently. Disagreements between reviewers were resolved through consensus or by consulting a third reviewer.

Data Extraction Process

In total, 2 reviewers (GC, BV, PA, GR, and GF; 2 per article) independently extracted relevant data (or informal evidence) using a data extraction form developed specifically for this review and integrated into Covidence. The form was pilot-tested on 2 randomly selected studies and refined accordingly. Data pertaining to the following domains were extracted from the included studies: (1) Study characteristics included information on the first and corresponding authors, publication year, country of origin, aims and hypotheses, study period, design (including details on randomization and blinding), follow-up times, data collection methods, and types of statistical analysis. (2) Participant characteristics included study target population, participant inclusion and exclusion criteria, sex or gender, mean age, and sample sizes at each data collection time point. (3) Intervention characteristics, for which the research team developed a matrix inspired by the template for intervention description and replication 12-item checklist [ 61 ] to extract informal evidence (ie, intervention descriptions) from the included studies under the headings name of intervention, purpose, underpinning theory of design elements, treatment approach, type of technology (ie, web or mobile) and software used, delivery format (ie, self-directed, human involvement, or both), provider characteristics (if applicable), intervention duration (ie, length of treatment and number of sessions or modules), material and procedures (ie, tools or activities offered, resources provided, and psychoeducational content), tailoring, and unplanned modifications. (4) Comparator characteristics were details of the control or comparison group or groups, including nature (passive vs active), number of groups or clusters (if applicable), type and length of the intervention (if applicable), and number of participants at each data collection time point. (5) Outcome variables, including the primary outcome variable examined in this systematic review, that is, the mean difference in CU frequency before and after the intervention and between the experimental and control or comparison groups. When possible, we examined continuous variables, including CU frequency means and SDs at the baseline and follow-up time points, and standardized regression coefficients (ie, β coefficients and associated 95% CIs). The secondary outcomes examined included other CU outcome variables (eg, quantity of cannabis used and abstinence) and cannabis-related negative consequences (or problems). Details on outcome variables (ie, definition, data time points, and missing data) and measurements (ie, instruments, measurement units, and scales) were also extracted.

In addition, data on user engagement and use of the digital intervention and study attrition rates (ie, dropouts and loss to follow-up) were extracted. When articles had missing data, we contacted the corresponding authors via email (2 attempts were made over a 2-month period) to obtain missing information. Disagreements over the extracted data were limited and resolved through discussion.

Data Synthesis Methods

Descriptive Synthesis

The characteristics of the included studies, study participants, interventions, and comparators were summarized in narrative and table formats. The template for intervention description and replication 12-item checklist [ 61 ] was used to summarize and organize intervention characteristics and assess to what extent the interventions were appropriately described in the included articles. As not all studies had usable data for meta-analysis purposes and because of heterogeneity, we summarized the main findings (ie, intervention effects) of the included studies in narrative and table formats for each outcome of interest in this review.

The BCTs used in the digital interventions were identified from the descriptions of the interventions (ie, experimental groups) provided in the articles as well as any supplementary material and previously published research protocols. A BCT was defined as “an observable, replicable, and irreducible component of an intervention designed to alter or redirect causal processes that regulate behavior” [ 48 ]. The target behavior in this review was the cessation or reduction of CU by young adults. BCTs were identified and coded using the BCTTv1 [ 48 , 49 ], a taxonomy of 93 BCTs organized into 16 hierarchical thematic clusters or categories. Applying the BCTTv1 in a systematic review allows for the comparison and synthesis of evidence across studies in a structured manner. This analysis allows for the identification of the explicit mechanisms underlying the reported behavior change induced by interventions, successful or not, and, thus, avoids making implicit assumptions about what works [ 62 ].

BCT coding was performed by 2 reviewers independently—BV coded all studies, and GC and GF coded a subset of the studies. All reviewers completed web-based training on the BCTTv1, and GF is an experienced implementation scientist who had used the BCTTv1 in prior work [ 63 - 65 ]. The descriptions of the interventions in the articles were read line by line and analyzed for the clear presence of BCTs using the guidelines developed by Michie et al [ 48 ]. For each article, the BCTs identified were documented and categorized using supporting textual evidence. They were coded only once per article regardless of how many times they came up in the text. Disagreements about including a BCT were resolved through discussion. If there was uncertainty about whether a BCT was present, it was coded as absent. Excel (Microsoft Corp) was used to compare the reviewers’ independent BCT coding and generate an overall descriptive synthesis of the BCTs identified. The BCTs were summarized by study and BCT cluster.
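
Because each reviewer produces a set of BCT labels per study, the comparison step reduces to simple set operations. The sketch below uses hypothetical codings for one study and is only an illustration of how independent codings might be reconciled, not the authors’ actual workbook.

```python
# Hypothetical BCT coding by two independent reviewers for one study;
# the specific labels are invented for illustration.
coder_a = {"Feedback on behavior", "Goal setting (behavior)", "Self-monitoring"}
coder_b = {"Feedback on behavior", "Goal setting (behavior)", "Social support"}

agreed = coder_a & coder_b
disputed = coder_a ^ coder_b  # symmetric difference: flagged for discussion

# Simple positive agreement over the union of identified BCTs.
agreement = len(agreed) / len(coder_a | coder_b)
print(f"Agreed: {sorted(agreed)}")
print(f"To resolve by discussion: {sorted(disputed)}")
print(f"Agreement: {agreement:.0%}")
```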

Statistical Analysis

Meta-analyses were conducted to estimate the size of the effect of the digital interventions for young adult CU on outcomes of interest at the posttreatment and follow-up assessments compared with control or alternative intervention conditions. The outcome variables considered were (1) CU frequency and other CU outcome variables (eg, quantity of cannabis used and abstinence) at baseline and the posttreatment time point or follow-up measured using standardized instruments of self-reported CU (eg, the timeline followback [TLFB] method) [ 66 ] and (2) cannabis-related negative consequences measured using standardized instruments (eg, the Marijuana Problems Scale) [ 67 ].

Under our systematic review protocol, ≥2 studies were needed for a meta-analysis. On the basis of previous systematic reviews and meta-analyses in the field of digital CU interventions [ 31 , 42 - 45 ], we expected between-study heterogeneity regarding outcome assessment. To minimize heterogeneity, we chose to pool studies with similar outcomes of interest based on four criteria: (1) definition of outcome (eg, CU frequency, quantity consumed, and abstinence), (2) type of outcome variable (eg, days of CU in the previous 90 days, days high per week in the previous 30 days, and number of CU events in the previous month) and measure (ie, instruments or scales), (3) use of validated instruments, and (4) posttreatment or follow-up time points (eg, 2 weeks or 1 month after the baseline or 3, 6, and 12 months after the baseline).

Only articles that reported sufficient statistics to compute a valid effect size with 95% CIs were included in the meta-analyses. In the case of articles that were not independent (ie, more than one published article reporting data from the same clinical trial), only 1 was included, and it was represented only once in the meta-analysis for a given outcome variable regardless of whether the data used to compute the effect size were extracted from the original paper or a secondary analysis paper. We made sure that the independence of the studies included in the meta-analysis of each outcome was respected. In the case of studies that had more than one comparator, we used the effect size for each comparison between the intervention and control groups.

Meta-analyses were conducted only for mean differences based on the change from baseline in CU frequency at 3 months after the baseline as measured using the number of self-reported days of use in the previous month. As the true value of the estimated effect size for outcome variables might vary across different trials and samples, we used a random-effects model given that the studies retained did not have identical target populations. The random-effects model incorporates between-study variation in the study weights and estimated effect size [ 68 ]. In addition, statistical heterogeneity across studies was assessed using I 2 , which measures the proportion of heterogeneity to the total observed dispersion; 25% was considered low, 50% was considered moderate, and 75% was considered high [ 69 ]. Because only 3 studies were included in the meta-analysis [ 70 - 72 ], publication bias could not be assessed. All analyses were completed using Stata (version 18; StataCorp) [ 73 ].
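
To illustrate the mechanics of the random-effects pooling and the I² statistic described here, the following is a minimal DerSimonian-Laird sketch in Python; the three effect sizes and variances are invented and do not reproduce the review’s data.

```python
import numpy as np

# Hypothetical per-study effect sizes (mean difference in days of cannabis
# use in the previous month at 3-month follow-up) and their variances.
effects = np.array([-7.5, -4.2, -8.9])
variances = np.array([4.0, 2.5, 6.3])

# Fixed-effect (inverse-variance) weights and pooled estimate, needed as
# an intermediate step for the DerSimonian-Laird estimator.
w_fixed = 1.0 / variances
pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)

# Cochran's Q and the I^2 heterogeneity statistic.
q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100  # % of dispersion from heterogeneity

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau_squared = max(0.0, (q - df) / c)

# Random-effects weights incorporate tau^2, widening the interval.
w_random = 1.0 / (variances + tau_squared)
pooled_random = np.sum(w_random * effects) / np.sum(w_random)
se_random = np.sqrt(1.0 / np.sum(w_random))
ci = (pooled_random - 1.96 * se_random, pooled_random + 1.96 * se_random)

print(f"Pooled effect: {pooled_random:.2f} days (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
print(f"I^2 = {i_squared:.0f}%")
```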

Risk-of-Bias Assessment

The risk of bias (RoB) of the included RCTs was assessed using the Cochrane RoB 2 tool at the outcome level [ 74 ]. Each distinct risk domain (ie, randomization process, deviations from the intended intervention, missing outcome data, measurement of the outcome, and selection of the reported results) was assessed as “low,” “some concerns,” or “high” based on the RoB 2 criteria. In total, 2 reviewers (GC and BV) conducted the assessments independently. Disagreements were discussed, and if not resolved consensually by the 2, the matter was left for a third reviewer (GF) to settle. The assessments were summarized by risk domain and outcome and converted into figures using the RoB visualization tool robvis [ 75 ].

Search Results

The database search generated a total of 13,232 citations, of which 7822 (59.11%) were from the initial search on March 18, 2020, and 2805 (21.2%) and 2605 (19.69%) were from the updates on October 13, 2021, and February 13, 2023, respectively. Figure 1 presents the PRISMA study flow diagram [ 76 ]. Of the 6606 unique records, 6484 (98.15%) were excluded based on title and abstract screening. Full texts of the remaining 1.85% (122/6606) of the records were examined, as were those of 25 more reports found through hand searching. Of these 147 records, 128 (87.1%) were excluded after 3 rounds of full-text screening. Of these 128 records, 39 (30.5%) were excluded for not being empirical research articles (eg, research protocols). Another 28.1% (36/128) were excluded for not meeting our definition of digital CU intervention. The remaining records were excluded for reasons that occurred with a frequency of ≤14%, including young adults not being the target population and the study not meeting our study design criteria (ie, RCT, cluster RCT, or pilot RCT). Excluded studies and reasons for exclusion are listed in Multimedia Appendix 4 . Finally, 19 articles detailing the results of 19 original studies were included.


Description of Studies

Study Characteristics

Multimedia Appendix 5 [ 70 - 72 , 77 - 92 ] describes the general characteristics of the 19 included studies. The studies were published between 2010 and 2023, with 58% (11/19) published in 2018 or later. A total of 53% (10/19) of the studies were conducted in the United States [ 77 - 86 ], 11% (2/19) were conducted in Canada [ 87 , 88 ], 11% (2/19) were conducted in Australia [ 71 , 89 ], 11% (2/19) were conducted in Germany [ 72 , 90 ], 11% (2/19) were conducted in Switzerland [ 70 , 91 ], and 5% (1/19) were conducted in Sweden [ 92 ]. A total of 79% (15/19) were RCTs [ 70 - 72 , 77 , 79 , 81 - 83 , 86 - 92 ], and 21% (4/19) were pilot RCTs [ 78 , 80 , 84 , 85 ].

Participant Characteristics

The studies enrolled a total of 6710 participants—3229 (48.1%) in the experimental groups, 3358 (50%) in the control groups, and the remaining 123 (1.8%) from 1 study [ 82 ] where participant allocation to the intervention condition was not reported. Baseline sample sizes ranged from 49 [ 81 ] to 1292 [ 72 ] (mean 352.89, SD 289.50), as shown in Multimedia Appendix 5 . Participant mean ages ranged from 18.03 (SD 0.31) [ 79 ] to 35.3 (SD 12.6) years [ 88 ], and the proportion of participants who identified as female ranged from 24.7% [ 91 ] to 84.1% [ 80 ].

Of the 19 included studies, 10 (53%) targeted adults aged ≥18 years, of which 7 (70%) studies focused on adults who had engaged in past-month CU [ 70 , 71 , 80 , 84 , 85 , 90 , 91 ], 2 (20%) studies included adults who wished to reduce or cease CU [ 72 , 89 ], and 1 (10%) study focused on noncollege adults with a moderate risk associated with CU [ 88 ]. Sinadinovic et al [ 92 ] targeted young adults aged ≥16 years who had used cannabis at least once a week in the previous 6 months. The remaining 8 studies targeted college or university students (aged ≥17 y) specifically, of which 7 (88%) studies focused solely on students who reported using cannabis [ 78 , 79 , 81 - 83 , 86 , 87 ] and 1 (12%) study focused solely on students who did not report past-month CU (ie, abstainers) [ 77 ].

Intervention Characteristics

The 19 included studies assessed 9 different digital interventions: (1) 5 (26%) evaluated Marijuana eCHECKUP TO GO (e-TOKE), a commercially available electronic intervention used at colleges throughout the United States and Canada [ 77 , 78 , 81 - 83 ]; (2) 2 (11%) examined the internationally known CANreduce program [ 70 , 91 ]; (3) 2 (11%) evaluated the German Quit the Shit program [ 72 , 90 ]; (4) 2 (11%) assessed a social media–delivered, physical activity–focused cannabis intervention [ 84 , 85 ]; (5) 1 (5%) investigated the Swedish Cannabishjälpen intervention [ 92 ]; (6) 1 (5%) evaluated the Australian Grassessment: Evaluate Your Use of Cannabis website program [ 89 ]; (7) 1 (5%) assessed the Canadian Ma réussite, mon choix intervention [ 87 ]; (8) 1 (5%) examined the Australian Reduce Your Use: How to Break the Cannabis Habit program [ 71 ]; and (9) 4 (21%) each evaluated a distinct unnamed intervention described as a personalized feedback intervention (PFI) [ 79 , 80 , 86 , 88 ]. Detailed information regarding the characteristics of all interventions as reported in each included study is provided in Multimedia Appendix 6 [ 70 - 72 , 77 - 113 ] and summarized in the following paragraphs.

In several studies (8/19, 42%), the interventions were designed to support cannabis users in reducing or ceasing their consumption [ 70 , 72 , 80 , 87 , 89 - 92 ]. In 37% (7/19) of the studies, the interventions aimed at reducing both CU and cannabis-related consequences [ 79 , 81 - 85 , 88 ]. Other interventions focused on helping college students think carefully about the decision to use cannabis [ 77 , 78 ] and on reducing either cannabis-related problems among undergraduate students [ 86 ] or symptoms associated with CU disorder in young adults [ 71 ].

In 26% (5/19) of the studies, theory was used to inform intervention design along with a clear rationale for theory use. Of these 5 articles, only 1 (20%) [ 87 ] reported using a single theory of behavior change, the theory of planned behavior [ 114 ]. A total of 21% (4/19) of the studies selected only constructs of theories (or models) for their intervention design. Of these 4 studies, 2 (50%) evaluated the same intervention [ 72 , 90 ], which focused on principles of self-regulation and self-control theory [ 93 ]; 1 (25%) [ 70 ] used the concept of adherence-focused guidance enhancement based on the supportive accountability model of guidance [ 94 ]; and 1 (25%) [ 71 ] reported that intervention design was guided by the concept of self-behavioral management.

The strategies (or approaches) used in the delivery of the digital interventions were discussed in greater detail in 84% (16/19) of the articles [ 70 - 72 , 79 - 81 , 83 - 92 ]. Many of these articles (9/19, 47%) reported using a combination of approaches based on CBT or motivational interviewing (MI) [ 70 , 71 , 79 , 83 - 85 , 90 - 92 ]. PFIs were also often mentioned as an approach informing intervention delivery [ 71 , 79 , 80 , 86 - 88 ].

More than half (13/19, 68%) of all the digital interventions were asynchronous and based on a self-guided approach without support from a counselor or therapist. The study by Côté et al [ 87 ] evaluated the efficacy of a web-based tailored intervention focused on reinforcing a positive attitude toward and a sense of control over cannabis abstinence through psychoeducational messages delivered by a credible character in short video clips and personalized reinforcement messages. Lee et al [ 79 ] evaluated a brief, web-based personalized feedback selective intervention based on the PFI approach pioneered by Marlatt et al [ 95 ] for alcohol use prevention and on the MI approach described by Miller and Rollnick [ 96 ]. Similarly, Rooke et al [ 71 ] combined principles of MI and CBT to develop a web-based intervention delivered via web modules, which were informed by previous automated feedback interventions targeting substance use. The study by Copeland et al [ 89 ] assessed the short-term effectiveness of Grassessment: Evaluate Your Use of Cannabis, a brief web-based, self-complete intervention based on motivational enhancement therapy that included personalized feedback messages and psychoeducational material. In the studies by Buckner et al [ 80 ], Cunningham et al [ 88 ], and Walukevich-Dienst et al [ 86 ], experimental groups received a brief web-based PFI available via a computer. A total of 16% (3/19) of the studies [ 77 , 78 , 82 ] applied a program called the Marijuana eCHECKUP TO GO (e-TOKE) for Universities and Colleges, which was presented as a web-based, norm-correcting, brief preventive and intervention education program designed to prompt self-reflection on consequences and consideration of decreasing CU among students. Riggs et al [ 83 ] developed and evaluated an adapted version of e-TOKE that provided participants with university-specific personalized feedback and normative information based on protective behavioral strategies for CU [ 97 ]. Similarly, Goodness and Palfai [ 81 ] tested the efficacy of eCHECKUP TO GO-cannabis, a modified version of e-TOKE combining personalized feedback, norm correction, and a harm and frequency reduction strategy where a “booster” session was provided at 3 months to allow participants to receive repeated exposure to the intervention.

In the remaining 32% (6/19) of the studies, which examined 4 different interventions, the presence of a therapist guide was reported. The intervention evaluated by Sinadinovic et al [ 92 ] combined principles of psychoeducation, MI, and CBT organized into 13 web-based modules and a calendar involving therapist guidance, recommendations, and personal feedback. In total, 33% (2/6) of these studies evaluated a social media–delivered intervention with e-coaches that combined principles of MI and CBT and a harm reduction approach for risky CU [ 84 , 85 ]. Schaub et al [ 91 ] evaluated the efficacy of CANreduce, a web-based self-help intervention based on both MI and CBT approaches, using automated motivational and feedback emails, chat with a counselor, and web-based psychoeducational modules. Similarly, Baumgartner et al [ 70 ] investigated the effectiveness of CANreduce 2.0, a modified version of CANreduce, using semiautomated motivational and adherence-focused guidance-based email feedback with or without a personal online coach. The studies by Tossmann et al [ 72 ] and Jonas et al [ 90 ] used a solution-focused approach and MI to evaluate the effectiveness of the German Quit the Shit web-based program, which involves weekly feedback provided by counselors.

In addition to using different intervention strategies or approaches, the interventions were diverse in terms of the duration and frequency of the program (eg, web-based activities, sessions, or modules). Of the 12 articles that provided details in this regard, 2 (17%) on the same intervention described it as a brief 20- to 45-minute web-based program [ 77 , 78 ], 2 (17%) on 2 different interventions reported including 1 or 2 modules per week for a duration of 6 weeks [ 71 , 92 ], and 7 (58%) on 4 different interventions described them as being available over a longer period ranging from 6 weeks to 3 months [ 70 , 72 , 79 , 84 , 85 , 87 , 90 , 91 ].

Comparator Types

A total of 42% (8/19) of the studies [ 72 , 77 - 80 , 85 , 87 , 92 ] used a passive comparator only, namely, a waitlist control group ( Multimedia Appendix 5 ). A total of 26% (5/19) of the studies used an active comparator only where participants were provided with minimal general health feedback regarding recommended guidelines for sleep, exercise, and nutrition [ 81 , 82 ]; strategies for healthy stress management [ 83 ]; educational materials about risky CU [ 88 ]; or access to a website containing information about cannabis [ 71 ]. In another 21% (4/19) of the studies, which used an active comparator, participants received the same digital intervention minus a specific component: a personal web-based coach [ 70 ], extended personalized feedback [ 89 ], web-based chat counseling [ 91 ], or information on risks associated with CU [ 86 ]. A total of 21% (4/19) of the studies had more than one control group [ 70 , 84 , 90 , 91 ].

Outcome Variable Assessment and Summary of Main Findings of the Studies

The methodological characteristics and major findings of the included studies (N=19) are presented in Multimedia Appendix 7 [ 67 , 70 - 72 , 77 - 92 , 115 - 120 ] and summarized in the following sections for each outcome of interest in this review (ie, CU and cannabis-related consequences). Of the 19 studies, 11 (58%) were reported as efficacy trials [ 77 , 79 - 83 , 86 - 88 , 91 , 92 ], and 8 (42%) were reported as effectiveness trials [ 70 - 72 , 78 , 84 , 85 , 89 , 90 ].

Across all the included studies (19/19, 100%), participant attrition rates ranged from 1.6% at 1 month after the baseline [ 77 , 78 ] to 75.1% at the 3-month follow-up [ 70 ]. A total of 37% (7/19) of the studies assessed and reported results regarding user engagement [ 71 , 78 , 84 , 85 , 90 - 92 ] using different types of metrics. In one article on the Marijuana eCHECKUP TO GO (e-TOKE) web-based program [ 78 ], the authors briefly reported that participation was confirmed for 98.1% (158/161) of participants in the intervention group. In 11% (2/19) of the studies, which were on a similar social media–delivered intervention [ 84 , 85 ], user engagement was quantified by tallying the number of comments or posts and reactions (eg, likes and hearts) left by participants. In both studies [ 84 , 85 ], the intervention group, which involved a CU-related Facebook page, displayed greater interactions than the control groups, which involved a Facebook page unrelated to CU. One article [ 84 ] reported that 80% of participants in the intervention group posted at least once (range 0-60) and 50% posted at least weekly. In the other study [ 85 ], the results showed that intervention participants engaged (ie, posting, commenting, or clicking reactions) on average 47.9 times each over 8 weeks. In total, 11% (2/19) of the studies [ 90 , 91 ], which were on 2 different web-based intervention programs, both consisting of web documentation accompanied by chat-based counseling, measured user engagement by either the average duration or the average number of chat sessions. Finally, 16% (3/19) of the studies [ 71 , 91 , 92 ], which involved 3 different web-based intervention programs, characterized user engagement by the mean number of web modules completed per participant. Overall, the mean numbers of web modules completed reported in these articles were quite similar: 3.9 of 13 modules [ 92 ], and 3.2 [ 91 ] and 3.5 [ 71 ] of 6 modules.

Assessment of CU

As presented in Multimedia Appendix 7, the included studies differed in terms of how they assessed CU, although all used at least one self-reported measure of frequency. Most studies (16/19, 84%) measured frequency by days of use, including days of use in the preceding week [ 91 ] or 2 weeks [ 80 ], days of use in the previous 30 days [ 70 - 72 , 78 , 84 - 86 , 88 - 90 ] or 90 days [ 79 , 81 , 82 ], and days high per week [ 83 ]. Other self-reported measures of CU frequency included (1) the number of CU events in the previous month [ 87 , 90 ], (2) cannabis initiation or use in the previous month (ie, yes or no) [ 77 ], and (3) days without CU in the previous 7 days [ 92 ]. In addition to measuring CU frequency, 42% (8/19) of the studies also assessed CU via self-reported measures of quantity used, including estimated grams consumed in the previous week [ 92 ] or 30 days [ 72 , 85 , 90 ] and the number of standard-sized joints consumed in the previous 7 days [ 91 ] or the previous month [ 70 , 71 , 89 ].

Of the 19 articles included, 10 (53%) [ 70 - 72 , 80 , 84 - 86 , 89 , 90 , 92 ] reported using a validated instrument to measure CU frequency or quantity, namely the TLFB instrument [ 66 ] (9/10, 90% of these studies) and the Marijuana Use Form (1/10, 10%); 1 (5%) [ 79 ] reported using CU-related questions from an adaptation of the Global Appraisal of Individual Needs–Initial instrument [ 115 ]; and 3 (16%) [ 81 , 82 , 91 ] reported using a questionnaire accompanied by a calendar or a diary of consumption. The 19 studies also differed with regard to their follow-up time measurements for assessing CU, ranging from 2 weeks after the baseline [ 80 ] to 12 months after randomization [ 90 ], although 12 (63%) of the studies included a 3-month follow-up assessment [ 70 - 72 , 79 , 81 , 82 , 84 , 85 , 88 , 90 - 92 ].

Of all studies assessing and reporting change in CU frequency from baseline to follow-up assessments (19/19, 100%), 47% (9/19) found statistically significant differences between the experimental and control groups [ 70 - 72 , 80 , 81 , 83 , 85 , 87 , 91 ]. Importantly, 67% (6/9) of these studies showed that participants in the experimental groups exhibited greater decreases in CU frequency 3 months following the baseline assessment compared with participants in the control groups [ 70 - 72 , 81 , 85 , 91 ], 22% (2/9) of the studies showed greater decreases in CU frequency at 6 weeks after the baseline assessment [ 71 , 83 ], 22% (2/9) of the studies showed greater decreases in CU frequency at 6 months following the baseline assessment [ 81 , 85 ], 11% (1/9) of the studies showed greater decreases in CU frequency at 2 weeks after the baseline [ 80 ], and 11% (1/9) of the studies showed greater decreases in CU frequency at 2 months after treatment [ 87 ].

In the study by Baumgartner et al [ 70 ], a reduction in CU days was observed in all groups, but the authors reported that the difference was statistically significant only between the intervention group with the service team and the control group (the reduction in the intervention group with social presence was not significant). In the study by Bonar et al [ 85 ], the only statistically significant difference between the intervention and control groups at the 3- and 6-month follow-ups involved total days of cannabis vaping in the previous 30 days. Finally, in the study by Buckner et al [ 80 ], the intervention group had less CU than the control group 2 weeks after the baseline; however, this was statistically significant only for participants with moderate or high levels of social anxiety.

Assessment of Cannabis-Related Negative Consequences

A total of 53% (10/19) of the studies also assessed cannabis-related negative consequences [ 78 - 84 , 86 , 88 , 92 ]. Of these 10 articles, 8 (80%) reported using a validated self-report instrument: 4 (50%) [ 81 , 82 , 86 , 88 ] used the 19-item Marijuana Problems Scale [ 67 ], 2 (25%) [ 78 , 79 ] used the 18-item Rutgers Marijuana Problem Index [ 121 , 122 ], and 2 (25%) [ 80 , 84 ] used the Brief Marijuana Consequences Questionnaire [ 116 ]. Only 10% (1/10) of the studies [ 92 ] used a screening tool, the Cannabis Abuse Screening Test [ 117 , 118 ]. None of these 10 studies demonstrated a statistically significant difference between the intervention and control groups. Of note, Walukevich-Dienst et al [ 86 ] found that women (but not men) who received a web-based PFI with additional information on CU risks reported significantly fewer cannabis-related problems than did women in the control group at 1 month after the intervention (B=−1.941; P=.01).

Descriptive Summary of BCTs Used in Intervention Groups

After the 19 studies included in this review were coded, a total of 184 individual BCTs targeting CU in young adults were identified. Of these 184 BCTs, 133 (72.3%) were deemed to be present beyond a reasonable doubt, and 51 (27.7%) were deemed to be present in all probability. Multimedia Appendix 8 [ 48 , 70 - 72 , 77 - 92 ] presents all the BCTs coded for each included study summarized by individual BCT and BCT cluster.

The 184 individual BCTs coded covered 38% (35/93) of the BCTs listed in the BCTTv1 [ 48 ]. The number of individual BCTs identified per study ranged from 5 to 19 (mean 9.68), with almost two-thirds of the 19 studies (12/19, 63%) using ≤9 BCTs. As Multimedia Appendix 8 shows, at least one BCT fell into 13 of the 16 possible BCT clusters. The most frequent clusters were feedback monitoring, natural consequences, goal planning, and comparison of outcomes.

The most frequently coded BCTs were (1) feedback on behavior (BCT 2.2; 17/19, 89% of the studies; eg, “Once a week, participants receive detailed feedback by their counselor on their entries in diary and exercises. Depending on the involvement of each participant, up to seven feedbacks are given” [ 90 ]), (2) social support (unspecified) (BCT 3.1; 15/19, 79% of the studies; eg, “The website also features [...] blogs from former cannabis users, quick assist links, and weekly automatically generated encouragement emails” [ 71 ]), and (3) pros and cons (BCT 9.2; 14/19, 74% of the studies; eg, “participants are encouraged to state their personal reasons for and against their cannabis consumption, which they can review at any time, so they may reflect on what they could gain by successfully completing the program” [ 70 ]). Other commonly identified BCTs included social comparison (BCT 6.2; 12/19, 63% of the studies) and information about social and environmental consequences (BCT 5.3; 11/19, 58% of the studies), followed by problem solving (BCT 2.1; 10/19, 53% of the studies) and information about health consequences (BCT 5.1; 10/19, 53% of the studies).
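For readers who wish to reproduce this kind of tally, counting coded BCTs across studies is straightforward once each study is mapped to the set of techniques judged present. The sketch below uses invented study labels and a small subset of BCTTv1 codes purely for illustration; the actual coding for the 19 included studies is in Multimedia Appendix 8.

```python
from collections import Counter

# Hypothetical coding table: study -> set of BCTTv1 techniques judged present.
# Labels and assignments are illustrative only; see Multimedia Appendix 8 for
# the real coding.
coded_bcts = {
    "study_A": {"2.2 Feedback on behavior", "3.1 Social support (unspecified)", "9.2 Pros and cons"},
    "study_B": {"2.2 Feedback on behavior", "6.2 Social comparison"},
    "study_C": {"2.2 Feedback on behavior", "9.2 Pros and cons", "2.1 Problem solving"},
}

# Frequency of each technique across studies, most common first.
freq = Counter(bct for techniques in coded_bcts.values() for bct in techniques)
for bct, n_studies in freq.most_common():
    print(f"{bct}: {n_studies}/{len(coded_bcts)} studies")
```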

RoB Assessment

Figure 2 presents the overall assessment of risk in each domain for all the included studies, whereas Figure 3 [ 70 - 72 , 77 - 92 ] summarizes the assessment of each study at the outcome level for each domain in the Cochrane RoB 2 [ 74 ].

Figure 2 shows that, of the included studies, 93% (27/29) were rated as having a “low” RoB arising from the randomization process (ie, selection bias) and 83% (24/29) were rated as having a “low” RoB due to missing data (ie, attrition bias). For bias due to deviations from the intended intervention (ie, performance bias), 72% (21/29) were rated as having a “low” risk, and for selective reporting of results, 59% (17/29) were rated as having a “low” risk. In the remaining domain regarding bias in measurement of the outcome (ie, detection bias), 48% (14/29) of the studies were deemed to present “some concerns,” mainly owing to the outcome assessment not being blinded (eg, self-reported outcome measure of CU). Finally, 79% (15/19) of the included studies were deemed to present “some concerns” or were rated as having a “high” RoB at the outcome level ( Figure 3 [ 70 - 72 , 77 - 92 ]). The RoB assessment for CU and cannabis consequences of each included study is presented in Multimedia Appendix 9 [ 70 - 72 , 77 - 92 ].


Meta-Analysis Results

Because of several missing data points, and despite contacting the authors, we were able to carry out only 1 meta-analysis, which concerned our primary outcome, CU frequency. Usable data were retrieved from only 16% (3/19) [ 70 - 72 ] of the studies included in this review. These 3 studies provided sufficient information to calculate an effect size, including mean differences based on change-from-baseline measurements and associated 95% CIs (or SEs of the mean difference) and sample sizes per intervention and comparison condition. The reasons for excluding the other 84% (16/19) of the studies included heterogeneity in outcome variables or measurements, inconsistent results, and missing data ( Multimedia Appendix 10 [ 77 - 92 ]).

Figure 4 [ 70 - 72 ] illustrates the mean differences and associated 95% CIs of the 3 unique RCTs [ 70 - 72 ] that provided sufficient information to measure CU frequency at 3 months after the baseline relative to a comparison condition, expressed as the number of self-reported days of use in the previous month using the TLFB method. Overall, the synthesized effect of digital interventions for young adult cannabis users on CU frequency, as measured using days of use in the previous month, was −6.79 (95% CI −9.59 to −4.00). This suggests that digital CU interventions had a statistically significant effect (P<.001) on reducing CU frequency at the 3-month follow-up compared with the control conditions (both passive and active controls). The results of the meta-analysis also showed low to moderate between-study heterogeneity (I²=48.3%; P=.12) across the 3 included studies.
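To illustrate how such a pooled figure is produced, the random_effects_md helper sketched in the meta-analysis methods above can be applied to 3 study-level mean differences. The inputs below are placeholder values chosen for demonstration only; they are not the estimates reported by the 3 included trials.

```python
# Placeholder inputs for 3 hypothetical trials: mean difference in days of
# use in the previous month (intervention minus control) and its SE.
# These are NOT the published estimates from the included studies.
example_mds = [-9.5, -3.2, -6.8]
example_ses = [1.2, 1.4, 1.5]

pooled, (low, high), i2 = random_effects_md(example_mds, example_ses)
print(f"Pooled MD {pooled:.2f} days (95% CI {low:.2f} to {high:.2f}); I2 = {i2:.1f}%")
```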


The samples of the 3 studies included in the meta-analysis varied in size from 225 to 1292 participants (mean 697.33, SD 444.11), and the mean age ranged from 24.7 to 31.88 years (mean 26.38, SD 3.58 years). These studies involved 3 different digital interventions and used different design approaches to assess intervention effectiveness. One study assessed the effectiveness of a web-based counseling program (ie, Quit the Shit) against a waitlist control [ 72 ], another examined the effectiveness of a fully self-guided web-based treatment program for CU and related problems (ie, Reduce Your Use: How to Break the Cannabis Habit) against a control condition website consisting of basic educational information on cannabis [ 71 ], and the third used a 3-arm RCT design to investigate whether the effectiveness of a minimally guided internet-based self-help intervention (ie, CANreduce 2.0) might be enhanced by implementing adherence-focused guidance and emphasizing the social presence factor of a personal e-coach [ 70 ].

Summary of Principal Findings

The primary aim of this systematic review was to evaluate the effectiveness of digital interventions in addressing CU among community-living young adults. We included 19 randomized controlled studies representing 9 unique digital interventions aimed at preventing, reducing, or ceasing CU and evaluated the effects of 3 different digital interventions on CU. In summary, the 3 digital interventions included in the meta-analysis proved superior to control conditions in reducing the number of days of CU in the previous month at the 3-month follow-up.

Our findings are consistent with those of 2 previous meta-analyses by Olmos et al [ 43 ] and Tait et al [ 44 ] and with the findings of a recently published umbrella review of systematic reviews and meta-analyses of RCTs [ 123 ], all of which revealed a positive effect of internet- and computer-based interventions on CU. However, a recent systematic review and meta-analysis by Beneria et al [ 45 ] found that web-based CU interventions did not significantly reduce CU. Beneria et al [ 45 ] included studies with different intervention programs that targeted diverse population groups (both adolescents and young adults) and the use of more than one substance (eg, alcohol and cannabis). In our systematic review, we took a more conservative approach, focusing specifically on young adults and considering only interventions targeting CU. Although our results indicate that digital interventions hold great promise in terms of effectiveness, an important unresolved question is whether there is an optimal exposure dose, in terms of both duration and frequency, that maximizes effectiveness. Among the studies included in this systematic review, interventions varied considerably in terms of the number of psychoeducational modules offered (from 2 to 13), the time spent reviewing the material, and duration (from a single session to a period spread over 12 weeks). Our results suggest that an intervention duration of at least 6 weeks yields better results.

Another important finding of this review is that, although almost half (9/19, 47%) of the included studies observed an intervention effect on CU frequency, none reported a statistically significant improvement in cannabis-related negative consequences, which may be considered a more distal indicator. More than half (10/19, 53%) of the included studies investigated this outcome. It is reasonable to expect an effect on CU frequency given that reducing CU is often the primary objective of interventions and because users' motivation is generally focused on changing consumption behavior. It is plausible to think that the change in behavior at the consumption level must be maintained over time before an effect on cannabis-related negative consequences can be observed. However, our results showed that, in all the included studies, cannabis-related negative consequences and change in behavior (CU frequency) were measured at the same time point, namely, 3 months after the baseline. Moreover, Grigsby et al [ 124 ] conducted a scoping review of risk and protective factors for CU and suggested that interventions to reduce negative CU consequences should prioritize multilevel methods or strategies “to attenuate the cumulative risk from a combination of psychological, contextual, and social influences.”

A secondary objective of this systematic review was to describe the active ingredients used in digital interventions for CU among young adults. The vast majority of the interventions were based on either a theory or an intervention approach derived from theories such as CBT, MI, and personalized feedback. From these theories and approaches stem behavior change strategies or techniques, commonly known as BCTs. Feedback on behavior, included in the feedback monitoring BCT cluster, was the most common BCT used in the included studies. This specific BCT appears to be a core strategy in behavior change interventions [ 125 , 126 ]. In their systematic review of remotely delivered alcohol or substance misuse interventions for adults, Howlett et al [ 53 ] found that feedback on behavior, problem solving, and goal setting were the most frequently used BCTs in the included studies. In addition, this research group noted that the most promising BCTs for alcohol misuse were avoidance/reducing exposure to cues for behavior, pros and cons, and self-monitoring of behavior, whereas 2 very promising strategies for substance misuse in general were problem solving and self-monitoring of behavior. In our systematic review, in addition to feedback on behavior, the 6 most frequently used BCTs in the included studies were social support, pros and cons, social comparison, problem solving, information about social and environmental consequences, and information about health consequences. Although pros and cons and problem solving were present in all 3 studies of digital interventions included in our meta-analysis, avoidance/reducing exposure to cues for behavior was reported in only 5% (1/19) of the articles, and feedback on behavior was more frequently used than self-monitoring of behavior. However, it should be noted that the review by Howlett et al [ 53 ] examined digital interventions for participants with alcohol or substance misuse problems, whereas in this review, we focused on interventions that targeted CU from a harm reduction perspective. In this light, avoidance/reducing exposure to cues for behavior may be a BCT better suited to populations with substance misuse problems. Lending support to this, a meta-regression by Garnett et al [ 127 ] and a Cochrane systematic review by Kaner et al [ 128 ] both found interventions that used behavior substitution and credible source to be associated with greater reduction in excessive alcohol consumption compared with interventions that used other BCTs.

Beyond the number and types of BCTs used, reflecting on the extent to which each BCT in a given intervention suits (or does not suit) the targeted determinants (ie, behavioral and environmental causes) is crucial for planning intervention programs [ 26 ]. It is important when designing digital CU interventions not merely to pick a combination of BCTs that have been associated with effectiveness. Rather, the active ingredients must fit the determinants that the interventionists seek to influence. For example, action planning would be more relevant as a BCT for young adults highly motivated and ready to take action on their CU than would pros and cons, which aims instead to bolster motivation. Given that more than half of all digital interventions are asynchronous, based on a self-guided approach, and delivered without counselor or therapist support, a great deal of motivation is required to engage in intervention and behavior change. Therefore, it is essential that developers consider the needs and characteristics of the targeted population to tailor intervention strategies (ie, BCTs) for successful behavior change (eg, tailored to the participant's stage of change). In most of the digital interventions included in this systematic review, personalization was achieved through feedback messages about CU regarding descriptive norms, motives, risks and consequences, and costs, among other things.

Despite the high number of recent studies conducted in the field of digital CU interventions, most of the included articles in our review (17/19, 89%) reported on the development and evaluation of web-based intervention programs. A new generation of health intervention modalities such as mobile apps and social media has drawn the attention of researchers in the past decade and is currently being evaluated. In this regard, the results from a recently published scoping review [ 129 ], which included 5 studies of mobile apps for nonmedical CU, suggested that these novel modes of intervention delivery demonstrated adequate feasibility and acceptability. Nevertheless, the internet remains a powerful and convenient medium for reaching young adults with digital interventions intended to support safe CU behaviors [ 123 , 130 ].

Quality of Evidence

The GRADE (Grading of Recommendations Assessment, Development, and Evaluation) approach [ 131 - 133 ] was used to assess the quality of the evidence reviewed. It was deemed to be moderate for the primary outcome of this review, that is, CU frequency in terms of days of use in the previous month (see the summary of evidence in Multimedia Appendix 11 [ 70 , 72 ]). The direction of evidence was broadly consistent: in all 3 RCT studies [ 70 - 72 ] included in the meta-analysis, participants who received digital CU interventions reduced their consumption compared with those who received no or minimal interventions. The 3 RCTs were similar in that they all involved a web-based, multicomponent intervention program aimed at reducing or ceasing CU. However, the interventions did vary in terms of several characteristics, including the strategies used, content, frequency, and duration. Given the small number of studies included in the meta-analysis, we could not conclude with certainty which intervention components, if any, contributed to the effect estimate observed.

Although inconsistency, indirectness, and imprecision were not major issues in the body of evidence, we downgraded the evidence from high to moderate quality on account of RoB assessments at the outcome level. The 3 RCT studies included in the meta-analysis were rated as having “some concerns” of RoB, mainly due to lack of blinding, which significantly reduced our certainty relative to subjective outcomes (ie, self-reported measures of CU frequency). A positive feature of these digital intervention trials is that most procedures are fully automated, and so there was typically a low RoB regarding randomization procedures, allocation to different conditions, and intervention delivery. It is impossible to blind participants to these types of behavior change interventions, and although some researchers have made attempts to counter the impact of this risk, performance bias is an inescapable issue in RCT studies of this kind. Blinding of intervention providers was not an issue in the 3 RCTs included in the meta-analysis because outcome data collection was automated. However, this same automated procedure made it very difficult to ensure follow‐up. Consequently, attrition was another source of bias in these RCT studies [ 70 - 72 ]. The participants lost to follow-up likely stopped using the intervention. However, there is no way of determining whether these people would have benefited more or less than the completers if they had seen the trial through.

The 3 RCTs included in the meta-analysis relied on subjective self-reported measures of CU at baseline and follow-up, which are subject to recall and social desirability bias. However, all 3 studies used a well-validated instrument to determine frequency of CU, the TLFB [ 66 ]. This is a widely used, subjective self-report tool for measuring the frequency (or quantity) of substance use (or abstinence), and it is considered a reliable measure of CU [ 134 , 135 ]. Finally, it should be pointed out that any potential bias related to self-reported CU frequency would have affected both the intervention and control groups (particularly in cases in which control groups received cannabis-related information), and thus, it was unlikely to account for differential intervention effects. Moreover, we found RoB due to selective reporting in some studies, owing mainly to the absence of any reference to a protocol. Ultimately, these limitations may have biased the results of the meta-analysis. Consequently, further research is likely to have an important impact on our confidence in the effect estimate we observed and may yield considerably different estimates.

Strengths and Limitations

Our systematic review and meta-analysis has a number of strengths: (1) we included only randomized controlled studies to ensure that the included studies possessed a rigorous research design, (2) we focused specifically on cannabis (rather than combining multiple substances), (3) we assessed the effectiveness of 3 different digital interventions on CU frequency among community-living young adults, and (4) we performed an exhaustive synthesis and comparison of the BCTs used in the 9 digital interventions examined in the 19 studies included in our review based on the BCTTv1.

Admittedly, this systematic review and meta-analysis has limitations that should be recognized. First, although we searched a range of bibliographic databases, the review was limited to articles published in peer-reviewed journals in English or French. This may have introduced publication bias given that articles reporting positive effects are more likely to be published than those with negative or equivocal results. Consequently, the studies included in this review may have overrepresented the statistically significant effects of digital CU interventions.

Second, only a small number of studies were included in the meta-analysis because many studies did not provide adequate statistical information for calculating and synthesizing effect sizes, although significant efforts were made to contact the authors in case of missing data. Because of the small number of studies included in the meta-analysis, the effect size estimate may not be highly reflective of the true effects of digital interventions on CU frequency among young adults. Furthermore, synthesizing findings across studies that evaluated different modalities of web-based intervention programs (eg, fully self-guided vs therapist guided) and types of intervention approaches (eg, CBT, MI, and personalized feedback) may have introduced bias into the meta-analytical results due to the heterogeneity of the included studies, although between-study variation was accounted for using a random-effects model and our results indicated low to moderate between-study heterogeneity.

Third, we took various measures to ensure that BCT coding was carried out rigorously throughout the data extraction and analysis procedures: (1) all coders received training on how to use the BCTTv1; (2) all the included articles were read line by line so that coders became familiar with intervention descriptions before initiating BCT coding; (3) the intervention description of each included article was double coded after a pilot calibration exercise with all coders, and any disagreements regarding the presence or absence of a BCT were discussed and resolved with a third party; and (4) we contacted the article authors when necessary and possible for further details on the BCTs they used. However, incomplete reporting of intervention content is a recognized issue [ 136 ], which may have resulted in our coding BCTs incorrectly as present or absent. Reliably specifying the BCTs used in interventions allows their active ingredients to be identified, their evidence to be synthesized, and interventions to be replicated, thereby providing tangible guidance to programmers and researchers to develop more effective interventions.

Finally, although this review identified the BCTs used in digital interventions, our approach did not allow us to draw conclusions regarding their effectiveness. Coding BCTs simply as present or absent does not consider the frequency, intensity, and quality with which they were delivered. For example, it is unclear how often individuals should self-monitor their CU. In addition, the quality of BCT implementation may be critical in digital interventions, where different graphics and interface designs and the usability of the BCTs used can have considerable influence on the level of user engagement [ 137 ]. In the future, it may be necessary to develop new methods to evaluate the dosage of individual BCTs in digital health interventions and characterize their implementation quality to assess their effectiveness [ 128 , 138 ]. Despite its limitations, this review suggests that digital interventions represent a promising avenue for preventing, reducing, or ceasing CU among community-living young adults.

Conclusions

The results of this systematic review and meta-analysis lend support to the promise of digital interventions as an effective means of reducing recreational CU frequency among young adults. Despite the advent and popularity of smartphones, web-based interventions remain the most common mode of delivery for digital interventions. The active ingredients of digital interventions are varied and encompass a number of clusters of the BCTTv1, but a significant number of BCTs remain underused. Additional research is needed to further investigate the effectiveness of these interventions on CU and key outcomes at later time points. Finally, a detailed assessment of user engagement with digital interventions for CU and understanding which intervention components are the most effective remain important research gaps.

Acknowledgments

The authors would like to thank Bénédicte Nauche, Miguel Chagnon, and Paul Di Biase for their valuable support with the search strategy development, statistical analysis, and linguistic revision, respectively. This work was supported by the Ministère de la Santé et des Services sociaux du Québec as part of a broader study aimed at developing and evaluating a digital intervention for young adult cannabis users. Additional funding was provided by the Research Chair in Innovative Nursing Practices. The views and opinions expressed in this manuscript do not necessarily reflect those of these funding entities.

Data Availability

The data sets generated and analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

JC contributed to conceptualization, methodology, formal analysis, writing—original draft, supervision, and funding acquisition. GC contributed to conceptualization, methodology, formal analysis, investigation, data curation, writing—original draft, visualization, and project administration. BV contributed to conceptualization, methodology, formal analysis, investigation, data curation, writing—original draft, and visualization. PA contributed to conceptualization, methodology, formal analysis, investigation, data curation, writing—original draft, visualization, and project administration. GR contributed to conceptualization, methodology, formal analysis, investigation, data curation, and writing—review and editing. GF contributed to conceptualization, methodology, formal analysis, investigation, data curation, and writing—review and editing. DJA contributed to conceptualization, methodology, formal analysis, writing—review and editing, and funding acquisition.

Conflicts of Interest

None declared.

Multimedia Appendix 1: PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

Multimedia Appendix 2: Detailed search strategies for each database.

Multimedia Appendix 3: Population, intervention, comparison, outcome, and study design strategy.

Multimedia Appendix 4: Excluded studies and reasons for exclusion.

Multimedia Appendix 5: Study and participant characteristics.

Multimedia Appendix 6: Description of intervention characteristics in the included articles.

Multimedia Appendix 7: Summary of methodological characteristics and major findings of the included studies categorized by intervention name.

Multimedia Appendix 8: Behavior change techniques (BCTs) coded in each included study summarized by individual BCT and BCT cluster.

Multimedia Appendix 9: Risk-of-bias assessment of each included study for cannabis use and cannabis consequences.

Multimedia Appendix 10: Excluded studies and reasons for exclusion from the meta-analysis.

Multimedia Appendix 11: Summary of evidence according to the Grading of Recommendations Assessment, Development, and Evaluation tool.

  • Arnett JJ. The developmental context of substance use in emerging adulthood. J Drug Issues. 2005;35(2):235-254. [ CrossRef ]
  • Stockings E, Hall WD, Lynskey M, Morley KI, Reavley N, Strang J, et al. Prevention, early intervention, harm reduction, and treatment of substance use in young people. Lancet Psychiatry. Mar 2016;3(3):280-296. [ CrossRef ] [ Medline ]
  • ElSohly MA, Chandra S, Radwan M, Majumdar CG, Church JC. A comprehensive review of cannabis potency in the United States in the last decade. Biol Psychiatry Cogn Neurosci Neuroimaging. Jun 2021;6(6):603-606. [ CrossRef ] [ Medline ]
  • Fischer B, Robinson T, Bullen C, Curran V, Jutras-Aswad D, Medina-Mora ME, et al. Lower-Risk Cannabis Use Guidelines (LRCUG) for reducing health harms from non-medical cannabis use: a comprehensive evidence and recommendations update. Int J Drug Policy. Jan 2022;99:103381. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rotermann M. What has changed since cannabis was legalized? Health Rep. Feb 19, 2020;31(2):11-20. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Degenhardt L, Stockings E, Patton G, Hall WD, Lynskey M. The increasing global health priority of substance use in young people. Lancet Psychiatry. Mar 2016;3(3):251-264. [ CrossRef ] [ Medline ]
  • Buckner JD, Bonn-Miller MO, Zvolensky MJ, Schmidt NB. Marijuana use motives and social anxiety among marijuana-using young adults. Addict Behav. Oct 2007;32(10):2238-2252. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Carliner H, Brown QL, Sarvet AL, Hasin DS. Cannabis use, attitudes, and legal status in the U.S.: a review. Prev Med. Nov 2017;104:13-23. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • World drug report 2020. United Nations Office on Drugs and Crime. 2020. URL: https://wdr.unodc.org/wdr2020/index2020.html [accessed 2023-11-28]
  • National Academies of Sciences, Engineering, and Medicine, Health and Medicine Division, Board on Population Health and Public Health Practice, Committee on the Health Effects of Marijuana: An Evidence Review and Research Agenda. The Health Effects of Cannabis and Cannabinoids: The Current State of Evidence and Recommendations for Research. Washington, DC. The National Academies Press; 2017.
  • Hall WD, Patton G, Stockings E, Weier M, Lynskey M, Morley KI, et al. Why young people's substance use matters for global health. Lancet Psychiatry. Mar 2016;3(3):265-279. [ CrossRef ] [ Medline ]
  • Cohen K, Weizman A, Weinstein A. Positive and negative effects of cannabis and cannabinoids on health. Clin Pharmacol Ther. May 2019;105(5):1139-1147. [ CrossRef ] [ Medline ]
  • Memedovich KA, Dowsett LE, Spackman E, Noseworthy T, Clement F. The adverse health effects and harms related to marijuana use: an overview review. CMAJ Open. Aug 16, 2018;6(3):E339-E346. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Teeters JB, Armstrong NM, King SA, Hubbard SM. A randomized pilot trial of a mobile phone-based brief intervention with personalized feedback and interactive text messaging to reduce driving after cannabis use and riding with a cannabis impaired driver. J Subst Abuse Treat. Nov 2022;142:108867. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chan GC, Becker D, Butterworth P, Hines L, Coffey C, Hall W, et al. Young-adult compared to adolescent onset of regular cannabis use: a 20-year prospective cohort study of later consequences. Drug Alcohol Rev. May 2021;40(4):627-636. [ CrossRef ] [ Medline ]
  • Hall W, Stjepanović D, Caulkins J, Lynskey M, Leung J, Campbell G, et al. Public health implications of legalising the production and sale of cannabis for medicinal and recreational use. Lancet. Oct 26, 2019;394(10208):1580-1590. [ CrossRef ] [ Medline ]
  • The health and social effects of nonmedical cannabis use. World Health Organization. 2016. URL: https://apps.who.int/iris/handle/10665/251056 [accessed 2023-11-28]
  • Boumparis N, Loheide-Niesmann L, Blankers M, Ebert DD, Korf D, Schaub MP, et al. Short- and long-term effects of digital prevention and treatment interventions for cannabis use reduction: a systematic review and meta-analysis. Drug Alcohol Depend. Jul 01, 2019;200:82-94. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jutras-Aswad D, Le Foll B, Bruneau J, Wild TC, Wood E, Fischer B. Thinking beyond legalization: the case for expanding evidence-based options for cannabis use disorder treatment in Canada. Can J Psychiatry. Feb 2019;64(2):82-87. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Garnett CV, Crane D, Brown J, Kaner EF, Beyer FR, Muirhead CR, et al. Behavior change techniques used in digital behavior change interventions to reduce excessive alcohol consumption: a meta-regression. Ann Behav Med. May 18, 2018;52(6):530-543. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Glanz K, Rimer BK, Viswanath K. Health Behavior: Theory, Research, and Practice, 5th Edition. Hoboken, NJ. Jossey-Bass; Jul 2015.
  • Prestwich A, Webb TL, Conner M. Using theory to develop and test interventions to promote changes in health behaviour: evidence, issues, and recommendations. Curr Opin Psychol. Oct 2015;5:1-5. [ CrossRef ]
  • Webb TL, Sniehotta FF, Michie S. Using theories of behaviour change to inform interventions for addictive behaviours. Addiction. Nov 2010;105(11):1879-1892. [ CrossRef ] [ Medline ]
  • Cilliers F, Schuwirth L, van der Vleuten C. Health behaviour theories: a conceptual lens to explore behaviour change. In: Cleland J, Durning SJ, editors. Researching Medical Education. Hoboken, NJ. Wiley; 2015.
  • Davis R, Campbell R, Hildon Z, Hobbs L, Michie S. Theories of behaviour and behaviour change across the social and behavioural sciences: a scoping review. Health Psychol Rev. 2015;9(3):323-344. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Eldredge LK, Markham CM, Ruiter RA, Fernández ME, Kok G, Parcel GS. Planning Health Promotion Programs: An Intervention Mapping Approach, 4th Edition. Hoboken, NJ. John Wiley & Sons; Feb 2016.
  • Marlatt GA, Blume AW, Parks GA. Integrating harm reduction therapy and traditional substance abuse treatment. J Psychoactive Drugs. 2001;33(1):13-21. [ CrossRef ] [ Medline ]
  • Adams A, Ferguson M, Greer AM, Burmeister C, Lock K, McDougall J, et al. Guideline development in harm reduction: considerations around the meaningful involvement of people who access services. Drug Alcohol Depend Rep. Aug 12, 2022;4:100086. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Davis ML, Powers MB, Handelsman P, Medina JL, Zvolensky M, Smits JA. Behavioral therapies for treatment-seeking cannabis users: a meta-analysis of randomized controlled trials. Eval Health Prof. Mar 2015;38(1):94-114. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gates PJ, Sabioni P, Copeland J, Le Foll B, Gowing L. Psychosocial interventions for cannabis use disorder. Cochrane Database Syst Rev. May 05, 2016;2016(5):CD005336. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Halladay J, Scherer J, MacKillop J, Woock R, Petker T, Linton V, et al. Brief interventions for cannabis use in emerging adults: a systematic review, meta-analysis, and evidence map. Drug Alcohol Depend. Nov 01, 2019;204:107565. [ CrossRef ] [ Medline ]
  • Imtiaz S, Roerecke M, Kurdyak P, Samokhvalov AV, Hasan OS, Rehm J. Brief interventions for cannabis use in healthcare settings: systematic review and meta-analyses of randomized trials. J Addict Med. 2020;14(1):78-88. [ CrossRef ] [ Medline ]
  • Standeven LR, Scialli A, Chisolm MS, Terplan M. Trends in cannabis treatment admissions in adolescents/young adults: analysis of TEDS-A 1992 to 2016. J Addict Med. 2020;14(4):e29-e36. [ CrossRef ] [ Medline ]
  • Montanari L, Guarita B, Mounteney J, Zipfel N, Simon R. Cannabis use among people entering drug treatment in Europe: a growing phenomenon? Eur Addict Res. 2017;23(3):113-121. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kerridge BT, Mauro PM, Chou SP, Saha TD, Pickering RP, Fan AZ, et al. Predictors of treatment utilization and barriers to treatment utilization among individuals with lifetime cannabis use disorder in the United States. Drug Alcohol Depend. Dec 01, 2017;181:223-228. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gates P, Copeland J, Swift W, Martin G. Barriers and facilitators to cannabis treatment. Drug Alcohol Rev. May 2012;31(3):311-319. [ CrossRef ] [ Medline ]
  • Hammarlund RA, Crapanzano KA, Luce L, Mulligan L, Ward KM. Review of the effects of self-stigma and perceived social stigma on the treatment-seeking decisions of individuals with drug- and alcohol-use disorders. Subst Abuse Rehabil. Nov 23, 2018;9:115-136. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bedrouni W. On the use of digital technologies to reduce the public health impacts of cannabis legalization in Canada. Can J Public Health. Dec 2018;109(5-6):748-751. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Perski O, Hébert ET, Naughton F, Hekler EB, Brown J, Businelle MS. Technology-mediated just-in-time adaptive interventions (JITAIs) to reduce harmful substance use: a systematic review. Addiction. May 2022;117(5):1220-1241. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kazemi DM, Borsari B, Levine MJ, Li S, Lamberson KA, Matta LA. A systematic review of the mHealth interventions to prevent alcohol and substance abuse. J Health Commun. May 2017;22(5):413-432. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nesvåg S, McKay JR. Feasibility and effects of digital interventions to support people in recovery from substance use disorders: systematic review. J Med Internet Res. Aug 23, 2018;20(8):e255. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hoch E, Preuss UW, Ferri M, Simon R. Digital interventions for problematic cannabis users in non-clinical settings: findings from a systematic review and meta-analysis. Eur Addict Res. 2016;22(5):233-242. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Olmos A, Tirado-Muñoz J, Farré M, Torrens M. The efficacy of computerized interventions to reduce cannabis use: a systematic review and meta-analysis. Addict Behav. Apr 2018;79:52-60. [ CrossRef ] [ Medline ]
  • Tait RJ, Spijkerman R, Riper H. Internet and computer based interventions for cannabis use: a meta-analysis. Drug Alcohol Depend. Dec 01, 2013;133(2):295-304. [ CrossRef ] [ Medline ]
  • Beneria A, Santesteban-Echarri O, Daigre C, Tremain H, Ramos-Quiroga JA, McGorry PD, et al. Online interventions for cannabis use among adolescents and young adults: systematic review and meta-analysis. Early Interv Psychiatry. Aug 2022;16(8):821-844. [ CrossRef ] [ Medline ]
  • Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. Int J Nurs Stud. May 2013;50(5):587-592. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Michie S, Abraham C, Eccles MP, Francis JJ, Hardeman W, Johnston M. Strengthening evaluation and implementation by specifying components of behaviour change interventions: a study protocol. Implement Sci. Feb 07, 2011;6:10. [ FREE Full text ] [ CrossRef ] [ Medline ]