Qualitative Data Analysis: Step-by-Step Guide (Manual vs. Automatic)

When we conduct qualitative research, need to explain changes in metrics, or want to understand people's opinions, we turn to qualitative data. Qualitative data is typically generated through:

  • Interview transcripts
  • Surveys with open-ended questions
  • Contact center transcripts
  • Texts and documents
  • Audio and video recordings
  • Observational notes

Compared to quantitative data, which captures structured information, qualitative data is unstructured and has more depth. It can answer our questions, help us formulate hypotheses, and build understanding.

It's important to understand the differences between quantitative and qualitative data. But unfortunately, analyzing qualitative data is difficult. While tools like Excel, Tableau and Power BI crunch and visualize quantitative data with ease, there are only a limited number of mainstream tools for analyzing qualitative data. The majority of qualitative data analysis still happens manually.

That said, there are two new trends that are changing this. First, there are advances in natural language processing (NLP) which is focused on understanding human language. Second, there is an explosion of user-friendly software designed for both researchers and businesses. Both help automate the qualitative data analysis process.

In this post we want to teach you how to conduct a successful qualitative data analysis. There are two primary methods: manual and automatic. We'll guide you through the steps of a manual analysis, look at what's involved, and show how software solutions powered by NLP can automate the process.

More businesses are switching to fully automated analysis of qualitative customer data because it is cheaper, faster, and just as accurate. Typically, businesses subscribe to feedback analytics platforms to understand customer pain points and sentiment.

Overwhelming quantity of feedback

We’ll take you through 5 steps to conduct a successful qualitative data analysis. Within each step, we will highlight the key differences between the manual and automated approaches. Here's an overview of the steps:

The 5 steps to doing qualitative data analysis

  • Gathering your qualitative data
  • Organizing and connecting your qualitative data
  • Coding your qualitative data
  • Analyzing the qualitative data for insights
  • Reporting on the insights derived from your analysis

What is Qualitative Data Analysis?

Qualitative data analysis is a process of gathering, structuring and interpreting qualitative data to understand what it represents.

Qualitative data is non-numerical and unstructured. Qualitative data generally refers to text, such as open-ended responses to survey questions or user interviews, but also includes audio, photos and video.

Businesses often perform qualitative data analysis on customer feedback. In this context, qualitative data generally refers to verbatim text collected from sources such as reviews, complaints, chat messages, support center interactions, customer interviews, case notes or social media comments.

How is qualitative data analysis different from quantitative data analysis?

Understanding the differences between quantitative & qualitative data is important. When it comes to analyzing data, Qualitative Data Analysis serves a very different role to Quantitative Data Analysis. But what sets them apart?

Qualitative Data Analysis dives into the stories hidden in non-numerical data such as interviews, open-ended survey answers, or notes from observations. It uncovers the ‘whys’ and ‘hows’, giving a deep understanding of people’s experiences and emotions.

Quantitative Data Analysis on the other hand deals with numerical data, using statistics to measure differences, identify preferred options, and pinpoint root causes of issues.  It steps back to address questions like "how many" or "what percentage" to offer broad insights we can apply to larger groups.

In short, Qualitative Data Analysis is like a microscope, helping us understand specific detail. Quantitative Data Analysis is like a telescope, giving us a broader perspective. Both are important, and they work together to decode data for different objectives.

Qualitative Data Analysis methods

Once all the data has been captured, there are a variety of analysis techniques available and the choice is determined by your specific research objectives and the kind of data you’ve gathered.  Common qualitative data analysis methods include:

Content Analysis

This is a popular approach to qualitative data analysis, and other qualitative analysis techniques may fit within its broad scope; thematic analysis, for example, is a form of content analysis. Content analysis is used to identify the patterns that emerge from text, by grouping content into words, concepts, and themes. It is also useful for quantifying the relationships between the grouped content. The Columbia School of Public Health has a detailed breakdown of content analysis.

Narrative Analysis

Narrative analysis focuses on the stories people tell and the language they use to make sense of them.  It is particularly useful in qualitative research methods where customer stories are used to get a deep understanding of customers’ perspectives on a specific issue. A narrative analysis might enable us to summarize the outcomes of a focused case study.

Discourse Analysis

Discourse analysis is used to get a thorough understanding of the political, cultural and power dynamics that exist in specific situations. The focus is on the way people express themselves in different social contexts. Discourse analysis is commonly used by brand strategists who want to understand why a group of people feel the way they do about a brand or product.

Thematic Analysis

Thematic analysis is used to deduce the meaning behind the words people use. This is accomplished by discovering repeating themes in text. These meaningful themes reveal key insights into data and can be quantified, particularly when paired with sentiment analysis. Often, the outcome of thematic analysis is a code frame that captures themes in terms of codes, also called categories. For this reason, the process of thematic analysis is also referred to as “coding”. A common use case for thematic analysis in companies is the analysis of customer feedback.

Grounded Theory

Grounded theory is a useful approach when little is known about a subject. It starts by formulating a theory around a single data case, which means the theory is “grounded” in actual data rather than speculation. Additional cases can then be examined to see whether they support or extend the original theory.

Methods of qualitative data analysis: approaches and techniques to qualitative data analysis

Challenges of Qualitative Data Analysis

While Qualitative Data Analysis offers rich insights, it comes with its challenges, and each QDA method has its own hurdles. Let’s take a look at the challenges researchers and analysts might face, depending on the chosen method.

  • Time and Effort (Narrative Analysis): Narrative analysis, which focuses on personal stories, demands patience. Sifting through lengthy narratives to find meaningful insights can be time-consuming and requires dedicated effort.
  • Being Objective (Grounded Theory): Grounded theory, building theories from data, faces the challenges of personal biases. Staying objective while interpreting data is crucial, ensuring conclusions are rooted in the data itself.
  • Complexity (Thematic Analysis): Thematic analysis involves identifying themes within data, a process that can be intricate. Categorizing and understanding themes can be complex, especially when each piece of data varies in context and structure. Thematic Analysis software can simplify this process.
  • Generalizing Findings (Narrative Analysis): Narrative analysis, dealing with individual stories, makes drawing broad conclusions challenging. Extending findings from a single narrative to a broader context requires careful consideration.
  • Managing Data (Thematic Analysis): Thematic analysis involves organizing and managing vast amounts of unstructured data, like interview transcripts. Managing this can be a hefty task, requiring effective data management strategies.
  • Skill Level (Grounded Theory): Grounded theory demands specific skills to build theories from the ground up. Finding or training analysts with these skills poses a challenge, requiring investment in building expertise.

Benefits of qualitative data analysis

Qualitative Data Analysis (QDA) is like a versatile toolkit, offering a tailored approach to understanding your data. The benefits it offers are as diverse as the methods. Let’s explore why choosing the right method matters.

  • Tailored Methods for Specific Needs: QDA isn't one-size-fits-all. Depending on your research objectives and the type of data at hand, different methods offer unique benefits. If you want emotive customer stories, narrative analysis paints a strong picture. When you want to explain a score, thematic analysis reveals insightful patterns.
  • Flexibility with Thematic Analysis: Thematic analysis is like a chameleon in the QDA toolkit. It adapts well to different types of data and research objectives, making it a top choice for many qualitative analyses.
  • Deeper Understanding, Better Products: QDA helps you dive into people's thoughts and feelings. This deep understanding helps you build products and services that truly match what people want, ensuring satisfied customers.
  • Finding the Unexpected: Qualitative data often reveals surprises that we miss in quantitative data. QDA offers us new ideas and perspectives, for insights we might otherwise miss.
  • Building Effective Strategies: Insights from QDA are like strategic guides. They help businesses in crafting plans that match people’s desires.
  • Creating Genuine Connections: Understanding people’s experiences lets businesses connect on a real level. This genuine connection helps build trust and loyalty, priceless for any business.

How to do Qualitative Data Analysis: 5 steps

Now we are going to show you how to do your own qualitative data analysis. We will guide you through this process step by step. As mentioned earlier, you will learn how to do qualitative data analysis manually, and also automatically, using modern qualitative data and thematic analysis software.

To get the best value from the analysis and research process, it's important to be super clear about the nature and scope of the question being researched. This will help you select the data collection channels most likely to help you answer your question.

Depending on if you are a business looking to understand customer sentiment, or an academic surveying a school, your approach to qualitative data analysis will be unique.

Once you’re clear, there’s a sequence to follow. And, though there are differences in the manual and automatic approaches, the process steps are mostly the same.

The use case for our step-by-step guide is a company looking to collect customer feedback data and analyze it in order to improve customer experience. By analyzing the feedback, the company derives insights about their business and their customers. You can follow these same steps regardless of the nature of your research. Let's get started.

Step 1: Gather your qualitative data and conduct research

The first step of qualitative research is data collection: put simply, gathering all of your data for analysis. Commonly, qualitative data is spread across various sources.

Classic methods of gathering qualitative data

Most companies use traditional methods for gathering qualitative data: conducting interviews with research participants, running surveys, and running focus groups. This data is typically stored in documents, CRMs, databases and knowledge bases. It’s important to examine which data is available and needs to be included in your research project, based on its scope.

Using your existing qualitative feedback

As it becomes easier for customers to engage across a range of different channels, companies are gathering increasingly large amounts of both solicited and unsolicited qualitative feedback.

Most organizations have now invested in Voice of Customer programs, support ticketing systems, chatbot and support conversations, emails and even customer Slack chats.

These new channels provide companies with new ways of getting feedback, and also allow the collection of unstructured feedback data at scale.

The great thing about this data is that it contains a wealth of valuable insights and that it's already there! When you have a new question about user behavior or your customers, you don't need to create a new research study or set up a focus group. You can find most answers in the data you already have.

Typically, this data is stored in third-party solutions or a central database, but there are ways to export it or connect to a feedback analysis solution through integrations or an API.

Utilize untapped qualitative data channels

There are many online qualitative data sources you may not have considered. For example, you can find useful qualitative data in social media channels like Twitter or Facebook. Online forums, review sites, and online communities such as Discourse or Reddit also contain valuable data about your customers or your research questions.

If you are considering a qualitative benchmark analysis against competitors, the internet is your best friend. Gathering feedback from competitor reviews on sites like Trustpilot, G2, Capterra, Better Business Bureau or the app stores is a great way to run a competitor benchmark analysis.

Customer feedback analysis software often has integrations into social media and review sites, or you could use a solution like DataMiner to scrape the reviews.

G2.com reviews of the product Airtable. You could pull reviews from G2 for your analysis.

Step 2: Connect & organize all your qualitative data

Now you have all this qualitative data, but there's a problem: the data is unstructured. Before feedback can be analyzed and assigned any value, it needs to be organized in a single place. Why is this important? Consistency!

If all data is easily accessible in one place and analyzed in a consistent manner, you will have an easier time summarizing and making decisions based on this data.

The manual approach to organizing your data

The classic method of structuring qualitative data is to plot all the raw data you’ve gathered into a spreadsheet.

Typically, research and support teams would share large Excel sheets and different business units would make sense of the qualitative feedback data on their own. Each team collects and organizes the data in a way that best suits them, which means the feedback tends to be kept in separate silos.

An alternative, more robust solution is to store feedback in a central database, like Snowflake or Amazon Redshift.

Keep in mind that when you organize your data in this way, you are often preparing it to be imported into another tool. If you go the route of a database, you would need to use an API to push the feedback into third-party software.
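For illustration, here is a minimal sketch of what pushing feedback over an API might look like in Python, using only the standard library. The endpoint URL, payload shape, and auth header are hypothetical placeholders, not a real platform's contract; consult your vendor's API documentation for the actual schema.

```python
import json
import urllib.request

def build_ingest_request(records, endpoint, api_key):
    """Build a POST request carrying feedback records as JSON.

    The endpoint, payload shape, and auth scheme here are hypothetical;
    check your analytics platform's API docs for the real contract.
    """
    body = json.dumps({"records": records}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending is then a one-liner (requires network access):
# urllib.request.urlopen(build_ingest_request(rows, url, key))
```

In practice you would batch records and handle retries, but the shape of the integration is usually this simple.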

Computer-assisted qualitative data analysis software (CAQDAS)

Traditionally within the manual analysis approach (but not always), qualitative data is imported into CAQDAS software for coding.

In the early 2000s, CAQDAS software was popularised by developers such as ATLAS.ti, NVivo and MAXQDA and eagerly adopted by researchers to assist with the organizing and coding of data.  

The benefits of using computer-assisted qualitative data analysis software:

  • Assists in organizing your data
  • Opens you up to exploring different interpretations of your data
  • Makes it easier to share your dataset and collaborate as a group (allowing for secondary analysis)

However, you still need to code the data, uncover the themes and do the analysis yourself. It is therefore still a manual approach.

The user interface of CAQDAS software 'NVivo'

Organizing your qualitative data in a feedback repository

Another solution to organizing your qualitative data is to upload it into a feedback repository, where it can be unified with your other data and easily searched and tagged. There are a number of software solutions that act as a central repository for your qualitative research data. Here are a couple of solutions you could investigate:

  • Dovetail: Dovetail is a research repository with a focus on video and audio transcriptions. You can tag your transcriptions within the platform for theme analysis. You can also upload your other qualitative data such as research reports, survey responses, support conversations, and customer interviews. Dovetail acts as a single, searchable repository, and makes it easier to collaborate with other people on your qualitative research.
  • EnjoyHQ: EnjoyHQ is another research repository with similar functionality to Dovetail. It boasts a more sophisticated search engine, but it has a higher starting subscription cost.

Organizing your qualitative data in a feedback analytics platform

If you have a lot of qualitative customer or employee feedback, from the likes of customer surveys or employee surveys, you will benefit from a feedback analytics platform. A feedback analytics platform is software that automates both sentiment analysis and thematic analysis. Companies use the integrations offered by these platforms to tap directly into their qualitative data sources (review sites, social media, survey responses, etc.). The data collected is then organized and analyzed consistently within the platform.

If you have data prepared in a spreadsheet, it can also be imported into feedback analytics platforms.

Once all this rich data has been organized within the feedback analytics platform, it is ready to be coded and themed, within the same platform. Thematic is a feedback analytics platform that offers one of the largest libraries of integrations with qualitative data sources.

Some of the qualitative data integrations offered by Thematic

Step 3: Coding your qualitative data

Your feedback data is now organized in one place, whether that's a spreadsheet, CAQDAS software, a feedback repository or a feedback analytics platform. The next step is to code your feedback data so that you can extract meaningful insights in the next step.

Coding is the process of labelling and organizing your data in such a way that you can then identify themes in the data, and the relationships between these themes.

To simplify the coding process, you will take small samples of your customer feedback data, come up with a set of codes (categories capturing themes), and systematically label each piece of feedback for patterns and meaning. Then you will take a larger sample of data, revising and refining the codes for greater accuracy and consistency as you go.

If you choose to use a feedback analytics platform, much of this process will be automated and accomplished for you.

The terms used to describe different categories of meaning (‘theme’, ‘code’, ‘tag’, ‘category’, etc.) can be confusing, as they are often used interchangeably. For clarity, this article will use the term ‘code’.

To code means to identify key words or phrases and assign them to a category of meaning. “I really hate the customer service of this computer software company” would be coded as “poor customer service”.

How to manually code your qualitative data

  1. Decide whether you will use deductive or inductive coding. Deductive coding is when you create a list of predefined codes and then assign them to the qualitative data. Inductive coding is the opposite: codes arise directly from the data, and you label them as you go. Weigh up the pros and cons of each coding method and select the most appropriate.
  2. Read through the feedback data to get a broad sense of what it reveals. Then start assigning your first set of codes to statements and sections of text.
  3. Keep repeating step 2, adding new codes and revising code descriptions as often as necessary. Once everything has been coded, go through it all again to be sure there are no inconsistencies and nothing has been overlooked.
  4. Create a code frame to group your codes. The code frame is the organizational structure of all your codes. There are two commonly used types of coding frames: flat and hierarchical. A hierarchical code frame will make it easier to derive insights from your analysis.
  5. Based on the number of times a particular code occurs, you can now see the common themes in your feedback data. This is insightful! If ‘bad customer service’ is a common code, it’s time to take action.

We have a detailed guide dedicated to manually coding your qualitative data .

Example of a hierarchical coding frame in qualitative data analysis

Using software to speed up manual coding of qualitative data

An Excel spreadsheet is still a popular method for coding. But various software solutions can help speed up this process. Here are some examples.

  • CAQDAS / NVivo - CAQDAS software has built-in functionality that allows you to code text within their software. You may find the interface the software offers easier for managing codes than a spreadsheet.
  • Dovetail/EnjoyHQ - You can tag transcripts and other textual data within these solutions. As they are also repositories you may find it simpler to keep the coding in one platform.
  • IBM SPSS - SPSS is a statistical analysis software that may make coding easier than in a spreadsheet.
  • Ascribe - Ascribe’s ‘Coder’ is a coding management system. Its user interface will make it easier for you to manage your codes.

Automating the qualitative coding process using thematic analysis software

In solutions which speed up the manual coding process, you still have to come up with valid codes and often apply codes manually to pieces of feedback. But there are also solutions that automate both the discovery and the application of codes.

Advances in machine learning have now made it possible to read, code and structure qualitative data automatically. This type of automated coding is offered by thematic analysis software .

Automation makes it far simpler and faster to code the feedback and group it into themes. By incorporating natural language processing (NLP) into the software, the AI looks across sentences and phrases to identify common themes and meaningful statements. Some automated solutions detect repeating patterns and assign codes to them; others have you train the AI by providing examples. You could say that the AI learns the meaning of the feedback on its own.

Thematic automates the coding of qualitative feedback regardless of source. There’s no need to set up themes or categories in advance. Simply upload your data and wait a few minutes. You can also manually edit the codes to further refine their accuracy. Experiments indicate that Thematic’s automated coding is just as accurate as manual coding.

Paired with sentiment analysis and advanced text analytics - these automated solutions become powerful for deriving quality business or research insights.

You could also build your own, if you have the resources!
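As a toy illustration of the inductive idea behind automated theme discovery, the sketch below surfaces the most frequent non-stopword terms across a set of feedback as candidate themes for a human to review. Production systems rely on NLP models rather than raw word counts; this only demonstrates the basic principle, and the stopword list is deliberately tiny.

```python
from collections import Counter
import re

# Deliberately tiny, illustrative stopword list.
STOPWORDS = {"the", "a", "is", "to", "and", "of", "it", "was", "i", "my", "but", "again"}

def candidate_themes(feedback_items, top_n=3):
    """Surface the most frequent non-stopword terms as candidate themes.

    Counts each term once per feedback item, so one rant can't dominate.
    """
    counts = Counter()
    for item in feedback_items:
        words = re.findall(r"[a-z']+", item.lower())
        counts.update(w for w in set(words) if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

feedback = [
    "The delivery was slow",
    "Slow delivery again",
    "Great product, but slow shipping",
]
candidate_themes(feedback, top_n=2)
# -> ['slow', 'delivery']
```

Real thematic analysis software goes much further, grouping synonyms and phrases into coherent themes, but the candidate-and-review loop is the same.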

The key benefits of using an automated coding solution

Automated analysis can often be set up fast and there’s the potential to uncover things that would never have been revealed if you had given the software a prescribed list of themes to look for.

Because the model applies a consistent rule to the data, it captures phrases or statements that a human eye might have missed.

Complete and consistent analysis of customer feedback enables more meaningful findings. Leading us into step 4.

Step 4: Analyze your data: Find meaningful insights

Now we are going to analyze our data to find insights. This is where we start to answer our research questions. Keep in mind that steps 4 and 5 (tell the story) overlap, because creating visualizations is part of both the analysis process and reporting.

The task of uncovering insights is to scour through the codes that emerge from the data and draw meaningful correlations from them. It is also about making sure each insight is distinct and has enough data to support it.

Part of the analysis is to establish how much each code relates to different demographics and customer profiles, and identify whether there’s any relationship between these data points.

Manually create sub-codes to improve the quality of insights

If your code frame has only one level, you may find that your codes are too broad for extracting meaningful insights. This is where it is valuable to create sub-codes for your primary codes. This process is sometimes referred to as meta-coding.

Note: If you take an inductive coding approach, you can create sub-codes as you are reading through your feedback data and coding it.

While time-consuming, this exercise will improve the quality of your analysis. Here is an example of what sub-codes could look like.

Example of sub-codes

You need to carefully read your qualitative data to create quality sub-codes. But as you can see, the depth of analysis is greatly improved. By calculating the frequency of these sub-codes, you can get insight into which customer service problems you can immediately address.
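Counting sub-code frequency is straightforward once each coded response is stored as a (code, sub-code) pair; the pairs below are hypothetical examples of that assumed data shape.

```python
from collections import Counter

# Hypothetical coded feedback: one (primary code, sub-code) pair per response.
coded = [
    ("poor customer service", "long wait times"),
    ("poor customer service", "unhelpful answers"),
    ("poor customer service", "long wait times"),
    ("pricing", "too expensive"),
]

# Tally how often each sub-code occurs.
sub_code_counts = Counter(sub for _, sub in coded)
sub_code_counts.most_common(1)
# -> [('long wait times', 2)]: the sub-code to address first
```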

Correlate the frequency of codes to customer segments

Many businesses use customer segmentation. And you may have your own respondent segments that you can apply to your qualitative analysis. Segmentation is the practice of dividing customers or research respondents into subgroups.

Segments can be based on:

  • Demographics
  • Any other data type that you care to segment by

It is particularly useful to see the occurrence of codes within your segments. If one of your customer segments is considered unimportant to your business, but they are the cause of nearly all customer service complaints, it may be in your best interest to focus attention elsewhere. This is a useful insight!
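A simple way to check code occurrence per segment is to tally (segment, code) pairs; the rows below are hypothetical examples of that assumed shape.

```python
from collections import defaultdict

# Hypothetical rows: (segment, code) for each coded piece of feedback.
rows = [
    ("enterprise", "poor customer service"),
    ("free tier", "poor customer service"),
    ("free tier", "poor customer service"),
    ("enterprise", "pricing"),
]

# Nested tally: segment -> code -> count.
code_counts_by_segment = defaultdict(lambda: defaultdict(int))
for segment, code in rows:
    code_counts_by_segment[segment][code] += 1

code_counts_by_segment["free tier"]["poor customer service"]
# -> 2: most of the service complaints come from the free tier
```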

Manually visualizing coded qualitative data

There are formulas you can use to visualize key insights in your data. The formulas we suggest are essential if you are measuring a score alongside your feedback.

If you are collecting a metric alongside your qualitative data, impact analysis is a key visualization. Impact answers the question: “What’s the impact of a code on my overall score?”. Using Net Promoter Score (NPS) as an example, first you need to:

  • Calculate the overall NPS (A)
  • Calculate the NPS of the subset of responses that do not contain that code (B)
  • Subtract B from A

Then you can use this simple formula to calculate a code's impact on NPS.

Visualizing qualitative data: Calculating the impact of a code on your score

You can then visualize this data using a bar chart.

You can download our CX toolkit - it includes a template to recreate this.
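If you prefer scripting to spreadsheets, the impact calculation above can be sketched in a few lines of Python. NPS is computed the standard way (percentage of promoters minus percentage of detractors); storing each response as a (score, codes) pair is an assumed data shape for the example.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def code_impact(responses, code):
    """Overall NPS (A) minus NPS of responses without the code (B)."""
    overall = nps([score for score, _ in responses])
    without = nps([score for score, codes in responses if code not in codes])
    return overall - without

# Hypothetical responses: (NPS score, assigned codes).
responses = [
    (10, ["ease of use"]),
    (9, []),
    (3, ["poor customer service"]),
    (6, ["poor customer service"]),
]
code_impact(responses, "poor customer service")
# -> -100.0: this code drags the overall score down heavily
```

A strongly negative impact, as here, flags the code as a priority fix; a positive impact marks something worth protecting.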

Trends over time

This analysis can help you answer questions like: “Which codes are linked to decreases or increases in my score over time?”

We need to compare two sequences of numbers: NPS over time, and code frequency over time. Using Excel, calculate the correlation between the two sequences. The correlation can be either positive (the more often the code occurs, the higher the NPS; see picture below) or negative (the more often the code occurs, the lower the NPS).

Now you need to plot code frequency against the absolute value of the code's correlation with NPS. Here is the formula:

Analyzing qualitative data: Calculate which codes are linked to increases or decreases in my score

The visualization could look like this:

Visualizing qualitative data trends over time
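The same correlation can be computed outside Excel. The sketch below implements Pearson correlation by hand; the monthly NPS and code-frequency numbers are illustrative, not real data.

```python
def pearson(xs, ys):
    """Pearson correlation between two equally long sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Illustrative monthly numbers: NPS falls as the code appears more often.
nps_by_month = [40, 35, 30, 25]
code_freq_by_month = [10, 14, 18, 22]

pearson(nps_by_month, code_freq_by_month)
# -> -1.0 (a perfectly negative correlation in this toy example)
```

You would then plot each code's frequency against the absolute value of its correlation, as described above, to spot high-frequency, high-correlation codes.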

These are two examples, but there are more. For a third manual formula, and to learn why word clouds are not an insightful form of analysis, read our visualizations article.

Using a text analytics solution to automate analysis

Automated text analytics solutions enable codes and sub-codes to be pulled out of the data automatically. This makes it far faster and easier to identify what's driving negative or positive results, to pick up emerging trends, and to find all manner of rich insights in the data.

Another benefit of AI-driven text analytics software is its built-in capability for sentiment analysis, which provides the emotive context behind your feedback and other qualitative text data.
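To illustrate the idea behind sentiment analysis, here is a toy lexicon-based scorer. Real platforms use trained models rather than word lists; the lexicons below are made up for the example.

```python
# Toy sentiment lexicons -- illustrative only, not a real model.
POSITIVE = {"great", "love", "easy", "helpful"}
NEGATIVE = {"hate", "slow", "broken", "rude"}

def sentiment(text: str) -> str:
    """Classify text by counting positive vs negative lexicon hits."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

sentiment("I love how easy the new dashboard is")
# -> 'positive'
```

Pairing a sentiment label like this with each code is what lets you separate, say, positive mentions of pricing from complaints about it.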

Thematic provides text analytics that goes further by allowing users to apply their expertise on business context to edit or augment the AI-generated outputs.

Since the move away from manual research is generally about reducing the human element, adding human input to the technology might sound counter-intuitive. However, this is mostly to make sure important business nuances in the feedback aren’t missed during coding. The result is a higher accuracy of analysis. This is sometimes referred to as augmented intelligence .

Codes displayed by volume within Thematic. You can 'manage themes' to introduce human input.

Step 5: Report on your data: Tell the story

The last step of analyzing your qualitative data is to report on it, to tell the story. At this point, the codes are fully developed and the focus is on communicating the narrative to the audience.

A coherent outline of the qualitative research, the findings and the insights is vital for stakeholders to discuss and debate before they can devise a meaningful course of action.

Creating graphs and reporting in PowerPoint

Typically, qualitative researchers take the tried-and-tested approach of distilling their report into a series of charts, tables and other visuals, which are woven into a narrative for presentation in PowerPoint.

Using visualization software for reporting

With data transformation and APIs, the analyzed data can be shared with data visualization software such as Power BI, Tableau, Google Studio or Looker. Power BI and Tableau are among the most preferred options.

Visualizing your insights inside a feedback analytics platform

Feedback analytics platforms, like Thematic, incorporate visualization tools that intuitively turn key data and insights into graphs. This removes the time-consuming work of constructing charts to visually identify patterns, and creates more time to focus on building a compelling narrative that highlights the insights, in bite-size chunks, for executive teams to review.

Using a feedback analytics platform with visualization tools means you don't have to use a separate product for visualizations. You can export graphs into PowerPoint straight from the platform.

Two examples of qualitative data visualizations within Thematic

Conclusion - Manual or Automated?

There are those who remain deeply invested in the manual approach - because it’s familiar, because they’re reluctant to spend money and time learning new software, or because they’ve been burned by the overpromises of AI.  

For projects that involve small datasets, manual analysis makes sense. For example, if the objective is simply to answer a question like “Do customers prefer concept X to concept Y?”, and the findings are being extracted from a small set of focus groups and interviews, sometimes it's easier to just read them.

However, as new generations enter the workplace, it’s technology-driven solutions that feel more comfortable and practical, and the merits are undeniable. This is especially true if the objective is to go deeper and understand the ‘why’ behind customers’ preference for X or Y, and even more so when time and money are considerations.

The ability to collect a free flow of qualitative feedback data at the same time as the metric means AI can cost-effectively scan, crunch, score and analyze a ton of feedback from one system in one go. Time-intensive processes like focus groups or coding, which used to take weeks, can now be completed in a matter of hours or days.
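To make the scale argument concrete, here is a toy sketch of the kind of scan-and-code pass that automated tools run over thousands of comments in seconds. The keyword-to-theme lookup is a hypothetical stand-in for a real NLP model, and the theme names are invented for illustration:

```python
from collections import Counter

# Hypothetical keyword-to-theme map; a real system would use an NLP
# model rather than a hand-written lookup like this.
THEME_KEYWORDS = {
    "pricing": ["price", "expensive", "cost"],
    "support": ["support", "helpdesk", "agent"],
    "usability": ["easy", "confusing", "intuitive"],
}

def code_comment(text: str) -> list[str]:
    """Assign every matching theme to a single comment."""
    lowered = text.lower()
    return [
        theme
        for theme, keywords in THEME_KEYWORDS.items()
        if any(word in lowered for word in keywords)
    ]

comments = [
    "The price is too expensive for what you get",
    "Support agent resolved my issue quickly",
    "Really easy to use, but the cost adds up",
]

# One pass codes the whole dataset with perfect consistency: the rule
# applied to comment 1 is the same rule applied to comment 10,000.
theme_counts = Counter(t for c in comments for t in code_comment(c))
print(theme_counts.most_common())
```

The same loop runs unchanged over three comments or three hundred thousand, which is the consistency-at-scale point made above.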

But aside from the ever-present business case to speed things up and keep costs down, there are also powerful research imperatives for automated analysis of qualitative data: namely, accuracy and consistency.

Finding insights hidden in feedback requires consistency, especially in coding. It also means catching all the ‘unknown unknowns’ that can skew research findings, and steering clear of cognitive bias.

Some say that without manual data analysis researchers won’t get an accurate “feel” for the insights. However, the larger the dataset, the harder it is to sort through and organize feedback pulled from different places, and the more difficult it is to stay on course, the greater the risk of drawing incorrect or incomplete conclusions.

Though the process steps for qualitative data analysis have remained largely unchanged since sociologist Paul Felix Lazarsfeld paved the way a hundred years ago, the impact digital technology has had on the types of qualitative feedback data, and on the approach to the analysis, is profound.

If you want to try an automated feedback analysis solution on your own qualitative data, you can get started with Thematic .

Community & Marketing

Tyler manages our community of CX, insights & analytics professionals. Tyler's goal is to help unite insights professionals around common challenges.



The Oxford Handbook of Qualitative Research (2nd edn)


29 Qualitative Data Analysis Strategies

Johnny Saldaña, School of Theatre and Film, Arizona State University

  • Published: 02 September 2020

This chapter provides an overview of selected qualitative data analysis strategies with a particular focus on codes and coding. Preparatory strategies for a qualitative research study and data management are first outlined. Six coding methods are then profiled using comparable interview data: process coding, in vivo coding, descriptive coding, values coding, dramaturgical coding, and versus coding. Strategies for constructing themes and assertions from the data follow. Analytic memo writing is woven throughout as a method for generating additional analytic insight. Next, display and arts-based strategies are provided, followed by recommended qualitative data analytic software programs and a discussion on verifying the researcher’s analytic findings.

Qualitative Data Analysis Strategies

Anthropologist Clifford Geertz ( 1983 ) charmingly mused, “Life is just a bowl of strategies” (p. 25). Strategy , as I use it here, refers to a carefully considered plan or method to achieve a particular goal. The goal in this case is to develop a write-up of your analytic work with the qualitative data you have been given and collected as part of a study. The plans and methods you might employ to achieve that goal are what this article profiles.

Some may perceive strategy as an inappropriate, if not manipulative, word, suggesting formulaic or regimented approaches to inquiry. I assure you that is not my intent. My use of strategy is dramaturgical in nature: Strategies are actions that characters in plays take to overcome obstacles to achieve their objectives. Actors portraying these characters rely on action verbs to generate belief within themselves and to motivate them as they interpret their lines and move appropriately on stage.

What I offer is a qualitative researcher’s array of actions from which to draw to overcome the obstacles to thinking to achieve an analysis of your data. But unlike the prescripted text of a play in which the obstacles, strategies, and outcomes have been predetermined by the playwright, your work must be improvisational—acting, reacting, and interacting with data on a moment-by-moment basis to determine what obstacles stand in your way and thus what strategies you should take to reach your goals.

Another intriguing quote to keep in mind comes from research methodologist Robert E. Stake ( 1995 ), who posited, “Good research is not about good methods as much as it is about good thinking” (p. 19). In other words, strategies can take you only so far. You can have a box full of tools, but if you do not know how to use them well or use them creatively, the collection seems rather purposeless. One of the best ways we learn is by doing . So, pick up one or more of these strategies (in the form of verbs) and take analytic action with your data. Also keep in mind that these are discussed in the order in which they may typically occur, although humans think cyclically, iteratively, and reverberatively, and each research project has its unique contexts and needs. Be prepared for your mind to jump purposefully and/or idiosyncratically from one strategy to another throughout the study.

Qualitative Data Analysis Strategy: To Foresee

To foresee in qualitative data analysis (QDA) is to reflect beforehand on what forms of data you will most likely need and collect, which thus informs what types of data analytic strategies you anticipate using. Analysis, in a way, begins even before you collect data (Saldaña & Omasta, 2018 ). As you design your research study in your mind and on a text editing page, one strategy is to consider what types of data you may need to help inform and answer your central and related research questions. Interview transcripts, participant observation field notes, documents, artifacts, photographs, video recordings, and so on are not only forms of data but also foundations for how you may plan to analyze them. A participant interview, for example, suggests that you will transcribe all or relevant portions of the recording and use both the transcription and the recording itself as sources for data analysis. Any analytic memos (discussed later) you make about your impressions of the interview also become data to analyze. Even the computing software you plan to employ will be relevant to data analysis because it may help or hinder your efforts.

As your research design formulates, compose one to two paragraphs that outline how your QDA may proceed. This will necessitate that you have some background knowledge of the vast array of methods available to you. Thus, surveying the literature is vital preparatory work.

Qualitative Data Analysis Strategy: To Survey

To survey in QDA is to look for and consider the applicability of the QDA literature in your field that may provide useful guidance for your forthcoming data analytic work. General sources in QDA will provide a good starting point for acquainting you with the data analysis strategies available for the variety of methodologies or genres in qualitative inquiry (e.g., ethnography, phenomenology, case study, arts-based research, mixed methods). One of the most accessible (and humorous) is Galman’s ( 2013 ) The Good, the Bad, and the Data , and one of the most richly detailed is Frederick J. Wertz et al.’s ( 2011 ) Five Ways of Doing Qualitative Analysis . The author’s core texts for this chapter come from The Coding Manual for Qualitative Researchers (Saldaña, 2016 ) and Qualitative Research: Analyzing Life (Saldaña & Omasta, 2018 ).

If your study’s methodology or approach is grounded theory, for example, then a survey of methods works by authors such as Barney G. Glaser, Anselm L. Strauss, Juliet Corbin, and, in particular, the prolific Kathy Charmaz ( 2014 ) may be expected. But there has been a recent outpouring of additional book publications in grounded theory by Birks and Mills ( 2015 ), Bryant ( 2017 ), Bryant and Charmaz ( 2019 ), and Stern and Porr ( 2011 ), plus the legacy of thousands of articles and chapters across many disciplines that have addressed grounded theory in their studies.

Fields such as education, psychology, social work, healthcare, and others also have their own QDA methods literature in the form of texts and journals, as well as international conferences and workshops for members of the profession. It is important to have had some university coursework and/or mentorship in qualitative research to suitably prepare you for the intricacies of QDA, and you must acknowledge that the emergent nature of qualitative inquiry may require you to adopt analysis strategies that differ from what you originally planned.

Qualitative Data Analysis Strategy: To Collect

To collect in QDA is to receive the data given to you by participants and those data you actively gather to inform your study. Qualitative data analysis is concurrent with data collection and management. As interviews are transcribed, field notes are fleshed out, and documents are filed, the researcher uses opportunities to carefully read the corpus and make preliminary notations directly on the data documents by highlighting, bolding, italicizing, or noting in some way any particularly interesting or salient portions. As these data are initially reviewed, the researcher also composes supplemental analytic memos that include first impressions, reminders for follow-up, preliminary connections, and other thinking matters about the phenomena at work.

Some of the most common fieldwork tools you might use to collect data are notepads, pens and pencils; file folders for hard-copy documents; a laptop, tablet, or desktop with text editing software (Microsoft Word and Excel are most useful) and Internet access; and a digital camera and voice recorder (functions available on many electronic devices such as smartphones). Some fieldworkers may even employ a digital video camera to record social action, as long as participant permissions have been secured. But everything originates from the researcher. Your senses are immersed in the cultural milieu you study, taking in and holding onto relevant details, or significant trivia, as I call them. You become a human camera, zooming out to capture the broad landscape of your field site one day and then zooming in on a particularly interesting individual or phenomenon the next. Your analysis is only as good as the data you collect.

Fieldwork can be an overwhelming experience because so many details of social life are happening in front of you. Take a holistic approach to your entrée, but as you become more familiar with the setting and participants, actively focus on things that relate to your research topic and questions. Keep yourself open to the intriguing, surprising, and disturbing (Sunstein & Chiseri-Strater, 2012, p. 115), because these facets enrich your study by making you aware of the unexpected.

Qualitative Data Analysis Strategy: To Feel

To feel in QDA is to gain deep emotional insight into the social worlds you study and what it means to be human. Virtually everything we do has an accompanying emotion(s), and feelings are both reactions and stimuli for action. Others’ emotions clue you to their motives, values, attitudes, beliefs, worldviews, identities, and other subjective perceptions and interpretations. Acknowledge that emotional detachment is not possible in field research. Attunement to the emotional experiences of your participants plus sympathetic and empathetic responses to the actions around you are necessary in qualitative endeavors. Your own emotional responses during fieldwork are also data because they document the tacit and visceral. It is important during such analytic reflection to assess why your emotional reactions were as they were. But it is equally important not to let emotions alone steer the course of your study. A proper balance must be found between feelings and facts.

Qualitative Data Analysis Strategy: To Organize

To organize in QDA is to maintain an orderly repository of data for easy access and analysis. Even in the smallest of qualitative studies, a large amount of data will be collected across time. Prepare both a hard drive and hard-copy folders for digital data and paperwork, and back up all materials for security from loss. I recommend that each data unit (e.g., one interview transcript, one document, one day’s worth of field notes) have its own file, with subfolders specifying the data forms and research study logistics (e.g., interviews, field notes, documents, institutional review board correspondence, calendar).
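For researchers comfortable with a scripting language, the folder scheme described above can be scaffolded programmatically rather than by hand. A minimal Python sketch; the subfolder labels mirror the data forms and study logistics suggested in this section, but the names themselves are illustrative, not prescribed:

```python
from pathlib import Path

# Subfolder names follow the data forms and study logistics named above;
# the labels are illustrative choices, not a required convention.
SUBFOLDERS = [
    "interviews",
    "field_notes",
    "documents",
    "irb_correspondence",
    "calendar",
    "analytic_memos",
]

def scaffold_study(root: str) -> Path:
    """Create a study folder with one subfolder per data form."""
    base = Path(root)
    for name in SUBFOLDERS:
        (base / name).mkdir(parents=True, exist_ok=True)
    return base
```

Each data unit (one transcript, one day's field notes) then gets its own file inside the matching subfolder, with the whole tree backed up for security from loss.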

For small-scale qualitative studies, I have found it quite useful to maintain one large master file with all participant and field site data copied and combined with the literature review and accompanying researcher analytic memos. This master file is used to cut and paste related passages together, deleting what seems unnecessary as the study proceeds and eventually transforming the document into the final report itself. Cosmetic devices such as font style, font size, rich text (italicizing, bolding, underlining, etc.), and color can help you distinguish between different data forms and highlight significant passages. For example, descriptive, narrative passages of field notes are logged in regular font. “Quotations, things spoken by participants, are logged in bold font.” Observer’s comments, such as the researcher’s subjective impressions or analytic jottings, are set in italics.

Qualitative Data Analysis Strategy: To Jot

To jot in QDA is to write occasional, brief notes about your thinking or reminders for follow-up. A jot is a phrase or brief sentence that will fit on a standard-size sticky note. As data are brought and documented together, take some initial time to review their contents and jot some notes about preliminary patterns, participant quotes that seem vivid, anomalies in the data, and so forth.

As you work on a project, keep something to write with or to voice record with you at all times to capture your fleeting thoughts. You will most likely find yourself thinking about your research when you are not working exclusively on the project, and a “mental jot” may occur to you as you ruminate on logistical or analytic matters. Document the thought in some way for later retrieval and elaboration as an analytic memo.

Qualitative Data Analysis Strategy: To Prioritize

To prioritize in QDA is to determine which data are most significant in your corpus and which tasks are most necessary. During fieldwork, massive amounts of data in various forms may be collected, and your mind can be easily overwhelmed by the magnitude of the quantity, its richness, and its management. Decisions will need to be made about the most pertinent data because they help answer your research questions or emerge as salient pieces of evidence. As a sweeping generalization, approximately one half to two thirds of what you collect may become unnecessary as you proceed toward the more formal stages of QDA.

To prioritize in QDA is also to determine what matters most in your assembly of codes, categories, patterns, themes, assertions, propositions, and concepts. Return to your research purpose and questions to keep you framed for what the focus should be.

Qualitative Data Analysis Strategy: To Analyze

To analyze in QDA is to observe and discern patterns within data and to construct meanings that seem to capture their essences and essentials. Just as there are a variety of genres, elements, and styles of qualitative research, so too are there a variety of methods available for QDA. Analytic choices are most often based on what methods will harmonize with your genre selection and conceptual framework, what will generate the most sufficient answers to your research questions, and what will best represent and present the project’s findings.

Analysis can range from the factual to the conceptual to the interpretive. Analysis can also range from a straightforward descriptive account to an emergently constructed grounded theory to an evocatively composed short story. A qualitative research project’s outcomes may range from rigorously achieved, insightful answers to open-ended, evocative questions; from rich descriptive detail to a bullet-point list of themes; and from third-person, objective reportage to first-person, emotion-laden poetry. Just as there are multiple destinations in qualitative research, there are multiple pathways and journeys along the way.

Analysis is accelerated as you take cognitive ownership of your data. By reading and rereading the corpus, you gain intimate familiarity with its contents and begin to notice significant details as well as make new connections and insights about their meanings. Patterns, categories, themes, and their interrelationships become more evident the more you know the subtleties of the database.

Since qualitative research’s design, fieldwork, and data collection are most often provisional, emergent, and evolutionary processes, you reflect on and analyze the data as you gather them and proceed through the project. If preplanned methods are not working, you change them to secure the data you need. There is generally a postfieldwork period when continued reflection and more systematic data analysis occur, concurrent with or followed by additional data collection, if needed, and the more formal write-up of the study, which is in itself an analytic act. Through field note writing, interview transcribing, analytic memo writing, and other documentation processes, you gain cognitive ownership of your data; and the intuitive, tacit, synthesizing capabilities of your brain begin sensing patterns, making connections, and seeing the bigger picture. The purpose and outcome of data analysis is to reveal to others through fresh insights what we have observed and discovered about the human condition. Fortunately, there are heuristics for reorganizing and reflecting on your qualitative data to help you achieve that goal.

Qualitative Data Analysis Strategy: To Pattern

To pattern in QDA is to detect similarities within and regularities among the data you have collected. The natural world is filled with patterns because we, as humans, have constructed them as such. Stars in the night sky are not just a random assembly; our ancestors pieced them together to form constellations like the Big Dipper. A collection of flowers growing wild in a field has a pattern, as does an individual flower’s patterns of leaves and petals. Look at the physical objects humans have created and notice how pattern oriented we are in our construction, organization, and decoration. Look around you in your environment and notice how many patterns are evident on your clothing, in a room, and on most objects themselves. Even our sometimes mundane daily and long-term human actions are reproduced patterns in the form of routines, rituals, rules, roles, and relationships (Saldaña & Omasta, 2018).

This human propensity for pattern-making follows us into QDA. From the vast array of interview transcripts, field notes, documents, and other forms of data, there is this instinctive, hardwired need to bring order to the collection—not just to reorganize it but to look for and construct patterns out of it. The discernment of patterns is one of the first steps in the data analytic process, and the methods described next are recommended ways to construct them.

Qualitative Data Analysis Strategy: To Code

To code in QDA is to assign a truncated, symbolic meaning to each datum for purposes of qualitative analysis—primarily patterning and categorizing. Coding is a heuristic—a method of discovery—to the meanings of individual sections of data. These codes function as a way of patterning, classifying, and later reorganizing them into emergent categories for further analysis. Different types of codes exist for different types of research genres and qualitative data analytic approaches, but this chapter will focus on only a few selected methods. First, a code can be defined as follows:

A code in qualitative data analysis is most often a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data. The data can consist of interview transcripts, participant observation field notes, journals, documents, open-ended survey responses, drawings, artifacts, photographs, video, Internet sites, e-mail correspondence, academic and fictional literature, and so on. The portion of data coded … can range in magnitude from a single word to a full paragraph, an entire page of text or a stream of moving images.… Just as a title represents and captures a book or film or poem’s primary content and essence, so does a code represent and capture a datum’s primary content and essence. (Saldaña, 2016, p. 4)

One helpful precoding task is to divide or parse long selections of field note or interview transcript data into shorter stanzas. Stanza division unitizes or “chunks” the corpus into more manageable paragraph-like units for coding assignments and analysis. The transcript sample that follows illustrates one possible way of inserting line breaks between self-standing passages of interview text for easier readability.
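For digitized transcripts, stanza division can be partly automated. A minimal Python sketch, assuming stanzas are separated by blank lines; the three-stanza sample transcript is invented for illustration:

```python
import re

def parse_stanzas(transcript: str) -> list[str]:
    """Split a transcript into stanza units wherever one or more
    blank lines separate self-standing passages."""
    stanzas = re.split(r"\n\s*\n", transcript.strip())
    return [s.strip() for s in stanzas if s.strip()]

# An invented sample for illustration (I = interviewer, P = participant).
sample = """I: How has the recession affected you?

P: Well, I've always lived kind of cheap.
I'm not a big spender.

P: But lately I think twice before buying."""

# Each resulting stanza can then receive its own code assignment(s).
units = parse_stanzas(sample)
```

The researcher still decides where self-standing passages begin and end; the script only mechanizes the chunking once those line breaks are in place.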

Process Coding

As a first coding example, the following interview excerpt about an employed, single, lower middle-class adult male’s spending habits during a difficult economic period in the United States is coded in the right-hand margin in capital letters. The superscript numbers match the beginning of the datum unit with its corresponding code. This method is called process coding (Charmaz, 2014), and it uses gerunds (“-ing” words) exclusively to represent action suggested by the data. Processes can consist of observable human actions (e.g., BUYING BARGAINS), mental or internal processes (e.g., THINKING TWICE), and more conceptual ideas (e.g., APPRECIATING WHAT YOU’VE GOT). Notice that the interviewer’s (I) portions are not coded, just the participant’s (P). A code is applied each time the subtopic of the interview shifts—even within a stanza—and the same codes can (and should) be used more than once if the subtopics are similar. The central research question driving this qualitative study is, “In what ways are middle-class Americans influenced and affected by an economic recession?”

Different researchers analyzing this same piece of data may develop completely different codes, depending on their personal lenses, filters, and angles. The previous codes are only one person’s interpretation of what is happening in the data, not a definitive list. The process codes have transformed the raw data units into new symbolic representations for analysis. A listing of the codes applied to this interview transcript, in the order they appear, reads:

BUYING BARGAINS

QUESTIONING A PURCHASE

THINKING TWICE

STOCKING UP

REFUSING SACRIFICE

PRIORITIZING

FINDING ALTERNATIVES

LIVING CHEAPLY

NOTICING CHANGES

STAYING INFORMED

MAINTAINING HEALTH

PICKING UP THE TAB

APPRECIATING WHAT YOU’VE GOT

Coding the data is the first step in this approach to QDA, and categorization is just one of the next possible steps.

Qualitative Data Analysis Strategy: To Categorize

To categorize in QDA is to cluster similar or comparable codes into groups for pattern construction and further analysis. Humans categorize things in innumerable ways. Think of an average apartment or house’s layout. The rooms of a dwelling have been constructed or categorized by their builders and occupants according to function. A kitchen is designated as an area to store and prepare food and to store the cooking and dining materials, such as pots, pans, and utensils. A bedroom is designated for sleeping, a closet for clothing storage, a bathroom for bodily functions and hygiene, and so on. Each room is like a category in which related and relevant patterns of human action occur. There are exceptions now and then, such as eating breakfast in bed rather than in a dining area or living in a small studio apartment in which most possessions are contained within one large room (but nonetheless are most often organized and clustered into subcategories according to function and optimal use of space).

The point is that the patterns of social action we designate into categories during QDA are not perfectly bounded. Category construction is our best attempt to cluster the most seemingly alike things into the most seemingly appropriate groups. Categorizing is reorganizing and reordering the vast array of data from a study because it is from these smaller, meaning-rich units that we can better grasp the particular features of each one and the categories’ possible interrelationships with one another.

One analytic strategy with a list of codes is to classify them into similar clusters. The same codes share the same category, but it is also possible that a single code can merit its own group if you feel it is unique enough. After the codes have been classified, a category label is applied to each grouping. Sometimes a code can also double as a category name if you feel it best summarizes the totality of the cluster. Like coding, categorizing is an interpretive act, because there can be different ways of separating and collecting codes that seem to belong together. The cut-and-paste functions of text editing software are most useful for exploring which codes share something in common.

Below is my categorization of the 15 codes generated from the interview transcript presented earlier. Like the gerunds for process codes, the categories have also been labeled as “-ing” words to connote action. And there was no particular reason why 15 codes resulted in three categories—there could have been fewer or even more, but this is how the array came together after my reflections on which codes seemed to belong together. The category labels are ways of answering why they belong together. For at-a-glance differentiation, I place codes in CAPITAL LETTERS and categories in upper- and lowercase Bold Font:

Category 1: Thinking Strategically

Category 2: Spending Strategically

Category 3: Living Strategically

Notice that the three category labels share a common word: strategically . Where did this word come from? It came from analytic reflection on the original data, the codes, and the process of categorizing the codes and generating their category labels. It was the analyst’s choice based on the interpretation of what primary action was happening. Your categories generated from your coded data do not need to share a common word or phrase, but I find that this technique, when appropriate, helps build a sense of unity to the initial analytic scheme.
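For those clustering codes digitally rather than with cut-and-paste text editing, the same scheme can be recorded as a simple data structure. A Python sketch using the three category labels and the process codes from the example above; note that this particular assignment of codes to categories is one illustrative grouping, not a definitive one:

```python
from typing import Optional

# Three category labels from the worked example; the placement of each
# process code under a category is an illustrative interpretation.
CATEGORIES = {
    "Thinking Strategically": [
        "THINKING TWICE", "QUESTIONING A PURCHASE", "NOTICING CHANGES",
        "STAYING INFORMED", "APPRECIATING WHAT YOU'VE GOT",
    ],
    "Spending Strategically": [
        "BUYING BARGAINS", "STOCKING UP", "PICKING UP THE TAB",
    ],
    "Living Strategically": [
        "PRIORITIZING", "REFUSING SACRIFICE", "FINDING ALTERNATIVES",
        "LIVING CHEAPLY", "MAINTAINING HEALTH",
    ],
}

def category_of(code: str) -> Optional[str]:
    """Look up which category a code has been clustered into."""
    for category, codes in CATEGORIES.items():
        if code in codes:
            return category
    return None
```

Recording the clustering this way makes it easy to regroup codes as interpretation evolves, which mirrors the exploratory cut-and-paste work described above.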

The three categories—Thinking Strategically, Spending Strategically, and Living Strategically—are then reflected on for how they might interact and interplay. This is where the next major facet of data analysis, analytic memos, enters the scheme. But a necessary section on the basic principles of interrelationship and analytic reasoning must precede that discussion.

Qualitative Data Analysis Strategy: To Interrelate

To interrelate in QDA is to propose connections within, between, and among the constituent elements of analyzed data. One task of QDA is to explore the ways our patterns and categories interact and interplay. I use these terms to suggest the qualitative equivalent of statistical correlation, but interaction and interplay are much more than a simple relationship. They imply interrelationship . Interaction refers to reverberative connections—for example, how one or more categories might influence and affect the others, how categories operate concurrently, or whether there is some kind of domino effect to them. Interplay refers to the structural and processual nature of categories—for example, whether some type of sequential order, hierarchy, or taxonomy exists; whether any overlaps occur; whether there is superordinate and subordinate arrangement; and what types of organizational frameworks or networks might exist among them. The positivist construct of cause and effect becomes influences and affects in QDA.

There can even be patterns of patterns and categories of categories if your mind thinks conceptually and abstractly enough. Our minds can intricately connect multiple phenomena, but only if the data and their analyses support the constructions. We can speculate about interaction and interplay all we want, but it is only through a more systematic investigation of the data—in other words, good thinking—that we can plausibly establish any possible interrelationships.

Qualitative Data Analysis Strategy: To Reason

To reason in QDA is to think in ways that lead to summative findings, causal probabilities, and evaluative conclusions. Unlike quantitative research, with its statistical formulas and established hypothesis-testing protocols, qualitative research has no standardized methods of data analysis. Rest assured, there are recommended guidelines from the field’s scholars and a legacy of analysis strategies from which to draw. But the primary heuristics (or methods of discovery) you apply during a study are retroductive, inductive, substructive, abductive, and deductive reasoning.

Retroduction is historic reconstruction, working backward to figure out how the current conditions came to exist. Induction is what we experientially explore and infer to be transferable from the particular to the general, based on an examination of the evidence and an accumulation of knowledge. Substruction takes things apart to more carefully examine the constituent elements of the whole. Abduction is surmising from a range of possibilities that which is most likely, those explanatory hunches of plausibility based on clues. Deduction is what we generally draw and conclude from established facts and evidence.

It is not always necessary to know the names of these five ways of reasoning as you proceed through analysis. In fact, you will more than likely reverberate quickly from one to another depending on the task at hand. But what is important to remember about reasoning is:

to examine the evidence carefully and make reasonable inferences;

to base your conclusions primarily on the participants’ experiences, not just your own;

not to take the obvious for granted, because sometimes the expected will not happen;

your hunches can be right and, at other times, quite wrong; and

to logically yet imaginatively think about what is going on and how it all comes together.

Futurists and inventors propose three questions when they think about creating new visions for the world: What is possible (induction)? What is plausible (abduction)? What is preferable (deduction)? These same three questions might be posed as you proceed through QDA and particularly through analytic memo writing, which is substructive and retroductive reflection on your analytic work thus far.

Qualitative Data Analysis Strategy: To Memo

To memo in QDA is to reflect in writing on the nuances, inferences, meanings, and transfer of coded and categorized data plus your analytic processes. Like field note writing, perspectives vary among practitioners as to the methods for documenting the researcher’s analytic insights and subjective experiences. Some advise that such reflections should be included in field notes as relevant to the data. Others advise that a separate researcher’s journal should be maintained for recording these impressions. And still others advise that these thoughts be documented as separate analytic memos. I prescribe the latter as a method because it is generated by and directly connected to the data themselves.

An analytic memo is a “think piece” of reflective free writing, a narrative that sets in words your interpretations of the data. Coding and categorizing are heuristics to detect some of the possible patterns and interrelationships at work within the corpus, and an analytic memo further articulates your retroductive, inductive, substructive, abductive, and deductive thinking processes on what things may mean. Though the metaphor is a bit flawed and limiting, think of codes and their consequent categories as separate jigsaw puzzle pieces and their integration into an analytic memo as the trial assembly of the complete picture.

What follows is an example of an analytic memo based on the earlier process coded and categorized interview transcript. It is intended not as the final write-up for a publication, but as an open-ended reflection on the phenomena and processes suggested by the data and their analysis thus far. As the study proceeds, however, initial and substantive analytic memos can be revisited and revised for eventual integration into the final report. Note how the memo is dated and given a title for future and further categorization, how participant quotes are occasionally included for evidentiary support, and how the category names are bolded and the codes kept in capital letters to show how they integrate or weave into the thinking:

April 14, 2017
EMERGENT CATEGORIES: A STRATEGIC AMALGAM

There’s a popular saying: “Smart is the new rich.” This participant is Thinking Strategically about his spending through such tactics as THINKING TWICE and QUESTIONING A PURCHASE before he decides to invest in a product. There’s a heightened awareness of both immediate trends and forthcoming economic bad news that positively affects his Spending Strategically . However, he seems unaware that there are even more ways of LIVING CHEAPLY by FINDING ALTERNATIVES. He dines at all-you-can-eat restaurants as a way of STOCKING UP on meals, but doesn’t state that he could bring lunch from home to work, possibly saving even more money. One of his “bad habits” is cigarettes, which he refuses to give up; but he doesn’t seem to realize that by quitting smoking he could save even more money, not to mention possible health care costs. He balks at the idea of paying $2.00 for a soft drink, but doesn’t mind paying $6.00–$7.00 for a pack of cigarettes. Penny-wise and pound-foolish. Addictions skew priorities. Living Strategically , for this participant during “scary times,” appears to be a combination of PRIORITIZING those things which cannot be helped, such as pet care and personal dental care; REFUSING SACRIFICE for maintaining personal creature-comforts; and FINDING ALTERNATIVES to high costs and excessive spending. Living Strategically is an amalgam of thinking and action-oriented strategies.

There are several recommended topics for analytic memo writing throughout the qualitative study. Memos are opportunities to reflect on and write about:

A descriptive summary of the data;

How the researcher personally relates to the participants and/or the phenomenon;

The participants’ actions, reactions, and interactions;

The participants’ routines, rituals, rules, roles, and relationships;

What is surprising, intriguing, or disturbing (Sunstein & Chiseri-Strater, 2012, p. 115);

Code choices and their operational definitions;

Emergent patterns, categories, themes, concepts, assertions, and propositions;

The possible networks and processes (links, connections, overlaps, flows) among the codes, patterns, categories, themes, concepts, assertions, and propositions;

An emergent or related existent theory;

Any problems with the study;

Any personal or ethical dilemmas with the study;

Future directions for the study;

The analytic memos generated thus far (i.e., metamemos);

Tentative answers to the study’s research questions; and

The final report for the study. (adapted from Saldaña & Omasta, 2018, p. 54)

Since writing is analysis, analytic memos expand on the inferential meanings of the truncated codes, categories, and patterns as a transitional stage into a more coherent narrative with hopefully rich social insight.

Qualitative Data Analysis Strategy: To Code—A Different Way

The first example of coding illustrated process coding, a way of exploring general social action among humans. But sometimes a researcher works with an individual case study in which the language is unique or with someone the researcher wishes to honor by maintaining the authenticity of his or her speech in the analysis. These reasons suggest that a more participant-centered form of coding may be more appropriate.

In Vivo Coding

A second frequently applied method of coding is called in vivo coding. The root meaning of in vivo is “in that which is alive”; it refers to a code based on the actual language used by the participant (Strauss, 1987). The words or phrases in the data record you select as codes are those that seem to stand out as significant or summative of what is being said.

Using the same transcript of the male participant living in difficult economic times, in vivo codes are listed in the right-hand column. I recommend that in vivo codes be placed in quotation marks as a way of designating that the code is extracted directly from the data record. Note that instead of 15 codes generated from process coding, the total number of in vivo codes is 30. This is not to suggest that there should be specific numbers or ranges of codes used for particular methods. In vivo codes, however, tend to be applied more frequently to data. Again, the interviewer’s questions and prompts are not coded, just the participant’s responses:

The 30 in vivo codes are then extracted from the transcript and could be listed in the order they appear, but this time they are placed in alphabetical order as a heuristic to prepare them for analytic action and reflection:

“ALL-YOU-CAN-EAT”

“ANOTHER DING IN MY WALLET”

“BAD HABITS”

“CHEAP AND FILLING”

“COUPLE OF THOUSAND”

“DON’T REALLY NEED”

“HAVEN’T CHANGED MY HABITS”

“HIGH MAINTENANCE”

“INSURANCE IS JUST WORTHLESS”

“IT ALL ADDS UP”

“LIVED KIND OF CHEAP”

“NOT A BIG SPENDER”

“NOT AS BAD OFF”

“NOT PUTTING AS MUCH INTO SAVINGS”

“PICK UP THE TAB”

“SCARY TIMES”

“SKYROCKETED”

“SPENDING MORE”

“THE LITTLE THINGS”

“THINK TWICE”

“TWO-FOR-ONE”

Even though no systematic categorization has been conducted with the codes thus far, an analytic memo of first impressions can still be composed:

March 19, 2017
CODE CHOICES: THE EVERYDAY LANGUAGE OF ECONOMICS

After eyeballing the in vivo codes list, I noticed that variants of “CHEAP” appear most often. I recall a running joke between me and a friend of mine when we were shopping for sales. We’d say, “We’re not ‘cheap,’ we’re frugal.” There’s no formal economic or business language in this transcript—no terms such as “recession” or “downsizing”—just the everyday language of one person trying to cope during “SCARY TIMES” with “ANOTHER DING IN MY WALLET.” The participant notes that he’s always “LIVED KIND OF CHEAP” and is “NOT A BIG SPENDER” and, due to his employment, “NOT AS BAD OFF” as others in the country. Yet even with his middle class status, he’s still feeling the monetary pinch, dining at inexpensive “ALL-YOU-CAN-EAT” restaurants and worried about the rising price of peanut butter, observing that he’s “NOT PUTTING AS MUCH INTO SAVINGS” as he used to. Of all the codes, “ANOTHER DING IN MY WALLET” stands out to me, particularly because on the audio recording he sounded bitter and frustrated. It seems that he’s so concerned about “THE LITTLE THINGS” because of high veterinary and dental charges. The only way to cope with a “COUPLE OF THOUSAND” dollars worth of medical expenses is to find ways of trimming the excess in everyday facets of living: “IT ALL ADDS UP.”
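The memo's "eyeballing" of recurring vocabulary can also be approximated computationally. A small Python sketch that tallies word frequencies across a subset of the in vivo codes listed above:

```python
from collections import Counter

# A subset of the in vivo codes listed earlier.
in_vivo_codes = [
    "ALL-YOU-CAN-EAT",
    "ANOTHER DING IN MY WALLET",
    "CHEAP AND FILLING",
    "LIVED KIND OF CHEAP",
    "IT ALL ADDS UP",
    "SCARY TIMES",
    "THE LITTLE THINGS",
]

# Tally individual words across the code list to surface recurring
# vocabulary, e.g. variants of "CHEAP".
word_counts = Counter(
    word
    for code in in_vivo_codes
    for word in code.replace("-", " ").split()
)
```

A frequency count is only a pointer, not an interpretation; the analytic memo still does the work of saying why the recurring words matter.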

Like process coding, in vivo codes could be clustered into similar categories, but another simple data analytic strategy is also possible.

Qualitative Data Analysis Strategy: To Outline

To outline in QDA is to hierarchically, processually, and/or temporally assemble such things as codes, categories, themes, assertions, propositions, and concepts into a coherent, text-based display. Traditional outlining formats and content provide not only templates for writing a report but also templates for analytic organization. This principle can be found in several computer-assisted qualitative data analysis software (CAQDAS) programs through their use of such functions as “hierarchies,” “trees,” and “nodes,” for example. Basic outlining is simply a way of arranging primary, secondary, and subsecondary items into a patterned display. For example, an organized listing of things in a home might consist of the following:

Large appliances
    Refrigerator
    Stove-top oven
    Microwave oven
Small appliances
    Coffee maker
Dining room

In QDA, outlining may include descriptive nouns or topics but, depending on the study, it may also involve processes or phenomena in extended passages, such as in vivo codes or themes.
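The hierarchical principle behind outlining (and behind CAQDAS "trees" and "nodes") can be modeled as a nested data structure. A Python sketch using the household example above; the rendering function simply indents each level of the hierarchy:

```python
# Nested-dictionary representation of the household outline: each key is
# an outline item, and its value holds that item's subordinate entries.
outline = {
    "Large appliances": {
        "Refrigerator": {},
        "Stove-top oven": {},
        "Microwave oven": {},
    },
    "Small appliances": {
        "Coffee maker": {},
    },
    "Dining room": {},
}

def render(node: dict, depth: int = 0) -> list[str]:
    """Return the outline as indented text lines, one per item."""
    lines = []
    for item, children in node.items():
        lines.append("    " * depth + item)
        lines.extend(render(children, depth + 1))
    return lines

print("\n".join(render(outline)))
```

The same structure works for codes and categories: primary items become top-level keys, and subordinate codes nest beneath them, which makes rearranging the hierarchy a matter of moving entries rather than retyping the display.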

The complexity of what we learn in the field can be overwhelming, and outlining is a way of organizing and ordering that complexity so that it does not become complicated. The cut-and-paste and tab functions of a text editing page enable you to arrange and rearrange the salient items from your preliminary coded analytic work into a more streamlined flow. By no means do I suggest that the intricate messiness of life can always be organized into neatly formatted arrangements, but outlining is an analytic act that stimulates deep reflection on both the interconnectedness and the interrelationships of what we study. As an example, here are the 30 in vivo codes generated from the initial transcript analysis, arranged in such a way as to construct five major categories:

Now that the codes have been rearranged into an outline format, an analytic memo is composed to expand on the rationale and constructed meanings in progress:

March 19, 2017 NETWORKS: EMERGENT CATEGORIES The five major categories I constructed from the in vivo codes are: “SCARY TIMES,” “PRIORITY,” “ANOTHER DING IN MY WALLET,” “THE LITTLE THINGS,” and “LIVED KIND OF CHEAP.” One of the things that hit me today was that the reason he may be pinching pennies on smaller purchases is that he cannot control the larger ones he has to deal with. Perhaps the only way we can cope with or seem to have some sense of agency over major expenses is to cut back on the smaller ones that we can control. $1,000 for a dental bill? Skip lunch for a few days a week. Insulin medication to buy for a pet? Don’t buy a soft drink from a vending machine. Using this reasoning, let me try to interrelate and weave the categories together as they relate to this particular participant: During these scary economic times, he prioritizes his spending because there seems to be just one ding after another to his wallet. A general lifestyle of living cheaply and keeping an eye out for how to save money on the little things compensates for those major expenses beyond his control.

Qualitative Data Analysis Strategy: To Code—In Even More Ways

The process and in vivo coding examples thus far have demonstrated only two specific methods of 33 documented approaches (Saldaña, 2016 ). Which one(s) you choose for your analysis depends on such factors as your conceptual framework, the genre of qualitative research for your project, the types of data you collect, and so on. The following sections present four additional approaches available for coding qualitative data that you may find useful as starting points.

Descriptive Coding

Descriptive codes are primarily nouns that simply summarize the topic of a datum. This coding approach is particularly useful when you have different types of data gathered for one study, such as interview transcripts, field notes, open-ended survey responses, documents, and visual materials such as photographs. Descriptive codes not only help categorize but also index the data corpus’s basic contents for further analytic work. An example of an interview portion coded descriptively, taken from the participant living in tough economic times, follows to illustrate how the same data can be coded in multiple ways:

For initial analysis, descriptive codes are clustered into similar categories to detect such patterns as frequency (i.e., categories with the largest number of codes) and interrelationship (i.e., categories that seem to connect in some way). Keep in mind that descriptive coding should be used sparingly with interview transcript data because other coding methods will reveal richer participant dynamics.
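The frequency pattern described above, which categories attract the largest number of codes, is a simple tally. For researchers working digitally, it can be sketched in a few lines of Python; the descriptive codes below are hypothetical stand-ins, not the codes from the actual transcript:

```python
from collections import Counter

# Hypothetical descriptive codes applied across a data corpus.
coded_data = [
    "MEALS", "COUPONS", "MEALS", "MEDICAL BILLS",
    "PET CARE", "MEALS", "MEDICAL BILLS", "SAVINGS",
]

# Frequency pattern: which topics attract the most codes?
frequency = Counter(coded_data)
for code, count in frequency.most_common(3):
    print(f"{code}: {count}")
```

A tally like this only points toward a pattern; interpreting why a category dominates remains analytic memo work.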

Values Coding

Values coding identifies the values, attitudes, and beliefs of a participant, as shared by the individual and/or interpreted by the analyst. This coding method infers the “heart and mind” of an individual or group’s worldview as to what is important, perceived as true, maintained as opinion, and felt strongly. The three constructs are coded separately but are part of a complex interconnected system.

Briefly, a value (V) is what we attribute as important, be it a person, thing, or idea. An attitude (A) is the evaluative way we think and feel about ourselves, others, things, or ideas. A belief (B) is what we think and feel as true or necessary, formed from our “personal knowledge, experiences, opinions, prejudices, morals, and other interpretive perceptions of the social world” (Saldaña, 2016 , p. 132). Values coding explores intrapersonal, interpersonal, and cultural constructs, or ethos . It is an admittedly slippery task to code this way because it is sometimes difficult to discern what is a value, attitude, or belief since they are intricately interrelated. But the depth you can potentially obtain is rich. An example of values coding follows:

For analysis, categorize the codes for each of the three different constructs together (i.e., all values in one group, attitudes in a second group, and beliefs in a third group). Analytic memo writing about the patterns and possible interrelationships may reveal a more detailed and intricate worldview of the participant.

Dramaturgical Coding

Dramaturgical coding perceives life as performance and its participants as characters in a social drama. Codes are assigned to the data (i.e., a “play script”) that analyze the characters in action, reaction, and interaction. Dramaturgical coding of participants examines their objectives (OBJ) or wants, needs, and motives; the conflicts (CON) or obstacles they face as they try to achieve their objectives; the tactics (TAC) or strategies they employ to reach their objectives; their attitudes (ATT) toward others and their given circumstances; the particular emotions (EMO) they experience throughout; and their subtexts (SUB), or underlying and unspoken thoughts. The following is an example of dramaturgically coded data:

Not included in this particular interview excerpt are the emotions the participant may have experienced or talked about. His later line, “that’s another ding in my wallet,” would have been coded EMO: BITTER. A reader may not have inferred that specific emotion from seeing the line in print. But the interviewer, present during the event and listening carefully to the audio recording during transcription, noted that feeling in his tone of voice.

For analysis, group similar codes together (e.g., all objectives in one group, all conflicts in another group, all tactics in a third group) or string together chains of how participants deal with their circumstances to overcome their obstacles through tactics:

OBJ: SAVING MEAL MONEY → TAC: SKIPPING MEALS + COUPONS
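The grouping step just described, collecting all objectives in one group, all tactics in another, amounts to partitioning codes by their prefix. A hypothetical Python sketch follows; the codes are illustrative examples using the OBJ/CON/TAC/EMO labeling scheme, not a complete coding of the transcript:

```python
from collections import defaultdict

# Hypothetical dramaturgical codes, each prefixed with its construct label.
codes = [
    "OBJ: SAVING MEAL MONEY",
    "CON: RISING PRICES",
    "TAC: SKIPPING MEALS",
    "TAC: COUPONS",
    "EMO: BITTER",
]

# Group similar codes together by construct (all tactics together, etc.).
groups = defaultdict(list)
for code in codes:
    prefix, label = code.split(": ", 1)
    groups[prefix].append(label)

print(groups["TAC"])  # → ['SKIPPING MEALS', 'COUPONS']
```

Stringing chains (objective → tactic) is then a matter of pairing entries across the groups, guided by the researcher's reading of the data.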

Dramaturgical coding is particularly useful as preliminary work for narrative inquiry story development or arts-based research representations such as performance ethnography. The method explores how the individuals or groups manage problem solving in their daily lives.

Versus Coding

Versus (VS) coding identifies the conflicts, struggles, and power issues observed in social action, reaction, and interaction as an X VS Y code, such as MEN VS WOMEN, CONSERVATIVES VS LIBERALS, FAITH VS LOGIC, and so on. Conflicts are rarely this dichotomous; they are typically nuanced and much more complex. But humans tend to perceive these struggles with an US VS THEM mindset. The codes can range from the observable to the conceptual and can be applied to data that show humans in tension with others, themselves, or ideologies.

What follows are examples of versus codes applied to the case study participant’s descriptions of his major medical expenses:

As an initial analytic tactic, group the versus codes into one of three categories: the Stakeholders , their Perceptions and/or Actions , and the Issues at stake. Examine how the three interrelate and identify the central ideological conflict at work as an X VS Y category. Analytic memos and the final write-up can detail the nuances of the issues.

Remember that what has been profiled in this section is a broad brushstroke description of just a few basic coding processes, several of which can be compatibly mixed and matched within a single analysis (see Saldaña’s 2016   The Coding Manual for Qualitative Researchers for a complete discussion). Certainly with additional data, more in-depth analysis can occur, but coding is only one approach to extracting and constructing preliminary meanings from the data corpus. What now follows are additional methods for qualitative analysis.

Qualitative Data Analysis Strategy: To Theme

To theme in QDA is to construct summative, phenomenological meanings from data through extended passages of text. Unlike codes, which are most often single words or short phrases that symbolically represent a datum, themes are extended phrases or sentences that summarize the manifest (apparent) and latent (underlying) meanings of data (Auerbach & Silverstein, 2003 ; Boyatzis, 1998 ). Themes, intended to represent the essences and essentials of humans’ lived experiences, can also be categorized or listed in superordinate and subordinate outline formats as an analytic tactic.

Below is the interview transcript example used in the previous coding sections. (Hopefully you are not too fatigued at this point with the transcript, but it is important to know how inquiry with the same data set can be approached in several different ways.) During the investigation of the ways middle-class Americans are influenced and affected by an economic recession, the researcher noticed that participants’ stories exhibited facets of what he labeled economic intelligence , or EI (based on the formerly developed theories of Howard Gardner’s multiple intelligences and Daniel Goleman’s emotional intelligence). Notice how theming interprets what is happening through the use of two distinct phrases—ECONOMIC INTELLIGENCE IS (i.e., manifest or apparent meanings) and ECONOMIC INTELLIGENCE MEANS (i.e., latent or underlying meanings):

Unlike the 15 process codes and 30 in vivo codes in the previous examples, there are now 14 themes to work with. They could be listed in the order they appear, but one initial heuristic is to group them separately by “is” and “means” statements to detect any possible patterns (discussed later):

EI IS TAKING ADVANTAGE OF UNEXPECTED OPPORTUNITY

EI IS BUYING CHEAP

EI IS SAVING A FEW DOLLARS NOW AND THEN

EI IS SETTING PRIORITIES

EI IS FINDING CHEAPER FORMS OF ENTERTAINMENT

EI IS NOTICING PERSONAL AND NATIONAL ECONOMIC TRENDS

EI IS TAKING CARE OF ONE’S OWN HEALTH

EI MEANS THINKING BEFORE YOU ACT

EI MEANS SACRIFICE

EI MEANS KNOWING YOUR FLAWS

EI MEANS LIVING AN INEXPENSIVE LIFESTYLE

EI MEANS YOU CANNOT CONTROL EVERYTHING

EI MEANS KNOWING YOUR LUCK

There are several ways to categorize the themes as preparation for analytic memo writing. The first is to arrange them in outline format with superordinate and subordinate levels, based on how the themes seem to take organizational shape and structure. Simply cutting and pasting the themes in multiple arrangements on a text editing page eventually develops a sense of order to them. For example:

A second approach is to categorize the themes into similar clusters and to develop different category labels or theoretical constructs . A theoretical construct is an abstraction that transforms the central phenomenon’s themes into broader applications but can still use “is” and “means” as prompts to capture the bigger picture at work:

Theoretical Construct 1: EI Means Knowing the Unfortunate Present

Supporting Themes:

Theoretical Construct 2: EI Is Cultivating a Small Fortune

Theoretical Construct 3: EI Means a Fortunate Future

What follows is an analytic memo generated from the cut-and-paste arrangement of themes into “is” and “means” statements, into an outline, and into theoretical constructs:

March 19, 2017 EMERGENT THEMES: FORTUNE/FORTUNATELY/UNFORTUNATELY I first reorganized the themes by listing them in two groups: “is” and “means.” The “is” statements seemed to contain positive actions and constructive strategies for economic intelligence. The “means” statements held primarily a sense of caution and restriction with a touch of negativity thrown in. The first outline with two major themes, LIVING AN INEXPENSIVE LIFESTYLE and YOU CANNOT CONTROL EVERYTHING also had this same tone. This reminded me of the old children’s picture book, Fortunately/Unfortunately , and the themes of “fortune” as a motif for the three theoretical constructs came to mind. Knowing the Unfortunate Present means knowing what’s (most) important and what’s (mostly) uncontrollable in one’s personal economic life. Cultivating a Small Fortune consists of those small money-saving actions that, over time, become part of one’s lifestyle. A Fortunate Future consists of heightened awareness of trends and opportunities at micro and macro levels, with the understanding that health matters can idiosyncratically affect one’s fortune. These three constructs comprise this particular individual’s EI—economic intelligence.

Again, keep in mind that the examples for coding and theming were from one small interview transcript excerpt. The number of codes and their categorization would increase, given a longer interview and/or multiple interviews to analyze. But the same basic principles apply: codes and themes relegated into patterned and categorized forms are heuristics—stimuli for good thinking through the analytic memo-writing process on how everything plausibly interrelates. Methodologists vary in the number of recommended final categories that result from analysis, ranging anywhere from three to seven, with traditional grounded theorists prescribing one central or core category from coded work.

Qualitative Data Analysis Strategy: To Assert

To assert in QDA is to put forward statements that summarize particular fieldwork and analytic observations that the researcher believes credibly represent and transcend the experiences. Educational anthropologist Frederick Erickson ( 1986 ) wrote a significant and influential chapter on qualitative methods that outlined heuristics for assertion development . Assertions are declarative statements of summative synthesis, supported by confirming evidence from the data and revised when disconfirming evidence or discrepant cases require modification of the assertions. These summative statements are generated from an interpretive review of the data corpus and then supported and illustrated through narrative vignettes—reconstructed stories from field notes, interview transcripts, or other data sources that provide a vivid profile as part of the evidentiary warrant.

Coding or theming data can certainly precede assertion development as a way of gaining intimate familiarity with the data, but Erickson’s (1986) methods are an admittedly more intuitive yet still systematic heuristic for analysis. Erickson promotes analytic induction and exploration of and inferences about the data, based on an examination of the evidence and an accumulation of knowledge. The goal is not to look for “proof” to support the assertions, but to look for plausibility of inference-laden observations about the local and particular social world under investigation.

Assertion development is the writing of general statements, plus subordinate yet related ones called subassertions and a major statement called a key assertion that represents the totality of the data. One also looks for key linkages between them, meaning that the key assertion links to its related assertions, which then link to their respective subassertions. Subassertions can include particulars about any discrepant related cases or specify components of their parent assertions.

Excerpts from the interview transcript of our case study will be used to illustrate assertion development at work. By now, you should be quite familiar with the contents, so I will proceed directly to the analytic example. First, there is a series of thematically related statements the participant makes:

“Buy one package of chicken, get the second one free. Now that was a bargain. And I got some.”

“With Sweet Tomatoes I get those coupons for a few bucks off for lunch, so that really helps.”

“I don’t go to movies anymore. I rent DVDs from Netflix or Redbox or watch movies online—so much cheaper than paying over ten or twelve bucks for a movie ticket.”

Assertions can be categorized into low-level and high-level inferences . Low-level inferences address and summarize what is happening within the particulars of the case or field site—the micro . High-level inferences extend beyond the particulars to speculate on what it means in the more general social scheme of things—the meso or macro . A reasonable low-level assertion about the three statements above collectively might read, The participant finds several small ways to save money during a difficult economic period . A high-level inference that transcends the case to the meso level might read, Selected businesses provide alternatives and opportunities to buy products and services at reduced rates during a recession to maintain consumer spending.

Assertions are instantiated (i.e., supported) by concrete instances of action or participant testimony, whose patterns lead to more general description outside the specific field site. The author’s interpretive commentary can be interspersed throughout the report, but the assertions should be supported with the evidentiary warrant . A few assertions and subassertions based on the case interview transcript might read as follows (and notice how high-level assertions serve as the paragraphs’ topic sentences):

Selected businesses provide alternatives and opportunities to buy products and services at reduced rates during a recession to maintain consumer spending. Restaurants, for example, need to find ways to attract customers during difficult economic periods, when potential diners may be opting to eat inexpensively at home rather than spending more money by dining out. Special offers can motivate cash-strapped clientele to patronize restaurants more frequently. An adult male dealing with such major expenses as underinsured dental care offers: “With Sweet Tomatoes I get those coupons for a few bucks off for lunch, so that really helps.” The film and video industries also seem to be suffering from a double-whammy during the current recession: less consumer spending on higher-priced entertainment, resulting in a reduced rate of movie theater attendance (recently 39 percent of the American population, according to a CNN report); coupled with a media technology and business revolution that provides consumers less costly alternatives through video rentals and Internet viewing: “I don’t go to movies anymore. I rent DVDs from Netflix or Redbox or watch movies online—so much cheaper than paying over ten or twelve bucks for a movie ticket.”

To clarify terminology, any assertion that has an if–then or predictive structure to it is called a proposition since it proposes a conditional event. For example, this assertion is also a proposition: “Special offers can motivate cash-strapped clientele to patronize restaurants more frequently.” Propositions are the building blocks of hypothesis testing in the field and for later theory construction. Research not only documents human action but also can sometimes formulate statements that predict it. This provides a transferable and generalizable quality in our findings to other comparable settings and contexts. And to clarify terminology further, all propositions are assertions, but not all assertions are propositions.

Particularizability —the search for specific and unique dimensions of action at a site and/or the specific and unique perspectives of an individual participant—is not intended to filter out trivial excess but to magnify the salient characteristics of local meaning. Although generalizable knowledge is difficult to formulate in qualitative inquiry since each naturalistic setting will contain its own unique set of social and cultural conditions, there will be some aspects of social action that are plausibly universal or “generic” across settings and perhaps even across time.

To work toward this, Erickson advocates that the interpretive researcher look for “concrete universals” by studying actions at a particular site in detail and then comparing those actions to actions at other sites that have also been studied in detail. The exhibit or display of these generalizable features is to provide a synoptic representation, or a view of the whole. What the researcher attempts to uncover is what is both particular and general at the site of interest, preferably from the perspective of the participants. It is from the detailed analysis of actions at a specific site that these universals can be concretely discerned, rather than abstractly constructed as in grounded theory.

In sum, assertion development is a qualitative data analytic strategy that relies on the researcher’s intense review of interview transcripts, field notes, documents, and other data to inductively formulate, with reasonable certainty, composite statements that credibly summarize and interpret participant actions and meanings and their possible representation of and transfer into broader social contexts and issues.

Qualitative Data Analysis Strategy: To Display

To display in QDA is to visually present the processes and dynamics of human or conceptual action represented in the data. Qualitative researchers use not only language but also illustrations to both analyze and display the phenomena and processes at work in the data. Tables, charts, matrices, flow diagrams, and other models and graphics help both you and your readers cognitively and conceptually grasp the essence and essentials of your findings. As you have seen thus far, even simple outlining of codes, categories, and themes is one visual tactic for organizing the scope of the data. Rich text, font, and format features such as italicizing, bolding, capitalizing, indenting, and bullet pointing provide simple emphasis to selected words and phrases within the longer narrative.

Think display was a phrase coined by methodologists Miles and Huberman ( 1994 ) to encourage the researcher to think visually as data were collected and analyzed. The magnitude of text can be essentialized into graphics for at-a-glance review. Bins in various shapes and lines of various thicknesses, along with arrows suggesting pathways and direction, render the study a portrait of action. Bins can include the names of codes, categories, concepts, processes, key participants, and/or groups.

As a simple example, Figure 29.1 illustrates the three categories’ interrelationship derived from process coding. It displays what could be the apex of this interaction, LIVING STRATEGICALLY, and its connections to THINKING STRATEGICALLY, which influences and affects SPENDING STRATEGICALLY.

Three categories’ interrelationship derived from process coding.

Figure 29.2 represents a slightly more complex (if not playful) model, based on the five major in vivo codes/categories generated from analysis. The graphic is used as a way of initially exploring the interrelationship and flow from one category to another. The use of different font styles, font sizes, and line and arrow thicknesses is intended to suggest the visual qualities of the participant’s language and his dilemmas—a way of heightening in vivo coding even further.

In vivo categories in rich text display

Accompanying graphics are not always necessary for a qualitative report. They can be very helpful for the researcher during the analytic stage as a heuristic for exploring how major ideas interrelate, but illustrations are generally included in published work when they will help supplement and clarify complex processes for readers. Photographs of the field setting or the participants (and only with their written permission) also provide evidentiary reality to the write-up and help your readers get a sense of being there.

Qualitative Data Analysis Strategy: To Narrate

To narrate in QDA is to create an evocative literary representation and presentation of the data in the form of creative nonfiction. All research reports are stories of one kind or another. But there is yet another approach to QDA that intentionally documents the research experience as story, in its traditional literary sense. Narrative inquiry serves to plot and story-line the participant’s experiences into what might be initially perceived as a fictional short story or novel. But the story is carefully crafted and creatively written to provide readers with an almost omniscient perspective about the participants’ worldview. The transformation of the corpus from database to creative nonfiction ranges from systematic transcript analysis to open-ended literary composition. The narrative, however, should be solidly grounded in and emerge from the data as a plausible rendering of social life.

The following is a narrative vignette based on interview transcript selections from the participant living through tough economic times:

Jack stood in front of the soft drink vending machine at work and looked almost worriedly at the selections. With both hands in his pants pockets, his fingers jingled the few coins he had inside them as he contemplated whether he could afford the purchase. Two dollars for a twenty-ounce bottle of Diet Coke. Two dollars. “I can practically get a two-liter bottle for that same price at the grocery store,” he thought. Then Jack remembered the upcoming dental surgery he needed—that would cost one thousand dollars—and the bottle of insulin and syringes he needed to buy for his diabetic, high maintenance cat—almost two hundred dollars. He sighed, took his hands out of his pockets, and walked away from the vending machine. He was skipping lunch that day anyway so he could stock up on dinner later at the cheap-but-filling all-you-can-eat Chinese buffet. He could get his Diet Coke there.

Narrative inquiry representations, like literature, vary in tone, style, and point of view. The common goal, however, is to create an evocative portrait of participants through the aesthetic power of literary form. A story does not always have to have a moral explicitly stated by its author. The reader reflects on personal meanings derived from the piece and how the specific tale relates to one’s self and the social world.

Qualitative Data Analysis Strategy: To Poeticize

To poeticize in QDA is to create an evocative literary representation and presentation of the data in poetic form. One approach to analyzing or documenting analytic findings is to strategically truncate interview transcripts, field notes, and other pertinent data into poetic structures. Like coding, poetic constructions capture the essence and essentials of data in a creative, evocative way. The elegance of the format attests to the power of carefully chosen language to represent and convey complex human experience.

In vivo codes (codes based on the actual words used by participants themselves) can provide imagery, symbols, and metaphors for rich category, theme, concept, and assertion development, in addition to evocative content for arts-based interpretations of the data. Poetic inquiry takes note of what words and phrases seem to stand out from the data corpus as rich material for reinterpretation. Using some of the participant’s own language from the interview transcript illustrated previously, a poetic reconstruction or “found poetry” might read as follows:

Scary Times

Scary times … spending more
  (another ding in my wallet)
a couple of thousand
  (another ding in my wallet)
insurance is just worthless
  (another ding in my wallet)
pick up the tab
  (another ding in my wallet)
not putting as much into savings
  (another ding in my wallet)
It all adds up.

Think twice:
  don’t really need
  skip
Think twice, think cheap:
  coupons
  bargains
  two-for-one
  free
Think twice, think cheaper:
  stock up
  all-you-can-eat
  (cheap—and filling)
It all adds up.

Anna Deavere Smith, a verbatim theatre performer, attests that people speak in forms of “organic poetry” in everyday life. Thus, in vivo codes can provide core material for poetic representation and presentation of lived experiences, potentially transforming the routine and mundane into the epic. Some researchers also find the genre of poetry to be the most effective way to compose original work that reflects their own fieldwork experiences and autoethnographic stories.

Qualitative Data Analysis Strategy: To Compute

To compute in QDA is to employ specialized software programs for qualitative data management and analysis. The acronym for computer-assisted qualitative data analysis software is CAQDAS. There are diverse opinions among practitioners in the field about the utility of such specialized programs for qualitative data management and analysis. The software, unlike statistical computation, does not actually analyze data for you at higher conceptual levels. These CAQDAS software packages serve primarily as a repository for your data (both textual and visual) that enables you to code them, and they can perform such functions as calculating the number of times a particular word or phrase appears in the data corpus (a particularly useful function for content analysis) and can display selected facets after coding, such as possible interrelationships. Basic software such as Microsoft Word and Excel provides utilities that can store and, with some preformatting and strategic entry, organize qualitative data to enable the researcher’s analytic review. The following Internet addresses are listed to help in exploring selected CAQDAS packages and obtaining demonstration/trial software; video tutorials are available on the companies’ websites and on YouTube:

ATLAS.ti: http://www.atlasti.com

Dedoose: http://www.dedoose.com

HyperRESEARCH: http://www.researchware.com

MAXQDA: http://www.maxqda.com

NVivo: http://www.qsrinternational.com

QDA Miner: http://www.provalisresearch.com

Quirkos: http://www.quirkos.com

Transana: http://www.transana.com

V-Note: http://www.v-note.org
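The word- and phrase-frequency function mentioned above, a staple of content analysis, is also easy to approximate without specialized software. The following is a minimal, hypothetical Python sketch; the mini-corpus merely echoes phrases from the case transcript and is invented for illustration:

```python
import re

# A tiny invented corpus echoing the case study participant's language.
corpus = [
    "That's another ding in my wallet.",
    "Another ding in my wallet, and it all adds up.",
    "I've always lived kind of cheap.",
]

def phrase_count(phrase, documents):
    """Count case-insensitive occurrences of a phrase across all documents."""
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    return sum(len(pattern.findall(doc)) for doc in documents)

print(phrase_count("another ding in my wallet", corpus))  # → 2
```

A script like this reproduces only the counting function; the coding, memoing, and interrelating that CAQDAS packages support, and that the analyst performs, remain interpretive work.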

Some qualitative researchers attest that the software is indispensable for qualitative data management, especially for large-scale studies. Others feel that the learning curve of most CAQDAS programs is too lengthy to be of pragmatic value, especially for small-scale studies. From my own experience, if you have an aptitude for picking up quickly on the scripts and syntax of software programs, explore one or more of the packages listed. If you are a novice to qualitative research, though, I recommend working manually or “by hand” for your first project so you can focus exclusively on the data and not on the software.

Qualitative Data Analysis Strategy: To Verify

To verify in QDA is to administer an audit of “quality control” to your analysis. After your data analysis and the development of key findings, you may be thinking to yourself, “Did I get it right?” “Did I learn anything new?” Reliability and validity are terms and constructs of the positivist quantitative paradigm that refer to the replicability and accuracy of measures. But in the qualitative paradigm, other constructs are more appropriate.

Credibility and trustworthiness (Lincoln & Guba, 1985 ) are two factors to consider when collecting and analyzing the data and presenting your findings. In our qualitative research projects, we must present a convincing story to our audiences that we “got it right” methodologically. In other words, the amount of time we spent in the field, the number of participants we interviewed, the analytic methods we used, the thinking processes evident to reach our conclusions, and so on should be “just right” to assure the reader that we have conducted our jobs soundly. But remember that we can never conclusively prove something; we can only, at best, convincingly suggest. Research is an act of persuasion.

Credibility in a qualitative research report can be established in several ways. First, citing the key writers of related works in your literature review is essential. Seasoned researchers will sometimes assess whether a novice has “done her homework” by reviewing the bibliography or references. You need not list everything that seminal writers have published about a topic, but their names should appear at least once as evidence that you know the field’s key figures and their work.

Credibility can also be established by specifying the particular data analysis methods you employed (e.g., “Interview transcripts were taken through two cycles of process coding, resulting in three primary categories”), through corroboration of data analysis with the participants themselves (e.g., “I asked my participants to read and respond to a draft of this report for their confirmation of accuracy and recommendations for revision”), or through your description of how data and findings were substantiated (e.g., “Data sources included interview transcripts, participant observation field notes, and participant response journals to gather multiple perspectives about the phenomenon”).

Data scientist W. Edwards Deming is attributed with offering this cautionary advice about making a convincing argument: “Without data, you’re just another person with an opinion.” Thus, researchers can also support their findings with relevant, specific evidence by quoting participants directly and/or including field note excerpts from the data corpus. These serve both as illustrative examples for readers and to present more credible testimony of what happened in the field.

Trustworthiness, or providing credibility to the writing, is established when we inform the reader of our research processes. Some make the case by stating the duration of fieldwork (e.g., “Forty-five clock hours were spent in the field”; “The study extended over a 10-month period”). Others put forth the amounts of data they gathered (e.g., “Sixteen individuals were interviewed”; “My field notes totaled 157 pages”). Sometimes trustworthiness is established when we are up front or confessional about the analytic or ethical dilemmas we encountered (e.g., “It was difficult to watch the participant’s teaching effectiveness erode during fieldwork”; “Analysis was stalled until I recoded the entire data corpus with a new perspective”).

The bottom line is that credibility and trustworthiness are matters of researcher honesty and integrity. Anyone can write that he worked ethically, rigorously, and reflexively, but only the writer will ever know the truth. There is no shame if something goes wrong with your research. In fact, it is more than likely the rule, not the exception. Work and write transparently to achieve credibility and trustworthiness with your readers.

The length of this chapter does not enable me to expand on other QDA strategies such as to conceptualize, theorize, and write. Yet there are even more subtle thinking strategies to employ throughout the research enterprise, such as to synthesize, problematize, and create. Each researcher has his or her own ways of working, and deep reflexivity (another strategy) on your own methodology and methods as a qualitative inquirer throughout fieldwork and writing provides you with metacognitive awareness of data analysis processes and possibilities.

Data analysis is one of the most elusive practices in qualitative research, perhaps because it is a backstage, behind-the-scenes, in-your-head enterprise. It is not that there are no models to follow. It is just that each project is contextual and case specific. The unique data you collect from your unique research design must be approached with your unique analytic signature. It truly is a learning-by-doing process, so accept that and leave yourself open to discovery and insight as you carefully scrutinize the data corpus for patterns, categories, themes, concepts, assertions, propositions, and possibly new theories through strategic analysis.

Auerbach, C. F., & Silverstein, L. B. (2003). Qualitative data: An introduction to coding and analysis. New York, NY: New York University Press.

Birks, M., & Mills, J. (2015). Grounded theory: A practical guide (2nd ed.). London, England: Sage.

Boyatzis, R. E. (1998). Transforming qualitative information: Thematic analysis and code development. Thousand Oaks, CA: Sage.

Bryant, A. (2017). Grounded theory and grounded theorizing: Pragmatism in research practice. New York, NY: Oxford.

Bryant, A., & Charmaz, K. (Eds.). (2019). The Sage handbook of current developments in grounded theory. London, England: Sage.

Charmaz, K. (2014). Constructing grounded theory: A practical guide through qualitative analysis (2nd ed.). London, England: Sage.

Erickson, F. (1986). Qualitative methods in research on teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 119–161). New York, NY: Macmillan.

Galman, S. C. (2013). The good, the bad, and the data: Shane the lone ethnographer’s basic guide to qualitative data analysis. Walnut Creek, CA: Left Coast Press.

Geertz, C. (1983). Local knowledge: Further essays in interpretive anthropology. New York, NY: Basic Books.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage.

Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). London, England: Sage.

Saldaña, J., & Omasta, M. (2018). Qualitative research: Analyzing life. Thousand Oaks, CA: Sage.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Stern, P. N., & Porr, C. J. (2011). Essentials of accessible grounded theory. Walnut Creek, CA: Left Coast Press.

Strauss, A. L. (1987). Qualitative analysis for social scientists. Cambridge, England: Cambridge University Press.

Sunstein, B. S., & Chiseri-Strater, E. (2012). FieldWorking: Reading and writing research (4th ed.). Boston, MA: Bedford/St. Martin’s.

Wertz, F. J., Charmaz, K., McMullen, L. M., Josselson, R., Anderson, R., & McSpadden, E. (2011). Five ways of doing qualitative analysis: Phenomenological psychology, grounded theory, discourse analysis, narrative research, and intuitive inquiry. New York, NY: Guilford Press.


Approaches to Analysis of Qualitative Research Data: A Reflection on the Manual and Technological Approaches


This paper addresses a gap in the literature by providing reflective and critical insights into the experiences of two PhD qualitative studies that adopted different approaches to data analysis. We first consider how the two PhD studies unfolded before discussing the motivations, challenges and benefits of choosing either a technological (NVivo) or manual approach to qualitative data analysis. The paper contributes to the limited literature which has explored the comparative experiences of those undertaking qualitative data analysis using different approaches. It provides insights into how researchers conduct qualitative data analysis using different approaches, and the lessons learnt.

1. Introduction

Qualitative data analysis has a long history in the social sciences. Reflecting this, a substantial literature has developed to guide the researcher through the process of qualitative data analysis (e.g. Bryman & Burgess , 1994; Harding , 2018; Saunders et al. , 2019; Silverman , 2017 ). While earlier literature focuses on the manual approach [1] to qualitative data analysis (Bogdan & Bilken , 1982; Lofland , 1971) , more recent literature provides support in the application of a range of technological approaches (alternatively referred to as Computer Assisted Qualitative Data Analysis Software or CAQDAS): e.g., Excel (Meyer & Avery , 2009) ; NVivo (Jackson & Bazeley , 2019) ; and ATLAS.ti (Friese , 2019) . Moreover, in an accounting context, a critical literature has emerged which attempts to elucidate the messy and problematic nature of qualitative data analysis (Ahrens & Chapman , 2006; Lee & Humphrey , 2006; Modell & Humphrey , 2008; O’Dwyer , 2004; Parker , 2003) . However, while a substantial literature exists to guide the researcher in undertaking qualitative data analysis and in providing an understanding of the problematic nature of such analyses, a dearth of research reports on the comparative experiences of those undertaking qualitative data analysis using different approaches. The paper aims to address this gap by reporting on the experiences of two recently qualified doctoral students as they reflect on how they each approached the task of analysing qualitative data, Researcher A (second author) choosing a technological approach (NVivo) while Researcher B (third author) opted for a manual approach. The paper contributes to the limited literature which explores the comparative experiences of those undertaking qualitative data analysis using different approaches. In so doing, we hope that the critical reflections and insights provided will assist qualitative researchers in making important decisions around their approach to data analysis.

The remainder of the paper is structured as follows. In section two, we provide an overview of the problematic nature of qualitative research and a review of the manual and technological approaches of data analysis available to researchers. Section three follows with a discussion of two qualitative PhD studies. Section four discusses the experiences, challenges and critical reflections of Researchers A and B as they engaged with their particular approach to qualitative data analysis. The paper concludes with a comparative analysis of the experiences of Researchers A and B and implications for further work.

2. Literature Review

2.1 A Qualitative Research Approach: Debates and Challenging Issues

Qualitative researchers pursue qualia, that is, phenomena as experienced (sometimes uniquely) by individuals, that enlarge our conception of the “really real” (Sherry & Kozinets, 2001, p. 2). Qualitative studies seek to answer ‘how’ and ‘why’ rather than ‘what’ or ‘how often’ questions. In so doing, qualitative studies involve collecting rich data that are understood within context and are associated with an interpretivist philosophy. Mason (2002) notes that qualitative research is not just about words; rather, it reflects a view of practice that is socially constructed and requires researchers to embrace subjectivity in order to interpret data. Furthermore, Bédard & Gendron (2004) argue that “being tolerant of uncertainty is part of the fundamental skills of the qualitative researcher” (p. 199). That said, a qualitative approach can be extremely labour intensive, given the volume of data collected and the commitment required to generate themes.

In the accounting and management literatures, there has been considerable debate on the challenges of qualitative data analysis. In early work, Parker (2003) highlights a potential challenge in that qualitative researchers need to be reflexive in the data analysis process. To that end, researchers often construct field notes and memos (during interviews for example) to report their feelings, perceptions and impressions, which can be viewed as data, alongside all other data collected from the field. Bédard & Gendron (2004) highlight a further challenge in that analysing qualitative data is both labour intensive and requires high levels of research knowledge and ability. Furthermore, they argue that qualitative researchers need to be immersed in data collection and analysis, and should be mindful that the “specific objectives of the study are not always determined a priori, but often ‘emerge’ from fieldwork” (p. 200). Ahrens & Chapman (2006) identify the challenge of data reduction without “‘thinning’ out the data to the point where it loses its specificity and becomes bland” (p. 832). Qualitative data analysis is, they argue, not a straightforward process: “Like other practices, the doing of qualitative field studies is difficult to articulate. One can point to the golden rules but, at the heart of it lies a problem of transformation. Out of data, snippets of conversations and formal interviews, hours and days of observation, tabulations of behaviours and other occurrences, must arise the plausible field study” (Ahrens & Chapman , 2006 , p. 837) . This chimes with O’Dwyer’s (2004) description of qualitative data analysis as ‘messy’. To address this, O’Dwyer (2004) highlights the importance of imposing structure onto the analysis process and outlines an intuitive approach to analyse interview data using Miles and Huberman’s (1994) three stage process of data reduction, data display and data interpretation/conclusion drawing and verification. 
This process involves the categorisation of themes and individual aspects of interviews in several stages to ensure that general patterns and differences are articulated. While O’Dwyer (2004) considered using a technological approach to assist in data analysis, he discounted it as an option at an early stage of his research, largely as a result of his lack of understanding of what it could offer. Lee & Humphrey (2006) also argue that analysing interview transcripts is a key challenge facing qualitative researchers. In particular, deciding “what weight to give to meanings that are only apparent in a part of an interview, how to retain understanding of the whole interview when the focus is on individual parts and how to derive patterns both within and across interviews without losing sight of any idiosyncratic elements that may provide unique insights” (p. 188). Finally, Modell & Humphrey (2008 , p. 96) , while calling for further research in the area of qualitative data analysis, contend that problems exist where there is undue focus on the approach to data analysis to the detriment of the development of ideas. They suggest that this appears to be an increasingly common issue, particularly with increased use of technology in the data analysis process.
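Miles and Huberman's three stages can be illustrated with a small sketch. The excerpts and codes below are invented for illustration only (they come from neither study discussed here); the point is simply how reduction, display and conclusion drawing relate to one another.

```python
# Hypothetical sketch of Miles and Huberman's (1994) three-stage process
# applied to toy interview excerpts; the codes are invented examples.

# Stage 1: data reduction -- each excerpt is tagged with a code
excerpts = [
    ("I rely on my gut when prices move", "intuition"),
    ("My accountant tells me what to do", "advisor"),
    ("I just know from experience", "intuition"),
]

# Stage 2: data display -- arrange coded excerpts so patterns become visible
display = {}
for text, code in excerpts:
    display.setdefault(code, []).append(text)

# Stage 3: conclusion drawing/verification -- note which codes dominate,
# then return to the data to verify the emerging pattern
dominant = max(display, key=lambda c: len(display[c]))
print(dominant)  # → intuition
```

In practice, of course, the stages are iterative rather than sequential, and the "display" step is usually a matrix or map rather than a data structure; the sketch only mirrors the logic of the process.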

2.2 Approaches to Data Analysis: Manual and Technological (i.e. NVivo) Approaches

The data analysis phase of qualitative research is described as the “most intellectually challenging phase” (Marshall & Rossman , 1995 , p. 114) and the active role of the researcher in identifying and communicating themes is critical (Braun & Clarke , 2006; Edwards & Skinner , 2009; Silverman , 2017) . While early technological approaches to data analysis have been in existence since the 1960s, many qualitative researchers have continued to employ the manual approach to analysis (Séror , 2005) . In part, this may be due to the perceptions of some researchers that the technological approach may attempt to do more than assist in the management of data, potentially influencing the abstraction of themes from data in unintended ways (Crowley et al. , 2002) . However, a review of the literature suggests that the manual approach can be an unwieldy, cumbersome, “tedious and frustrating” process (Basit , 2003 , p. 152) . Furthermore, comparatively little has been published in relation to the mechanics of the manual approach (Bazeley , 2009; Bogdan & Bilken , 1982; Braun & Clarke , 2006; Edwards & Skinner , 2009; Lofland , 1971; Maher et al. , 2018; Miles & Huberman , 1994; Silverman , 2017) .

Edwards & Skinner (2009) assert that the manual analysis of hundreds of pages of raw data is a “daunting” task (p. 134). To assist in this process, some basic mechanical procedures are described in the literature, including: printing hardcopy transcripts, photocopying, marking up, line-by-line coding, coding in margins, cutting, cut-and-paste, sorting, reorganising, hanging files and arranging colour-coded sticky notes on large format display boards (Basit , 2003; Bogdan & Bilken , 1982; Lofland , 1971; Maher et al. , 2018; L. Richards & Richards , 1994) . Moreover, Braun & Clarke (2006) provide a comprehensive description of the manual data analysis process, involving “writing notes on the texts you are analysing, by using highlighters or coloured pens to indicate potential patterns, or by using ‘post-it’ notes to identify segments of data” (p. 89). As ‘codes’ are identified, data extracts are manually grouped and collated within the individual codes. The subsequent generation of sub-themes and overarching themes involves the trialling of combinations of codes until “all extracts of data have been coded in relation to them” (p. 89). The above is an iterative process and involves re-reading, coding and recoding until all data has been included in sub-themes and overarching themes. The researcher’s interaction with the data is important in this regard, and involves a series of physical activities around arranging and re-arranging data excerpts and post-it notes, followed by visual mapping on “large format display boards” (Maher et al. , 2018 , p. 11) . This process “encourages a slower and more meaningful interaction with the data [and] great freedom in terms of constant comparison, trialling arrangements, viewing perspectives, reflection and ultimately developing interpretative insights” (Maher et al. , 2018 , p. 11) .
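The collation Braun and Clarke describe, grouping extracts under codes, then trialling combinations of codes as sub-themes beneath an overarching theme, can be sketched as follows. The extracts, codes and themes are hypothetical examples invented for illustration, not material from either doctoral study.

```python
# Hypothetical sketch of collating coded extracts into sub-themes and an
# overarching theme (after Braun & Clarke, 2006); all labels are invented.
from collections import defaultdict

coded_extracts = [
    ("courses are too generic", "course relevance"),
    ("webinars fit around work", "flexible delivery"),
    ("I learn most on the job", "informal learning"),
    ("conferences rarely apply to my clients", "course relevance"),
]

# Group extracts under their codes (the manual 'cut and paste' step)
by_code = defaultdict(list)
for extract, code in coded_extracts:
    by_code[code].append(extract)

# Trial a combination of codes as sub-themes under one overarching theme;
# in practice this is iterative, with re-reading and recoding throughout
sub_themes = {
    "limits of formal CPD": ["course relevance"],
    "learning beyond the classroom": ["flexible delivery", "informal learning"],
}
theme = {"perceived value of CPD": list(sub_themes)}

for sub, codes in sub_themes.items():
    collated = [e for c in codes for e in by_code[c]]
    print(sub, len(collated))
```

The sketch captures only the mechanical grouping; the interpretive work of deciding which codes belong together, and when "all extracts of data have been coded in relation to them", remains the researcher's.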

An alternative to the manual approach is the use of CAQDAS (i.e. technological approach) to support qualitative data analysis. CAQDAS offers the ability to import, organise and explore data from various sources (text, audio, video, emails, images, spreadsheets, online surveys, social and web content). The origins of NVivo, one of the market leaders, can be traced back to the 1980s with the development of a computer programme called Non-numerical Unstructured Data Indexing Searching and Theorizing (NUD*IST). Richards, one of the co-developers of NVivo, provides an “intellectual history” of NUD*IST and NVivo (R. Richards, 2002, p. 199), arguing that “NVivo … is being preferred by researchers wishing to do a very detailed and finely articulated study … [and that NVivo’s] tools support close and multi-faceted analysis on small and moderate amounts of data” (p. 211). Reflecting its widespread usage as a mainstream CAQDAS, a literature has now developed around the use of NVivo. For example, Bandara (2006) provides guidance to novice researchers and academics involved in NVivo research training in information systems research; García-Horta & Guerra-Ramos (2009) provide reflections on the use of NVivo in education; Leech & Onwuegbuzie (2011) present guidance for psychology researchers; and Zamawe (2015) presents experiences in the context of health professionals.

Acknowledging that little is known about how researchers use CAQDAS, Paulus et al. (2017) present the results of a discourse analysis of some 763 empirical studies which use NVivo or ATLAS.ti (a competitor of NVivo – see https://atlasti.com/ ). Drawing on peer reviewed papers, published between 1994 and 2013, Paulus et al. (2017) report that the majority of researchers (87.5% of their sample) using CAQDAS to support qualitative data analysis fail to provide details of the technological approach used beyond naming the software, or what they refer to as ‘name-dropping’. Some 10% of the sample provide moderate levels of reporting, mainly concerned with “descriptions of software capability” (Paulus et al. , 2017 , p. 37) . The remaining 2% of the sample provide more detailed descriptions of the CAQDAS used, including “detailed descriptions of how the analysis was conducted” (p. 39) or “how the researchers used the software to go beyond coding to a deeper layer of analysis” (p. 41). Based on their findings, Paulus et al. (2017) suggest that future studies should provide more detail about their experiences of using CAQDAS to support qualitative data analysis, including: what software is used; how they are used; why they are used; and how effective they have been.

A limited number of studies report on the benefits and drawbacks of using NVivo. In an early study, García-Horta & Guerra-Ramos (2009) report their experiences of using NVivo (and MAX QDA) to analyse qualitative data collected from teachers. Their experiences suggest a number of advantages, including the ability to: organise and store large volumes of data; deal with data overload; and enable fast and efficient retrieval of relevant information. However, they also highlight a number of limitations, most notably the “real hard work” of “generating categories or taxonomies, assigning meaning, synthesizing or theorizing” (p. 163) which, they argue, remains that of the researcher and not the software. García-Horta & Guerra-Ramos (2009) also highlight the potential for “data fetishism … or the ‘let’s code everything’ strategy [which] can lead to excessive and non-reflexive coding” (p. 163). They caution against the possibility of assumptions that ‘meaning-making’ can be computerised and the possibility of what they call ‘technologism’ whereby there is an implicit assumption that the qualitative data analysis process will be enhanced by the use of software. More recently, Zamawe (2015) argues that NVivo works well with most research designs as it is not methodologically specific and “the presence of NVivo makes it more compatible with grounded theory and thematic analysis approaches” (p. 14). Furthermore, Zamawe (2015) suggests NVivo eases the burden associated with manual qualitative data analysis in terms of the ‘copy-cut-paste’ requirement. NVivo also lends itself to more effective and efficient coding, and the reshaping and reorganisation of the coding structure by “simply clicking a few buttons” (p. 14). Zamawe (2015), however, points out some pitfalls associated with using NVivo. These include: the time-consuming, and difficult, nature of the software; the potential for NVivo to “take over the analysis process from the researcher” (p. 15); the process of coding the data; and the danger of the researcher becoming distant from his/her data with the result that the ‘thickness’ of the data is diluted.

2.3 Comparison of Manual and Technological Approaches

Few studies report on comparisons of the manual and technological approaches to qualitative data analysis. In one such study, Basit (2003) compares the use of the manual and technological approach to qualitative data analysis drawing on two research projects. She argues that the approach chosen is dependent on the size of the project, the funds and time available, and the inclination and expertise of the researcher. Basit (2003) maintains that while the technological approach may not be considered feasible to code a small number of interviews, it is more worthwhile when a large number of interviews are involved. When compared to the manual approach, she highlights a number of perceived benefits of the technological approach. First, the data analysis process is relatively smooth and facilitates a more in-depth analysis. Second, the search facility is particularly useful, as is the ability to generate reports. Despite the perceived benefits, Basit (2003) acknowledges some challenges of the technological approach when compared to the manual approach. There is a considerable amount of time and formal training involved in getting acquainted with a software package to code qualitative data electronically, an investment not required for the manual approach. However, that said, Basit notes that the benefit of the software search facility and the generation of comprehensive reports compensates for the time investment required. In another study, Maher et al. (2018) argue that qualitative data analysis software packages, such as NVivo, do not fully scaffold the data analysis process. They therefore advocate for the use of manual coding (such as using coloured pens, paper, and sticky notes) to be combined with digital software to overcome this. 
Reflecting on their research, which combined both a manual and software analysis, they argue that NVivo provides excellent data management and retrieval facilities to generate answers to complex questions that support analysis and write-up, a facility not available with a manual approach. However, they suggest that the manual approach of physically writing on sticky notes, arranging and rearranging them and visual mapping, encourages more meaningful interaction with the data, compared to a technological approach. Furthermore, they advocate that the manual approach has a particular advantage over the technological approach as manual analysis usually results in displays of the analysis. The resulting visualisations, sticky notes, and concept maps may remain in place, allowing the researcher to engage with the research material on a variety of levels and over a period of time. In contrast to the manual approach, Maher et al. (2018) believe that NVivo operated on a computer screen does not facilitate broad overviews of the data and that data views may therefore become fragmented.

The above review indicates that limited research has reported on the comparative experiences of those undertaking qualitative data analysis. This paper addresses this gap, and in so doing, reports on the experiences of two recently qualified doctoral students, as they each reflect on how they approached the task of analysing qualitative data using different approaches. Section three presents details of the two research projects.

3. The Doctoral Research Projects

In this section, the background, motivation and research question/objectives of the research projects undertaken by Researchers A and B (both undertaking a part-time PhD) are outlined. This provides context for a comparison of the technological (NVivo) and manual approaches used for qualitative data analysis.

3.1 Researcher A: Background, Motivation, Research Question and Objectives

Researcher A (a Chartered Accountant) investigated financial management practices in agriculture by exploring the financial decision-making process of Irish farmers. When the literature in the area of farm financial management (FFM) was explored, it became apparent that there were relatively few prior studies, both internationally and in the Irish context (Argiles & Slof , 2001; Jack , 2005) . The limited literature posed particular difficulties and frustration when conducting this research, but also demonstrated that there was a gap in the literature that needed to be addressed. The review of the literature identified a number of key issues which were central to the motivation of the research project. First, the majority of farmers appear to spend very little time on financial management (Boyle , 2012; Jack , 2005) and second, farmers tend to rely on intuition to a large extent when managing their farm enterprise (Nuthall , 2012; Öhlmér & Lönnstedt , 2004) .

Researcher A’s overall research question was: How and why do farmers make financial decisions? To address this question, two research objectives were formulated following a detailed literature review and findings from preliminary research, namely a pilot survey of farmers and key informant interviews. The theoretical framework adopted (sensemaking theory) also assisted in framing the research objectives.

Research Objective 1: To explore the financial decision-making process of farmers by examining:

The factors that influence farmer decision-making;

The role of advisors in farmer decision-making;

The role of FFM in farmer decision-making;

The role of other issues in farmer decision-making (e.g. demographic factors such as farm type, age and level of education of the farmer, and the role of intuition in farmer decision-making).

Research Objective 2: To establish how farmers make sense of their business situations in order to progress with decisions of a financial nature.

The research methodology chosen by Researcher A was interpretivist in nature (Ahrens & Chapman , 2006) . This was based on the assumption that farmers’ realities (in regard to how financial decisions are made) are subjective, socially constructed and may change. As a result, it was considered necessary to explore the subjective meanings motivating the decisions of farmers in order to understand the farmers’ decision-making processes. Interviews were considered the most appropriate data collection method to operationalise the interpretivist methodology chosen. The data collected via interviews with farmers allowed Researcher A to develop thick and rich explanations of how farmers make financial decisions.

3.2 Researcher B: Background, Motivation, Research Question and Objectives

Researcher B (also a Chartered Accountant) examined accounting practitioners’ perceptions of professional competence and their engagement with Continuing Professional Development (CPD) activities, as they strive to maintain and develop competence. Educational guidance on mandatory CPD within the profession was introduced in 2004 (IES 7 , 2004) , and while CPD is viewed as a bona fide stage in the lifecycle of professional education, it is in a state of infancy and transition and has yet to grow to achieve coherence, size and stature equivalent to the pre-qualification stage (Friedman & Phillips , 2004) . While professional accountancy bodies may interpret IES 7 guidance and almost exclusively decide what counts as legitimate or valid CPD, individual practitioners are mandated to complete and self-certify relevant activities on an annual basis in order to retain professional association. It is therefore questionable whether the annual declaration encapsulates the totality of practitioners’ learning and professional development in relation to professional competence (Lindsay , 2013) .

A review uncovered an extensive literature, concentrating on professionalisation, competence and professional education and learning, with attention focusing on the accounting domain. The following emerged: literature on professionalisation pertaining to the pre-qualification period (Flood & Wilson , 2009) ; findings on competence, education and learning largely focusing on higher education (Byrne & Flood , 2004; Paisey & Paisey , 2010) ; and CPD studies predominantly reporting on engagement (Paisey et al. , 2007) . The literature review highlighted a research gap and acknowledged the need for enhanced understanding in relation to post-qualification stages, where learning and professional development could more appropriately be examined from a competence angle (Lindsay , 2013) .

The overall research objective of Researcher B’s study was to explore how individual accounting professionals perceive professional competence, and how, in light of these perceptions, they manage their CPD with the purpose of maintaining and further developing their professional competence. Given that the study set out to gain an understanding of individual perceptions and practices, this supported the use of an interpretivist approach (Silverman , 2017) . A phenomenographic approach (a distinct research perspective located within the broad interpretivist paradigm) was selected. The root of phenomenography, phenomenon , means “to make manifest” or “to bring light” (Larsson & Holmström , 2007 , p. 55) and phenomenography examines phenomena “as they appear to people” (Larsson & Holmström , 2007 , p. 62) . The phenomenographic approach is an experiential, relational and qualitative approach, enabling the researcher to describe the different ways people understand, experience, and conceptualise a phenomenon (Larsson & Holmström , 2007; Marton , 1994) . It emphasises the individual as agent who interprets his/her own experiences and who actively creates an order to his/her own existence. It therefore facilitated the exploration of the ‘qualitatively different ways’ in which professional competence and associated CPD “are experienced, conceptualised, understood, perceived and apprehended” (Marton , 1994 , p. 4424) . ‘Bracketing’ is central to the phenomenographic approach and requires the researcher to effectively suspend research theories, previous research findings, researcher understandings, perceived notions, judgements, biases and own experience of a research topic (Merleau-Ponty , 1962) . This ensures “phenomena are revisited, freshly, naively, in a wide-open sense” (Moustakas , 1994 , p. 33) “in order to reveal engaged, lived experience” of research participants ( Merleau-Ponty , 1962 cited in Ashworth , 1999 , p. 708 ). 
In turn, participant experiences and understandings are examined and “characterised in terms of ‘categories of description’, logically related to each other, and forming hierarchies in relation to given criteria” (Marton, 1994, p. 4424). Such conceptions are assumed to have both meaning, a ‘what’ attribute, and structure, a ‘how’ attribute (Marton, 1994). The anticipated output from Researcher B’s study sought an understanding of professional competence (the ‘what’ attribute) and the manner in which individual practitioners achieve and maintain such competence (the ‘how’ attribute). Interviews were considered the most appropriate data collection method to gain this understanding. The professional status of practitioners was therefore central to Researcher B’s study and the research focused on gaining an understanding of individual perceptions and practices with regard to maintaining and further developing professional competence. Mindful of this focus, the following research questions were developed:

What does it mean to be a ‘professional’?

What does ‘professional competence’ mean?

How is professional competence maintained and developed?

4. The NVivo and Manual Approaches to Qualitative Data Analysis

While Researchers A and B addressed disparate research areas, the above discussion indicates that qualitative data analysis represented a significant and central component of both researchers’ doctoral studies. Both researchers adopted an interpretivist philosophy involving a broadly similar number of interviews (27 in the case of Researcher A and 23 in the case of Researcher B). Despite the similarities between Researchers A and B, their choice of approach to qualitative data analysis was fundamentally different, with Researcher A choosing the technological approach (i.e. NVivo) and Researcher B the manual approach. In the remainder of this section, we discuss the factors influencing the choices made by Researchers A and B and provide insights into the data analysis process conducted. We then present critical reflections and the challenges faced by both researchers, as they undertook their respective approaches to qualitative data analysis.

4.1 Researcher A: Factors Influencing Approach to Qualitative Data Analysis

A number of factors influenced Researcher A’s decision to use NVivo (version 12) over the manual approach of qualitative data analysis. The most prominent of these was the multidimensional nature of the data collected. Researcher A investigated the financial decision-making process of farmers by exploring both strategic and operational decision-making. The farmers interviewed operated different farm types, had diverse levels of formal education and their age profile varied. The presence of multiple attributes highlighted the importance of reporting findings not only on how individual farmers undertook decision-making, but also to engage in comparisons of decision-making in different types of farming, and to explore how demographic factors (e.g. education, age) affected farmers’ decision-making processes.

Researcher A explored the option of adopting a technological approach to data analysis at an early stage in his study by attending a training course on NVivo. Despite attending the training course with an open mind and being aware of the alternative manual approach of qualitative data analysis, the training course convinced Researcher A of the potential power of NVivo to assist in qualitative data analysis. In particular, Researcher A was drawn to the ‘slice and dice’ capability of NVivo, whereby data could be analysed for a specific type of decision (strategic or operational), across multiple farm types (dairy, tillage or beef), or with respect to the demographic profile of farmers (education, age). By setting up different types of decisions, farm types and demographic factors as overarching themes (referred to as ‘nodes’ in NVivo), NVivo presented Researcher A with the ability to conduct numerous queries to address the research objectives, whilst simultaneously facilitating the extraction of relevant quotations to support findings. While the analysis could have been conducted manually, the search facility within NVivo was considered by Researcher A to be a very useful function and more efficient than using word processing software, which would be used with a manual approach. An additional and related factor which influenced Researcher A’s decision to proceed with NVivo was the possibility of availing of on-going one-to-one support from an NVivo trainer for the duration of the research project, once the actual qualitative data analysis commenced. In addition, Researcher A’s decision to opt for NVivo was influenced by his supervisor’s experience when conducting her own PhD studies. To that end, Researcher A’s supervisor had experience of using a technological approach (NUD*IST) to undertake qualitative data analysis. As a result of her familiarity with a technological approach, and an overall relatively positive experience, Researcher A’s supervisor provided some reassurance that this approach, versus the manual approach, was appropriate.
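The ‘slice and dice’ retrieval described above amounts to filtering coded text segments by theme (‘node’) and by participant attribute. The following is a minimal, purely illustrative sketch of that idea; all names and data are hypothetical, and this is in no way NVivo’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A coded excerpt from one interview transcript."""
    text: str
    codes: set = field(default_factory=set)        # e.g. 'strategic', 'operational'
    attributes: dict = field(default_factory=dict)  # e.g. farm type, education

def query(segments, code=None, **attrs):
    """Return segments tagged with `code` whose interview attributes all match."""
    results = []
    for s in segments:
        if code is not None and code not in s.codes:
            continue
        if all(s.attributes.get(k) == v for k, v in attrs.items()):
            results.append(s)
    return results

# Hypothetical coded data for illustration only
segments = [
    Segment("We expanded the milking parlour after talking to the bank.",
            {"strategic"}, {"farm_type": "dairy", "education": "degree"}),
    Segment("I check milk prices weekly before selling.",
            {"operational"}, {"farm_type": "dairy", "education": "secondary"}),
    Segment("We rotated to barley based on last year's margins.",
            {"operational"}, {"farm_type": "tillage", "education": "degree"}),
]

# 'Slice and dice': operational decisions on dairy farms only
for s in query(segments, code="operational", farm_type="dairy"):
    print(s.text)
```

The same `query` call can be re-run with any combination of decision type and demographic attribute, which is essentially what makes query-based retrieval faster than searching word-processing documents by hand.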

Before finally making the decision to adopt either a manual or technological approach to qualitative data analysis, Researcher A engaged with the various academic debates in the literature concerning the appropriateness of both. Based on these debates, Researcher A was confident that the technological approach to qualitative data analysis was appropriate. However, reflecting the debates in the literature, Researcher A was particularly mindful that “[NVivo] is merely a tool designed to assist analysis” (O’Dwyer, 2004, p. 395) and that data analysis is ‘messy’ and very much the responsibility of the researcher who “must ask the questions, interpret the data, decide what to code” (Bringer et al., 2006, p. 248).

4.2 Researcher A: An NVivo Approach to Data Analysis

Researcher A conducted 27 in-depth semi-structured interviews with farmers to develop an understanding of their financial decision-making processes. As with any qualitative research project, prior to formal data analysis, there was a significant amount of work involved in ‘cleansing’ the interview data collected. Researcher A transcribed all interview recordings, after which transcriptions were listened to and carefully read to identify inaccuracies. Field notes were also written by Researcher A immediately after each interview and these complemented the analysis of qualitative data and assisted the researcher in being reflexive during the data analysis process.

Researcher A adopted a thematic approach to qualitative data analysis as advocated by Braun & Clarke (2006). Thematic analysis is a method for identifying, analysing and reporting patterns (themes) within data, where a theme is “something important about the data in relation to the research question and represents some level of patterned response or meaning from the data set” (Braun & Clarke, 2006, p. 80). In undertaking qualitative analysis, Researcher A followed a six-phase thematic data analysis process (see Figure 1) developed by Braun & Clarke (2006) as follows:

Familiarising yourself with your data – interview transcripts were read and re-read by Researcher A, noting down initial ideas. Interview transcripts were then imported into the data management software NVivo.

Generating initial codes – this phase, descriptive coding, involved the deconstruction of the data from its initial chronology. The inductive process resulted in 227 hierarchical codes identified from the interview data, across 11 areas.

Searching for themes – this phase involved reviewing the open coding, merging, re-naming, distilling and collapsing the initial codes into broader categories of codes. This allowed the data to be constructed in a manner that enabled the objectives of the research to be fulfilled. Phase 3 resulted in the generation of 11 empirical themes related to strategic decision-making and 10 related to operational decision-making.

Reviewing themes – a process of ‘drilling down’ was conducted, including re-coding the text in the initial codes, re-organising into a coding framework, and breaking the themes down into sub-codes to better understand the meanings embedded therein.

Defining and naming themes – this involved abstraction of the data into a broader thematic framework. Using an inductive process, data was coded in relation to the four components of research objective 1, namely influencing factors; role of advisors; role of FFM; and other issues.

Producing the report – the final phase involved writing analytical memos to accurately summarise the content of each theme and propose empirical findings. The analytical memos helped Researcher A to produce a timely written interpretation of the findings, with the addition of his own annotations and recollections from interviews. The analytical memos also greatly assisted Researcher A to draft the findings chapter of his PhD thesis.
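Phases 2–5 above, in which initial descriptive codes are merged and distilled into broader themes, can be thought of as building a mapping from codes to candidate themes and aggregating coded excerpts accordingly. The sketch below is purely illustrative (all code names and counts are hypothetical, not Researcher A’s actual coding frame), and, as the authors caution later, frequency alone does not make a theme.

```python
from collections import Counter

# Hypothetical initial codes from phase 2, each with a count of coded excerpts
initial_codes = {
    "bank advice": 4,
    "accountant advice": 6,
    "peer discussion": 3,
    "cash-flow check": 9,
    "price monitoring": 5,
}

# Phase 3: collapse related initial codes into broader candidate themes
code_to_theme = {
    "bank advice": "role of advisors",
    "accountant advice": "role of advisors",
    "peer discussion": "influencing factors",
    "cash-flow check": "role of FFM",
    "price monitoring": "role of FFM",
}

# Aggregate excerpt counts per candidate theme
theme_counts = Counter()
for code, n in initial_codes.items():
    theme_counts[code_to_theme[code]] += n

# A high-level overview only: researcher judgement, not counts, decides themes
print(theme_counts.most_common())
```

Such tallies correspond to the graphs and tables software can generate ‘at the push of a button’; the phases of reviewing and defining themes remain an interpretive, researcher-led exercise.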

Figure 1: Six-phase thematic data analysis process (Braun & Clarke, 2006)

4.3 Researcher A: A Critical Reflection and Challenges with NVivo Qualitative Data Analysis

Reflecting on the journey of using NVivo as an approach to qualitative data analysis, Researcher A observed a number of salient points. First, a considerable amount of time and commitment is involved in developing the necessary skills to use the technology. Initially some time and effort are needed to learn how to operate the technology and formal NVivo training provides an essential support mechanism in this regard, particularly where training utilises standardised test data. Formal training also provides the researcher with an excellent overview of the technology and its potential capabilities. However, Researcher A cautions that it is not until the researcher actually begins to analyse their own data, which could potentially be some months/years later given the nature of the PhD research process, that specific study-related queries in using NVivo emerge. Due to the potential time lag, the researcher may have forgotten many aspects covered during the training or they may encounter queries that they have not experienced before. Hence, further specific guidance and/or further training may be required from the service provider. On a positive note, Researcher A found that the significant time and commitment invested towards the beginning of the data analysis process reaped considerable benefits towards the latter end of the research project. In particular, the systematic and structured coding process conducted allowed the retrieval of multi-layered analyses of the data relatively quickly. Furthermore, NVivo enabled the researcher to analyse, compare and contrast various aspects of the data efficiently and effectively. This was particularly useful for Researcher A, given the multidimensional aspect of the data collected. The time invested in learning how to operate the technology is a transferable research skill that the researcher could use on future research projects. While Researcher A invested a considerable amount of time becoming proficient with NVivo, it should be noted that the cost of both the technological approach (licence fee for NVivo) and formal training was not an issue, as these were funded by the researcher’s institution.

Second, critical reflection by Researcher A highlights the need to be mindful of the inclination to quantify qualitative data when using data analysis technologies. To that end, the coding process undertaken when using NVivo has the potential to focus the researcher’s attention on counting and quantifying the number of times a particular issue is identified or emphasised in the data. Braun & Clarke (2006) highlight that there are no hard and fast rules on how to identify a theme during qualitative data analysis. One cannot quantify how many times an issue must appear in the data in order for it to be labelled a theme. Indeed, an issue may appear infrequently in a data set, yet be labelled as a theme. Therefore, researcher judgement is necessary in determining themes. During the initial stages of writing up the findings, Researcher A found the above to be a particular challenge, as NVivo focused his attention on counting the number of times a particular issue appeared in the data. The ‘counting’ of data can be done easily through NVivo via the generation of graphs, tables or charts at the ‘push of a button’. Such analyses are useful for presenting a high-level overview of issues emphasised in the data, but they can also distract from the richness of the underlying interview data. Reflecting on this, Researcher A identified that it was necessary to pause, refocus and consider the underlying essence of the interview data, alongside the more quantitative output that NVivo generates. This is an important issue that qualitative researchers need to be cognisant of, particularly those who are first time users of the technological approach to analysing qualitative data.

Third, Researcher A reflects that the coding and analysis of the large volume of qualitative data collected was challenging and there was a need to be tolerant of uncertainty during this process. In particular, there was an element of drudgery and repetitiveness in coding the data using NVivo, necessitating resilience and a ‘stick with it’ attitude as it was necessary to consistently code all interview data. However, one of the main benefits of adopting a systematic process, such as that facilitated by NVivo, is that it provides a map and audit trail of how the coding and analysis process was conducted. To some extent, this helped to structure the “messiness” (O’Dwyer, 2004, p. 403) that is often attributed to qualitative data analysis.

Finally, reflecting on his overall experience, Researcher A found the NVivo data analysis software to be an excellent tool in terms of its ability to organise and manage qualitative data. In particular, the structured and systematic process of data analysis was very useful and effective. It is, however, important to note that while NVivo is a useful tool, it cannot replace the researcher’s own knowledge of the empirical data or the high level of research skills and judgement required to comprehend the data and elucidate themes, or the need for the researcher to be reflective in the data analysis process. In conclusion, Researcher A’s experience suggests that the benefits of using NVivo during the qualitative analysis phase outweigh the challenges it poses. Additionally, given the benefit of hindsight, Researcher A would use this technology in future qualitative research projects.

4.4 Researcher B: Factors Influencing Approach to Qualitative Data Analysis

A review of pertinent literature (Ashworth & Lucas, 2000; Larsson & Holmström, 2007; Svensson, 1997) highlights that there is no one ‘best’ method of phenomenographic data analysis. The overriding objective is to describe the data in the form of qualitative categories. This necessitates an approach for data analysis that enables resulting themes to be grounded in the data itself, rather than in prior literature or the researcher’s own experiences. However, Svensson (1997) cautions against replicating quantitative methodological traditions which view categories as “predefined assumptions” (p. 64). Mindful of this, and conscious that only a small number of phenomenographic studies had adopted a technological approach to data analysis at the time she was making her decision (e.g. Ozkan, 2004), Researcher B selected a non-technological manual approach. A further factor impacting on Researcher B’s decision to proceed with the manual approach was a perception that technological approaches, such as NVivo, were not used extensively by qualitative researchers within the Higher Education Institution in which she was enrolled as a PhD student. Whilst completing her doctoral studies at a UK University on a part-time basis, Researcher B attended a number of research methodology training sessions (funded by the researcher’s institution) and research seminars. Researchers who presented their work had adopted a manual approach to qualitative data analysis and were not very knowledgeable in relation to technological approaches. This highlighted the absence of an established community of practice in this regard and suggested that adoption of a technological approach might not be appropriately aligned with the research community.

The experience of Researcher B’s supervisory team also influenced her decision to adopt the manual approach of qualitative data analysis. To that end, Researcher B’s supervisory team had no experience of using a qualitative technological approach for data analysis. This was compounded by the fact that the supervisory team also had limited experience of qualitative research and was therefore reluctant to recommend any specific approach to data analysis. Taking on board the above factors, Researcher B believed there was no compelling reason to adopt a technological approach, and thus she was not positively disposed towards NVivo or any other such technological tool for qualitative data analysis. As a result, Researcher B selected a manual approach to qualitative data analysis.

4.5 Researcher B: A Manual Approach to Data Analysis

Researcher B was conscious of the “inevitable tension between being faithful to the data and at the same time creating, from the point of view of the researcher, a tidy construction useful for some further exploratory or educational purpose” (Bowden & Walsh, 2000, p. 19). Reflecting this, the analysis phase sought to gain insights into interview participants’ perceptions, meanings, understandings, experiences and interpretations. Consistent with the phenomenographic approach, Researcher B was mindful of the need for conscious bracketing with reference to the analysis of the interviews. [2] This comprised careful transcription of interviews, with emphasis on tone and emotions, and simultaneous continuous cycles of listening to interview recordings and reading of interview transcripts to highlight themes.

Researcher B found “the path from interviews through inference to categories…quite a challenge” (Entwistle, 1997, p. 128). The substantial volume of interview data required multiple and simultaneous continuous cycles of reading, note-making, interpretation, write-up and reflective review, and the overall analysis of hard copy transcripts was quite a “messy” process (O’Dwyer, 2004, p. 403). It comprised substantial participant quotes highlighted in an array of colours on transcripts, a large amount of handwritten suggested thematic descriptions in both left and right transcript margins and large quantities of post-it notes of varying shades attached to the transcripts.

In undertaking the manual qualitative data analysis, Researcher B methodically worked through a series of steps, based on the work of Lucas (1998) and Ashworth & Lucas (2000), as follows:

Familiarising self with the interviewee data and highlighting initial themes – Researcher B initially read each transcript a number of times and highlighted what she considered important elements of text with a highlighter marker. She re-read each transcript a number of additional times and noted possible themes by writing in the right-hand margin of the hard copy transcript. She then highlighted more broad-based themes in the left-hand margin. Following this initial thematic identification, Researcher B re-read and listened to the interview recordings several more times, re-examining the analysis with a view to being methodical, yet open-minded about the content of the interviews.

Grounding themes in individual interviewee contexts – while many aspects of analysis focus on comparative experiences and mindful that these are of value, the phenomenographic approach positions individual experiences and lifeworlds as a backdrop to meanings. It was therefore important that individual experiences were not lost in an attempt to understand more generalising aspects. To this end, Researcher B also compiled individual interviewee profiles. The overriding objective of this was to identify and examine particular points of emphasis that appeared to be central to the overall individual experiences with regard to development of professional competence. Such in-depth examination helped focus on the participants’ experiences and contributed to the empathetic understanding of participant perceptions, experiences, understandings and meanings (Lucas, 1998). This also helped to counter tendencies to “attribute meaning out of context” (Lucas, 1998, p. 138) and provided a means to understand participants’ experiences over a considerable period of time, from the point at which they made the conscious decision to gain admittance to the accounting profession up to the present day. This added considerable value to the analysis, not only helping to reveal what participants’ experiences and understandings of professional competence and professional development were, but also how participants shaped their ongoing actions and engagement with the development of professional competence. Predominant themes were then highlighted on the individual transcripts for each participant, in the participants’ own words. This served to maintain the bracketing process and ensured that themes were grounded in participants’ experiences.

Drafting initial thematic write-up – Researcher B drafted an initial descriptive thematic write-up, focussed around the research questions.

Reviewing interview data for supporting quotes – relevant interviewee quotes for each theme were subsequently included in the draft thematic write-up.

Reviewing thematic write-up – Researcher B re-read and listened back to the interviews several more times. She also searched individual interview transcript Word documents for key words and phrases to highlight additional quotes to support thematic descriptions. She then spent some time editing the write-up with a view to generating a more “tidy construction” of descriptive overall categories (Bowden & Walsh, 2000, p. 19).

Generating categories of description – the final stage of analysis was the generation of overriding categories of description. The what aspect was used to characterise what professional competence means to participants (i.e. the meaning attribute) while the how aspect categorised how participant practitioners actually maintain and develop their professional competence (i.e. the structural attribute). Participants’ experiential stages were used to inform the hierarchy vis-à-vis these categories.

4.6 Researcher B: A Critical Reflection and Challenges with Manual Qualitative Data Analysis

Researcher B reflects on the challenges pertaining to data analysis during the course of her PhD study and highlights a number of issues. While the manual approach facilitated the generation and analysis of themes from the interview data, it was challenging to manage themes that were continuously being defined and redefined. Notwithstanding the iterative nature of the manual approach, Researcher B was confident that themes developed in an organic manner and were not finalised too early in the data analysis process. The ambiguity associated with the generation and analysis of themes also required Researcher B to bring high levels of research knowledge and skill to this process and to embrace a high tolerance for uncertainty. Researcher B acknowledges that the iterative process of reading interviewee transcripts, listening to interview recordings (largely while in the car on the commute to and from work or while taking trips to see family at the other side of the country), generating themes, writing up themes, followed by re-reading messy transcripts and re-listening to the interview recordings while re-visiting themes, was both tedious and time consuming.

The initial excitement experienced when first listening to the interview recordings and reading the interview transcripts was somewhat depleted by the end of the process and work on the analyses increasingly developed into a test of endurance. Researcher B likened this to the declining enthusiasm often experienced by students from first reading a clean copy of a Shakespearian play in school, followed by subsequent grappling with syllabus requirements to dissect the play in multiple different ways in order to isolate significant events, explore characters, interpret language, examine subplots and understand larger themes. At the end of the school year, the once clean hard copy has become a heavily annotated and much more complex version of the original and the students’ enthusiasm considerably more subdued.

Researcher B also reflects that the manual approach required her to become very familiar with the interviewee transcripts and recordings, such that Researcher B could effectively match interview quotes to interviewees without having to check their provenance. Researcher B acknowledges that some participants provided more considered and more articulate responses to interview questions, and on review of the initial draft write-up, realised she had included excessive quotes centred around such participants. In subsequent iterations, Researcher B was careful to ensure the write-up was more representative of all of the interviewees and not dominated by a small number of interviewees.

As analysis progressed during the course of the doctorate, Researcher B presented draft write-ups of her findings to her PhD supervisors at various stages, largely to seek reassurance that data analysis was progressing appropriately. However, as indicated earlier, both supervisors had limited experience of qualitative data analysis and could provide little categorical reassurance regarding the manual approach to data analysis. As such, Researcher B had no systematic source of affirmation and was prompted to present at various doctoral colloquia to gain further insights and validation of the approach to analysis. This provided a useful, albeit more ad hoc, source of guidance and affirmation.

Finally, Researcher B reflects on the overall doctoral process and more particularly on the selection of a manual approach to data analysis. With hindsight, she recognises that while this approach enabled closeness to the interview data, data management involved a significant amount of time. For example, ‘cutting’ and ‘pasting’ within Word documents had to be done and re-done many times, reflecting the messiness of the data analysis. This was quite repetitive and was not an efficient means of organising data to support research findings. Researcher B believes that qualitative data analysis should enable both a closeness to the data and an efficient means of managing data. To that end, she would consider trialling measures to enhance the efficiency of data management in future research studies, including use of software tools such as NVivo.

Qualitative Data Analysis

  • Choosing QDA software
  • Free QDA tools
  • Transcription tools
  • Social media research
  • Mixed and multi-method research
  • Network Diagrams
  • Publishing qualitative data
  • Student specialists

General Information

For assistance, please submit a request .  You can also reach us via the chat below, email [email protected] , or join Discord server .

If you've met with us before,                        tell us how we're doing .

Service Desk and Chat

Bobst Library , 5th floor

Staffed Hours: Spring 2024

Mondays:  12pm - 5pm         Tuesdays:  12pm - 5pm         Wednesdays:  12pm - 5pm         Thursdays:  12pm - 5pm         Fridays:  12pm - 5pm        

Data Services closes for winter break at the end of the day on Friday, Dec. 22, 2023. We will reopen on Wednesday, Jan. 3, 2024.

Learning Qualitative Research Processes

  • Thematic Analysis
  • Participant Observation (Ethnography)
  • Psychological Narrative Analysis
  • Sentiment Analysis
  • Introduction to Open Coding An introductory guide to the Open Coding process using Taguette, ATLAS.ti, and MAXQDA.
  • Thinking about the Coding Process in Qualitative Data Analysis. Elliott, Victoria. “Thinking about the Coding Process in Qualitative Data Analysis.” The Qualitative Report, November 24, 2018. https://doi.org/10.46743/2160-3715/2018.3560.
  • The Coding Manual for Qualitative Researchers Saldaña,J.(2016). The coding manual for qualitative researchers. Los Angeles, Calif.L Sage Publications
  • Encyclopedia of Case Study Research Liu, F., Maitlis, S., Mills, A. J., Durepos, G., & Wiebe, E. (2010). Encyclopedia of case study research. Thousand Oaks, CA: SAGE Publications.
  • Introduction to Thematic Analysis with Digital Tools A practical guide to Thematic Analysis methods and how to use Data Services software with this methodology.
  • Thematic Analysis Fugard, Andi, and Henry W Potts. "Thematic Analysis." In SAGE Research Methods Foundations, edited by Paul Atkinson, Sara Delamont, Alexandru Cernat, Joseph W. Sakshaug, and Richard A. Williams.London: SAGE Publications Ltd., 2019. https://dx.doi.org/10.4135/9781526421036858333.
  • Thematic Networks: An Analytic Tool for Qualitative Research Attride-Stirling, J. 2001. “Thematic Networks: An Analytic Tool for Qualitative Research.” Qualitative Research 1 (3): 385–405. Accessed November 1. doi:10.1177/146879410100100307.
  • Participant Observation An introductory level How-to Guide for using qualitative software for analyzing ethnographic participant observations.
  • Introduction to Psychological Qualitative Narrative Analysis with Digital Tools Learn the basics of conducting qualitative narrative analysis with data services supported tools.
  • Introduction to Sentiment Analysis Learn how to leverage sentiment analysis tools in ATLAS.ti and MAXQDA.

Resources for Qualitative Research

  • Qualitative Research Bibliography
  • Qualitative Data Repository
  • UK Data Archive
  • Bazeley, P. & Jackson, K. (2013). Qualitative data analysis with NVivo. London: Sage.
  • Berg, B. (2001). Qualitative research methods for the social sciences (9th ed.). Allyn & Bacon.
  • Charmaz, K. (2010). Grounded theory: Objectivist and constructivist methods. In W. Luttrell (Ed.), Qualitative educational research: Readings in reflexive methodology and transformative practice (pp. 183–207). New York: Routledge.
  • Charmaz, K. (2001). Qualitative interviewing and grounded theory analysis. In J. F. Gubrium & J. A. Holstein (Eds.), Handbook of interview research (pp. 675–694). SAGE Publications.
  • Corbin, J. & Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). Thousand Oaks, CA: Sage.
  • Creswell, J. (2012). Qualitative inquiry and research design: Choosing among five approaches. Sage Publications.
  • Flick, U. (2017). The SAGE handbook of qualitative data analysis. Los Angeles: SAGE Publications.
  • Friese, S. (2011). Qualitative data analysis with ATLAS.ti. London: Sage.
  • Glaser, B.G. & Strauss, A.L. (1967). The discovery of grounded theory: Strategies for qualitative research. New York: Aldine de Gruyter.
  • Lawson, H. et al. (2015). Participatory action research. New York: Oxford University Press.
  • Liggett, A.M., Glesne, C.E., Johnston, A.P., Hasazi, S.B., & Schattman, R.A. (1994). 'Teaming in qualitative research: Lessons learned'. Qualitative Studies in Education, Vol. 7, No. 1.
  • MacQueen, K.M., McLellan, E., Kay, K., & Milstein, B. (1999). 'Codebook development for team-based qualitative analysis'. Cultural Anthropology Methods, Vol. 10, No. 2.
  • Merriam, S.B. & Tisdell, E.J. (2016). Qualitative research: A guide to design and implementation. Jossey-Bass.
  • Miles, M.B., Huberman, A.M., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook. Thousand Oaks: Sage.
  • Saldaña, J. (2016). The coding manual for qualitative researchers. Los Angeles, CA: Sage Publications.
  • Sprokkereef, A., Lakin, E., Pole, C.J., & Burgess, R.G. (1995). 'The data, the team, and the Ethnograph'. In R.G. Burgess (Ed.), Computing and Qualitative Research (Vol. 5). Greenwich: Jai Press.
  • Wertz, F.J. (2011). Five ways of doing qualitative analysis: Phenomenological psychology, grounded theory, discourse analysis, narrative research and intuitive inquiry.
  • Wolcott, H. (2009). Writing up qualitative research. Thousand Oaks, CA: Sage Publications.
  • Woolf, N.H. (2017). Qualitative analysis using ATLAS.ti: The five-level QDA method. London: Taylor and Francis.


The QDR is a safe place to deposit qualitative data. It has been certified as a "trustworthy data repository" by CoreTrustSeal. In tandem with the researchers who deposit data with QDR, its trained staff fully curates data to make them usable, discoverable, meaningful, citable, secure and durably preserved. NYU is a member institution of the QDR and so you can deposit your own qualitative data there, as well as use the data of others!

To register for an account with the QDR:

  • Go to: https://qdr.syr.edu/user/register
  • Fill out the form using your NYU email address
  • A confirmation will be sent to your NYU email; verify your account via the emailed instructions
  • Then you can log in and start

UK Data Service, Qualitative and Mixed Methods Datasets


The UK Data Service is a place to both deposit data and find secondary datasets for use in your analysis.


  • Last Updated: May 13, 2024 6:28 PM
  • URL: https://guides.nyu.edu/QDA


Manual Qualitative Data Analysis Tutorial Using Creswell & Poth’s Data Analysis Spiral

Hello, I’m Dr. Sara Reichard, MLitt Tutor at Omega Graduate School, and this is a brief video tutorial on manual qualitative data analysis using Microsoft Word. In this tutorial, we will explore a very easy, manual approach to analyzing qualitative data, specifically focusing on using text color highlighting in Microsoft Word to identify codes and organize them under themes. We will also follow Creswell and Poth’s Data Analysis Spiral, a comprehensive framework for qualitative data analysis. Let’s get started!

Introduction

Qualitative data analysis involves extracting meaning and insights from qualitative research. While various software programs are available for data analysis, using Microsoft Word can be cost-effective, especially for researchers who prefer a manual approach. This tutorial will use Microsoft Word’s features to analyze qualitative data efficiently. Creswell and Poth’s Data Analysis Spiral is a simple five-step process for qualitative data analysis:

  • Step One: Managing and organizing the data (data preparation)
  • Step Two: Reading and memoing emergent ideas
  • Step Three: Describing and classifying codes into themes
  • Step Four: Developing and assessing interpretations
  • Step Five: Representing and visualizing the data

Step One – Managing and Organizing Data

 The first step in the data analysis is managing and organizing the data. Create a new document in Microsoft Word and copy and paste your qualitative data, such as interview transcripts or field notes, into the document. It’s important to ensure that each participant’s data is clearly labeled and organized to facilitate analysis. You can create headings or subheadings for each participant or group of participants, making navigating the document easier.

Step Two – Reading and Memoing Emergent Ideas

 Once the data is organized, proceed to read through the data carefully. As you read, make notes or memos about emergent ideas, patterns, or insights you observe. This process helps you capture initial impressions and thoughts that may guide further analysis. You can add annotated comments in Microsoft Word to record these memos alongside the relevant data. Remember to remain open-minded during this stage, as inductive and abductive coding can emerge from these observations.

Step Three – Describing and Classifying Codes into Themes

We start the coding process in the third step of the Data Analysis Spiral. Select a color for each code you want to identify. For example, you can assign one color representing a specific theme or concept. Using Microsoft Word’s text color highlighting feature, apply the designated color to the relevant sections of text that represent each code. This visual representation allows you to see patterns and connections across the data.

Next, create headings to organize these codes into themes. For instance, you can create a separate section for each theme and cut and paste the relevant text excerpts under their respective themes. Microsoft Word’s cut-and-paste functionality makes it easy to reorganize the data as you refine your themes and sub-themes. Remember, this process is iterative, and you may need to revisit and revise your codes and themes as you gain deeper insights into the data.

Step Four – Developing and Assessing Interpretations

In the fourth step, it’s time to develop interpretations based on emerging themes and patterns. Take the time to analyze the data within each theme, examining the relationships and meanings embedded in the text. Ask yourself questions such as “What does this pattern or theme signify?” or “How does it relate to the research objectives?” This interpretive process lets you explore the data and answer your research questions.

To assess the validity and reliability of your interpretations, engage in member checking: seek feedback from participants to ensure your transcripts accurately reflect what they said during the interview.

Step Five – Representing and Visualizing the Data

The final step in the Data Analysis Spiral involves representing and visualizing the data. Microsoft Word offers various tools for creating tables, charts, or diagrams to enhance data presentation. You can create tables to summarize key findings or use charts and diagrams to visualize the relationships between themes or sub-themes. These visual representations provide a comprehensive overview of the qualitative data and support communicating your research findings.

Sample Interview Data

Now, let’s take a look at some sample qualitative interview data. This study is on how healthcare providers perceive the role of spirituality in a small Midwestern hospital. Here are some sample interview transcripts with inductive codes and themes already identified according to the research question.

In analyzing the interview transcripts, several key codes and themes emerged about healthcare professionals’ perceptions regarding spirituality’s role in patient care. Let’s explore how these codes and themes were identified.

Inductive Codes

First, the inductive coding process involved carefully reading and reviewing the interview transcripts to identify recurring ideas, concepts, and perspectives. Through this iterative process, the following inductive codes were identified:

  • Patient Care – This code represents the overall focus on providing comprehensive care to patients, encompassing their physical needs and emotional, psychological, and spiritual well-being.
  • Holistic Wellbeing – This code emphasizes the significance of addressing patients’ holistic well-being, recognizing that spiritual care is integral in promoting overall health and healing.
  • Incorporating Spirituality in Practice –  This code reflects the healthcare providers’ belief in integrating spirituality into their practice. It encompasses their efforts to acknowledge and address the spiritual needs of patients as an essential component of care.
  • Patient Comfort –  This code highlights the healthcare providers’ commitment to creating a comforting and supportive environment for patients, recognizing that spiritual beliefs and practices can bring solace and peace during challenging times.
  • Respect for Diverse Beliefs – This code underscores the healthcare providers’ understanding of the importance of respecting and honoring patients’ diverse religious and spiritual beliefs. It involves fostering an inclusive environment that allows patients to express and practice their beliefs.

Application and Reporting

Once the codes were identified, the next step was to cluster them into meaningful themes. Two overarching themes emerged from the data:

Theme 1 – Healthcare providers believe they should incorporate spirituality into their practice to address aspects of patient wellbeing holistically. This theme encompasses the codes of Holistic Wellbeing and Incorporating Spirituality in Practice. It recognizes that spirituality is essential to patient care, contributing to individuals’ overall wellbeing and healing.

Theme 2 – Healthcare providers believe respecting diverse beliefs can foster patient comfort and enhance patient care. This theme combines the codes of Respect for Diverse Beliefs, Patient Care, and Patient Comfort. It emphasizes the importance of creating an inclusive and respectful environment that acknowledges and supports patients’ diverse religious and spiritual beliefs, ultimately enhancing their comfort and the quality of care they receive.

The codes and themes illustrate healthcare professionals’ perspectives regarding the role of spirituality in patient care. These insights can inform the development of strategies and interventions promoting holistic care, respecting diversity, and enhancing the overall patient experience.

Illustrating Findings

To illustrate your findings, generate an APA-style table with an overview of the themes, codes, and some rich, thick descriptions based on participant responses.

That concludes our tutorial on analyzing qualitative data using Microsoft Word. We have covered the key steps of Creswell and Poth’s Data Analysis Spiral, including managing and organizing the data, reading and memoing emergent ideas, describing and classifying codes into themes, developing and assessing interpretations, and representing and visualizing the data. Remember, qualitative data analysis is an iterative process that requires time, reflection, and ongoing refinement. Microsoft Word can be a valuable tool to facilitate this process effectively.


Qualitative Data Analysis

  • First Online: 13 April 2022


  • Yanmei Li & Sumei Zhang


After introducing basic statistical methods for analyzing quantitative data, this chapter turns to analyzing qualitative data, such as open-ended survey questions, planning documents, and narrative data collected from storytelling, planning workshops, public meetings, public hearings, planning forums, or focus groups. Practicing planners collect these types of data regularly, and they are often the foundation of community needs analysis. Analyzing these data requires specialized methods. This chapter introduces methods to analyze qualitative data and conduct content analysis. Identifying trends and patterns in the data is the key to analyzing qualitative data. Related software, such as ATLAS.ti, will be briefly explored to help researchers analyze complex qualitative data with complicated content or a large number of observations.



Author information

Yanmei Li, Florida Atlantic University, Boca Raton, FL, USA

Sumei Zhang, University of Louisville, Louisville, KY, USA


Copyright information

© 2022 Springer Nature Switzerland AG

About this chapter

Li, Y., Zhang, S. (2022). Qualitative Data Analysis. In: Applied Research Methods in Urban and Regional Planning. Springer, Cham. https://doi.org/10.1007/978-3-030-93574-0_8


DOI : https://doi.org/10.1007/978-3-030-93574-0_8

Published : 13 April 2022

Publisher Name : Springer, Cham

Print ISBN : 978-3-030-93573-3

Online ISBN : 978-3-030-93574-0

eBook Packages : Mathematics and Statistics (R0)


Qualitative Data Analysis: Step-by-Step Guide (Manual vs. Automatic)

Qualitative data analysis uses analytical tools to extract meaning from large volumes of non-numeric data, including images, reports, audio, and video. It’s used in research and in areas of business where client behaviour needs to be understood at a deeper level in order to meet expectations.

Businesses often favour qualitative research because it is seen as more human-centred than quantitative data, helping them identify opportunities for product expansion and improve customer acquisition and retention strategies. The techniques of qualitative research can help researchers draw insights from the data, identify trends, and better understand its dynamics. These insights help businesses make informed, client-centric decisions.

Despite some stark differences between qualitative and quantitative analysis, the two are often confused. Both are used for research purposes, but they differ in how data is collected and reported.

Quantitative Data Analysis vs. Qualitative Data Analysis


As the name suggests, quantitative data analysis works on quantity: analysing numerical data (presented in graphs, tables, or charts) to find patterns and trends. Quantitative research uses numbers and statistics to systematically measure variables and report the collected data in numerical form.

Qualitative data analysis, on the other hand, is the process of analysing and extracting insights from non-numerical or textual data. In contrast to quantitative data, which is straightforward to interpret, qualitative data is generally more “open-ended” and can be challenging to interpret and present.

Quantitative data cannot offer insights that qualitative data can. For instance, qualitative data can help you understand why the client wants a particular product and what drives their buying decision.

Now that you have understood the difference between qualitative and quantitative data analysis, let’s discuss how qualitative analysis is done. There are two ways to analyse qualitative data: manually and automatically. Both aim at the same result but take different approaches.

Let’s first look at the methods of qualitative data analysis.

Methods of Qualitative Data Analysis

There are five main methods of qualitative data analysis:

1. Qualitative Content Analysis

A systematic study of content to derive particular characteristics or trends is known as content analysis. The content might come from a phone interview, survey reports, or client feedback. Since content analysis is faster and simpler than other methods, anyone with a solid comprehension of the data can do it.

The insights taken from the content analysis are also easier to understand and comprehend.

2. Narrative Analysis

Narrative analysis involves analysing and interpreting the stories that customers or research participants tell in their own words. The data can be gathered from customer interviews or testimonies.

By using narrative analysis, product managers can better understand how customers feel about their products, find gaps between supply and demand, look for recurring patterns in customer behaviour, and improve the in-app experiences customers receive.

3. Grounded Theory Analysis

Grounded theory is a well-known methodology employed in many research studies. Rather than starting from an existing theory, qualitative researchers construct theory that is grounded in the real-world data they collect. In a business context, it seeks to comprehend how customers engage with products, and it can be used to generate predictions about future consumer behaviour.

4. Thematic analysis

Thematic analysis is a popular technique for analysing qualitative data that reveals patterns and themes in the data. A data set is studied to discover, analyse, and report repeated patterns. To do a thematic analysis, the data must be coded, that is, assigned themes or categories; hence the term thematic analysis.
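To make the coding idea concrete, here is a minimal sketch in Python of how coded excerpts can be grouped under broader themes. All excerpts, codes, and theme names below are invented for illustration, not drawn from any real study.

```python
from collections import defaultdict

# Hypothetical excerpts, each tagged with one or more codes
excerpts = [
    ("The app is hard to navigate", ["usability"]),
    ("Support replied within an hour", ["support_speed"]),
    ("I couldn't find the settings page", ["usability"]),
]

# Hypothetical theme frame: each theme groups related codes
themes = {
    "Product experience": ["usability"],
    "Customer service": ["support_speed"],
}

# Invert the theme map so each code points at its theme
code_to_theme = {code: theme for theme, codes in themes.items() for code in codes}

# Collect the excerpts that support each theme
by_theme = defaultdict(list)
for text, codes in excerpts:
    for code in codes:
        by_theme[code_to_theme[code]].append(text)

for theme, quotes in by_theme.items():
    print(f"{theme}: {len(quotes)} excerpt(s)")
```

The same grouping is what a researcher does by hand with highlighters or cut-and-paste: the theme frame stays small and stable while the pile of excerpts under each theme grows.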

5. Discourse Analysis

Discourse analysis is about understanding language beyond the single sentence: how people speak with one another and how language functions in a social context. It can be used to analyse both spoken and written language. For businesses, discourse analysis is a great tool for understanding how customers discuss products online.

Analysing Qualitative Data Manually

There are five steps involved in manual qualitative data analysis:

1. Get Your Data Ready

Before beginning the analysis, it is vital to gather the notes, documents, and other resources that can give you a head start. Mark the source of the data, any data points you have collected, and any other information that will help you examine your data.

Since this is manual research, the majority of businesses get their data from traditional collection techniques such as discussion groups, focus groups, questionnaires, and interviews. This data is usually held in databases, CRMs, and papers. Based on the breadth of your research, consider which data is actually available and should be used.

2. Organise and Investigate the Data

To understand what is in your data, read it through and make notes about your ideas, questions, and opinions. These can be stored in Excel spreadsheets that are typically shared by the research teams. Because each team gathers and arranges the data in a manner that works best for them, feedback is frequently kept separately.

3. Produce the Codes

Use anything that will help you make a connection with your data, such as highlighters, sticky notes, or comments in the margins. Here is how you can do it manually:

  • To gain a general understanding of what the data reveals, read it more than once. Then begin putting your first set of codes on statements and text chunks.
  • Once everything has been coded, check that there are no discrepancies and no missing data.
  • Make a code frame to organise all your codes.
  • By collating the coded data, you can observe recurring patterns in your feedback based on the frequency of specific codes.
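The frequency tallying described in the last step can be sketched in a few lines of Python. The coded responses below are a hypothetical example of what a filled-in code frame might produce:

```python
from collections import Counter

# Hypothetical manually coded feedback: each response carries the
# codes assigned to it during coding
coded_responses = [
    {"id": 1, "codes": ["pricing", "onboarding"]},
    {"id": 2, "codes": ["pricing"]},
    {"id": 3, "codes": ["support", "pricing"]},
]

# Tally how often each code occurs across all responses
frequency = Counter(code for r in coded_responses for code in r["codes"])

print(frequency.most_common())  # most frequent codes listed first
```

Sorting codes by frequency is a quick, rough substitute for the visual patterns a researcher spots when flipping through highlighted transcripts.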

4. Review the Codes and Find Valuable Insights

Determine recurrent topics, viewpoints, and beliefs. At this point it is better to break your principal codes into sub-codes, as one massive flat list of codes can be confusing. This takes time but will enhance the calibre of your analysis.

Many businesses segment customers by age, demographics, behaviour, and more. You might also already have your own respondent groups that you can use in your qualitative analysis. Observing the frequency of codes within your segments is especially helpful: if a segment your company has undervalued accounts for the majority of customer care issues, that is a signal to refocus your attention.
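Counting code frequency within segments amounts to a simple cross-tabulation. A sketch, with invented segments and codes:

```python
from collections import Counter, defaultdict

# Hypothetical coded responses, each tagged with a customer segment
responses = [
    {"segment": "18-25", "codes": ["billing_issue"]},
    {"segment": "18-25", "codes": ["billing_issue", "praise"]},
    {"segment": "40+", "codes": ["praise"]},
]

# Cross-tabulate: count code frequency within each segment
by_segment = defaultdict(Counter)
for r in responses:
    by_segment[r["segment"]].update(r["codes"])

for segment in sorted(by_segment):
    print(segment, dict(by_segment[segment]))
```

A skew like the one above, where one segment generates most of the billing complaints, is exactly the kind of signal that suggests refocusing attention.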

5. Deliver Reports In a Logical Order

To best convey the story in your data, take into account your audience, the goal of the study, and the appropriate content to present. Condense your findings into a series of graphs, tables, and other graphics to represent them better.


Analysing Qualitative Data Automatically

Unlike manual analysis, automatic data analysis through software is easier, faster, and less error-prone. Let’s take a look at qualitative data analysis using software:

1. Gather Qualitative Data

The majority of businesses have now deployed Slack chats, emails, chatbots, and support ticketing systems, which give them additional channels for customer feedback while also enabling the mass collection of unstructured feedback data. Social media platforms like Twitter and Facebook, along with online communities, review sites, and forums, also offer useful qualitative data.

2. Organise All your Data

Computer-assisted qualitative data analysis software (CAQDAS) made it much easier for researchers to code and organise data; qualitative data is loaded into the software for coding. CAQDAS has the following advantages:

  • Allows you to investigate several interpretations of your data analysis
  • Helps with the organisation of your data
  • Facilitates group cooperation
  • Makes it easier to share your dataset

3. Coding the Data

Several software options can facilitate and speed up this procedure. Here are some examples:

CAQDAS packages – These include built-in capability that enables text coding within the programme. The software’s UI makes managing codes simpler than using a spreadsheet.

Dovetail – Transcripts and other textual data can be tagged using Dovetail. Because it also acts as a repository, it is easier to maintain the coding on a single platform.

Ascribe – The software has a code management system called “Coder.” Its user interface makes managing your codes simpler.

4. Analysing the Data

Automated text analytics tools make it possible to extract codes and subcodes from the data automatically. They make it quicker and simpler to figure out what is driving negative or positive results, and they can also identify new trends and uncover insightful information in the data.

The built-in sentiment analysis feature of AI-driven text analytics software offers the emotional context for your feedback and other qualitative data, which is an additional advantage.
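As a rough illustration of the idea rather than any particular product’s implementation, a toy lexicon-based scorer shows how feedback text can be mapped to a sentiment label. Real AI-driven tools use trained models; the word lists here are made up:

```python
# Illustrative word lists, not a real sentiment lexicon
POSITIVE = {"love", "great", "easy", "helpful"}
NEGATIVE = {"slow", "confusing", "broken", "frustrating"}

def sentiment(text: str) -> str:
    """Label text by counting positive vs. negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The new dashboard is great and easy to use"))
print(sentiment("Checkout is slow and confusing"))
```

Attaching a label like this to each piece of feedback is what lets the analytics platform report sentiment alongside each extracted code.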

5. Reporting the data

Feedback analytics platforms include visualisation capabilities that turn important data and insights into easy-to-understand graphs. Automatic construction of these charts frees up time to concentrate on developing an engaging narrative that highlights the insights for executive teams to examine.


Get Qualitative Data Analysis By the Research Experts

Qualitative data analysis can help capture shifting attitudes within a target group, such as the views of customers of a product or service, or of workers in a workplace. Qualitative research is the best way to understand and explain the behaviours, intentions, and characteristics of a particular group of people.

Our data collection specialists, who work for a reputable qualitative research organisation, are situated in key research markets for gathering local data, which, combined with a top-notch research facility, helps ensure success.

As a renowned qualitative research company, our research is backed by 4 million+ panellists, and our consulting approach supports your success in any nation if you are aiming for the global market. Because of our affiliation with international strategic partners, you will have only one point of contact for the entire project.

With the help of first-rate online qualitative research platforms and project management, we provide the best outlook whether you are a major market player or a start-up, and whether you need a comprehensive solution or limited insight. Connect with us to explore how our qualitative research can help your business grow.


Can J Hosp Pharm, v. 68(3), May–Jun 2015


Qualitative Research: Data Collection, Analysis, and Management

INTRODUCTION

In an earlier paper, 1 we presented an introduction to using qualitative research methods in pharmacy practice. In this article, we review some principles of the collection, analysis, and management of qualitative data to help pharmacists interested in doing research in their practice to continue their learning in this area. Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. Whereas quantitative research methods can be used to determine how many people undertake particular behaviours, qualitative methods can help researchers to understand how and why such behaviours take place. Within the context of pharmacy practice research, qualitative approaches have been used to examine a diverse array of topics, including the perceptions of key stakeholders regarding prescribing by pharmacists and the postgraduation employment experiences of young pharmacists (see “Further Reading” section at the end of this article).

In the previous paper, 1 we outlined 3 commonly used methodologies: ethnography 2 , grounded theory 3 , and phenomenology. 4 Briefly, ethnography involves researchers using direct observation to study participants in their “real life” environment, sometimes over extended periods. Grounded theory and its later modified versions (e.g., Strauss and Corbin 5 ) use face-to-face interviews and interactions such as focus groups to explore a particular research phenomenon and may help in clarifying a less-well-understood problem, situation, or context. Phenomenology shares some features with grounded theory (such as an exploration of participants’ behaviour) and uses similar techniques to collect data, but it focuses on understanding how human beings experience their world. It gives researchers the opportunity to put themselves in another person’s shoes and to understand the subjective experiences of participants. 6 Some researchers use qualitative methodologies but adopt a different standpoint, and an example of this appears in the work of Thurston and others, 7 discussed later in this paper.

Qualitative work requires reflection on the part of researchers, both before and during the research process, as a way of providing context and understanding for readers. When being reflexive, researchers should not try to simply ignore or avoid their own biases (as this would likely be impossible); instead, reflexivity requires researchers to reflect upon and clearly articulate their position and subjectivities (world view, perspectives, biases), so that readers can better understand the filters through which questions were asked, data were gathered and analyzed, and findings were reported. From this perspective, bias and subjectivity are not inherently negative but they are unavoidable; as a result, it is best that they be articulated up-front in a manner that is clear and coherent for readers.

THE PARTICIPANT’S VIEWPOINT

What qualitative study seeks to convey is why people have thoughts and feelings that might affect the way they behave. Such study may occur in any number of contexts, but here, we focus on pharmacy practice and the way people behave with regard to medicines use (e.g., to understand patients’ reasons for nonadherence with medication therapy or to explore physicians’ resistance to pharmacists’ clinical suggestions). As we suggested in our earlier article, 1 an important point about qualitative research is that there is no attempt to generalize the findings to a wider population. Qualitative research is used to gain insights into people’s feelings and thoughts, which may provide the basis for a future stand-alone qualitative study or may help researchers to map out survey instruments for use in a quantitative study. It is also possible to use different types of research in the same study, an approach known as “mixed methods” research, and further reading on this topic may be found at the end of this paper.

The role of the researcher in qualitative research is to attempt to access the thoughts and feelings of study participants. This is not an easy task, as it involves asking people to talk about things that may be very personal to them. Sometimes the experiences being explored are fresh in the participant’s mind, whereas on other occasions reliving past experiences may be difficult. However the data are being collected, a primary responsibility of the researcher is to safeguard participants and their data. Mechanisms for such safeguarding must be clearly articulated to participants and must be approved by a relevant research ethics review board before the research begins. Researchers and practitioners new to qualitative research should seek advice from an experienced qualitative researcher before embarking on their project.

DATA COLLECTION

Whatever philosophical standpoint the researcher is taking and whatever the data collection method (e.g., focus group, one-to-one interviews), the process will involve the generation of large amounts of data. In addition to the variety of study methodologies available, there are also different ways of making a record of what is said and done during an interview or focus group, such as taking handwritten notes or video-recording. If the researcher is audio- or video-recording data collection, then the recordings must be transcribed verbatim before data analysis can begin. As a rough guide, it can take an experienced researcher/transcriber 8 hours to transcribe one 45-minute audio-recorded interview, a process that will generate 20–30 pages of written dialogue.

Many researchers will also maintain a folder of “field notes” to complement audio-taped interviews. Field notes allow the researcher to maintain and comment upon impressions, environmental contexts, behaviours, and nonverbal cues that may not be adequately captured through the audio-recording; they are typically handwritten in a small notebook at the same time the interview takes place. Field notes can provide important context to the interpretation of audio-taped data and can help remind the researcher of situational factors that may be important during data analysis. Such notes need not be formal, but they should be maintained and secured in a similar manner to audio tapes and transcripts, as they contain sensitive information and are relevant to the research. For more information about collecting qualitative data, please see the “Further Reading” section at the end of this paper.

DATA ANALYSIS AND MANAGEMENT

If, as suggested earlier, doing qualitative research is about putting oneself in another person’s shoes and seeing the world from that person’s perspective, the most important part of data analysis and management is to be true to the participants. It is their voices that the researcher is trying to hear, so that they can be interpreted and reported on for others to read and learn from. To illustrate this point, consider the anonymized transcript excerpt presented in Appendix 1 , which is taken from a research interview conducted by one of the authors (J.S.). We refer to this excerpt throughout the remainder of this paper to illustrate how data can be managed, analyzed, and presented.

Interpretation of Data

Interpretation of the data will depend on the theoretical standpoint taken by researchers. For example, the title of the research report by Thurston and others, 7 “Discordant indigenous and provider frames explain challenges in improving access to arthritis care: a qualitative study using constructivist grounded theory,” indicates at least 2 theoretical standpoints. The first is the culture of the indigenous population of Canada and the place of this population in society, and the second is the social constructivist theory used in the constructivist grounded theory method. With regard to the first standpoint, it can be surmised that, to have decided to conduct the research, the researchers must have felt that there was anecdotal evidence of differences in access to arthritis care for patients from indigenous and non-indigenous backgrounds. With regard to the second standpoint, it can be surmised that the researchers used social constructivist theory because it assumes that behaviour is socially constructed; in other words, people do things because of the expectations of those in their personal world or in the wider society in which they live. (Please see the “Further Reading” section for resources providing more information about social constructivist theory and reflexivity.) Thus, these 2 standpoints (and there may have been others relevant to the research of Thurston and others 7 ) will have affected the way in which these researchers interpreted the experiences of the indigenous population participants and those providing their care. Another standpoint is feminist standpoint theory which, among other things, focuses on marginalized groups in society. Such theories are helpful to researchers, as they enable us to think about things from a different perspective. Being aware of the standpoints you are taking in your own research is one of the foundations of qualitative work. 
Without such awareness, it is easy to slip into interpreting other people’s narratives from your own viewpoint, rather than that of the participants.

To analyze the example in Appendix 1 , we will adopt a phenomenological approach because we want to understand how the participant experienced the illness and we want to try to see the experience from that person’s perspective. It is important for the researcher to reflect upon and articulate his or her starting point for such analysis; in this case, the coder could reflect upon her own experience as a female of a majority ethnocultural group who has lived within middle class and upper middle class settings. This personal history therefore forms the filter through which the data will be examined. This filter does not diminish the quality or significance of the analysis, since every researcher has his or her own filters; however, by explicitly stating and acknowledging what these filters are, the researcher makes it easier for readers to contextualize the work.

Transcribing and Checking

For the purposes of this paper it is assumed that interviews or focus groups have been audio-recorded. As mentioned above, transcribing is an arduous process, even for the most experienced transcribers, but it must be done to convert the spoken word to the written word to facilitate analysis. For anyone new to conducting qualitative research, it is beneficial to transcribe at least one interview and one focus group. It is only by doing this that researchers realize how difficult the task is, and this realization affects their expectations when asking others to transcribe. If the research project has sufficient funding, then a professional transcriber can be hired to do the work. If this is the case, then it is a good idea to sit down with the transcriber, if possible, and talk through the research and what the participants were talking about. This background knowledge for the transcriber is especially important in research in which people are using jargon or medical terms (as in pharmacy practice). Involving your transcriber in this way makes the work both easier and more rewarding, as he or she will feel part of the team. Transcription editing software is also available, but it is expensive. For example, ELAN (more formally known as EUDICO Linguistic Annotator, developed at the Max Planck Institute for Psycholinguistics) 8 is a tool that can help keep data organized by linking media and data files (particularly valuable if, for example, video-taping of interviews is complemented by transcriptions). It can also be helpful in searching complex data sets. Products such as ELAN do not actually automatically transcribe interviews or complete analyses, and they do require some time and effort to learn; nonetheless, for some research applications, it may be valuable to consider such software tools.

All audio recordings should be transcribed verbatim, regardless of how intelligible the transcript may be when it is read back. Lines of text should be numbered. Once the transcription is complete, the researcher should read it while listening to the recording and do the following: correct any spelling or other errors; anonymize the transcript so that the participant cannot be identified from anything that is said (e.g., names, places, significant events); insert notations for pauses, laughter, looks of discomfort; insert any punctuation, such as commas and full stops (periods) (see Appendix 1 for examples of inserted punctuation), and include any other contextual information that might have affected the participant (e.g., temperature or comfort of the room).

Dealing with the transcription of a focus group is slightly more difficult, as multiple voices are involved. One way of transcribing such data is to “tag” each voice (e.g., Voice A, Voice B). In addition, the focus group will usually have 2 facilitators, whose respective roles will help in making sense of the data. While one facilitator guides participants through the topic, the other can make notes about context and group dynamics. More information about group dynamics and focus groups can be found in resources listed in the “Further Reading” section.

Reading between the Lines

During the process outlined above, the researcher can begin to get a feel for the participant’s experience of the phenomenon in question and can start to think about things that could be pursued in subsequent interviews or focus groups (if appropriate). In this way, one participant’s narrative informs the next, and the researcher can continue to interview until nothing new is being heard or, as it says in the textbooks, “saturation is reached”. While continuing with the processes of coding and theming (described in the next 2 sections), it is important to consider not just what the person is saying but also what they are not saying. For example, is a lengthy pause an indication that the participant is finding the subject difficult, or is the person simply deciding what to say? The aim of the whole process from data collection to presentation is to tell the participants’ stories using exemplars from their own narratives, thus grounding the research findings in the participants’ lived experiences.

Smith 9 suggested a qualitative research method known as interpretative phenomenological analysis, which has 2 basic tenets: first, that it is rooted in phenomenology, attempting to understand the meaning that individuals ascribe to their lived experiences, and second, that the researcher must attempt to interpret this meaning in the context of the research. That the researcher has some knowledge and expertise in the subject of the research means that he or she can have considerable scope in interpreting the participant’s experiences. Larkin and others 10 discussed the importance of not just providing a description of what participants say. Rather, interpretative phenomenological analysis is about getting underneath what a person is saying to try to truly understand the world from his or her perspective.

Once all of the research interviews have been transcribed and checked, it is time to begin coding. Field notes compiled during an interview can be a useful complementary source of information to facilitate this process, as the gap in time between an interview, transcribing, and coding can result in memory bias regarding nonverbal or environmental context issues that may affect interpretation of data.

Coding refers to the identification of topics, issues, similarities, and differences that are revealed through the participants’ narratives and interpreted by the researcher. This process enables the researcher to begin to understand the world from each participant’s perspective. Coding can be done by hand on a hard copy of the transcript, by making notes in the margin or by highlighting and naming sections of text. More commonly, researchers use qualitative research software (e.g., NVivo, QSR International Pty Ltd; www.qsrinternational.com/products_nvivo.aspx ) to help manage their transcriptions. It is advised that researchers undertake a formal course in the use of such software or seek supervision from a researcher experienced in these tools.
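
Whether done in the margin of a hard copy or in a package such as NVivo, coding amounts to attaching a named label to a span of transcript text. The sketch below is a minimal illustration in plain Python, not a representation of how any QDA package stores its projects; the code names and line ranges loosely echo the Appendix 1 example, and the structure is invented for illustration:

```python
from collections import defaultdict

# Each coded segment pairs a code name with the transcript lines it covers
# and the quotation it labels. Example data is illustrative only.
coded_segments = [
    {"code": "diagnosis of mental health condition", "lines": (8, 11),
     "text": "when did somebody tell you then that you have schizophrenia"},
    {"code": "health care professionals' consultation skills", "lines": (19, 19),
     "text": "nobody asked me any questions about my life"},
    {"code": "diagnosis of mental health condition", "lines": (12, 13),
     "text": "he seemed very surprised but nobody had actually..."},
]

# Group segments by code so every quotation filed under a code can be retrieved
# together later, when themes are being drawn out of the codes.
by_code = defaultdict(list)
for seg in coded_segments:
    by_code[seg["code"]].append(seg)

for code, segs in by_code.items():
    print(f"{code}: {len(segs)} segment(s)")
```

However codes are recorded, the essential point is the same: each code must remain traceable back to the exact passages of narrative that gave rise to it.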

Returning to Appendix 1 and reading from lines 8–11, a code for this section might be “diagnosis of mental health condition”, but this would just be a description of what the participant is talking about at that point. If we read a little more deeply, we can ask ourselves how the participant might have come to feel that the doctor assumed he or she was aware of the diagnosis or indeed that they had only just been told the diagnosis. There are a number of pauses in the narrative that might suggest the participant is finding it difficult to recall that experience. Later in the text, the participant says “nobody asked me any questions about my life” (line 19). This could be coded simply as “health care professionals’ consultation skills”, but that would not reflect how the participant must have felt never to be asked anything about his or her personal life, about the participant as a human being. At the end of this excerpt, the participant just trails off, recalling that no-one showed any interest, which makes for very moving reading. For practitioners in pharmacy, it might also be pertinent to explore the participant’s experience of akathisia and why this was left untreated for 20 years.

One of the questions that arises about qualitative research relates to the reliability of the interpretation and representation of the participants’ narratives. There are no statistical tests that can be used to check reliability and validity as there are in quantitative research. However, work by Lincoln and Guba 11 suggests that there are other ways to “establish confidence in the ‘truth’ of the findings” (p. 218). They call this confidence “trustworthiness” and suggest that there are 4 criteria of trustworthiness: credibility (confidence in the “truth” of the findings), transferability (showing that the findings have applicability in other contexts), dependability (showing that the findings are consistent and could be repeated), and confirmability (the extent to which the findings of a study are shaped by the respondents and not researcher bias, motivation, or interest).

One way of establishing the “credibility” of the coding is to ask another researcher to code the same transcript and then to discuss any similarities and differences in the 2 resulting sets of codes. This simple act can result in revisions to the codes and can help to clarify and confirm the research findings.
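
Where both researchers’ code lists exist electronically, the points of agreement and the discrepancies that need discussion can be listed mechanically before that conversation takes place. A small sketch, with invented code names:

```python
# Codes assigned to the same transcript by two independent coders
# (the code names here are invented examples).
coder_a = {"diagnosis of mental health condition",
           "consultation skills",
           "medication side effects"}
coder_b = {"diagnosis of mental health condition",
           "being treated as a label, not a person",
           "medication side effects"}

agreed = coder_a & coder_b       # codes both researchers produced
to_discuss = coder_a ^ coder_b   # codes only one of them produced

print("Agreed:", sorted(agreed))
print("To discuss:", sorted(to_discuss))
```

The list of disagreements is not a failure of the analysis; it is precisely the material that the two coders should sit down and talk through.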

Theming refers to the drawing together of codes from one or more transcripts to present the findings of qualitative research in a coherent and meaningful way. For example, there may be examples across participants’ narratives of the way in which they were treated in hospital, such as “not being listened to” or “lack of interest in personal experiences” (see Appendix 1 ). These may be drawn together as a theme running through the narratives that could be named “the patient’s experience of hospital care”. The importance of going through this process is that at its conclusion, it will be possible to present the data from the interviews using quotations from the individual transcripts to illustrate the source of the researchers’ interpretations. Thus, when the findings are organized for presentation, each theme can become the heading of a section in the report or presentation. Underneath each theme will be the codes, examples from the transcripts, and the researcher’s own interpretation of what the themes mean. Implications for real life (e.g., the treatment of people with chronic mental health problems) should also be given.
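
A simple nested outline is enough to hold this theme, code, and quotation hierarchy, and it maps directly onto the headings of the final report. A hypothetical sketch in plain Python (the theme and code names follow the Appendix 1 example; the structure itself is an assumption, not a prescribed format):

```python
# A theme gathers related codes; each code keeps the quotations that ground it.
themes = {
    "the patient's experience of hospital care": {
        "not being listened to": [
            "nobody actually sat down and had a talk and showed some "
            "interest in you as a person",
        ],
        "lack of interest in personal experiences": [
            "nobody asked me any questions about my life",
        ],
    },
}

# The nested structure maps directly onto report headings and sub-sections:
# theme -> section heading, codes -> sub-headings, quotations -> exemplars.
for theme, codes in themes.items():
    print(theme.upper())
    for code, quotes in codes.items():
        print(f"  {code} ({len(quotes)} quotation(s))")
```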

DATA SYNTHESIS

In this final section of this paper, we describe some ways of drawing together or “synthesizing” research findings to represent, as faithfully as possible, the meaning that participants ascribe to their life experiences. This synthesis is the aim of the final stage of qualitative research. For most readers, the synthesis of data presented by the researcher is of crucial significance—this is usually where “the story” of the participants can be distilled, summarized, and told in a manner that is both respectful to those participants and meaningful to readers. There are a number of ways in which researchers can synthesize and present their findings, but any conclusions drawn by the researchers must be supported by direct quotations from the participants. In this way, it is made clear to the reader that the themes under discussion have emerged from the participants’ interviews and not the mind of the researcher. The work of Latif and others 12 gives an example of how qualitative research findings might be presented.

Planning and Writing the Report

As has been suggested above, if researchers code and theme their material appropriately, they will naturally find the headings for sections of their report. Qualitative researchers tend to report “findings” rather than “results”, as the latter term typically implies that the data have come from a quantitative source. The final presentation of the research will usually be in the form of a report or a paper and so should follow accepted academic guidelines. In particular, the article should begin with an introduction, including a literature review and rationale for the research. There should be a section on the chosen methodology and a brief discussion about why qualitative methodology was most appropriate for the study question and why one particular methodology (e.g., interpretative phenomenological analysis rather than grounded theory) was selected to guide the research. The method itself should then be described, including ethics approval, choice of participants, mode of recruitment, and method of data collection (e.g., semistructured interviews or focus groups), followed by the research findings, which will be the main body of the report or paper. The findings should be written as if a story is being told; as such, it is not necessary to have a lengthy discussion section at the end. This is because much of the discussion will take place around the participants’ quotes, such that all that is needed to close the report or paper is a summary, limitations of the research, and the implications that the research has for practice. As stated earlier, it is not the intention of qualitative research to allow the findings to be generalized, and therefore this is not, in itself, a limitation.

Planning out the way that findings are to be presented is helpful. It is useful to insert the headings of the sections (the themes) and then make a note of the codes that exemplify the thoughts and feelings of your participants. It is generally advisable to put in the quotations that you want to use for each theme, using each quotation only once. After all this is done, the telling of the story can begin as you give your voice to the experiences of the participants, writing around their quotations. Do not be afraid to draw assumptions from the participants’ narratives, as this is necessary to give an in-depth account of the phenomena in question. Discuss these assumptions, drawing on your participants’ words to support you as you move from one code to another and from one theme to the next. Finally, as appropriate, it is possible to include examples from literature or policy documents that add support for your findings. As an exercise, you may wish to code and theme the sample excerpt in Appendix 1 and tell the participant’s story in your own way. Further reading about “doing” qualitative research can be found at the end of this paper.

CONCLUSIONS

Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. It can be used in pharmacy practice research to explore how patients feel about their health and their treatment. Qualitative research has been used by pharmacists to explore a variety of questions and problems (see the “Further Reading” section for examples). An understanding of these issues can help pharmacists and other health care professionals to tailor health care to match the individual needs of patients and to develop a concordant relationship. Doing qualitative research is not easy and may require a complete rethink of how research is conducted, particularly for researchers who are more familiar with quantitative approaches. There are many ways of conducting qualitative research, and this paper has covered some of the practical issues regarding data collection, analysis, and management. Further reading around the subject will be essential to truly understand this method of accessing people’s thoughts and feelings to enable researchers to tell participants’ stories.

Appendix 1. Excerpt from a sample transcript

The participant (age late 50s) had suffered from a chronic mental health illness for 30 years. The participant had become a “revolving door patient,” someone who is frequently in and out of hospital. As the participant talked about past experiences, the researcher asked:

  • What was treatment like 30 years ago?
  • Umm—well it was pretty much they could do what they wanted with you because I was put into the er, the er kind of system er, I was just on
  • endless section threes.
  • Really…
  • But what I didn’t realize until later was that if you haven’t actually posed a threat to someone or yourself they can’t really do that but I didn’t know
  • that. So wh-when I first went into hospital they put me on the forensic ward ’cause they said, “We don’t think you’ll stay here we think you’ll just
  • run-run away.” So they put me then onto the acute admissions ward and – er – I can remember one of the first things I recall when I got onto that
  • ward was sitting down with a er a Dr XXX. He had a book this thick [gestures] and on each page it was like three questions and he went through
  • all these questions and I answered all these questions. So we’re there for I don’t maybe two hours doing all that and he asked me he said “well
  • when did somebody tell you then that you have schizophrenia” I said “well nobody’s told me that” so he seemed very surprised but nobody had
  • actually [pause] whe-when I first went up there under police escort erm the senior kind of consultants people I’d been to where I was staying and
  • ermm so er [pause] I . . . the, I can remember the very first night that I was there and given this injection in this muscle here [gestures] and just
  • having dreadful side effects the next day I woke up [pause]
  • . . . and I suffered that akathesia I swear to you, every minute of every day for about 20 years.
  • Oh how awful.
  • And that side of it just makes life impossible so the care on the wards [pause] umm I don’t know it’s kind of, it’s kind of hard to put into words
  • [pause]. Because I’m not saying they were sort of like not friendly or interested but then nobody ever seemed to want to talk about your life [pause]
  • nobody asked me any questions about my life. The only questions that came into was they asked me if I’d be a volunteer for these student exams
  • and things and I said “yeah” so all the questions were like “oh what jobs have you done,” er about your relationships and things and er but
  • nobody actually sat down and had a talk and showed some interest in you as a person you were just there basically [pause] um labelled and you
  • know there was there was [pause] but umm [pause] yeah . . .

This article is the 10th in the CJHP Research Primer Series, an initiative of the CJHP Editorial Board and the CSHP Research Committee. The planned 2-year series is intended to appeal to relatively inexperienced researchers, with the goal of building research capacity among practising pharmacists. The articles, presenting simple but rigorous guidance to encourage and support novice researchers, are being solicited from authors with appropriate expertise.

Previous articles in this series:

Bond CM. The research jigsaw: how to get started. Can J Hosp Pharm . 2014;67(1):28–30.

Tully MP. Research: articulating questions, generating hypotheses, and choosing study designs. Can J Hosp Pharm . 2014;67(1):31–4.

Loewen P. Ethical issues in pharmacy practice research: an introductory guide. Can J Hosp Pharm. 2014;67(2):133–7.

Tsuyuki RT. Designing pharmacy practice research trials. Can J Hosp Pharm . 2014;67(3):226–9.

Bresee LC. An introduction to developing surveys for pharmacy practice research. Can J Hosp Pharm . 2014;67(4):286–91.

Gamble JM. An introduction to the fundamentals of cohort and case–control studies. Can J Hosp Pharm . 2014;67(5):366–72.

Austin Z, Sutton J. Qualitative research: getting started. Can J Hosp Pharm . 2014;67(6):436–40.

Houle S. An introduction to the fundamentals of randomized controlled trials in pharmacy research. Can J Hosp Pharm . 2014; 68(1):28–32.

Charrois TL. Systematic reviews: What do you need to know to get started? Can J Hosp Pharm . 2014;68(2):144–8.

Competing interests: None declared.

Further Reading

Examples of qualitative research in pharmacy practice.

  • Farrell B, Pottie K, Woodend K, Yao V, Dolovich L, Kennie N, et al. Shifts in expectations: evaluating physicians’ perceptions as pharmacists integrated into family practice. J Interprof Care. 2010;24(1):80–9.
  • Gregory P, Austin Z. Postgraduation employment experiences of new pharmacists in Ontario in 2012–2013. Can Pharm J. 2014;147(5):290–9.
  • Marks PZ, Jennings B, Farrell B, Kennie-Kaulbach N, Jorgenson D, Pearson-Sharpe J, et al. “I gained a skill and a change in attitude”: a case study describing how an online continuing professional education course for pharmacists supported achievement of its transfer to practice outcomes. Can J Univ Contin Educ. 2014;40(2):1–18.
  • Nair KM, Dolovich L, Brazil K, Raina P. It’s all about relationships: a qualitative study of health researchers’ perspectives on interdisciplinary research. BMC Health Serv Res. 2008;8:110.
  • Pojskic N, MacKeigan L, Boon H, Austin Z. Initial perceptions of key stakeholders in Ontario regarding independent prescriptive authority for pharmacists. Res Soc Adm Pharm. 2014;10(2):341–54.

Qualitative Research in General

  • Breakwell GM, Hammond S, Fife-Schaw C. Research methods in psychology. Thousand Oaks (CA): Sage Publications; 1995.
  • Given LM. 100 questions (and answers) about qualitative research. Thousand Oaks (CA): Sage Publications; 2015.
  • Miles MB, Huberman AM. Qualitative data analysis. Thousand Oaks (CA): Sage Publications; 2009.
  • Patton M. Qualitative research and evaluation methods. Thousand Oaks (CA): Sage Publications; 2002.
  • Willig C. Introducing qualitative research in psychology. Buckingham (UK): Open University Press; 2001.

Group Dynamics in Focus Groups

  • Farnsworth J, Boon B. Analysing group dynamics within the focus group. Qual Res. 2010;10(5):605–24.

Social Constructivism

  • Social constructivism. Berkeley (CA): University of California, Berkeley, Berkeley Graduate Division, Graduate Student Instruction Teaching & Resource Center; [cited 2015 June 4]. Available from: http://gsi.berkeley.edu/gsi-guide-contents/learning-theory-research/social-constructivism/

Mixed Methods

  • Creswell J. Research design: qualitative, quantitative, and mixed methods approaches. Thousand Oaks (CA): Sage Publications; 2009.

Collecting Qualitative Data

  • Arksey H, Knight P. Interviewing for social scientists: an introductory resource with examples. Thousand Oaks (CA): Sage Publications; 1999.
  • Guest G, Namey EE, Mitchell ML. Collecting qualitative data: a field manual for applied research. Thousand Oaks (CA): Sage Publications; 2013.

Constructivist Grounded Theory

  • Charmaz K. Grounded theory: objectivist and constructivist methods. In: Denzin N, Lincoln Y, editors. Handbook of qualitative research. 2nd ed. Thousand Oaks (CA): Sage Publications; 2000. pp. 509–35.


Qualitative Data Analysis

Qualitative data refers to non-numeric information such as interview transcripts, notes, video and audio recordings, images and text documents. Qualitative data analysis can be divided into the following five categories:

1. Content analysis. This refers to the process of categorizing verbal or behavioural data in order to classify, summarize and tabulate the data.

2. Narrative analysis. This method involves the reformulation of stories presented by respondents, taking into account the context of each case and the different experiences of each respondent. In other words, narrative analysis is the researcher’s revision of the primary qualitative data.

3. Discourse analysis. A method of analysing naturally occurring talk and all types of written text.

4. Framework analysis. This is a more advanced method that consists of several stages, such as familiarization, identifying a thematic framework, coding, charting, mapping and interpretation.

5. Grounded theory. This method of qualitative data analysis starts with the analysis of a single case to formulate a theory. Additional cases are then examined to see whether they contribute to the theory.

Qualitative data analysis can be conducted through the following three steps:

Step 1: Developing and Applying Codes. Coding can be explained as the categorization of data. A ‘code’ can be a word or a short phrase that represents a theme or an idea. All codes need to be assigned meaningful titles. A wide range of non-quantifiable elements, such as events, behaviours, activities and meanings, can be coded.

There are three types of coding:

  • Open coding . The initial organization of raw data to try to make sense of it.
  • Axial coding . Interconnecting and linking the categories of codes.
  • Selective coding . Formulating the story through connecting the categories.

Coding can be done manually or with qualitative data analysis software such as NVivo, ATLAS.ti 6.0, HyperRESEARCH 2.8, MAXQDA and others.

When coding manually, you can use folders, filing cabinets, wallets, etc. to gather together materials that are examples of similar themes or analytic ideas. The manual method of coding in qualitative data analysis is rightly considered labour-intensive, time-consuming and outdated.

In computer-based coding, on the other hand, physical files and cabinets are replaced with computer-based directories and files. When choosing software for qualitative data analysis, you need to consider a wide range of factors, such as the type and amount of data you need to analyse, the time required to master the software and cost considerations.

Moreover, it is important to get confirmation from your dissertation supervisor before applying any specific qualitative data analysis software.

The original article included a table (“Qualitative data coding”) with examples of research titles, elements to be coded and the identification of relevant codes; the table is not reproduced here.

Step 2: Identifying themes, patterns and relationships. Unlike quantitative methods , in qualitative data analysis there are no universally applicable techniques that can be applied to generate findings. The analytical and critical thinking skills of the researcher play a significant role in data analysis in qualitative studies. As a result, no qualitative study can be repeated to generate exactly the same results.

Nevertheless, there is a set of techniques that you can use to identify common themes, patterns and relationships within the responses of sample group members in relation to the codes specified in the previous stage.

Specifically, the most popular and effective methods of qualitative data interpretation include the following:

  • Word and phrase repetitions – scanning primary data for words and phrases most commonly used by respondents, as well as words and phrases used with unusual emotion;
  • Primary and secondary data comparisons – comparing the findings of interviews/focus groups/observations/any other qualitative data collection method with the findings of the literature review and discussing the differences between them;
  • Search for missing information – discussing which aspects of the issue were not mentioned by respondents, although you expected them to be;
  • Metaphors and analogues – comparing primary research findings to phenomena from a different area and discussing similarities and differences.
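The first of these techniques, scanning for word and phrase repetitions, is easy to prototype. The Python sketch below counts content-word frequencies across a handful of invented responses; the stopword list and the snippets themselves are illustrative assumptions, not a standard:

```python
import re
from collections import Counter

# Illustrative open-ended survey responses (not real data).
responses = [
    "The waiting time was too long, far too long for a sick child.",
    "Long waiting times made me give up.",
    "Staff were kind but the wait was long.",
]

# A tiny, assumed stopword list; real analyses would use a fuller one.
STOPWORDS = {"the", "was", "too", "for", "a", "me", "but", "were"}

words = []
for r in responses:
    words += [w for w in re.findall(r"[a-z']+", r.lower()) if w not in STOPWORDS]

# The most repeated content words point to candidate themes.
top = Counter(words).most_common(3)
print(top)
```

Here repeated words such as “long” and “waiting” would flag waiting time as a candidate theme for closer reading in context.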

Step 3: Summarizing the data. At this last stage you need to link your research findings to the hypotheses or to the research aim and objectives. When writing the data analysis chapter, you can use noteworthy quotations from transcripts to highlight major themes within the findings and possible contradictions.

It is important to note that the process of qualitative data analysis described above is general and different types of qualitative studies may require slightly different methods of data analysis.

Source: John Dudovskiy, The Ultimate Guide to Writing a Dissertation in Business Studies: A Step-by-Step Approach.

Qualitative Data Analysis


THE CDC FIELD EPIDEMIOLOGY MANUAL

Collecting and Analyzing Qualitative Data

Brent Wolff, Frank Mahoney, Anna Leena Lohiniva, and Melissa Corkum

  • Choosing When to Apply Qualitative Methods
  • Commonly Used Qualitative Methods in Field Investigations
  • Sampling and Recruitment for Qualitative Research
  • Managing, Condensing, Displaying, and Interpreting Qualitative Data
  • Coding and Analysis Requirements

Qualitative research methods are a key component of field epidemiologic investigations because they can provide insight into the perceptions, values, opinions, and community norms where investigations are being conducted ( 1,2 ). Open-ended inquiry methods, the mainstay of qualitative interview techniques, are essential in formative research for exploring contextual factors and rationales for risk behaviors that do not fit neatly into predefined categories. For example, during the 2014–2015 Ebola virus disease outbreaks in parts of West Africa, understanding the cultural implications of burial practices within different communities was crucial to designing and monitoring interventions for safe burials ( Box 10.1 ). In program evaluations, qualitative methods can assist the investigator in diagnosing what went right or wrong as part of a process evaluation or in troubleshooting why a program might not be working as well as expected. When designing an intervention, qualitative methods can be useful in exploring dimensions of acceptability to increase the chances of intervention acceptance and success. When performed in conjunction with quantitative studies, qualitative methods can help the investigator confirm, challenge, or deepen the validity of conclusions beyond what either component might have yielded alone ( 1,2 ).

Qualitative research was used extensively in response to the Ebola virus disease outbreaks in parts of West Africa to understand burial practices and to design culturally appropriate strategies to ensure safe burials. Qualitative studies were also used to monitor key aspects of the response.

In October 2014, Liberia experienced an abrupt and steady decrease in case counts and deaths in contrast with predicted disease models of an increased case count. At the time, communities were resistant to entering Ebola treatment centers, raising the possibility that patients were not being referred for care and communities might be conducting occult burials.

To assess what was happening at the community level, the Liberian Emergency Operations Center recruited epidemiologists from the US Department of Health and Human Services/Centers for Disease Control and Prevention and the African Union to investigate the problem.

Teams conducted in-depth interviews and focus group discussions with community leaders, local funeral directors, and coffin makers and learned that communities were not conducting occult burials and that the overall number of burials was less than what they had experienced in previous years. Other key findings included the willingness of funeral directors to cooperate with disease response efforts, the need for training of funeral home workers, and considerable community resistance to cremation practices. These findings prompted the Emergency Operations Center to open a burial ground for Ebola decedents, support enhanced testing of burials in the private sector, and train private-sector funeral workers regarding safe burial practices.

Source: Melissa Corkum, personal communication.

Similar to quantitative approaches, qualitative research seeks answers to specific questions by using rigorous approaches to collecting and compiling information and producing findings that can be applicable beyond the study population. The fundamental difference in approaches lies in how they translate real-life complexities of initial observations into units of analysis. Data collected in qualitative studies typically are in the form of text or visual images, which provide rich sources of insight but also tend to be bulky and time-consuming to code and analyze. Practically speaking, qualitative study designs tend to favor small, purposively selected samples ideal for case studies or in-depth analysis ( 1 ). The combination of purposive sampling and open-ended question formats deprives qualitative study designs of the power to quantify and generalize conclusions, one of the key limitations of this approach.

Qualitative scientists might argue, however, that the generalizability and precision possible through probabilistic sampling and categorical outcomes are achieved at the cost of enhanced validity, nuance, and naturalism that less structured approaches offer ( 3 ). Open-ended techniques are particularly useful for understanding subjective meanings and motivations underlying behavior. They enable investigators to be equally adept at exploring factors observed and unobserved, intentions as well as actions, internal meanings as well as external consequences, options considered but not taken, and unmeasurable as well as measurable outcomes. These methods are important when the source of or solution to a public health problem is rooted in local perceptions rather than objectively measurable characteristics selected by outside observers ( 3 ). Ultimately, such approaches have the ability to go beyond quantifying questions of how much or how many to take on questions of how or why from the perspective and in the words of the study subjects themselves ( 1,2 ).

Another key advantage of qualitative methods for field investigations is their flexibility ( 4 ). Qualitative designs not only enable but also encourage flexibility in the content and flow of questions to challenge and probe for deeper meanings or follow new leads if they lead to deeper understanding of an issue (5). It is not uncommon for topic guides to be adjusted in the course of fieldwork to investigate emerging themes relevant to answering the original study question. As discussed herein, qualitative study designs allow flexibility in sample size to accommodate the need for more or fewer interviews among particular groups to determine the root cause of an issue (see the section on Sampling and Recruitment in Qualitative Research). In the context of field investigations, such methods can be extremely useful for investigating complex or fast-moving situations where the dimensions of analysis cannot be fully anticipated.

Ultimately, the decision whether to include qualitative research in a particular field investigation depends mainly on the nature of the research question itself. Certain types of research topics lend themselves more naturally to qualitative rather than other approaches ( Table 10.1 ). These include exploratory investigations when not enough is known about a problem to formulate a hypothesis or develop a fixed set of questions and answer codes. They include research questions where intentions matter as much as actions and “why?” or “why not?” questions matter as much as precise estimation of measured outcomes. Qualitative approaches also work well when contextual influences, subjective meanings, stigma, or strong social desirability biases lower faith in the validity of responses coming from a relatively impersonal survey questionnaire interview.

The availability of personnel with training and experience in qualitative interviewing or observation is critical for obtaining the best quality data but is not absolutely required for rapid assessment in field settings. Qualitative interviewing requires a broader set of skills than survey interviewing. It is not enough to follow a topic guide like a questionnaire, in order, from top to bottom. A qualitative interviewer must exercise judgment to decide when to probe and when to move on, when to encourage, challenge, or follow relevant leads even if they are not written in the topic guide. Ability to engage with informants, connect ideas during the interview, and think on one’s feet are common characteristics of good qualitative interviewers. By far the most important qualification in conducting qualitative fieldwork is a firm grasp of the research objectives; with this qualification, a member of the research team armed with curiosity and a topic guide can learn on the job with successful results.

Semi-Structured Interviews

Semi-structured interviews can be conducted with single participants (in-depth or individual key informants) or with groups (focus group discussions [FGDs] or key informant groups). These interviews follow a suggested topic guide rather than a fixed questionnaire format. Topic guides typically consist of a limited number ( 10– 15 ) of broad, open-ended questions followed by bulleted points to facilitate optional probing. The conversational back-and-forth nature of a semi-structured format puts the researcher and researched (the interview participants) on more equal footing than allowed by more structured formats. Respondents, the term used in the case of quantitative questionnaire interviews, become informants in the case of individual semi-structured in-depth interviews (IDIs) or participants in the case of FGDs. Freedom to probe beyond initial responses enables interviewers to actively engage with the interviewee to seek clarity, openness, and depth by challenging informants to reach below layers of self-presentation and social desirability. In this respect, interviewing is sometimes compared with peeling an onion, with the first version of events accessible to the public, including survey interviewers, and deeper inner layers accessible to those who invest the time and effort to build rapport and gain trust. (The theory of the active interview suggests that all interviews involve staged social encounters where the interviewee is constantly assessing interviewer intentions and adjusting his or her responses accordingly [ 1 ]. Consequently good rapport is important for any type of interview. Survey formats give interviewers less freedom to divert from the preset script of questions and formal probes.)

Individual In-Depth Interviews and Key-Informant Interviews

The most common forms of individual semi-structured interviews are IDIs and key informant interviews (KIIs). IDIs are conducted among informants typically selected for first-hand experience (e.g., service users, participants, survivors) relevant to the research topic. These are typically conducted as one-on-one face-to-face interviews (two-on-one if translators are needed) to maximize rapport-building and confidentiality. KIIs are similar to IDIs but focus on individual persons with special knowledge or influence (e.g., community leaders or health authorities) that give them broader perspective or deeper insight into the topic area ( Box 10.2 ). Whereas IDIs tend to focus on personal experiences, context, meaning, and implications for informants, KIIs tend to steer away from personal questions in favor of expert insights or community perspectives. IDIs enable flexible sampling strategies and represent the interviewing reference standard for confidentiality, rapport, richness, and contextual detail. However, IDIs are time- and labor-intensive to collect and analyze. Because confidentiality is not a concern in KIIs, these interviews might be conducted as individual or group interviews, as required for the topic area.

Focus Group Discussions and Group Key Informant Interviews

FGDs are semi-structured group interviews in which six to eight participants, homogeneous with respect to a shared experience, behavior, or demographic characteristic, are guided through a topic guide by a trained moderator ( 6 ). (Advice on ideal group interview size varies. The principle is to convene a group large enough to foster an open, lively discussion of the topic, and small enough to ensure all participants stay fully engaged in the process.) Over the course of discussion, the moderator is expected to pose questions, foster group participation, and probe for clarity and depth. Long a staple of market research, focus groups have become a widely used social science technique with broad applications in public health, and they are especially popular as a rapid method for assessing community norms and shared perceptions.

Focus groups have certain useful advantages during field investigations. They are highly adaptable, inexpensive to arrange and conduct, and often enjoyable for participants. Group dynamics effectively tap into collective knowledge and experience to serve as a proxy informant for the community as a whole. They are also capable of recreating a microcosm of social norms where social, moral, and emotional dimensions of topics are allowed to emerge. Skilled moderators can also exploit the tendency of small groups to seek consensus to bring out disagreements that the participants will work to resolve in a way that can lead to deeper understanding. There are also limitations on focus group methods. Lack of confidentiality during group interviews means they should not be used to explore personal experiences of a sensitive nature on ethical grounds. Participants may take it on themselves to volunteer such information, but moderators are generally encouraged to steer the conversation back to general observations to avoid putting pressure on other participants to disclose in a similar way. Similarly, FGDs are subject by design to strong social desirability biases. Qualitative study designs using focus groups sometimes add individual interviews precisely to enable participants to describe personal experiences or personal views that would be difficult or inappropriate to share in a group setting. Focus groups run the risk of producing broad but shallow analyses of issues if groups reach comfortable but superficial consensus around complex topics. This weakness can be countered by training moderators to probe effectively and challenge any consensus that sounds too simplistic or contradictory with prior knowledge. However, FGDs are surprisingly robust against the influence of strongly opinionated participants, highly adaptable, and well suited to application in study designs where systematic comparisons across different groups are called for.

Like FGDs, group KIIs rely on positive chemistry and the stimulating effects of group discussion but aim to gather expert knowledge or oversight on a particular topic rather than lived experience of embedded social actors. Group KIIs have no minimum size requirements and can involve as few as two or three participants.

Egypt’s National Infection Prevention and Control (IPC) program undertook qualitative research to gain an understanding of the contextual behaviors and motivations of healthcare workers in complying with IPC guidelines. The study was undertaken to guide the development of effective behavior change interventions in healthcare settings to improve IPC compliance.

Key informant interviews and focus group discussions were conducted in two governorates among cleaning staff, nursing staff, and physicians in different types of healthcare facilities. The findings highlighted social and cultural barriers to IPC compliance, enabling the IPC program to design responses. For example,

  • Informants expressed difficulty in complying with IPC measures that forced them to act outside their normal roles in an ingrained hospital culture. Response: Role models and champions were introduced to help catalyze change.
  • Informants described fatalistic attitudes that undermined energy and interest in modifying behavior. Response: Accordingly, interventions affirming institutional commitment to change while challenging fatalistic assumptions were developed.
  • Informants did not perceive IPC as effective. Response: Trainings were amended to include scientific evidence justifying IPC practices.
  • Informants perceived hygiene as something they took pride in and were judged on. Response: Public recognition of optimal IPC practice was introduced to tap into positive social desirability and professional pride in maintaining hygiene in the work environment.

Qualitative research identified sources of resistance to quality clinical practice in Egypt’s healthcare settings and culturally appropriate responses to overcome that resistance.

____________________ Source: Anna Leena Lohiniva, personal communication.

Visualization Methods

Visualization methods have been developed as a way to enhance participation and empower interviewees relative to researchers during group data collection ( 7 ). Visualization methods involve asking participants to engage in collective problem-solving of challenges expressed through group production of maps, diagrams, or other images. For example, participants from the community might be asked to sketch a map of their community and to highlight features of relevance to the research topic (e.g., access to health facilities or sites of risk concentrations). Body diagramming is another visualization tool in which community members are asked to depict how and where a health threat affects the human body as a way of understanding folk conceptions of health, disease, treatment, and prevention. Ensuing debate and dialogue regarding construction of images can be recorded and analyzed in conjunction with the visual image itself. Visualization exercises were initially designed to accommodate groups the size of entire communities, but they can work equally well with smaller groups corresponding to the size of FGDs or group KIIs.

Selecting a Sample of Study Participants

Fundamental differences between qualitative and quantitative approaches to research emerge most clearly in the practice of sampling and recruitment of study participants. Qualitative samples are typically small and purposive. In-depth interview informants are usually selected on the basis of unique characteristics or personal experiences that make them exemplary for the study, if not typical in other respects. Key informants are selected for their unique knowledge or influence in the study domain. Focus group mobilization often seeks participants who are typical with respect to others in the community having similar exposure or shared characteristics. Often, however, participants in qualitative studies are selected because they are exceptional rather than simply representative. Their value lies not in their generalizability but in their ability to generate insight into the key questions driving the study.

Determining Sample Size

Sample size determination for qualitative studies also follows a different logic than that used for probability sample surveys. For example, whereas some qualitative methods specify ideal ranges of participants that constitute a valid observation (e.g., focus groups), there are no rules on how many observations it takes to attain valid results. In theory, sample size in qualitative designs should be determined by the saturation principle, where interviews are conducted until additional interviews yield no additional insights into the topic of research ( 8 ). Practically speaking, designing a study with a range in number of interviews is advisable for providing a level of flexibility if additional interviews are needed to reach clear conclusions.
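The saturation principle can be expressed as a simple stopping rule: keep interviewing until recent interviews stop adding new codes to the codebook. The Python sketch below assumes an invented set of interview codes and two arbitrary thresholds; real saturation judgments are qualitative, not mechanical:

```python
# Sketch of the saturation principle as a stopping rule.
# The codes per interview and both thresholds are illustrative assumptions.
interview_codes = [
    {"cost", "distance"},   # interview 1
    {"distance", "trust"},  # interview 2
    {"cost", "trust"},      # interview 3: nothing new
    {"trust"},              # interview 4: nothing new
]

MIN_INTERVIEWS = 2     # assumed lower bound before checking saturation
STOP_AFTER_NO_NEW = 2  # assumed rule: stop after 2 interviews with no new codes

codebook, no_new_streak, conducted = set(), 0, 0
for codes in interview_codes:
    conducted += 1
    new = codes - codebook   # codes not yet in the codebook
    codebook |= codes
    no_new_streak = 0 if new else no_new_streak + 1
    if conducted >= MIN_INTERVIEWS and no_new_streak >= STOP_AFTER_NO_NEW:
        break

print(conducted, sorted(codebook))
```

Planning a range of interviews, as the text advises, amounts to setting these thresholds loosely enough to allow extra interviews when new themes keep emerging.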

Recruiting Study Participants

Recruitment strategies for qualitative studies typically involve some degree of participant self-selection (e.g., advertising in public spaces for interested participants) and purposive selection (e.g., identification of key informants). Purposive selection in community settings often requires authorization from local authorities and assistance from local mobilizers before the informed consent process can begin. Clearly specifying eligibility criteria is crucial for minimizing the tendency of study mobilizers to apply their own filters regarding who reflects the community in the best light. In addition to formal eligibility criteria, character traits (e.g., articulate and interested in participating) and convenience (e.g., not too far away) are legitimate considerations for whom to include in the sample. Accommodations to personality and convenience help to ensure the small number of interviews in a typical qualitative design yields maximum value for minimum investment. This is one reason why random sampling of qualitative informants is not only unnecessary but also potentially counterproductive.

Analysis of qualitative data can be divided into four stages: data management, data condensation, data display, and drawing and verifying conclusions ( 9 ).

Managing Qualitative Data

From the outset, developing a clear organization system for qualitative data is important. Ideally, naming conventions for original data files and subsequent analysis should be recorded in a data dictionary file that includes dates, locations, defining individual or group characteristics, interviewer characteristics, and other defining features. Digital recordings of interviews or visualization products should be reviewed to ensure fidelity of analyzed data to original observations. If ethics agreements require that no names or identifying characteristics be recorded, all individual names must be removed from final transcriptions before analysis begins. If data are analyzed by using textual data analysis software, maintaining careful version control over the data files is crucial, especially when multiple coders are involved.
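A data dictionary of the kind described above can be as simple as a CSV file recording one row per data file. The naming convention and fields in this Python sketch are assumptions for illustration, not a prescribed format:

```python
import csv
import io

# Hypothetical interview files logged in a data dictionary.
# File-naming convention (type_date_site_interviewer) is an assumption.
records = [
    {"file": "IDI_2023-05-01_site-A_f01.txt", "date": "2023-05-01",
     "type": "IDI", "site": "A", "interviewer": "f01"},
    {"file": "FGD_2023-05-03_site-B_m02.txt", "date": "2023-05-03",
     "type": "FGD", "site": "B", "interviewer": "m02"},
]

# Write the dictionary as CSV (to a string here; normally to a file).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["file", "date", "type", "site", "interviewer"])
writer.writeheader()
writer.writerows(records)
data_dictionary_csv = buf.getvalue()
print(data_dictionary_csv)
```

Keeping this index under version control alongside the transcripts makes it easier to audit which files each coder has worked on.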

Condensing Qualitative Data

Condensing refers to the process of selecting, focusing, simplifying, and abstracting the data available at the time of the original observation, then transforming the condensed data into a data set that can be analyzed. In qualitative research, most of the time investment required to complete a study comes after the fieldwork is complete. A single hour of taped individual interview can take a full day to transcribe and additional time to translate if necessary. Group interviews can take even longer because of the difficulty of transcribing active group input. Each stage of data condensation involves multiple decisions that require clear rules and close supervision. A typical challenge is finding the right balance between fidelity to the rhythm and texture of original language and clarity of the translated version in the language of analysis. For example, discussions among groups with little or no education should not emerge after the transcription (and translation) process sounding like university graduates. Judgment must be exercised about which terms should be translated and which terms should be kept in vernacular because there is no appropriate term in English to capture the richness of its meaning.

Displaying Qualitative Data

After the initial condensation, qualitative analysis depends on how the data are displayed. Decisions regarding how data are summarized and laid out to facilitate comparison influence the depth and detail of the investigation’s conclusions. Displays might range from full verbatim transcripts of interviews to bulleted summaries or distilled summaries of interview notes. In a field setting, a useful and commonly used display format is an overview chart in which key themes or research questions are listed in rows in a word processer table or in a spreadsheet and individual informant or group entry characteristics are listed across columns. Overview charts are useful because they allow easy, systematic comparison of results.
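An overview chart of this kind (themes in rows, informant entries in columns) can be built in any spreadsheet, or sketched programmatically as below. The themes, informants, and cell summaries are invented for illustration:

```python
# Sketch of an overview chart: themes as rows, informants as columns.
# All labels and summaries are illustrative assumptions.
themes = ["burial practices", "trust in responders"]
informants = ["KII-01", "KII-02"]
notes = {
    ("burial practices", "KII-01"): "ritual washing essential",
    ("burial practices", "KII-02"): "family-led burial preferred",
    ("trust in responders", "KII-01"): "rumors about treatment centers",
    ("trust in responders", "KII-02"): "",
}

# Assemble rows so each theme can be compared across informants at a glance.
header = ["theme"] + informants
rows = [[t] + [notes.get((t, i), "") for i in informants] for t in themes]
for row in [header] + rows:
    print(" | ".join(cell.ljust(28) for cell in row))
```

Empty cells are informative too: they show at a glance which informants never raised a theme.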

Drawing and Verifying Conclusions

Analyzing qualitative data is an iterative and ideally interactive process that leads to rigorous and systematic interpretation of textual or visual data. At least four common steps are involved:

  • Reading and rereading. The core of qualitative analysis is careful, systematic, and repeated reading of text to identify consistent themes and interconnections emerging from the data. The act of repeated reading inevitably yields new themes, connections, and deeper meanings from the first reading. Reading the full text of interviews multiple times before subdividing according to coded themes is key to appreciating the full context and flow of each interview before subdividing and extracting coded sections of text for separate analysis.
  • Coding. A common technique in qualitative analysis involves developing codes for labeling sections of text for selective retrieval in later stages of analysis and verification. Different approaches can be used for textual coding. One approach, structural coding , follows the structure of the interview guide. Another approach, thematic coding , labels common themes that appear across interviews, whether by design of the topic guide or emerging themes assigned based on further analysis. To avoid the problem of shift and drift in codes across time or multiple coders, qualitative investigators should develop a standard codebook with written definitions and rules about when codes should start and stop. Coding is also an iterative process in which new codes that emerge from repeated reading are layered on top of existing codes. Development and refinement of the codebook is inseparably part of the analysis.
  • Analyzing and writing memos. As codes are being developed and refined, answers to the original research question should begin to emerge. Coding can facilitate that process through selective text retrieval during which similarities within and between coding categories can be extracted and compared systematically. Because no p values can be derived in qualitative analyses to mark the transition from tentative to firm conclusions, standard practice is to write memos to record evolving insights and emerging patterns in the data and how they relate to the original research questions. Writing memos is intended to catalyze further thinking about the data, thus initiating new connections that can lead to further coding and deeper understanding.
  • Verifying conclusions. Analysis rigor depends as much on the thoroughness of the cross-examination and attempt to find alternative conclusions as on the quality of original conclusions. Cross-examining conclusions can occur in different ways. One way is encouraging regular interaction between analysts to challenge conclusions and pose alternative explanations for the same data. Another way is quizzing the data (i.e., retrieving coded segments by using Boolean logic to systematically compare code contents where they overlap with other codes or informant characteristics). If alternative explanations for initial conclusions are more difficult to justify, confidence in those conclusions is strengthened.
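The “quizzing the data” step, retrieving coded segments with Boolean logic, can be sketched as set operations over code labels. The segments and code names below are invented; a real project would retrieve the underlying text as well:

```python
# Sketch of Boolean retrieval over coded segments ("quizzing the data").
# Segments, informant roles, and codes are illustrative assumptions.
segments = [
    {"id": 1, "informant": "nurse", "codes": {"ipc_barrier", "role_conflict"}},
    {"id": 2, "informant": "physician", "codes": {"ipc_barrier", "fatalism"}},
    {"id": 3, "informant": "cleaner", "codes": {"pride_in_hygiene"}},
    {"id": 4, "informant": "nurse", "codes": {"fatalism"}},
]

def query(segments, all_of=frozenset(), none_of=frozenset()):
    """Return segments containing every code in all_of and none in none_of."""
    return [s for s in segments
            if all_of <= s["codes"] and not (none_of & s["codes"])]

# Which segments mention IPC barriers but not fatalism?
hits = query(segments, all_of={"ipc_barrier"}, none_of={"fatalism"})
print([s["id"] for s in hits])
```

Comparing what such queries return against an initial conclusion is one concrete way to hunt for the alternative explanations the verification step calls for.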

Above all, qualitative data analysis requires sufficient time and immersion in the data. Computer textual software programs can facilitate selective text retrieval and quizzing the data, but discerning patterns and arriving at conclusions can be done only by the analysts. This requirement involves intensive reading and rereading, developing codebooks and coding, discussing and debating, revising codebooks, and recoding as needed until clear patterns emerge from the data. Although quality and depth of analysis is usually proportional to the time invested, a number of techniques, including some mentioned earlier, can be used to expedite analysis under field conditions.

  • Detailed notes instead of full transcriptions. Assigning one or two note-takers to an interview can be considered where the time needed for full transcription and translation is not feasible. Even if plans are in place for full transcriptions after fieldwork, asking note-takers to submit organized summary notes is a useful technique for getting real-time feedback on interview content and making adjustments to topic guides or interviewer training as needed.
  • Summary overview charts for thematic coding. (See discussion under “Displaying Data.”) If there is limited time for full transcription and/or systematic coding of text interviews using textual analysis software in the field, an overview chart is a useful technique for rapid manual coding.
  • Thematic extract files. This is a slightly expanded version of manual thematic coding that is useful when full transcriptions of interviews are available. With use of a word processing program, files can be sectioned according to themes, or separate files can be created for each theme. Relevant extracts from transcripts or analyst notes can be copied and pasted into files or sections of files corresponding to each theme. This is particularly useful for storing appropriate quotes that can be used to illustrate thematic conclusions in final reports or manuscripts.
  • Teamwork. Qualitative analysis can be performed by a single analyst, but it is usually beneficial to involve more than one. Qualitative conclusions involve subjective judgment calls. Having more than one coder or analyst working on a project enables more interactive discussion and debate before reaching consensus on conclusions.

Computer-assisted textual analysis becomes worth its start-up costs when the investigation calls for:

  • Systematic coding.
  • Selective retrieval of coded segments.
  • Verifying conclusions (“quizzing the data”).
  • Working on larger data sets with multiple separate files.
  • Working in teams with multiple coders to allow intercoder reliability to be measured and monitored.
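Intercoder reliability for a single code is commonly measured with Cohen’s kappa, which corrects raw agreement for the agreement expected by chance. This Python sketch uses invented ratings from two hypothetical coders (1 = code applied to a segment, 0 = not applied):

```python
# Illustrative ratings of 8 text segments by two hypothetical coders.
coder_a = [1, 1, 0, 1, 0, 0, 1, 0]
coder_b = [1, 0, 0, 1, 0, 1, 1, 0]

def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters over the same segments."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each coder's marginal rates.
    p_a1, p_b1 = sum(a) / n, sum(b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(coder_a, coder_b)
print(round(kappa, 2))
```

Low kappa values are usually a signal to revisit the codebook definitions and recode, rather than a verdict on the coders.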

The most widely used software packages (e.g., NVivo [QSR International Pty. Ltd., Melbourne, VIC, Australia] and ATLAS.ti [Scientific Software Development GmbH, Berlin, Germany]) evolved to include sophisticated analytic features covering a wide array of applications but are relatively expensive in terms of license cost and initial investment in time and training. A promising development is the advent of free or low-cost Web-based services (e.g., Dedoose [Sociocultural Research Consultants LLC, Manhattan Beach, CA]) that have many of the same analytic features on a more affordable subscription basis and that enable local research counterparts to remain engaged through the analysis phase (see Teamwork criteria). The start-up costs of computer-assisted analysis need to be weighed against their analytic benefits, which tend to increase with the volume and complexity of data to be analyzed. For rapid situational analyses or small-scale qualitative studies (e.g., fewer than 30 observations as an informal rule of thumb), manual coding and analysis using word processing or spreadsheet programs is faster and sufficient to enable rigorous analysis and verification of conclusions.

Qualitative methods belong to a branch of social science inquiry that emphasizes the importance of context, subjective meanings, and motivations in understanding human behavior patterns. Qualitative approaches definitionally rely on open-ended, semistructured, non-numeric strategies for asking questions and recording responses. Conclusions are drawn from systematic visual or textual analysis involving repeated reading, coding, and organizing information into structured and emerging themes. Because textual analysis is relatively time- and skill-intensive, qualitative samples tend to be small and purposively selected to yield the maximum amount of information from the minimum amount of data collection. Although qualitative approaches cannot provide representative or generalizable findings in a statistical sense, they can offer an unparalleled level of detail, nuance, and naturalistic insight into the chosen subject of study. Qualitative methods enable investigators to “hear the voice” of the researched in a way that questionnaire methods, even with the occasional open-ended response option, cannot.

Whether or when to use qualitative methods in field epidemiology studies ultimately depends on the nature of the public health question to be answered. Qualitative approaches make sense when a study question about behavior patterns or program performance leads with why, why not, or how. Similarly, they are appropriate when the answer to the study question depends on understanding the problem from the perspective of social actors in real-life settings or when the object of study cannot be adequately captured, quantified, or categorized through a battery of closed-ended survey questions (e.g., stigma or the foundation of health beliefs). Another justification for qualitative methods occurs when the topic is especially sensitive or subject to strong social desirability biases that require developing trust with the informant and persistent probing to reach the truth. Finally, qualitative methods make sense when the study question is exploratory in nature, where this approach gives the investigator the freedom and flexibility to adjust topic guides and probe beyond the original topics.

Given that the conditions just described probably apply more often than not in everyday field epidemiology, it might be surprising that such approaches are not incorporated more routinely into standard epidemiologic training. Part of the answer might have to do with the subjective element in qualitative sampling and analysis that seems at odds with core scientific values of objectivity. Part of it might have to do with the skill requirements for good qualitative interviewing, which are generally more difficult to find than those required for routine survey interviewing.

For the field epidemiologist unfamiliar with qualitative study design, it is important to emphasize that obtaining important insights from applying basic approaches is possible, even without a seasoned team of qualitative researchers on hand to do the work. The flexibility of qualitative methods also tends to make them forgiving with practice and persistence. Beyond the required study approvals and ethical clearances, the basic essential requirements for collecting qualitative data in field settings start with an interviewer having a strong command of the research question, basic interactive and language skills, and a healthy sense of curiosity, armed with a simple open-ended topic guide and a tape recorder or note-taker to capture the key points of the discussion. Readily available manuals on qualitative study design, methods, and analysis can provide additional guidance to improve the quality of data collection and analysis.

  • Patton MQ. Qualitative research and evaluation methods: integrating theory and practice . 4th ed. Thousand Oaks, CA: Sage; 2015.
  • Hennink M, Hutter I, Bailey A. Qualitative research methods . Thousand Oaks, CA: Sage; 2010.
  • Lincoln YS, Guba EG. The constructivist credo . Walnut Creek, CA: Left Coast Press; 2013.
  • Mack N, Woodsong C, MacQueen KM, Guest G, Namey E. Qualitative research methods: a data collectors field guide. https://www.fhi360.org/sites/default/files/media/documents/Qualitative%20Research%20Methods%20-%20A%20Data%20Collector%27s%20Field%20Guide.pdf
  • Kvale S, Brinkmann S. Interviews: learning the craft of qualitative research . Thousand Oaks, CA: Sage; 2009:230–43.
  • Krueger RA, Casey MA. Focus groups: a practical guide for applied research . Thousand Oaks, CA: Sage; 2014.
  • Margolis E, Pauwels L. The Sage handbook of visual research methods . Thousand Oaks, CA: Sage; 2011.
  • Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research. 2010;11(3).
  • Miles MB, Huberman AM, Saldana J. Qualitative data analysis: a methods sourcebook . 3rd ed. Thousand Oaks, CA: Sage; 2014.
  • Silver C, Lewins A. Using software in qualitative research: a step-by-step guide . Thousand Oaks, CA: Sage; 2014.

What is Qualitative Data Analysis?

Understanding qualitative data analysis is important for researchers seeking to uncover nuanced insights from non-numerical data. By exploring qualitative data analysis, you can grasp its significance in research, understand its methodologies, and determine when and how to apply it effectively to extract meaningful insights from qualitative data.

This article aims to provide a comprehensive guide to qualitative data analysis, covering its importance, methodologies, steps, advantages, disadvantages, and applications.


Table of Contents

  • Understanding Qualitative Data Analysis
  • Importance of Qualitative Data Analysis
  • Steps to Perform Qualitative Data Analysis
    1. Craft Clear Research Questions
    2. Gather Rich Customer Insights
    3. Organize and Categorize Data
    4. Uncover Themes and Patterns: Coding
    5. Make and Validate Hypotheses
  • Methodologies in Qualitative Data Analysis
  • Advantages of Qualitative Data Analysis
  • Disadvantages of Qualitative Data Analysis
  • When Qualitative Data Analysis Is Used
  • Applications of Qualitative Data Analysis

Understanding Qualitative Data Analysis

Qualitative data analysis is the process of systematically examining and interpreting qualitative data (such as text, images, videos, or observations) to identify patterns, themes, and meanings within the data. Unlike quantitative data analysis, which focuses on numerical measurements and statistical techniques, qualitative data analysis emphasizes understanding the context, nuances, and subjective perspectives embedded in the data.

Importance of Qualitative Data Analysis

Qualitative data analysis is crucial because it goes beyond cold, hard numbers to provide a richer understanding of why and how things happen. It matters for several reasons:

  • Understanding Complexity and Unveiling the “Why”: Quantitative data tells you “what” happened (e.g., sales figures), but qualitative analysis sheds light on the reasons behind it (e.g., customer comments on product features).
  • Contextual Insight: Numbers don’t exist in a vacuum. Qualitative data provides context to quantitative findings, making the bigger picture clearer. Faced with high customer churn, for example, interviews might reveal missing functionality or a confusing interface.
  • Uncovers Emotions and Opinions: Qualitative data taps into the human element. Surveys with open-ended questions or focus groups can reveal emotions, opinions, and motivations that cannot be captured by numbers alone.
  • Informs Better Decisions: By understanding the “why” and “how” behind customer behavior or employee sentiment, companies can make more informed decisions about product development, marketing strategies, and internal processes.
  • Generates New Ideas: Qualitative analysis can spark fresh ideas and hypotheses. For example, analyzing customer interviews may surface common themes that lead to entirely new product features.
  • Complements Quantitative Data: Both data types are valuable, and they work best together. Imagine combining website traffic data (quantitative) with user feedback (qualitative) to understand the user experience on a particular page.

In essence, qualitative data analysis bridges the gap between the what and the why, providing a nuanced understanding that empowers better decision making.

Steps to Perform Qualitative Data Analysis

To perform qualitative data analysis, follow the steps below:

1. Craft Clear Research Questions

Before diving into analysis, it is critical to define clear and specific research questions. These questions should articulate what you want to learn from the data and guide your analysis toward actionable insights. For instance, asking “How do employees perceive the organizational culture within our company?” focuses the analysis on employees’ perceptions of organizational culture in a specific organization. By exploring employees’ perspectives, attitudes, and experiences, researchers can uncover valuable insights into workplace dynamics, communication patterns, leadership styles, and employee satisfaction levels.

2. Gather Rich Customer Insights

There are numerous ways to collect qualitative data, each offering unique insights into customer perceptions and experiences.

  • User Feedback: In-app surveys, app ratings, and social media comments provide direct feedback from users about their experiences with the product or service.
  • In-Depth Interviews: One-on-one interviews allow for deeper exploration of particular topics and offer rich, detailed insights into individuals’ perspectives and behaviors.
  • Focus Groups: Facilitated group discussions enable the exploration of diverse viewpoints and let participants build on each other’s ideas.
  • Review Sites: Analyzing customer reviews on platforms like Amazon, Yelp, or app stores can reveal common pain points, satisfaction levels, and areas for improvement.
  • NPS Follow-Up Questions: Following up on Net Promoter Score (NPS) surveys with open-ended questions lets customers elaborate on their scores and adds qualitative context to quantitative ratings.

3. Organize and Categorize Data

Efficient data organization is crucial for effective analysis and interpretation.

  • Centralize: Gather all qualitative data, including recordings, notes, and transcripts, into a central repository for easy access and management.
  • Categorize by Research Question: Group data based on the specific research questions it addresses. This organizational structure helps maintain focus during analysis and ensures that insights are aligned with the research objectives.

4. Uncover Themes and Patterns: Coding

Coding is a systematic process of assigning labels or categories to segments of qualitative data to uncover underlying themes and patterns.

  • Theme Identification: Themes are overarching concepts or ideas that emerge from the data. During coding, researchers identify and label segments of data that relate to these themes, enabling the identification of central concepts in the dataset.
  • Pattern Detection: Patterns are relationships or connections between different elements in the data. By analyzing coded segments, researchers can detect trends, repetitions, or cause-and-effect relationships, providing deeper insights into customer perceptions and behaviors.
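A first pass at this coding step can be sketched in a few lines of Python. The codebook and responses below are invented for illustration; in practice, keyword dictionaries are only a starting point, and codes are refined iteratively as themes emerge from repeated reading:

```python
from collections import Counter

# Hypothetical codebook: theme -> keywords that signal it.
CODEBOOK = {
    "pricing":   ["price", "expensive", "cost"],
    "usability": ["confusing", "easy", "intuitive"],
    "support":   ["support", "helpdesk", "response"],
}

def code_response(text, codebook=CODEBOOK):
    """Return the set of themes whose keywords appear in one response."""
    lowered = text.lower()
    return {theme for theme, words in codebook.items()
            if any(w in lowered for w in words)}

def theme_counts(responses, codebook=CODEBOOK):
    """Tally how many responses touch each theme (pattern detection)."""
    counts = Counter()
    for r in responses:
        counts.update(code_response(r, codebook))
    return counts

responses = [
    "Too expensive for what it does.",
    "The dashboard is confusing, but support was quick.",
    "Easy to set up and the price is fair.",
]
```

Calling `theme_counts(responses)` yields a frequency table of themes, which is the raw material for the pattern-detection step described above.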

5. Make and Validate Hypotheses

Based on the identified themes and patterns, researchers can formulate hypotheses and draw conclusions about customer experiences and preferences.

  • Hypothesis Formulation: Hypotheses are tentative explanations or predictions based on observed patterns in the data. Researchers formulate hypotheses to explain why certain themes or patterns emerge and to make predictions about their impact on customer behavior.
  • Validation: Researchers validate hypotheses by segmenting the data on different criteria (e.g., demographic factors, usage patterns) and examining differences or relationships within the data. This process strengthens the validity of findings and provides evidence to support conclusions drawn from qualitative analysis.
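The validation step can be sketched the same way. In this illustrative snippet (the segments, themes, and records are all hypothetical), a hypothesis about one customer segment is checked by comparing theme frequencies across segments:

```python
# Hypothetical coded data: (segment, themes found in that response).
coded = [
    ("new_user",   {"usability", "pricing"}),
    ("new_user",   {"usability"}),
    ("new_user",   {"pricing"}),
    ("power_user", {"support"}),
    ("power_user", {"pricing"}),
    ("power_user", {"support", "usability"}),
]

def theme_rate(records, segment, theme):
    """Share of a segment's responses that mention a given theme."""
    seg = [themes for s, themes in records if s == segment]
    return sum(theme in t for t in seg) / len(seg)

# Hypothesis: new users raise usability issues more often than power users.
new_rate = theme_rate(coded, "new_user", "usability")      # 2 of 3 responses
power_rate = theme_rate(coded, "power_user", "usability")  # 1 of 3 responses
hypothesis_supported = new_rate > power_rate
```

With qualitative sample sizes this small, such a comparison is evidence for a conclusion rather than a statistical test; the point is to quiz the data systematically instead of relying on impressions.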

Methodologies in Qualitative Data Analysis

There are five common methodologies used in qualitative data analysis:

  • Thematic Analysis: Thematic analysis involves systematically identifying and analyzing recurring themes or patterns within qualitative data. Researchers begin by coding the data, breaking it down into meaningful segments, and then categorizing these segments based on shared characteristics. Through iterative analysis, themes are developed and refined, allowing researchers to gain insight into the underlying phenomena being studied.
  • Content Analysis: Content analysis focuses on analyzing textual data to identify and quantify specific patterns or themes. Researchers code the data based on predefined categories or themes, allowing for systematic organization and interpretation of the content. By examining how frequently certain themes occur and how they are represented in the data, researchers can draw conclusions relevant to their research objectives.
  • Narrative Analysis: Narrative analysis delves into the story within qualitative data, focusing on its structure, content, and meaning. Researchers examine the narrative to understand its context and perspective, exploring how individuals construct and communicate their experiences through storytelling. By analyzing the nuances and intricacies of the narrative, researchers can uncover underlying themes and gain a deeper understanding of the phenomena being studied.
  • Grounded Theory: Grounded theory is an iterative approach to developing and testing theoretical frameworks based on empirical data. Researchers collect, code, and analyze data without preconceived hypotheses, allowing theories to emerge from the data itself. Through constant comparison and theoretical sampling, researchers validate and refine theories, leading to a deeper understanding of the phenomenon under investigation.
  • Phenomenological Analysis: Phenomenological analysis aims to explore and understand the lived experiences and perspectives of individuals. Researchers analyze and interpret the meanings, essences, and structures of these experiences, identifying common themes and patterns across individual accounts. By immersing themselves in participants’ subjective experiences, researchers gain insight into the underlying phenomena from the participants’ perspectives, enriching our understanding of human behavior.
Advantages of Qualitative Data Analysis

  • Richness and Depth: Qualitative data analysis allows researchers to explore complex phenomena in depth, capturing the richness and complexity of human experiences, behaviors, and social processes.
  • Flexibility: Qualitative techniques offer flexibility in data collection and analysis, allowing researchers to adapt their approach based on emergent themes and evolving research questions.
  • Contextual Understanding: Qualitative analysis provides insight into the context and meaning of data, helping researchers understand the social, cultural, and historical factors that shape human behavior and interactions.
  • Subjective Perspectives: Qualitative methods enable researchers to explore subjective perspectives, beliefs, and experiences, offering a nuanced understanding of individuals’ thoughts, emotions, and motivations.
  • Theory Generation: Qualitative data analysis can lead to the generation of new theories or hypotheses, as researchers uncover patterns, themes, and relationships in the data that may not have been previously recognized.
Disadvantages of Qualitative Data Analysis

  • Subjectivity: Qualitative data analysis is inherently subjective, as interpretations can be influenced by researchers’ biases, perspectives, and preconceptions.
  • Time-Intensive: Qualitative data analysis can be time-consuming, requiring extensive data collection, transcription, coding, and interpretation.
  • Generalizability: Findings from qualitative research may not be easily generalizable to larger populations, as the focus is often on understanding specific contexts and experiences rather than making statistical inferences.
  • Validity and Reliability: Ensuring the validity and reliability of qualitative findings can be challenging, as there are fewer standardized methods for assessing and establishing rigor compared with quantitative research.
  • Data Management: Managing and organizing qualitative data, including transcripts, field notes, and multimedia recordings, can be complex and requires careful documentation and storage.
When Qualitative Data Analysis Is Used

  • Exploratory Research: Qualitative data analysis is well suited to exploratory research, where the goal is to generate hypotheses, theories, or insights into complex phenomena.
  • Understanding Context: Qualitative techniques are valuable for understanding the context and meaning of data, particularly in studies where social, cultural, or historical factors are important.
  • Subjective Experiences: Qualitative analysis is ideal for exploring subjective experiences, beliefs, and perspectives, providing a deeper understanding of individuals’ thoughts, feelings, and behaviors.
  • Complex Phenomena: Qualitative methods are effective for studying complex phenomena that cannot easily be quantified or measured, allowing researchers to capture the richness and depth of human experiences and interactions.
  • Complementary to Quantitative Data: Qualitative data analysis can complement quantitative research by providing context, depth, and insight into the meanings behind numerical data, enriching our understanding of research findings.
Applications of Qualitative Data Analysis

  • Social Sciences: Qualitative data analysis is widely used in the social sciences to understand human behavior, attitudes, and perceptions. Researchers employ qualitative methods to delve into the complexities of social interactions, cultural dynamics, and societal norms. By analyzing qualitative data such as interviews, observations, and textual sources, social scientists gain insight into the intricate nuances of human relationships, identity formation, and societal structures.
  • Psychology: In psychology, qualitative data analysis is instrumental in exploring and interpreting individual experiences, emotions, and motivations. Qualitative methods such as in-depth interviews, focus groups, and narrative analysis allow psychologists to delve deep into the subjective experiences of individuals. This approach helps uncover underlying meanings, beliefs, and emotions, shedding light on psychological processes, coping mechanisms, and personal narratives.
  • Anthropology: Anthropologists use qualitative data analysis to study cultural practices, beliefs, and social interactions within diverse groups and societies. Through ethnographic research strategies such as participant observation and interviews, anthropologists immerse themselves in the cultural contexts of different communities. Qualitative analysis allows them to uncover the symbolic meanings, rituals, and social structures that shape cultural identity and behavior.
  • Qualitative Market Research: In market research, qualitative data analysis is essential for exploring consumer preferences, perceptions, and behaviors. Qualitative techniques such as focus groups, in-depth interviews, and ethnographic research allow market researchers to gain a deeper understanding of customer motivations, decision-making processes, and brand perceptions. By analyzing qualitative data, marketers can identify emerging trends, uncover unmet needs, and inform product development and marketing strategies.
  • Healthcare: Qualitative data analysis plays a crucial role in healthcare research by investigating patient experiences, satisfaction, and healthcare practices. Researchers use qualitative methods such as interviews, observations, and patient narratives to explore the subjective experiences of individuals within healthcare settings. Qualitative analysis helps uncover patient perspectives on healthcare services, treatment outcomes, and quality of care, facilitating improvements in patient-centered care delivery and healthcare policy.

Qualitative data analysis brings depth, context, and understanding to research endeavors, enabling researchers to uncover rich insights and explore complex phenomena through the systematic examination of non-numerical data.


Not all data are created equal; some are structured, but most of them are unstructured. Structured and unstructured data are sourced, collected and scaled in different ways and each one resides in a different type of database.

In this article, we will take a deep dive into both types so that you can get the most out of your data.

Structured data—typically categorized as quantitative data—is highly organized and easily decipherable by  machine learning algorithms .  Developed by IBM® in 1974 , structured query language (SQL) is the programming language used to manage structured data. By using a  relational (SQL) database , business users can quickly input, search and manipulate structured data.
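As a minimal sketch of this “rows and columns” relational model (the table schema and values below are invented for illustration), structured data can be defined and queried with SQL using Python’s built-in sqlite3 module:

```python
import sqlite3

# In-memory relational database with a predefined schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, city TEXT, ltv REAL)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [("Ada", "London", 1200.0), ("Grace", "New York", 800.0),
     ("Alan", "London", 450.0)],
)

def high_value_in(city, minimum=500.0):
    """Ad hoc query: the rigid schema is what makes this filter trivial."""
    rows = conn.execute(
        "SELECT name FROM customers WHERE city = ? AND ltv >= ? ORDER BY name",
        (city, minimum),
    ).fetchall()
    return [name for (name,) in rows]
```

Because every row conforms to the schema, filtering, sorting, and aggregating require no data-science expertise, which is exactly the ease-of-use benefit discussed below.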

Examples of structured data include dates, names, addresses, credit card numbers, among others. Their benefits are tied to ease of use and access, while liabilities revolve around data inflexibility:

  • Easily used by machine learning (ML) algorithms:  The specific and organized architecture of structured data eases the manipulation and querying of ML data.
  • Easily used by business users:  Structured data do not require an in-depth understanding of different types of data and how they function. With a basic understanding of the topic relative to the data, users can easily access and interpret the data.
  • Accessible by more tools:  Since structured data predates unstructured data, there are more tools available for using and analyzing structured data.
  • Limited usage:  Data with a predefined structure can only be used for its intended purpose, which limits its flexibility and usability.
  • Limited storage options:  Structured data are usually stored in data storage systems with rigid schemas (for example, “ data warehouses ”). Therefore, changes in data requirements necessitate an update of all structured data, which leads to a massive expenditure of time and resources.
Tools commonly used to manage structured data include:

  • OLAP: Performs high-speed, multidimensional data analysis from unified, centralized data stores.
  • SQLite: Implements a self-contained, serverless, zero-configuration, transactional relational database engine.
  • MySQL: Embeds data into mass-deployed software, particularly mission-critical, heavy-load production systems.
  • PostgreSQL: Supports SQL and JSON querying as well as high-tier programming languages (C/C++, Java, Python, among others).
Common structured data use cases include:

  • Customer relationship management (CRM): CRM software runs structured data through analytical tools to create datasets that reveal customer behavior patterns and trends.
  • Online booking:  Hotel and ticket reservation data (for example, dates, prices, destinations, among others.) fits the “rows and columns” format indicative of the pre-defined data model.
  • Accounting:  Accounting firms or departments use structured data to process and record financial transactions.

Unstructured data, typically categorized as qualitative data, cannot be processed and analyzed through conventional data tools and methods. Since unstructured data does not have a predefined data model, it is best managed in  non-relational (NoSQL) databases . Another way to manage unstructured data is to use  data lakes  to preserve it in raw form.

The importance of unstructured data is rapidly increasing. Recent projections indicate that unstructured data makes up over 80% of all enterprise data, while 95% of businesses prioritize unstructured data management.

Examples of unstructured data include text, mobile activity, social media posts, Internet of Things (IoT) sensor data, among others. Their benefits involve advantages in format, speed and storage, while liabilities revolve around expertise and available resources:

  • Native format:  Unstructured data, stored in its native format, remains undefined until needed. Its adaptability increases file formats in the database, which widens the data pool and enables data scientists to prepare and analyze only the data they need.
  • Fast accumulation rates:  Since there is no need to predefine the data, it can be collected quickly and easily.
  • Data lake storage:  Allows for massive storage and pay-as-you-use pricing, which cuts costs and eases scalability.
  • Requires expertise:  Due to its undefined or non-formatted nature, data science expertise is required to prepare and analyze unstructured data. This is beneficial to data analysts but alienates unspecialized business users who might not fully understand specialized data topics or how to utilize their data.
  • Specialized tools:  Specialized tools are required to manipulate unstructured data, which limits product choices for data managers.
Tools commonly used to manage unstructured data include:

  • MongoDB: Uses flexible documents to process data for cross-platform applications and services.
  • DynamoDB: Delivers single-digit millisecond performance at any scale through built-in security, in-memory caching and backup and restore.
  • Hadoop :  Provides distributed processing of large data sets using simple programming models and no formatting requirements.
  • Azure :  Enables agile cloud computing for creating and managing apps through Microsoft’s data centers.
Common unstructured data use cases include:

  • Data mining: Enables businesses to use unstructured data to identify consumer behavior, product sentiment and purchasing patterns to better accommodate their customer base.
  • Predictive data analytics: Alerts businesses to important activity ahead of time so they can properly plan and adjust to significant market shifts.
  • Chatbots: Perform text analysis to route customer questions to the appropriate answer sources.

While structured (quantitative) data gives a “birds-eye view” of customers, unstructured (qualitative) data provides a deeper understanding of customer behavior and intent. Let’s explore some of the key areas of difference and their implications:

  • Sources:  Structured data is sourced from GPS sensors, online forms, network logs, web server logs,  OLTP systems , among others; whereas unstructured data sources include email messages, word-processing documents, PDF files, and others.
  • Forms: Structured data consists of numbers and values, whereas unstructured data consists of sensor data, text files, audio and video files, among others.
  • Models:  Structured data has a predefined data model and is formatted to a set data structure before being placed in data storage (for example, schema-on-write), whereas unstructured data is stored in its native format and not processed until it is used (for example, schema-on-read).
  • Storage: Structured data is stored in tabular formats (for example, Excel sheets or SQL databases) that require less storage space. It can be stored in data warehouses, which makes it highly scalable. Unstructured data, on the other hand, is stored as media files or in NoSQL databases, which require more space. It can be stored in data lakes, which makes it difficult to scale.
  • Uses:  Structured data is used in machine learning (ML) and drives its algorithms, whereas unstructured data is used in  natural language processing  (NLP) and text mining.

Semi-structured data (for example, JSON, CSV, XML) is the “bridge” between structured and unstructured data. It does not have a predefined data model and is more complex than structured data, yet easier to store than unstructured data.

Semi-structured data uses “metadata” (for example, tags and semantic markers) to identify specific data characteristics and scale data into records and preset fields. Metadata ultimately enables semi-structured data to be better cataloged, searched and analyzed than unstructured data.

  • Example of metadata usage:  An online article displays a headline, a snippet, a featured image, image alt-text, slug, among others, which helps differentiate one piece of web content from similar pieces.
  • Example of semi-structured data vs. structured data:  A tab-delimited file containing customer data versus a database containing CRM tables.
  • Example of semi-structured data vs. unstructured data:  A tab-delimited file versus a list of comments from a customer’s Instagram.
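The contrast above can be sketched side by side. In this illustrative snippet (field names and values are invented), the same customer comment is represented as a structured row, as a semi-structured JSON document whose metadata tags make it easy to catalog and filter, and as raw unstructured text:

```python
import json

# Structured: fixed columns, ready for a SQL table (id, date, rating).
structured_row = ("C-1042", "2024-03-01", 2)

# Semi-structured: JSON whose metadata tags describe the content.
semi_structured = json.dumps({
    "customer_id": "C-1042",
    "tags": ["mobile", "checkout"],  # metadata enables cataloging/search
    "comment": "Checkout keeps freezing on my phone.",
})

# Unstructured: free text with no predefined data model.
unstructured = "Checkout keeps freezing on my phone."

def find_by_tag(documents, tag):
    """Metadata makes semi-structured data filterable without full-text search."""
    return [d for d in map(json.loads, documents) if tag in d.get("tags", [])]
```

Filtering the JSON documents by tag works without reading the comment text at all, which is precisely the advantage metadata gives semi-structured data over unstructured data.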

Recent developments in  artificial intelligence  (AI) and machine learning (ML) are driving the future wave of data, which is enhancing business intelligence and advancing industrial innovation. In particular, the data formats and models that are covered in this article are helping business users to do the following:

  • Analyze digital communications for compliance:  Pattern recognition and email threading analysis software that can search email and chat data for potential noncompliance.
  • Track high-volume customer conversations in social media:  Text analytics and sentiment analysis that enables monitoring of marketing campaign results and identifying online threats.
  • Gain new marketing intelligence:  ML analytics tools that can quickly cover massive amounts of data to help businesses analyze customer behavior.

Furthermore, smart and efficient usage of data formats and models can help you with the following:

  • Understand customer needs at a deeper level to better serve them
  • Create more focused and targeted marketing campaigns
  • Track current metrics and create new ones
  • Create better product opportunities and offerings
  • Reduce operational costs

Whether you are a seasoned data expert or a novice business owner, being able to handle all forms of data is key to your success. By combining structured, semi-structured and unstructured data, you can build a data management practice that ultimately benefits your mission.


To better understand data storage options for whatever kind of data best serves you, check out IBM Cloud Databases

