Document Analysis as a Qualitative Research Method

Qualitative Research Journal

ISSN : 1443-9883

Article publication date: 3 August 2009

This article examines the function of documents as a data source in qualitative research and discusses document analysis procedure in the context of actual research experiences. Targeted to research novices, the article takes a nuts‐and‐bolts approach to document analysis. It describes the nature and forms of documents, outlines the advantages and limitations of document analysis, and offers specific examples of the use of documents in the research process. The application of document analysis to a grounded theory study is illustrated.

Keywords:

  • Content analysis
  • Grounded theory
  • Thematic analysis
  • Triangulation

Bowen, G.A. (2009), "Document Analysis as a Qualitative Research Method", Qualitative Research Journal, Vol. 9 No. 2, pp. 27-40. https://doi.org/10.3316/QRJ0902027

Emerald Group Publishing Limited

Copyright © 2009, Emerald Group Publishing Limited



Neurological Research and Practice (2020)

How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data: Not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in “research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)” [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCTs) is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others, and that choosing a “lesser” design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – “questions before methods” [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as “a complex, multicomponent intervention – essentially a process of social change” susceptible to a range of different context factors including leadership or organisation history. According to him, “[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect” [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for “these specific applications, (...) are not compromises in learning how to improve; they are superior” [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Fig. 1: Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider or researcher-centred bias often found in written surveys, which, by their nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim of identifying possible causes of delay and/or other causes of sub-optimal treatment outcomes. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet), do they actively disagree with them, or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig. 2.

Fig. 2: Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].
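The paper points to dedicated software (NVivo, MAXQDA, ATLAS.ti) for this step. Purely to make the idea that “coding makes raw data sortable” concrete, here is a minimal, hypothetical Python sketch: every source name, segment text and code in it is invented, and the data structure is an illustrative assumption, not how any of these tools work internally.

```python
# Minimal sketch of what coding does conceptually: each segment of a protocol
# or transcript is tagged with one or more codes, which makes the raw data
# sortable and retrievable by topic across data sources. All data are invented.

from dataclasses import dataclass, field

@dataclass
class Segment:
    source: str   # e.g. an interview transcript or observation protocol
    text: str     # the raw sentence or paragraph
    codes: list = field(default_factory=list)  # short descriptors from the coder

segments = [
    Segment("staff_interview_01",
            "The tele-neurology consult was delayed by the login procedure.",
            ["tele-neurology consultation", "delay"]),
    Segment("emergency_room_observation_1",
            "Nurse starts the video link before the patient arrives.",
            ["tele-neurology consultation", "preparation"]),
    Segment("sop_stroke_unit",
            "Consultations are documented in the central system.",
            ["documentation"]),
]

def segments_with_code(segments, code):
    """Extract every segment tagged with a given code, across all sources."""
    return [s for s in segments if code in s.codes]

for s in segments_with_code(segments, "tele-neurology consultation"):
    print(f"[{s.source}] {s.text}")
```

Once segments are tagged like this, the later steps of grouping and categorising codes amount to operating on these tags rather than on the raw text.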

Fig. 3: From data collection to data analysis

Attributions for icons: see Fig. 2; also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods […] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4.

Fig. 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help to inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics about which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Reporting checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and the population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
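The stopping logic of this collect-analyse-collect cycle can be pictured as a loop. The sketch below is only an illustration of that structure, not a substitute for the research team’s judgement: collect_batch and extract_codes are hypothetical placeholders standing in for the actual fieldwork and analysis.

```python
# Hedged sketch of iterative sampling until saturation: collect a small batch
# of data, analyse it, and stop once a batch yields no relevant new codes.
# In practice, "no new information" is a team judgement, not a mechanical count.

def reached_saturation(collect_batch, extract_codes, max_batches=20):
    known_codes = set()
    for batch_number in range(max_batches):
        batch = collect_batch(batch_number)        # e.g. five more interviews
        new_codes = extract_codes(batch) - known_codes
        if not new_codes:                          # nothing relevant and new
            return True                            # saturation reached
        known_codes |= new_codes                   # revise code list, keep sampling
    return False                                   # stopped by resources instead
```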

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.
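A minimal sketch of that comparison step, with invented segment identifiers and codes: segments where the two coders’ code sets differ are flagged for the consolidation discussion.

```python
# Illustrative co-coding comparison: two researchers code the same transcript
# segments independently; disagreements are listed for team discussion.
# All segment identifiers and codes are invented.

coder_a = {
    "interview_01:seg_01": {"delay", "transportation"},
    "interview_01:seg_02": {"staff shortage"},
}
coder_b = {
    "interview_01:seg_01": {"delay"},
    "interview_01:seg_02": {"staff shortage"},
}

for segment_id in sorted(set(coder_a) | set(coder_b)):
    a = coder_a.get(segment_id, set())
    b = coder_b.get(segment_id, set())
    if a != b:
        # the symmetric difference (a ^ b) is what needs discussing
        print(f"{segment_id}: discuss {a ^ b} (A: {a}, B: {b})")
```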

Member checking

Member checking, also called respondent validation, refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of a too large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants and collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
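If such a score is reported, one commonly used measure (an illustration here, not a recommendation of the paper) is Cohen’s kappa, which corrects the raw agreement between two coders for the agreement expected by chance. A self-contained sketch for a binary decision (code applies to a segment or not), with invented ratings:

```python
# Cohen's kappa for two raters over the same items: (p_o - p_e) / (1 - p_e),
# where p_o is observed agreement and p_e is chance agreement from each
# rater's marginal category frequencies. Undefined when p_e equals 1.

def cohens_kappa(ratings_a, ratings_b):
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# 1 = code applies to the segment, 0 = it does not (invented example ratings)
print(cohens_kappa([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 1]))  # ≈ 0.33
```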

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Table 1. Take-away points

Authors' contributions: LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final version.

Funding: No external funding.

Competing interests: The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Document analysis in health policy research: the READ approach


Sarah L Dalglish, Hina Khalid, Shannon A McMahon, Document analysis in health policy research: the READ approach, Health Policy and Planning, Volume 35, Issue 10, December 2020, Pages 1424–1431, https://doi.org/10.1093/heapol/czaa064


Document analysis is one of the most commonly used and powerful methods in health policy research. While existing qualitative research manuals offer direction for conducting document analysis, there has been little specific discussion about how to use this method to understand and analyse health policy. Drawing on guidance from other disciplines and our own research experience, we present a systematic approach for document analysis in health policy research called the READ approach: (1) ready your materials, (2) extract data, (3) analyse data and (4) distil your findings. We provide practical advice on each step, with consideration of epistemological and theoretical issues such as the socially constructed nature of documents and their role in modern bureaucracies. We provide examples of document analysis from two case studies from our work in Pakistan and Niger in which documents provided critical insight and advanced empirical and theoretical understanding of a health policy issue. Coding tools for each case study are included as Supplementary Files to inspire and guide future research. These case studies illustrate the value of rigorous document analysis to understand policy content and processes and discourse around policy, in ways that are either not possible using other methods, or greatly enrich other methods such as in-depth interviews and observation. Given the central nature of documents to health policy research and importance of reading them critically, the READ approach provides practical guidance on gaining the most out of documents and ensuring rigour in document analysis.

Key Messages

  • Rigour in qualitative research is judged partly by the use of deliberate, systematic procedures; however, little specific guidance is available for analysing documents, a nonetheless common method in health policy research.
  • Document analysis is useful for understanding policy content across time and geographies, documenting processes, triangulating with interviews and other sources of data, understanding how information and ideas are presented formally, and understanding issue framing, among other purposes.
  • The READ (Ready materials, Extract data, Analyse data, Distil) approach provides a step-by-step guide to conducting document analysis for qualitative policy research.
  • The READ approach can be adapted to different purposes and types of research, two examples of which are presented in this article, with sample tools in the Supplementary Materials.

Document analysis (also called document review) is one of the most commonly used methods in health policy research; it is nearly impossible to conduct policy research without it. Writing in the early 20th century, Weber (2015) identified the importance of formal, written documents as a key characteristic of the bureaucracies by which modern societies function, including in public health. Accordingly, critical social research has a long tradition of documentary review: Marx analysed official reports, laws, statutes, census reports, newspapers and periodicals over a nearly 50-year period to come to his world-altering conclusions ( Harvey, 1990 ). Yet in much of social science research, ‘documents are placed at the margins of consideration,’ with privilege given to the spoken word via methods such as interviews, possibly due to the fact that many qualitative methods were developed in the anthropological tradition to study mainly pre-literate societies ( Prior, 2003 ). To date, little specific guidance is available to help health policy researchers make the most of these wells of information.

The term ‘documents’ is defined here broadly, following Prior, as physical or virtual artefacts designed by creators, for users, to function within a particular setting ( Prior, 2003 ). Documents exist not as standalone objects of study but must be understood in the social web of meaning within which they are produced and consumed. For example, some analysts distinguish between public documents (produced in the context of public sector activities), private documents (from business and civil society) and personal documents (created by or for individuals, and generally not meant for public consumption) ( Mogalakwe, 2009 ). Documents can be used in a number of ways throughout the research process ( Bowen, 2009 ). In the planning or study design phase, they can be used to gather background information and help refine the research question. Documents can also be used to spark ideas for disseminating research once it is complete, by observing the ways those who will use the research speak to and communicate ideas with one another.

Documents can also be used during data collection and analysis to help answer research questions. Recent health policy research shows that this can be done in at least four ways. Frequently, policy documents are reviewed to describe the content or categorize the approaches to specific health problems in existing policies, as in reviews of the composition of drowning prevention resources in the United States or policy responses to foetal alcohol spectrum disorder in South Africa ( Katchmarchi et al. , 2018 ; Adebiyi et al. , 2019 ). In other cases, non-policy documents are used to examine the implementation of health policies in real-world settings, as in a review of web sources and newspapers analysing the functioning of community health councils in New Zealand ( Gurung et al. , 2020 ). Perhaps less frequently, document analysis is used to analyse policy processes, as in an assessment of multi-sectoral planning process for nutrition in Burkina Faso ( Ouedraogo et al. , 2020 ). Finally, and most broadly, document analysis can be used to inform new policies, as in one study that assessed cigarette sticks as communication and branding ‘documents,’ to suggest avenues for further regulation and tobacco control activities ( Smith et al. , 2017 ).

This practice paper provides an overarching method for conducting document analysis, which can be adapted to a multitude of research questions and topics. Document analysis is used in most or all policy studies; the aim of this article is to provide a systematized method that will enhance procedural rigour. We provide an overview of document analysis, drawing on guidance from disciplines adjacent to public health, introduce the ‘READ’ approach to document analysis and provide two short case studies demonstrating how document analysis can be applied.

What is document analysis?

Document analysis is a systematic procedure for reviewing or evaluating documents, which can be used to provide context, generate questions, supplement other types of research data, track change over time and corroborate other sources ( Bowen, 2009 ). In one commonly cited approach in social research, Bowen recommends first skimming the documents to get an overview, then reading to identify relevant categories of analysis for the overall set of documents and finally interpreting the body of documents ( Bowen, 2009 ). Document analysis can include both quantitative and qualitative components: the approach presented here can be used with either set of methods, but we emphasize qualitative ones, which are more adapted to the socially constructed meaning-making inherent to collaborative exercises such as policymaking.

The study of documents as a research method is common to a number of social science disciplines—yet in many of these fields, including sociology ( Mogalakwe, 2009 ), anthropology ( Prior, 2003 ) and political science ( Wesley, 2010 ), document-based research is described as ill-considered and underutilized. Unsurprisingly, textual analysis is perhaps most developed in fields such as media studies, cultural studies and literary theory, all disciplines that recognize documents as ‘social facts’ that are created, consumed, shared and utilized in socially organized ways ( Atkinson and Coffey, 1997 ). Documents exist within social ‘fields of action,’ a term used to designate the environments within which individuals and groups interact. Documents are therefore not mere records of social life, but integral parts of it—and indeed can become agents in their own right ( Prior, 2003 ). Powerful entities also manipulate the nature and content of knowledge; therefore, gaps in available information must be understood as reflecting and potentially reinforcing societal power relations ( Bryman and Burgess, 1994 ).

Document analysis, like any research method, can be subject to concerns regarding validity, reliability, authenticity, motivated authorship, lack of representativity and so on. However, these can be mitigated or avoided using standard techniques to enhance qualitative rigour, such as triangulation (within documents and across methods and theoretical perspectives), ensuring adequate sample size or ‘engagement’ with the documents, member checking, peer debriefing and so on ( Maxwell, 2005 ).

Document analysis can be used as a standalone method, e.g. to analyse the contents of specific types of policy as they evolve over time and differ across geographies, but document analysis can also be powerfully combined with other types of methods to cross-validate (i.e. triangulate) and deepen the value of concurrent methods. As one guide to public policy research puts it, ‘almost all likely sources of information, data, and ideas fall into two general types: documents and people’ ( Bardach and Patashnik, 2015 ). Thus, researchers can ask interviewees to address questions that arise from policy documents and point the way to useful new documents. Bardach and Patashnik suggest alternating between documents and interviews as sources of information, as one tends to lead to the other, such as by scanning interviewees’ bookshelves and papers for titles and author names ( Bardach and Patashnik, 2015 ). Depending on your research questions, document analysis can be used in combination with different types of interviews ( Berner-Rodoreda et al. , 2018 ), observation ( Harvey, 2018 ), and quantitative analyses, among other common methods in policy research.

The READ approach to document analysis is a systematic procedure for collecting documents and gaining information from them in the context of health policy studies at any level (global, national, local, etc.). The steps consist of: (1) ready your materials, (2) extract data, (3) analyse data and (4) distil your findings. We describe each of these steps in turn.

Step 1. Ready your materials

At the outset, researchers must set parameters in terms of the nature and approximate number of documents they plan to analyse, based on the research question. How much time will you allocate to the document analysis, and what is the scope of your research question? Depending on the answers to these questions, criteria should be established around (1) the topic (a particular policy, programme or health issue, narrowly defined according to the research question); (2) dates of inclusion (whether taking the long view of several decades, or zooming in on a specific event or period in time); and (3) an indicative list of places to search for documents (possibilities include Ministry archives, LexisNexis or other databases, online searches and, in particular, interview subjects). For difficult-to-obtain working documents or otherwise non-public items, bringing a flash drive to interviews is one of the best ways to gain access to valuable documents.
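To make these parameters concrete, the following minimal Python sketch records inclusion criteria as a structured object; the topic, date range, sources and document types are invented for illustration and are not drawn from any particular study.

```python
# Hypothetical inclusion criteria for a document search; all values invented.
search_criteria = {
    "topic": "community health worker policy",
    "date_range": ("2000-01-01", "2015-12-31"),  # ISO dates compare correctly as strings
    "sources": [
        "Ministry of Health archives",
        "LexisNexis",
        "online searches",
        "documents obtained from interview subjects",
    ],
    "document_types": ["official policy", "evaluation report", "meeting notes"],
}

def within_scope(doc_date: str, criteria: dict) -> bool:
    """Check whether a document's ISO-formatted date falls inside the inclusion window."""
    start, end = criteria["date_range"]
    return start <= doc_date <= end

print(within_scope("2010-06-15", search_criteria))  # True
```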

For research focusing on a single policy or programme, you may review only a handful of documents. However, if you are looking at multiple policies, health issues, or contexts, or reviewing shorter documents (such as newspaper articles), you may look at hundreds, or even thousands of documents. When considering the number of documents you will analyse, you should make notes on the type of information you plan to extract from documents—i.e. what it is you hope to learn, and how this will help answer your research question(s). The initial criteria—and the data you seek to extract from documents—will likely evolve over the course of the research, as it becomes clear whether they will yield too few documents and information (a rare outcome), far too many documents and too much information (a much more common outcome) or documents that fail to address the research question; however, it is important to have a starting point to guide the search. If you find that the documents you need are unavailable, you may need to reassess your research questions or consider other methods of inquiry. If you have too many documents, you can either analyse a subset of these ( Panel 1 ) or adopt more stringent inclusion criteria.
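If the criteria return far more documents than can feasibly be read, one simple subsetting strategy is a reproducible random sample. A toy Python sketch follows; the corpus size and file names are invented:

```python
import random

# Invented corpus of 2500 newspaper article files; draw a 10% sample.
random.seed(42)  # fixed seed so the subset can be re-created and reported
all_articles = [f"article_{i:04d}.txt" for i in range(2500)]
subset = random.sample(all_articles, k=250)
print(len(subset), subset[:3])
```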

Panel 1. Exploring the framing of diseases in Pakistani media

In Table 1, we present a non-exhaustive list of the types of documents that can be included in document analyses of health policy issues. In most cases, this will mean written sources (policies, reports, articles). The types of documents to be analysed will vary by study and according to the research question, although in many cases, it will be useful to consult a mix of formal documents (such as official policies, laws or strategies), ‘gray literature’ (organizational materials such as reports, evaluations and white papers produced outside formal publication channels) and, whenever possible, informal or working documents (such as meeting notes, PowerPoint presentations and memoranda). The latter in particular can provide rich veins of insight into how policy actors are thinking through the issues under study, particularly for the lucky researcher who obtains working documents with ‘Track Changes.’ How you prioritize documents will depend on your research question: you may prioritize official policy documents if you are studying policy content, or you may prioritize informal documents if you are studying policy process.

Table 1. Types of documents that can be consulted in studies of health policy

During this initial preparatory phase, we also recommend devising a file-naming system for your documents (e.g. Author.Date.Topic.Institution.PDF), so that documents can be easily retrieved throughout the research process. After extracting data and processing your documents the first time around, you will likely have additional ‘questions’ to ask your documents and need to consult them again. For this reason, it is important to clearly name source files and link filenames to the data that you are extracting (see sample naming conventions in the Supplementary Materials ).
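As one illustration of such a convention, this small Python helper builds filenames of the form Author.Date.Topic.Institution.PDF; the function and the example values are hypothetical, not part of the READ approach itself.

```python
def make_filename(author: str, date: str, topic: str, institution: str, ext: str = "pdf") -> str:
    """Assemble a dot-separated filename; spaces are removed so names stay filesystem-friendly."""
    parts = [author, date, topic, institution]
    return ".".join(p.replace(" ", "") for p in parts) + "." + ext

# -> 'MoH.2010-03.ChildSurvivalStrategy.MinistryOfHealth.pdf'
print(make_filename("MoH", "2010-03", "Child Survival Strategy", "Ministry Of Health"))
```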

Step 2. Extract data

Data can be extracted in a number of ways, and the method you select for doing so will depend on your research question and the nature of your documents. One simple way is to use an Excel spreadsheet where each row is a document and each column is a category of information you are seeking to extract, from more basic data such as the document title, author and date, to theoretical or conceptual categories deriving from your research question, operating theory or analytical framework (Panel 2). Documents can also be imported into thematic coding software such as Atlas.ti or NVivo, and data extracted that way. Alternatively, if the research question focuses on process, documents can be used to compile a timeline of events, to trace processes across time. Ask yourself, how can I organize these data in the most coherent manner? What are my priority categories? We have included two different examples of data extraction tools in the Supplementary Materials to this article to spark ideas.
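A minimal pandas sketch of such a spreadsheet-style extraction tool follows. The analytical columns (policy_goals, actors_mentioned, framing_of_problem) are invented examples of categories a researcher might derive from a research question, not prescribed fields; the sample row is likewise fictional.

```python
import pandas as pd

# One row per document; columns mix bibliographic fields with analytical categories.
columns = [
    "filename", "title", "author", "date", "document_type",
    "policy_goals",        # what the document says the policy should achieve
    "actors_mentioned",    # organizations and individuals referenced
    "framing_of_problem",  # how the health issue is characterized
    "notes",               # free-text memos
]
extraction = pd.DataFrame(columns=columns)

extraction.loc[len(extraction)] = [
    "MoH.2010-03.ChildSurvivalStrategy.MinistryOfHealth.pdf",
    "National Child Survival Strategy", "Ministry of Health", "2010-03",
    "official policy", "Reduce under-five mortality by 30%",
    "MoH; UNICEF; district health teams", "child survival as a systems problem",
    "Annex 3 lists donor commitments - follow up",
]
extraction.to_csv("extraction_table.csv", index=False)  # persist for later analysis passes
```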

Panel 2. Case study: Documents tell part of the story in Niger

Document analyses are first and foremost exercises in close reading: documents should be read thoroughly, from start to finish, including annexes, which may seem tedious but which sometimes produce golden nuggets of information. Read for overall meaning as you extract specific data related to your research question. As you go along, you will begin to have ideas or build working theories about what you are learning and observing in the data. We suggest capturing these emerging theories in extended notes or ‘memos,’ as used in Grounded Theory methodology ( Charmaz, 2006 ); these can be useful analytical units in themselves and can also provide a basis for later report and article writing.

As you read more documents, you may find that your data extraction tool needs to be modified to capture all the relevant information (or to avoid wasting time capturing irrelevant information). This may require you to go back and seek information in documents you have already read and processed, which will be greatly facilitated by a coherent file-naming system. It is also useful to keep notes on other documents mentioned along the way that should be tracked down (sometimes you can write to the authors for help). As a general rule, we suggest being parsimonious when selecting initial categories to extract from data. Simply reading the documents takes significant time in and of itself—make sure you think about how, exactly, the specific data you are extracting will be used and how they go towards answering your research question(s).

Step 3. Analyse data

As in all types of qualitative research, data collection and analysis are iterative and characterized by emergent design, meaning that developing findings continually inform whether and how to obtain and interpret data ( Creswell, 2013 ). In practice, this means that during the data extraction phase, the researcher is already analysing data and forming initial theories—as well as potentially modifying document selection criteria. However, only when data extraction is complete can one see the full picture. For example, are there any documents that you would have expected to find, but did not? Why do you think they might be missing? Are there temporal trends (i.e. similarities, differences or evolutions that stand out when documents are ordered chronologically)? What else do you notice? We provide a list of overarching questions you should think about when viewing your body of documents as a whole ( Table 2 ).

Table 2. Questions to ask your overall body of documents
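The temporal-trends question lends itself to a quick mechanical first pass once extraction is complete: order the documents chronologically and count them per period. A self-contained Python sketch, with invented titles and dates standing in for an extraction table:

```python
import pandas as pd

# Toy stand-in for an extraction table; titles and dates are invented.
extraction = pd.DataFrame({
    "title": ["Strategy A", "Evaluation B", "Policy C", "Report D"],
    "date": pd.to_datetime(["2006-05-01", "2006-11-15", "2010-03-01", "2014-07-20"]),
})

# Gaps or bursts in the per-year counts can point to missing documents
# or to periods of intense policy activity worth a closer look.
print(extraction["date"].dt.year.value_counts().sort_index())
```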

HIV and viral hepatitis articles by main frames (%). Note: The percentage of articles is calculated by dividing the number of articles appearing in each frame for viral hepatitis and HIV by the respective number of sampled articles for each disease (N = 137 for HIV; N = 117 for hepatitis). Time frame: 1 January 2006 to 30 September 2016

Representations of progress toward Millennium Development Goal 4 in Nigerien policy documents. Sources: clockwise from upper left: ( WHO 2006 ); ( Institut National de la Statistique 2010 ); ( Ministère de la Santé Publique 2010 ); ( UNICEF 2010 )

In addition to the meaning-making processes you are already engaged in during the data extraction process, in most cases, it will be useful to apply specific analysis methodologies to the overall corpus of your documents, such as policy analysis ( Buse et al. , 2005 ). An array of analysis methodologies can be used, both quantitative and qualitative, including case study methodology, thematic content analysis, discourse analysis, framework analysis and process tracing, which may require differing levels of familiarity and skills to apply (we highlight a few of these in the case studies below). Analysis can also be structured according to theoretical approaches. When it comes to analysing policies, process tracing can be particularly useful to combine multiple sources of information, establish a chronicle of events and reveal political and social processes, so as to create a narrative of the policy cycle ( Yin, 1994 ; Shiffman et al. , 2004 ). Practically, you will also want to take a holistic view of the documents’ ‘answers’ to the questions or analysis categories you applied during the data extraction phase. Overall, what did the documents ‘say’ about these thematic categories? What variation did you find within and between documents, and along which axes? Answers to these questions are best recorded by developing notes or memos, which again will come in handy as you write up your results.
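One practical way to begin taking this holistic view is to tabulate the extracted categories against document attributes; counts are only a prompt to return to the memos and full text, not findings in themselves. A sketch with invented document types and framings:

```python
import pandas as pd

# Toy extraction data; document types and framings are invented.
extraction = pd.DataFrame({
    "document_type": ["official policy", "official policy", "evaluation", "news article"],
    "framing_of_problem": ["systems problem", "funding gap", "funding gap", "funding gap"],
})

# How does the framing of the problem vary across document types?
print(extraction.groupby("document_type")["framing_of_problem"].value_counts())
```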

As with all qualitative research, you will want to consider your own positionality towards the documents (and their sources and authors); it may be helpful to keep a ‘reflexivity’ memo documenting how your personal characteristics or pre-existing views might influence your analysis ( Watt, 2007 ).

Step 4. Distil your findings

You will know you have completed your document review when one of three things happens: (1) completeness (you feel satisfied you have obtained every document fitting your criteria—this is rare); (2) running out of time (which suggests you should have used more specific criteria); or (3) saturation (you fully or sufficiently understand the phenomenon you are studying). In all cases, you should strive to make saturation the reason for ending your document review, though this will not always mean you have read and analysed every document fitting your criteria—just enough documents to feel confident you have found good answers to your research questions.

Now it is time to refine your findings. During the extraction phase, you did the equivalent of walking along the beach, noticing the beautiful shells, driftwood and sea glass, and picking them up along the way. During the analysis phase, you started sorting these items into different buckets (your analysis categories) and building increasingly detailed collections. Now you have returned home from the beach, and it is time to clean your objects, rinse them of sand and preserve only the best specimens for presentation. To do this, you can return to your memos, refine them, illustrate them with graphics and quotes and fill in any incomplete areas. It can also be illuminating to look across different strands of work: e.g. how did the content, style, authorship, or tone of arguments evolve over time? Can you illustrate which words, concepts or phrases were used by authors or author groups?
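As a rough illustration of the last question, crude word counts can show which terms different author groups favour before you go back to close reading. The author groups and text snippets below are invented:

```python
import re
from collections import Counter

# Invented snippets standing in for extracted text from two author groups.
texts = {
    "ministry": "community case management scale-up pilot districts community",
    "donor": "cost-effectiveness results framework indicators scale-up",
}
for group, text in texts.items():
    words = re.findall(r"[a-z-]+", text.lower())  # keep hyphenated terms intact
    print(group, Counter(words).most_common(3))
```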

Results will often first be grouped by theoretical or analytic category, or presented as a policy narrative, interweaving strands from other methods you may have used (interviews, observation, etc.). It can also be helpful to create conceptual charts and graphs, especially as this corresponds to your analytical framework (Panels 1 and 2). If you have been keeping a timeline of events, you can seek out any missing information from other sources. Finally, check the validity of your findings against what you have learned using other methods. The final products of the distillation process will vary by research study, but they will invariably allow you to state your findings relative to your research questions and to draw policy-relevant conclusions.

Document analysis is an essential component of health policy research—it is also relatively convenient and can be low cost. Using an organized system of analysis enhances the procedural rigour of document analysis, allows for a fuller understanding of policy process and content and enhances the effectiveness of other methods such as interviews and non-participant observation. We propose the READ approach as a systematic method for interrogating documents and extracting study-relevant data that is flexible enough to accommodate many types of research questions. We hope that this article encourages discussion about how to make best use of data from documents when researching health policy questions.

Supplementary data are available at Health Policy and Planning online.

The data extraction tool in the Supplementary Materials for the iCCM case study (Panel 2) was conceived of by the research team for the multi-country study ‘Policy Analysis of Community Case Management for Childhood and Newborn Illnesses’. The authors thank Sara Bennett and Daniela Rodriguez for granting permission to publish this tool. S.M. was supported by The Olympia-Morata-Programme of Heidelberg University. The funders had no role in the decision to publish, or preparation of the manuscript. The content is the responsibility of the authors and does not necessarily represent the views of any funder.

Conflict of interest statement . None declared.

Ethical approval. No ethical approval was required for this study.

Abdelmutti N , Hoffman-Goetz L.   2009 . Risk messages about HPV, cervical cancer, and the HPV vaccine Gardasil: a content analysis of Canadian and U.S. national newspaper articles . Women & Health   49 : 422 – 40 .


Adebiyi BO , Mukumbang FC , Beytell A-M.   2019 . To what extent is fetal alcohol spectrum disorder considered in policy-related documents in South Africa? A document review . Health Research Policy and Systems   17 :

Atkinson PA , Coffey A.   1997 . Analysing documentary realities. In: Silverman D (ed). Qualitative Research: Theory, Method and Practice . London : SAGE .


Bardach E , Patashnik EM.   2015 . Practical Guide for Policy Analysis: The Eightfold Path to More Effective Problem Solving . Los Angeles : SAGE .

Bennett S , Dalglish SL , Juma PA , Rodríguez DC.   2015 . Altogether now… understanding the role of international organizations in iCCM policy transfer . Health Policy and Planning   30 : ii26 – 35 .

Berner-Rodoreda A, Bärnighausen T, Kennedy C et al. 2018. From doxastic to epistemic: a typology and critique of qualitative interview styles. Qualitative Inquiry 26: 291–305.

Bowen GA.   2009 . Document analysis as a qualitative research method . Qualitative Research Journal   9 : 27 – 40 .

Bryman A, Burgess RG (eds). 1994. Analyzing Qualitative Data. London: Routledge.

Buse K , Mays N , Walt G.   2005 . Making Health Policy . New York : Open University Press .

Charmaz K.   2006 . Constructing Grounded Theory: A Practical Guide through Qualitative Analysis . London : SAGE .

Claassen L , Smid T , Woudenberg F , Timmermans DRM.   2012 . Media coverage on electromagnetic fields and health: content analysis of Dutch newspaper articles and websites . Health, Risk & Society   14 : 681 – 96 .

Creswell JW.   2013 . Qualitative Inquiry and Research Design . Thousand Oaks, CA : SAGE .

Dalglish SL , Rodríguez DC , Harouna A , Surkan PJ.   2017 . Knowledge and power in policy-making for child survival in Niger . Social Science & Medicine   177 : 150 – 7 .

Dalglish SL , Surkan PJ , Diarra A , Harouna A , Bennett S.   2015 . Power and pro-poor policies: the case of iCCM in Niger . Health Policy and Planning   30 : ii84 – 94 .

Entman RM.   1993 . Framing: toward clarification of a fractured paradigm . Journal of Communication   43 : 51 – 8 .

Fournier G , Djermakoye IA.   1975 . Village health teams in Niger (Maradi Department). In: Newell KW (ed). Health by the People . Geneva : WHO .

Gurung G , Derrett S , Gauld R.   2020 . The role and functions of community health councils in New Zealand’s health system: a document analysis . The New Zealand Medical Journal   133 : 70 – 82 .

Harvey L.   1990 . Critical Social Research . London : Unwin Hyman .

Harvey SA.   2018 . Observe before you leap: why observation provides critical insights for formative research and intervention design that you’ll never get from focus groups, interviews, or KAP surveys . Global Health: Science and Practice   6 : 299 – 316 .

Institut National de la Statistique. 2010. Rapport National sur les Progrès vers l'atteinte des Objectifs du Millénaire pour le Développement. Niamey, Niger: INS.

Kamarulzaman A.   2013 . Fighting the HIV epidemic in the Islamic world . Lancet   381 : 2058 – 60 .

Katchmarchi AB , Taliaferro AR , Kipfer HJ.   2018 . A document analysis of drowning prevention education resources in the United States . International Journal of Injury Control and Safety Promotion   25 : 78 – 84 .

Krippendorff K.   2004 . Content Analysis: An Introduction to Its Methodology . SAGE .

Marten R.   2019 . How states exerted power to create the Millennium Development Goals and how this shaped the global health agenda: lessons for the sustainable development goals and the future of global health . Global Public Health   14 : 584 – 99 .

Maxwell JA.   2005 . Qualitative Research Design: An Interactive Approach , 2 nd edn. Thousand Oaks, CA : Sage Publications .

Mayring P.   2004 . Qualitative Content Analysis . In: Flick U, von Kardorff E, Steinke I (eds).   A Companion to Qualitative Research . SAGE .

Ministère de la Santé Publique. 2010. Enquête nationale sur la survie des enfants de 0 à 59 mois et la mortalité au Niger 2010. Niamey, Niger: MSP.

Mogalakwe M.   2009 . The documentary research method—using documentary sources in social research . Eastern Africa Social Science Research Review   25 : 43 – 58 .

Nelkin D.   1991 . AIDS and the news media . The Milbank Quarterly   69 : 293 – 307 .

Ouedraogo O , Doudou MH , Drabo KM  et al.    2020 . Policy overview of the multisectoral nutrition planning process: the progress, challenges, and lessons learned from Burkina Faso . The International Journal of Health Planning and Management   35 : 120 – 39 .

Prior L.   2003 . Using Documents in Social Research . London: SAGE .

Shiffman J , Stanton C , Salazar AP.   2004 . The emergence of political priority for safe motherhood in Honduras . Health Policy and Planning   19 : 380 – 90 .

Smith KC , Washington C , Welding K  et al.    2017 . Cigarette stick as valuable communicative real estate: a content analysis of cigarettes from 14 low-income and middle-income countries . Tobacco Control   26 : 604 – 7 .

Strömbäck J , Dimitrova DV.   2011 . Mediatization and media interventionism: a comparative analysis of Sweden and the United States . The International Journal of Press/Politics   16 : 30 – 49 .

UNICEF. 2010. Maternal, Newborn & Child Survival Profile. Niamey, Niger: UNICEF.

Watt D.   2007 . On becoming a qualitative researcher: the value of reflexivity . Qualitative Report   12 : 82 – 101 .

Weber M.   2015 . Bureaucracy. In: Waters T , Waters D (eds). Rationalism and Modern Society: New Translations on Politics, Bureaucracy, and Social Stratification . London : Palgrave MacMillan .

Wesley JJ.   2010 . Qualitative Document Analysis in Political Science.

World Health Organization. 2006. Country Health System Fact Sheet 2006: Niger. Niamey, Niger: WHO.

Yin R.   1994 . Case Study Research: Design and Methods . Thousand Oaks, CA : Sage .


How to use and assess qualitative research methods

Loraine Busetto, Wolfgang Wick and Christoph Gumbinger

Neurological Research and Practice, Vol. 2, Article 14 (2020). Open access; published 27 May 2020.


This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in “research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)” [ 4 ]. This focus on quantitative research, and specifically randomised controlled trials (RCTs), is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others and that choosing a “lesser” design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – “questions before methods” [ 2 , 7 , 8 , 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as “a complex, multicomponent intervention – essentially a process of social change” susceptible to a range of different context factors including leadership or organisation history. According to him, “[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect” [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for “these specific applications, (...) are not compromises in learning how to improve; they are superior” [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 , 10 , 11 , 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it : “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig.  1 , this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaption and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Fig. 1: Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

Fig. 2: Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data source as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3 ). Interviews and focus groups can be transcribed verbatim , with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded , that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with ‘theoretical’ terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MaxQDA and Atlas.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

Fig. 3: From data collection to data analysis

Attributions for icons: see Fig. 2, also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project
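As a deliberately simplified illustration of the point above that coding makes raw data sortable, the sketch below represents coded segments as plain Python records and pulls every segment tagged with a given code across data sources. The sources, segment texts and codes are invented; in practice, this kind of retrieval is what the software mentioned above provides.

```python
# Invented coded segments from three data sources in the EVT example.
segments = [
    {"source": "SOP_EVT.docx", "text": "Tele-neurology consult within 10 minutes", "codes": {"tele-neurology", "time-target"}},
    {"source": "interview_03.txt", "text": "The consult line is often busy at night", "codes": {"tele-neurology", "barrier"}},
    {"source": "observation_ER.txt", "text": "Nurse paged the neurologist twice", "codes": {"communication"}},
]

# Pull every segment tagged 'tele-neurology', across all sources, for comparison.
for seg in segments:
    if "tele-neurology" in seg["codes"]:
        print(seg["source"], "->", seg["text"])
```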

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 , 25 , 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

Fig. 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help informing and building the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative study on which topics dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori , but when the research team determines that saturation has been reached [ 29 , 30 ].

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “ purposive sampling” , in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].
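The logic of sampling towards saturation can be illustrated with a toy Python sketch in which collection rounds stop once a round contributes no new codes; the rounds and codes are invented, and in real studies the judgement is substantive rather than mechanical.

```python
# Each set holds the codes identified in one round of interviews (invented).
rounds = [
    {"transport", "cost", "trust"},      # interviews 1-5
    {"cost", "language", "trust"},       # interviews 6-10
    {"transport", "language", "trust"},  # interviews 11-15: nothing new
]
seen: set = set()
for i, codes in enumerate(rounds, start=1):
    fresh = codes - seen
    seen |= codes
    print(f"round {i}: {len(fresh)} new code(s)")
    if not fresh:
        print("no new codes - saturation reached, stop sampling")
        break
```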

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, in which different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.
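A minimal sketch of that comparison step, with invented segment IDs and code sets: segments coded independently by two coders are checked for divergent tags, which are flagged for discussion in the team rather than resolved automatically.

```python
# Codes assigned independently by two coders to the same segments (invented).
coder_a = {"seg1": {"barrier"}, "seg2": {"time-target"}, "seg3": {"communication"}}
coder_b = {"seg1": {"barrier"}, "seg2": {"time-target", "barrier"}, "seg3": {"teamwork"}}

for seg in sorted(coder_a):
    only_a = coder_a[seg] - coder_b[seg]
    only_b = coder_b[seg] - coder_a[seg]
    if only_a or only_b:  # divergence: put on the agenda for the next coding meeting
        print(seg, "- discuss:", "A only", sorted(only_a), "| B only", sorted(only_b))
```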

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 , 32 , 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 , 38 , 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of a too large sample size as well as the possibility (or probability) of selecting “ quiet, uncooperative or inarticulate individuals ” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding approaches of two co-coders overlap. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants and collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1 . We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Availability of data and materials

Not applicable.

Abbreviations

EVT: Endovascular treatment

RCT: Randomised Controlled Trial

SOP: Standard Operating Procedure

SRQR: Standards for Reporting Qualitative Research

Philipsen, H., & Vernooij-Dassen, M. (2007). Kwalitatief onderzoek: nuttig, onmisbaar en uitdagend [Qualitative research: useful, indispensable and challenging]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 5–12). Houten: Bohn Stafleu van Loghum.

Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches . London: Sage.

Kelly, J., Dwyer, J., Willis, E., & Pekarsky, B. (2014). Travelling to the city for hospital care: Access factors in country aboriginal patient journeys. Australian Journal of Rural Health, 22 (3), 109–113.

Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Science, 8 (1), 1–12.

Howick, J., Chalmers, I., Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., & Thornton, H. (2011). The 2011 Oxford CEBM levels of evidence (introductory document). Oxford Centre for Evidence-Based Medicine. https://www.cebm.net/2011/06/2011-oxford-cebm-levels-evidence-introductory-document/

Eakin, J. M. (2016). Educating critical qualitative health researchers in the land of the randomized controlled trial. Qualitative Inquiry, 22 (2), 107–118.

May, A., & Mathijssen, J. (2015). Alternatieven voor RCT bij de evaluatie van effectiviteit van interventies!? Eindrapportage [Alternatives for RCTs in the evaluation of effectiveness of interventions!? Final report].

Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299 (10), 1182–1184.

Christ, T. W. (2014). Scientific-based research and randomized controlled trials, the “gold” standard? Alternative paradigms and mixed methodologies. Qualitative Inquiry, 20 (1), 72–80.

Lamont, T., Barber, N., de Pury, J., Fulop, N., Garfield-Birkbeck, S., Lilford, R., Mear, L., Raine, R., & Fitzpatrick, R. (2016). New approaches to evaluating complex health and care systems. BMJ, 352, i154.

Drabble, S. J., & O’Cathain, A. (2015). Moving from Randomized Controlled Trials to Mixed Methods Intervention Evaluation. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 406–425). London: Oxford University Press.

Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science : IS, 8 , 117.

Hak, T. (2007). Waarnemingsmethoden in kwalitatief onderzoek [Observation methods in qualitative research]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 13–25). Houten: Bohn Stafleu van Loghum.

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6 (2), 36–40.

Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36 , 717–732.

Yanow, D. (2000). Conducting interpretive policy analysis (Vol. 47). Thousand Oaks: Sage University Papers Series on Qualitative Research Methods.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 , 63–75.

van der Geest, S. (2006). Participeren in ziekte en zorg: meer over kwalitatief onderzoek. Huisarts en Wetenschap, 49 (4), 283–287.

Hijmans, E., & Kuyper, M. (2007). Het halfopen interview als onderzoeksmethode [The half-open interview as research method]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 43–51). Houten: Bohn Stafleu van Loghum.

Jansen, H. (2007). Systematiek en toepassing van de kwalitatieve survey [Systematics and implementation of the qualitative survey]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 27–41). Houten: Bohn Stafleu van Loghum.

van Royen, P., & Peremans, L. (2007). Exploreren met focusgroepgesprekken: de ‘stem’ van de groep onder de loep [Exploring with focus group conversations: the “voice” of the group under the magnifying glass]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 53–64). Houten: Bohn Stafleu van Loghum.

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41 (5), 545–547.

Boeije, H. (2012). Analyseren in kwalitatief onderzoek: Denken en doen [Analysis in qualitative research: Thinking and doing]. Den Haag: Boom Lemma uitgevers.

Hunter, A., & Brewer, J. (2015). Designing Multimethod Research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 185–205). London: Oxford University Press.

Archibald, M. M., Radil, A. I., Zhang, X., & Hanson, W. E. (2015). Current mixed methods practices in qualitative research: A content analysis of leading journals. International Journal of Qualitative Methods, 14 (2), 5–33.

Creswell, J. W., & Plano Clark, V. L. (2011). Choosing a Mixed Methods Design. In Designing and Conducting Mixed Methods Research . Thousand Oaks: SAGE Publications.

Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320 (7226), 50–52.

O'Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine : Journal of the Association of American Medical Colleges, 89 (9), 1245–1251.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality and Quantity, 52 (4), 1893–1907.

Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24 (1), 9–18.

Marlett, N., Shklarov, S., Marshall, D., Santana, M. J., & Wasylak, T. (2015). Building new roles and relationships in research: A model of patient engagement research. Quality of Life Research : an international journal of quality of life aspects of treatment, care and rehabilitation, 24 (5), 1057–1067.

Demian, M. N., Lam, N. N., Mac-Way, F., Sapir-Pichhadze, R., & Fernandez, N. (2017). Opportunities for engaging patients in kidney research. Canadian Journal of Kidney Health and Disease, 4 , 2054358117703070–2054358117703070.

Noyes, J., McLaughlin, L., Morgan, K., Roberts, A., Stephens, M., Bourne, J., Houlston, M., Houlston, J., Thomas, S., Rhys, R. G., et al. (2019). Designing a co-productive study to overcome known methodological challenges in organ donation research with bereaved family members. Health Expectations, 22(4), 824–835.

Piil, K., Jarden, M., & Pii, K. H. (2019). Research agenda for life-threatening cancer. European Journal of Cancer Care, 28(1), e12935.

Hofmann, D., Ibrahim, F., Rose, D., Scott, D. L., Cope, A., Wykes, T., & Lempp, H. (2015). Expectations of new treatment in rheumatoid arthritis: Developing a patient-generated questionnaire. Health Expectations : an international journal of public participation in health care and health policy, 18 (5), 995–1008.

Jun, M., Manns, B., Laupacis, A., Manns, L., Rehal, B., Crowe, S., & Hemmelgarn, B. R. (2015). Assessing the extent to which current clinical research is consistent with patient priorities: A scoping review using a case study in patients on or nearing dialysis. Canadian Journal of Kidney Health and Disease, 2 , 35.

Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? National Centre for Research Methods Review Paper. National Centre for Research Methods. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf

Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18 (2), 179–183.

Sim, J., Saunders, B., Waterfield, J., & Kingstone, T. (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21 (5), 619–634.

Acknowledgements

No external funding.

Author information

Authors and Affiliations

Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany

Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Wolfgang Wick

Contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Corresponding author

Correspondence to Loraine Busetto .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Busetto, L., Wick, W. & Gumbinger, C. How to use and assess qualitative research methods. Neurol. Res. Pract. 2 , 14 (2020). https://doi.org/10.1186/s42466-020-00059-z

Received: 30 January 2020

Accepted: 22 April 2020

Published: 27 May 2020

DOI: https://doi.org/10.1186/s42466-020-00059-z

  • Qualitative research
  • Mixed methods
  • Quality assessment

Neurological Research and Practice

ISSN: 2524-3489

New Content From Advances in Methods and Practices in Psychological Science

  • Advances in Methods and Practices in Psychological Science
  • Cognitive Dissonance
  • Meta-Analysis
  • Methodology
  • Preregistration
  • Reproducibility

A Practical Guide to Conversation Research: How to Study What People Say to Each Other Michael Yeomans, F. Katelynn Boland, Hanne Collins, Nicole Abi-Esber, and Alison Wood Brooks  

Conversation—a verbal interaction between two or more people—is a complex, pervasive, and consequential human behavior. Conversations have been studied across many academic disciplines. However, advances in recording and analysis techniques over the last decade have allowed researchers to more directly and precisely examine conversations in natural contexts and at a larger scale than ever before, and these advances open new paths to understand humanity and the social world. Existing reviews of text analysis and conversation research have focused on text generated by a single author (e.g., product reviews, news articles, and public speeches) and thus leave open questions about the unique challenges presented by interactive conversation data (i.e., dialogue). In this article, we suggest approaches to overcome common challenges in the workflow of conversation science, including recording and transcribing conversations, structuring data (to merge turn-level and speaker-level data sets), extracting and aggregating linguistic features, estimating effects, and sharing data. This practical guide is meant to shed light on current best practices and empower more researchers to study conversations more directly—to expand the community of conversation scholars and contribute to a greater cumulative scientific understanding of the social world. 
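As one concrete illustration of the data-structuring step mentioned above (merging turn-level and speaker-level data sets), the following hedged sketch uses pandas; the tables, column names, and feature are assumptions invented for the example, not materials from the article.

```python
# Illustrative sketch of merging turn-level and speaker-level data sets;
# all table and column names are assumptions, not from the article.
import pandas as pd

turns = pd.DataFrame({
    "conversation_id": [1, 1, 1, 2, 2],
    "speaker_id": ["A", "B", "A", "C", "D"],
    "turn": [1, 2, 3, 1, 2],
    "text": ["Hi!", "Hello, how are you?", "Fine, thanks.", "Ready?", "Yes."],
})
speakers = pd.DataFrame({
    "speaker_id": ["A", "B", "C", "D"],
    "age": [34, 29, 41, 38],
    "condition": ["control", "control", "treatment", "treatment"],
})

# Attach speaker-level attributes to every turn, then derive a simple
# turn-level linguistic feature (word count) for later aggregation.
merged = turns.merge(speakers, on="speaker_id", how="left")
merged["n_words"] = merged["text"].str.split().str.len()

# Aggregate back up to the speaker level: mean words per turn.
print(merged.groupby("speaker_id")["n_words"].mean())
```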

Open-Science Guidance for Qualitative Research: An Empirically Validated Approach for De-Identifying Sensitive Narrative Data Rebecca Campbell, McKenzie Javorka, Jasmine Engleton, Kathryn Fishwick, Katie Gregory, and Rachael Goodman-Williams  

The open-science movement seeks to make research more transparent and accessible. To that end, researchers are increasingly expected to share de-identified data with other scholars for review, reanalysis, and reuse. In psychology, open-science practices have been explored primarily within the context of quantitative data, but demands to share qualitative data are becoming more prevalent. Narrative data are far more challenging to de-identify fully, and because qualitative methods are often used in studies with marginalized, minoritized, and/or traumatized populations, data sharing may pose substantial risks for participants if their information can be later reidentified. To date, there has been little guidance in the literature on how to de-identify qualitative data. To address this gap, we developed a methodological framework for remediating sensitive narrative data. This multiphase process is modeled on common qualitative-coding strategies. The first phase includes consultations with diverse stakeholders and sources to understand reidentifiability risks and data-sharing concerns. The second phase outlines an iterative process for recognizing potentially identifiable information and constructing individualized remediation strategies through group review and consensus. The third phase includes multiple strategies for assessing the validity of the de-identification analyses (i.e., whether the remediated transcripts adequately protect participants’ privacy). We applied this framework to a set of 32 qualitative interviews with sexual-assault survivors. We provide case examples of how blurring and redaction techniques can be used to protect names, dates, locations, trauma histories, help-seeking experiences, and other information about dyadic interactions. 
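The framework above is a human-centered, consensus-driven process; automated pattern matching can at most support its second phase. As a hedged illustration only, the sketch below applies simple rule-based blurring and redaction; the patterns and placeholder labels are invented, and a filter like this would miss much of what the authors’ group-review process is designed to catch.

```python
# Illustrative sketch of rule-based redaction, far simpler than the multiphase,
# consensus-driven framework described above; patterns are invented examples.
import re

transcript = "I met Dr. Alvarez on 03/14/2021 at the Riverside clinic."

rules = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[PROVIDER]"),
    (re.compile(r"\bRiverside\b"), "[LOCATION]"),  # site-specific term list
]

redacted = transcript
for pattern, placeholder in rules:
    redacted = pattern.sub(placeholder, redacted)

print(redacted)  # I met [PROVIDER] on [DATE] at the [LOCATION] clinic.
```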

Impossible Hypotheses and Effect-Size Limits Wijnand van Tilburg and Lennert van Tilburg

Psychological science is moving toward further specification of effect sizes when formulating hypotheses, performing power analyses, and considering the relevance of findings. This development has sparked an appreciation for the wider context in which such effect sizes are found because the importance assigned to specific sizes may vary from situation to situation. We add to this development a crucial but in psychology hitherto underappreciated contingency: There are mathematical limits to the magnitudes that population effect sizes can take within the common multivariate context in which psychology is situated, and these limits can be far more restrictive than typically assumed. The implication is that some hypothesized or preregistered effect sizes may be impossible. At the same time, these restrictions offer a way of statistically triangulating the plausible range of unknown effect sizes. We explain the reason for the existence of these limits, illustrate how to identify them, and offer recommendations and tools for improving hypothesized effect sizes by exploiting the broader multivariate context in which they occur. 
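The core constraint the authors describe can be made concrete: for a three-variable correlation matrix to be positive semidefinite, the correlation between X and Y is bounded by their correlations with a third variable Z. The sketch below reproduces this general mathematical fact with illustrative values; it is not the authors’ specific tooling.

```python
# Minimal sketch: in a multivariate context, a correlation cannot take any
# value in [-1, 1]; positive semidefiniteness of the correlation matrix
# bounds it. Values below are illustrative.
import math

def r_xy_limits(r_xz: float, r_yz: float) -> tuple[float, float]:
    """Feasible range of corr(X, Y) given corr(X, Z) and corr(Y, Z)."""
    slack = math.sqrt((1 - r_xz**2) * (1 - r_yz**2))
    return r_xz * r_yz - slack, r_xz * r_yz + slack

# If X and Y each correlate .9 with Z, corr(X, Y) must be at least .62.
print(r_xy_limits(0.9, 0.9))   # (0.62, 1.0)
print(r_xy_limits(0.9, -0.9))  # (-1.0, -0.62)
```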

It’s All About Timing: Exploring Different Temporal Resolutions for Analyzing Digital-Phenotyping Data Anna Langener, Gert Stulp, Nicholas Jacobson, Andrea Costanzo, Raj Jagesar, Martien Kas, and Laura Bringmann  

The use of smartphones and wearable sensors to passively collect data on behavior has great potential for better understanding psychological well-being and mental disorders with minimal burden. However, there are important methodological challenges that may hinder the widespread adoption of these passive measures. A crucial one is the issue of timescale: The chosen temporal resolution for summarizing and analyzing the data may affect how results are interpreted. Despite its importance, the choice of temporal resolution is rarely justified. In this study, we aim to improve current standards for analyzing digital-phenotyping data by addressing the time-related decisions faced by researchers. For illustrative purposes, we use data from 10 students whose behavior (e.g., GPS, app usage) was recorded for 28 days through the Behapp application on their mobile phones. In parallel, the participants actively answered questionnaires on their phones about their mood several times a day. We provide a walk-through on how to study different timescales by doing individualized correlation analyses and random-forest prediction models. By doing so, we demonstrate how choosing different resolutions can lead to different conclusions. Therefore, we propose conducting a multiverse analysis to investigate the consequences of choosing different temporal resolutions. This will improve current standards for analyzing digital-phenotyping data and may help combat the replication crisis caused in part by researchers making implicit decisions.
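To make the timescale issue tangible, the sketch below simulates an hourly sensor stream and a mood series, then summarises both at three temporal resolutions; the data, frequencies, and resulting correlations are purely illustrative and are not drawn from the Behapp study.

```python
# Illustrative sketch: the same simulated sensor stream, summarised at
# different temporal resolutions, can relate to mood differently.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=28 * 24, freq="h")  # 28 days, hourly
usage = pd.Series(rng.poisson(5, len(idx)), index=idx)        # app events/hour
# Simulated mood that tracks a smoothed version of usage, plus noise.
mood = usage.rolling(24, min_periods=1).mean() + rng.normal(0, 1, len(idx))

for rule in ["h", "6h", "D"]:  # hourly, 6-hourly, daily summaries
    u = usage.resample(rule).sum()
    m = mood.resample(rule).mean()
    print(rule, round(u.corr(m), 2))
```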

Calculating Repeated-Measures Meta-Analytic Effects for Continuous Outcomes: A Tutorial on Pretest–Posttest-Controlled Designs David R. Skvarc, Matthew Fuller-Tyszkiewicz  

Meta-analysis is a statistical technique that combines the results of multiple studies to arrive at a more robust and reliable estimate of an overall effect or estimate of the true effect. Within the context of experimental study designs, standard meta-analyses generally use between-groups differences at a single time point. This approach fails to adequately account for preexisting differences that are likely to threaten causal inference. Meta-analyses that take into account the repeated-measures nature of these data are uncommon, and so this article serves as an instructive methodology for increasing the precision of meta-analyses by attempting to estimate the repeated-measures effect sizes, with particular focus on contexts with two time points and two groups (a between-groups pretest–posttest design)—a common scenario for clinical trials and experiments. In this article, we summarize the concept of a between-groups pretest–posttest meta-analysis and its applications. We then explain the basic steps involved in conducting this meta-analysis, including the extraction of data and several alternative approaches for the calculation of effect sizes. We also highlight the importance of considering the presence of within-subjects correlations when conducting this form of meta-analysis.   
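For orientation, one widely cited estimator for the between-groups pretest–posttest design is the bias-corrected difference-in-changes effect size attributed to Morris (2008); the tutorial above discusses this design and several alternative calculations, so treat the formula below as a hedged reference point rather than a summary of the article’s method.

```latex
% Pretest-posttest-control effect size (Morris, 2008): the difference of
% pre-to-post changes, scaled by the pooled pretest SD, with a
% small-sample bias correction c_p.
\[
  d_{\mathrm{ppc}} \;=\; c_p \,
    \frac{\bigl(M_{\mathrm{T,post}} - M_{\mathrm{T,pre}}\bigr)
        - \bigl(M_{\mathrm{C,post}} - M_{\mathrm{C,pre}}\bigr)}
         {SD_{\mathrm{pre,pooled}}},
  \qquad
  c_p \;=\; 1 - \frac{3}{4\,(n_T + n_C - 2) - 1}
\]
```

Here M denotes a group mean and n a group size; the pooled pretest standard deviation is used because pretest scores are unaffected by treatment.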

Reliability and Feasibility of Linear Mixed Models in Fully Crossed Experimental Designs Michele Scandola, Emmanuele Tidoni  

The use of linear mixed models (LMMs) is increasing in psychology and neuroscience research. In this article, we focus on the implementation of LMMs in fully crossed experimental designs. A key aspect of LMMs is choosing a random-effects structure according to the experimental needs. To date, opposite suggestions are present in the literature, spanning from keeping all random effects (maximal models), which produces several singularity and convergence issues, to removing random effects until the best fit is found, with the risk of inflating Type I error (reduced models). However, defining the random structure to fit a nonsingular and convergent model is not straightforward. Moreover, the lack of a standard approach may lead the researcher to make decisions that potentially inflate Type I errors. After reviewing LMMs, we introduce a step-by-step approach to avoid convergence and singularity issues and control for Type I error inflation during model reduction of fully crossed experimental designs. Specifically, we propose the use of complex random intercepts (CRIs) when maximal models are overparametrized. CRIs are multiple random intercepts that represent the residual variance of categorical fixed effects within a given grouping factor. We validated CRIs and the proposed procedure by extensive simulations and a real-case application. We demonstrate that CRIs can produce reliable results and require less computational resources. Moreover, we outline a few criteria and recommendations on how and when scholars should reduce overparametrized models. Overall, the proposed procedure provides clear solutions to avoid overinflated results using LMMs in psychology and neuroscience.
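A crossed random-effects structure of the kind discussed above can be sketched in Python, where statsmodels fits crossed factors via a single all-encompassing group with variance components. This is an illustrative simulation under assumed parameter values, not the authors’ CRI procedure or their validation code.

```python
# Minimal sketch: a fully crossed subjects-by-items design fitted with
# crossed random intercepts. Parameter values are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
subjects, items = 30, 20
df = pd.DataFrame(
    [(s, i, c) for s in range(subjects) for i in range(items) for c in (0, 1)],
    columns=["subject", "item", "cond"],
)
# Simulate random subject and item intercepts plus a fixed condition effect.
subj_re = rng.normal(0, 0.8, subjects)
item_re = rng.normal(0, 0.5, items)
df["y"] = 0.4 * df["cond"] + subj_re[df["subject"]] + item_re[df["item"]] \
    + rng.normal(0, 1, len(df))

# One all-encompassing group lets crossed factors enter as variance components.
df["all"] = 1
model = smf.mixedlm(
    "y ~ cond", df, groups="all",
    vc_formula={"subject": "0 + C(subject)", "item": "0 + C(item)"},
)
print(model.fit().summary())
```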

Understanding Meta-Analysis Through Data Simulation With Applications to Power Analysis Filippo Gambarota, Gianmarco Altoè  

Meta-analysis is a powerful tool to combine evidence from existing literature. Despite several introductory and advanced materials about organizing, conducting, and reporting a meta-analysis, to our knowledge, there are no introductive materials about simulating the most common meta-analysis models. Data simulation is essential for developing and validating new statistical models and procedures. Furthermore, data simulation is a powerful educational tool for understanding a statistical method. In this tutorial, we show how to simulate equal-effects, random-effects, and metaregression models and illustrate how to estimate statistical power. Simulations for multilevel and multivariate models are available in the Supplemental Material available online. All materials associated with this article can be accessed on OSF ( https://osf.io/54djn/ ).   
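In the same spirit (the tutorial’s own simulations and materials live in its OSF repository), here is a minimal Monte Carlo sketch of power for a random-effects meta-analysis; all parameter values are illustrative assumptions, and the heterogeneity variance is treated as known for brevity.

```python
# Minimal sketch: estimating the power of a random-effects meta-analysis
# by simulation. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def simulate_power(k=15, mu=0.3, tau2=0.05, n_per_arm=50, n_sims=2000):
    """Share of simulated meta-analyses whose pooled effect is significant."""
    hits = 0
    for _ in range(n_sims):
        theta = rng.normal(mu, np.sqrt(tau2), k)        # true study effects
        v = 2 / n_per_arm + theta**2 / (4 * n_per_arm)  # approx. var of d
        d = rng.normal(theta, np.sqrt(v))               # observed effects
        # Inverse-variance weights; tau2 assumed known for simplicity
        # (a full simulation would estimate it, e.g. via DerSimonian-Laird).
        w = 1 / (v + tau2)
        pooled = np.sum(w * d) / np.sum(w)
        se = np.sqrt(1 / np.sum(w))
        hits += abs(pooled / se) > 1.96
    return hits / n_sims

print(simulate_power())
```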

Published on 15.5.2024 in Vol 26 (2024)

Rapport Building in Written Crisis Services: Qualitative Content Analysis

Authors of this article:

Original Paper

  • Laura Schwab-Reese 1 , BSc, MA, PhD;
  • Caitlyn Short 1 , BSc, MPH;
  • Larel Jacobs 2 , MEd, MSc;
  • Michelle Fingerman 2 , MSc

1 Department of Public Health, Purdue University, West Lafayette, IN, United States

2 Childhelp, Scottsdale, AZ, United States

Corresponding Author:

Laura Schwab-Reese, BSc, MA, PhD

Department of Public Health

Purdue University

West Lafayette, IN, 47907

United States

Phone: 1 765 496 6723

Email: [email protected]

Background: Building therapeutic relationships and social presence are challenging in digital services and may be even more difficult in written services. Despite these difficulties, in-person care may not be feasible or accessible in all situations.

Objective: This study aims to categorize crisis counselors’ efforts to build rapport in written conversations, using deidentified conversation transcripts from the text and chat arms of the National Child Abuse Hotline. Using these categories, we identify the common characteristics of successful conversations. We defined success as conversations where help-seekers reported that the hotline was a good way to seek help and that they were a lot more hopeful, a lot more informed, a lot more prepared to address the situation, and experiencing less stress.

Methods: The sample consisted of transcripts from 314 purposely selected conversations from the 1153 text and chat conversations during July 2020. Hotline users answered a preconversation survey (ie, demographics) and a postconversation survey (ie, their perceptions of the conversation). We used qualitative content analysis to process the conversations.

Results: Active listening skills, including asking questions, paraphrasing, reflecting feelings, and interpreting situations, were commonly used by counselors. Validation, unconditional positive regard, and evaluation-based language, such as praise and apologies, were also used often. Compared with less successful conversations, successful conversations tended to include fewer statements that attended to the emotional dynamics. There were also qualitative differences in how the counselors applied these approaches: crisis counselors in successful conversations tended to be more specific and to tailor their comments to the situation.

Conclusions: Building therapeutic relationships and social presence are essential to digital interventions involving mental health professionals. Prior research demonstrates that they can be challenging to develop in written conversations. Our work demonstrates characteristics associated with successful conversations that could be adopted in other written help-seeking interventions.

Introduction

Since the 1990s, mental health providers have explored how to support clients via internet-based communication [ 1 ]. Prior work suggests that young people may be particularly interested in these approaches, as digital communication feels more private and emotionally safe [ 2 ]. However, internet-based communication, particularly written communication, may have significant barriers for providers and clients, including the inability to express emotion and challenges in communicating clearly [ 2 ]. Currently, there is limited evidence on how to overcome these communication issues in counseling settings [ 3 ]. Understanding how to do so may help telehealth providers build stronger therapeutic relationships, thus improving the help-seeking process. Further, this understanding may help agencies improve services and training for providers.

Technology-Based Mental Health Interventions

Technology-based (ie, telehealth) mental health services may not be as effective as in-person services. One recent meta-analysis suggests that videoconferencing-based therapeutic relationships may be inferior to those developed during in-person therapy [4]. However, there may be times when in-person care is not accessible or feasible. Nearly half of people in the United States live in a mental health shortage area, and some areas have fewer than 2 psychiatrists per 100,000 residents [5, 6]. As a result, increasing access to mental health care may depend on telehealth approaches. Within telehealth studies, interventions retaining elements of human contact are more effective than entirely computer-based interventions [7, 8].

Two critical aspects of the helping relationship, therapeutic relationship and social presence, may be challenged when engaging digitally. A therapeutic relationship based on mutual trust, respect, empathy, and positive regard is essential in counseling [ 9 ]. Hundreds of studies have confirmed the importance of this collaborative relationship [ 10 ]. For most help-seekers, confidence in the provider, including perceptions of empathy and expertise, is key to developing a strong relationship [ 11 - 13 ].

Social presence [ 14 ], the sense of connecting and being with another, is another element that may be compromised during digital communication. Social presence may also be defined as the degree to which the other person feels “real” [ 15 ]. Although it is a natural element of face-to-face counseling, telehealth providers may have to be intentional in building a social presence. When conversing with unknown entities through written technology, it is common to question whether the other person is a human or a bot [ 16 ], in part because people are not reliably able to differentiate between the two [ 17 ]. Some prior work suggests that social presence is an important aspect of digital helping relationships because it assists in building therapeutic partnerships, professional bonds, and open communication [ 18 , 19 ].

Much of the literature on telehealth counseling focuses on verbal communication via videoconferencing or phone [ 20 ]. Few studies examine written mental health counseling services, and there is reason to believe that spoken and written communication are substantially different. In a recent review of the digital therapeutic relationship, Bantjes and Slabbert [ 20 ] suggest practical strategies for establishing rapport in digital spaces, such as maintaining eye contact, having high-speed internet to avoid lags, and attending to lighting and microphone placement. These strategies improve the audio and visual cues, which are not applicable to written communication.

Written Interventions

The literature on written counseling is limited [ 21 , 22 ]. In the 1990s, a small group of mental health providers offered therapy via email [ 23 ]. This early work identified several possible strengths and limitations. It was helpful for clients to write about their feelings, and the anonymity of email allowed them to share more openly. This asynchronous approach also increased many individuals’ sense of control, as they could choose when and where to engage with the therapist. Conversely, building a relationship and understanding nuances could be difficult without the usual social cues [ 23 ]. Two more recent literature reviews support many of the impressions formed by the early adopters, although most of the studies had very small sample sizes [ 22 , 24 ].

Written interventions may be challenging for providers and patients alike, and both experience similar difficulties. One randomized controlled trial of chat-based cognitive behavioral therapy demonstrated reduced depression symptoms after 10 sessions [25]. In a parallel qualitative study, participants reported mixed perceptions of the experience [26]. Some reported feeling more able to share openly and process because of the anonymous platform. Others felt it was challenging to develop a relationship and express complex feelings and thoughts via writing [26]. Another study assessed differences between email-based cognitive behavioral therapy and unguided treatment. The email and unguided programs had better outcomes than the wait-list control group for some, but not all, outcomes [27, 28]. Other studies, with and without in-person or telephone comparison groups, showed similarly mixed results [29, 30]. In one unpublished dissertation, counselors who provided email services reported feeling substantial anxiety due to uncertainty, limited sensory information, and concerns about misunderstanding clients’ intentions [31]. The lack of visual, verbal, and social cues was particularly challenging [31]. To manage these uncertain dynamics, they often focused more on the tasks and transactional aspects of helping [31]. Many also talked about needing much more time than usual to build the therapeutic relationship, although it did eventually happen for most [31].

Beyond mental health counseling, some recent work has examined written communication for brief counseling and advocacy [3, 32, 33]. Overall, privacy, autonomy, control, anonymity, and accessibility are seen as benefits of written services [34-36]. Building social presence and connection is an important aspect of the experience [34]. Often this professional connection builds over time, but because the help-seeker and crisis counselor or advocate do not have an ongoing relationship, it may be particularly difficult to communicate adequately and build a relationship [2, 30, 37, 38]. Correctly understanding sarcasm, humor, and other nuanced language is particularly challenging in these brief interventions [3, 39]. As with mental health counseling, the impact of written crisis and advocacy services is unclear in the current literature and may depend on geographical location, counselor training, and the help-seekers’ situations [33, 40-46].

Study Purpose

Overall, establishing a human connection based on a strong therapeutic relationship and social presence will likely result in more effective, acceptable interventions. Providing crisis services is complex, and the confines of written communication create additional challenges. Rapport-building is particularly difficult, and mistakes may cause the help-seeker to feel worse [ 47 ]. However, there are not yet best practices for building rapport in these conversations, as existing approaches to rapport-building often depend on verbal and nonverbal cues [ 48 ]. As part of a larger study focused on building best practices for written hotlines, we worked with a child maltreatment-focused text or chat hotline. This analysis aims to categorize crisis counselors’ efforts to build rapport and convey active listening in written conversations. Using these categories, we identified characteristics associated with successful conversations, as reported by help-seekers. This work provides an important foundation for how to build therapeutic relationships in written mental health and hotline services.

Data Source

The data for this study are from the PACTECH (Prevent Abuse of Children Text and Chat Hotline), the text and chat arm of the Childhelp National Child Abuse Hotline [49]. Since 1982, Childhelp has offered 24/7 phone-based hotline services focused on support and resources related to child maltreatment. In 2018, the hotline expanded to include text and chat capabilities. Crisis counselors are employees rather than volunteers. Most are master’s-level professionals with specialized training in hotline services and child maltreatment. After conducting a quantitative pilot evaluation for 2 years, hotline leadership partnered with the lead author to use qualitative and mixed methods approaches to identify best practices for services. As part of the data sharing agreement, the lead author and her research team received access to deidentified transcripts and metadata from conversations that were purposefully selected to represent a wide range of experiences and perceived outcomes.

Ethical Considerations

The Purdue University Institutional Review Board approved the research protocol (IRB-2020-965). The service terms and conditions disclosed that data may be shared with researchers. As a secondary data analysis of deidentified data, additional consent from participants was not required by the Institutional Review Board. The contract teams from Purdue University and Childhelp negotiated the terms of the data sharing agreement, including data security and access. As a result of the data sharing agreement, the data may not be released publicly.

The sample consists of 314 purposely selected conversations out of the 1153 text and chat conversations during July 2020. In addition to maintaining the written transcript of the conversation for 60 days, Childhelp collects preconversation and postconversation surveys from the help-seekers. The preconversation surveys focus on help-seeker characteristics (ie, age, gender, state of residence, and referral source), while the postconversation survey focuses on their perceptions of the conversation (eg, do they feel more hopeful, less stressed, and more prepared). We used maximum variation sampling to capture diverse help-seekers and outcomes, although not necessarily in the proportions present in the overall data [ 50 ]. This approach is particularly useful when looking for diverse perspectives, as was the case for our study. We sampled based on the preconversation and postconversation surveys. In our sample, 297 (94.6%) help-seekers answered at least 1 presurvey question, and 263 (83.8%) answered at least 1 postconversation survey question. First, we selected conversations where help-seekers reported that they were satisfied, unsatisfied, or mixed. We also included some conversations without surveys to reduce survey response bias. Then, we reviewed the demographic characteristics of the selected conversations to ensure help-seekers of different ages, races or ethnicities, and genders were included in the sample. For example, most help-seekers are girls, so there were relatively few conversations with boys in our initial sample. We added additional conversations with boys to ensure the results were not only relevant to girls.
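For illustration only, maximum variation sampling on survey strata might look like the following sketch; the data frame, strata, and counts are hypothetical and do not reproduce the authors’ selection process.

```python
# Illustrative sketch of maximum variation sampling on survey strata; the
# data frame, strata, and counts are hypothetical, not the study's code.
import pandas as pd

meta = pd.DataFrame({
    "conversation_id": range(12),
    "satisfaction": ["satisfied", "unsatisfied", "mixed", "no_survey"] * 3,
    "gender": ["girl", "girl", "boy", "girl", "boy", "girl"] * 2,
})

# Draw from every satisfaction stratum so diverse outcomes are represented,
# rather than sampling in proportion to the overall data.
sample = meta.groupby("satisfaction").sample(n=2, random_state=1)

# Audit demographics, then top up under-represented groups (eg, boys).
print(sample["gender"].value_counts())
```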

We analyzed and reported the findings from all 314 conversations. When reporting quotes, however, we were particularly interested in the 45 conversations where help-seekers reported in the postconversation survey that the hotline was a good way to seek help and that they were a lot more hopeful, a lot more informed, a lot more prepared to address the situation, and experiencing less stress. Except when specifically referencing less successful conversations, all example quotes come from these conversations, as they represent those most successful from the help-seekers’ perspectives. All quotes are reported verbatim from the conversations, including any errors.

We used qualitative content analysis to process the conversations. We used both inductive and deductive processes to develop the codes. The first draft of the coding frame was based on our work with child maltreatment–related conversations within the Crisis Text Line [ 51 - 53 ]. Then, we revised the framework based on the content of the conversations.

Our development process followed the adaptation of grounded theory described by Schreier [54]. The lead author and her graduate research assistant reviewed all the conversations. During a second review of the conversations, we took notes on commonalities within the conversations, emphasizing material not captured in the first draft of the codebook. As we refined the codebook, all team members met weekly to discuss emerging materials and define and develop codes. After completing the framework and definitions, we coded 30 conversations and met to compare the code applications. We discussed differences in coding and refined the framework with the entire team. Then, we coded 30 additional conversations and assessed coder agreement. After the second round of pilot coding, we reached 95% agreement on the codes and moved to coding the full data set. In sum, we had 127 codes in the codebook, which were applied 22,326 times. After coding all the conversations, we reviewed the materials within each code. This process followed the segmentation process described by Schreier [54], where coded materials are decontextualized and reviewed to identify commonalities and themes. Through this process, we also assessed whether we had reached saturation, which occurs when all categories have been identified in the data set. Schreier’s [54] definition of saturation for qualitative content analysis differs from that used in other forms of qualitative analysis, where saturation refers to the point at which reviewing additional material does not provide new information. We informally assessed this second type of saturation by examining whether all codes were still used if we considered only half of the sample. We found that all codes were used when we reviewed 2 different randomly selected split samples, which suggests that few new insights would be gained by adding conversations to our sample. After conducting these checks, we categorized the conversations by the outcomes and focused on similarities and differences across the groups.
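The informal split-sample saturation check described above is easy to express in code. The sketch below is a hypothetical illustration (the code names and data structures are invented): it asks whether every code in the codebook still appears when only a random half of the conversations is considered.

```python
# Illustrative sketch of a split-sample saturation check: do all codes still
# appear if only a random half of the conversations is used?
import random

# Hypothetical: conversation id -> set of codes applied in that conversation.
coded = {
    1: {"validation", "praise"},
    2: {"open_question", "apology"},
    3: {"validation", "interpretation"},
    4: {"praise", "open_question"},
    5: {"apology", "interpretation"},
    6: {"validation", "apology"},
}
all_codes = set().union(*coded.values())

random.seed(7)
half = random.sample(sorted(coded), k=len(coded) // 2)
codes_in_half = set().union(*(coded[c] for c in half))

missing = all_codes - codes_in_half
print("saturation holds on this split" if not missing else f"missing: {missing}")
```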

For this analysis, we focused on the codes related to rapport building and active listening. Crisis counselors used two main types of approaches: (1) counseling approaches and (2) evaluation-based language. Active listening skills, also known as attending skills, are how counselors build connections with clients, express empathy, and convey that they are listening [48]. Definitions vary slightly, but asking questions, paraphrasing, reflecting feelings, and interpreting or summarizing the situation are generally recognized skills. We added validation [55] and unconditional positive regard [56], which are also commonly incorporated into helping relationships. Evaluation-based language, such as praise and apologies, is commonly used by adults when talking with children [57]. These statements differ from other approaches because they include the counselor’s evaluation of the situation.

We also examined how these approaches differed between the help-seekers most satisfied with the conversation (ie, answered all after-conversation survey questions as “Yes”) and those who were least satisfied with the conversation. We intended to define the least satisfied as those who answered all the after-conversation survey questions as “No.” However, only 4 people fit that criterion, so we included all help-seekers who answered most of the questions negatively.

Research Team

The research team included the lead author, a graduate research assistant, and 2 collaborators at Childhelp. The lead author is a family violence prevention researcher with a PhD in public health and an MA in counseling. She has experience conducting qualitative analyses of written hotline transcripts. The graduate research assistant was a master of public health student and had worked on the lead author’s research team for 3 years. She had experience with qualitative child maltreatment research. The Childhelp collaborators have substantial experience in hotline counseling and leadership. One has an MS in counseling psychology. The second has an MS in family and human development and an MEd in guidance counseling.

Help-Seeker Characteristics

Overall, our sample of help-seekers was generally similar to Childhelp’s overall text and chat users ( Table 1 ) [ 58 , 59 ]. Help-seekers tended to be female, young, and seeking help for themselves. Overall, they were generally at least a little more hopeful, informed, and prepared to deal with the situation after the conversation ( Table 2 ).

[Tables 1 and 2 are not reproduced here. The note to Table 1 indicates that the sample includes children who were distressed but did not necessarily describe events consistent with maltreatment. The postconversation survey items summarized in Table 2 were: (a) Do you feel more positive or hopeful after this chat/text session? (b) Did you get the information you needed from this chat or text session? (c) Do you feel better prepared to deal with the situation after this chat or text session? (d) Do you feel less stress after the chat or text session? (e) Was using chat or text a good way for you to get help?]

Active Listening Skills

Paraphrasing Information and Feelings

When paraphrasing (387 times across 170 conversations), the crisis counselor repeated what was said by the help-seeker in a way that honed the focus of the conversation. Often, it included the most important words shared by the help-seeker, along with a shortened, clarified version of the essential information or feelings. For example, when seeking to understand the situation, a crisis counselor said, “It does not sound like she is able to listen to your needs and wants at this time.” At other times, the crisis counselor wanted to convey that they have been listening. Saying, “...you mentioned that they are screaming at him and from what you have said it sounds like they might be being really aggressive with him” demonstrated that they have been paying attention to the information shared.

Sometimes, the crisis counselors reflected the feelings shared by the help-seeker, saying things like, “That sounds like it can be frustrating from what you shared,” “it sounds very overwhelming and scary,” or “I can see how stressful this is.” In these situations, the crisis counselor was often distilling the feelings to support the help-seeker in identifying what is most bothering them about the situation or what feeling is driving their response to the situation. Once the help-seeker recognized the most troubling aspect of the situation, they were often more able to brainstorm ways to address it with the crisis counselor.

Interpretation

Interpreting the situation was also common (236 times across 125 conversations). Often, help-seekers were confused or had ambivalent thoughts about the situation. In these cases, they usually struggled to identify the next steps or reduce their emotional activation. By interpreting the situation, the crisis counselors offered a coherent overview of the situation and a different perspective. In most active listening skills, crisis counselors stayed quite close to the information provided by the help-seeker (eg, paraphrasing or reflecting what was said). When interpreting the situation, crisis counselors often included their perspectives on the situation with the intent of supporting the help-seeker to see themes or new ideas. For example, one help-seeker shared that their caregivers regularly say hurtful things about their gender identity and sexual orientation, scream and yell, and tell the help-seeker that they are a disappointment. In response, the crisis counselor said, “Sounds like it would be very hard to be happy living with people who treat you like that.” Although the help-seeker had not overtly shared about their unhappiness, this interpretation led to the help-seeker sharing about active suicidal ideation.

Open Questions

Open questions (208 across 124 conversations) served multiple purposes. At the beginning of the conversations, they invited the help-seeker to share about the experience. For example, “Could you tell me what’s going on?” or “What’s making you feel unsafe?” was used to begin the conversation in a nonthreatening way. As the conversation moved to explore the issues, open questions could elicit specific details (eg, “What’s happened since then?” and “What does that mean?”) or focus attention on feelings (eg, “How does it make you feel when your mom lashes out?” and “How are you feeling about all this happening?”).

Other Common Counseling Approaches

Validation was the most used approach to active listening (647 times across 226 conversations), and it took many forms depending on the situation. Throughout the conversations, it was used to affirm the help-seeker, their feelings, and their thoughts. For example, one counselor said, “It can be hard living in a house where you don’t feel supported and respected.” In this situation, the help-seeker had a difficult relationship with a father, who regularly called the help-seeker “overdramatic or a crybaby.” By validating the difficulty of feeling unsupported, the crisis counselor communicated that the help-seeker and their feelings were important.

In other instances, the crisis counselor validated the help-seeker’s perspectives about what was or was not appropriate behavior within families. In one instance, a help-seeker shared concerns about an older sibling’s treatment of an infant. The brother was rough with the infant and burned the infant with hot milk. In response, the crisis counselor said, “I can see why you would be concerned for the baby’s safety.” In doing so, the crisis counselor communicated that the help-seeker’s feelings were valid but without confirming that the infant was being maltreated. The crisis counselor had not seen evidence of the situation, so they could not accurately validate whether the infant was being maltreated. Simple phrases, such as “I hear you. This is difficult,” “That must be really hard for you,” and “It’s okay to feel stressed that is normal,” also validated the help-seeker and their perspectives.

Unconditional Positive Regard

Unconditional positive regard (102 times across 66 conversations) occurred when crisis counselors provided basic acceptance and support of the help-seeker, regardless of their behavior or things that have been done to them. Unconditional positive regard primarily focused on the abuse experience. It was common for counselors to say things like, “No one deserves to be abused” or “No one deserves to be treated like that.” These statements were often particularly well received by help-seekers, like this example:

You don’t deserve to be emotionally abused. It’s not o.k. [Counselor]
Thank you for saying that. You are the first person ive ever talked about this personally with. [Help-seeker]

Evaluation-Based Language

Evaluation-based language involved a judgment by the crisis counselor about whether an aspect of the help-seekers’ experiences was good (eg, behavior worthy of praise) or bad (eg, an apology for something that happened to the help-seeker). Evaluation and judgment are generally not a part of helping relationships [48, 60, 61] but are quite common when adults speak with children [57, 62]. Nevertheless, there is nothing inherently wrong with using these approaches intentionally.

Praise (268 times across 145 conversations) occurred when the crisis counselor conveyed that they approved of the help-seekers or their behavior. Sometimes, praise focused on the behaviors occurring during the conversations, like “Thank you for sharing with me” and “I’m glad you reached out today.” At other times, the praise centered on behaviors that help-seekers would do in the future, such as “Yes, I believe you’re doing the right thing by calling” and “I think that will be a good move for you.”

Apologies (372 times across 213 conversations) tended to focus on the help-seeker’s situation or issues with the hotline. Apologies for the hotline were usually about a technical difficulty (eg, “sorry, our system is not working well”). Apologies about the help-seeker’s situation could be very broad, such as “I’m so sorry to hear about all of this” and “I’m so sorry that you’re having to go through this.” Apologies could also be specific to the situation, like “I am sorry to hear Mom yelled at you yesterday too.”

Differences Between Successful and Less Successful Conversations

There were some differences in active listening skills, other counseling skills, and evaluation-based language between successful and less successful conversations. Although the sample of successful and less successful conversations was too small for formal statistical analysis, some commonalities emerged. First, although conversations were approximately the same length, less successful conversations tended to have more statements that attended to rapport building. Second, there were also differences in how the counselors applied these approaches. Unlike the preceding sections, this section includes quotes from both successful and less successful conversations.

Overall, counselors in less successful conversations tended to be vague or to directly repeat what was said by the help-seeker. These differences were particularly apparent when counselors were paraphrasing, asking open questions, or apologizing. For example, paraphrasing in less successful conversations tended to be either very vague (eg, “It sounds like you are being hurt already”) or very specific (eg, “I am hearing you have some future plans to get a job and earn your own money...”). In the last example, the help-seeker used the same phrasing in their previous statement. Conversely, successful conversations tended to be specific without direct repetition (eg, “Sounds like they are something to help you cope”). Similarly, less successful conversations tended to include open questions that were either broad (eg, “What’s happening?”) or focused on clarifying how the crisis counselor could help (eg, “How are you hoping that I can help?”). Some successful conversations also included questions clarifying how the crisis counselor could help, but it was more common to ask more specific questions, like “What is it that you would like to vent about?” and “What are their thoughts on CPS involvement?” Finally, crisis counselors used generally vague apologies about the situation in less successful conversations. Saying things like “I am so sorry this happened to you” or “I’m sorry to hear that” was common. Although some successful conversations also included these types of apologies, it was more common to pair the apology with a specific reason, such as “I’m so sorry that you have been experiencing this for so long” or “I am sorry to hear Mama is sick.”

Principal Results

Overall, our study suggests that it is possible to build therapeutic relationships via a text and chat hotline with individuals seeking child maltreatment–related information and support. Approximately 15% (n=45) of our sample reported that the hotline was a good way to seek help and that they were a lot more hopeful, a lot more informed, a lot more prepared to address the situation, and experiencing less stress. However, our sample was intentionally selected to represent a wide range of help-seeker perceptions, so this does not indicate that 15% of the hotline’s help-seekers felt this way. Based on the 2022 Childhelp data report, about 85% of help-seekers reported getting the information they needed, 80% of help-seekers reported feeling more hopeful after the conversation, and 75% reported feeling better prepared to deal with the situation [ 59 ]. These percentages suggest that the hotline provides a well-received service.

Generally, counselors built rapport through active listening skills, other counseling techniques, and evaluation-based language (ie, apologies, praise). Through active listening skills and other counseling techniques, counselors often expressed that they were listening, wanted to understand the help-seekers, and cared for them. They expressed their approval or disapproval of the help-seekers and aspects of their experiences through evaluation-based language. Although there is nothing inherently wrong with using apologies and praise, they tend to be avoided in many therapeutic approaches. Praise may undermine intrinsic motivation (ie, internal drive) and reduce engagement in the process [ 63 - 65 ]. Further, these types of evaluation-based language are rooted in control, as they are given based on something that another individual (ie, the crisis counselor) deems desirable [ 63 ]. As a result, the help-seeker might seek praise by giving answers that they believe the crisis counselor wants to receive instead of accurate answers, which may reduce the benefit of the conversation. However, praise and compliments may be a quick way to build encouraging feelings [ 66 ]. As it is challenging to build relationships via writing, praise may be one way to build a relationship quickly. Additional research into the impact of evaluation-based language is necessary to understand its role in written crisis counseling.

There were some differences between successful and less successful conversations. Surprisingly, less successful conversations tended to include more attending language than successful conversations. However, there were differences in the ways that crisis counselors applied these techniques. Overall, the crisis counselors in successful conversations tended to be more specific and to tailor their responses to the help-seekers. Possibly, counselors who gave tailored responses built rapport more quickly; thus, fewer attending statements were required. If this is the case, they could move to problem-solving more quickly, which may also contribute to help-seekers’ perceptions that they were more prepared to address the situation and were more informed. These tailored responses may increase help-seekers’ perceptions that the crisis counselor is invested in the conversation. Several help-seekers in this sample explicitly asked if they were speaking with a bot. Having tailored responses may increase crisis counselors’ social presence and reduce help-seekers’ concerns about whether they are “real.” As organizations consider using large language models and chatbots in these types of services, careful attention should be given to help-seekers’ perceptions about the service and its appropriateness for the audience. As the National Eating Disorders Association learned when its wellness chatbot began providing diet information, large language models trained on outside data may not be a good fit for conversations with help-seekers [67].

Limitations

Our work has several limitations, including some inherent to secondary data analysis. First, we could not speak with the help-seekers or the counselors about the conversations. Although we were able to identify similarities across well-received conversations, it is possible that other aspects of the conversations contributed to help-seekers’ perceptions. Second, we do not know how these conversations shaped long-term outcomes. Moreover, it is difficult to follow up with help-seekers, as evidenced by the 6% response rate to a 2-week follow-up survey conducted by the National Domestic Violence Hotline [ 68 ]. Further, many of the help-seekers in this sample indicated that it is unsafe to speak aloud about their experiences, so qualitative data collection with this sample would likely have an even lower response rate. It would be more feasible to speak with counselors about their experiences, but their perspectives may be disconnected from those of the help-seekers. Despite this limitation, we incorporated the help-seekers’ perspectives through the postconversation survey, which is more than is usually possible in secondary data analysis.

Our work may not generalize to conversations unrelated to child maltreatment. Because the hotline is specific to child maltreatment, all conversations included elements of child maltreatment, and conversations about other topics may require other approaches. However, our results are consistent with prior work on building rapport in other forms of counseling [ 48 , 55 , 56 ], so it is reasonable to expect that these findings would translate to written conversations about other topics.

Comparison With Prior Work

To the best of our knowledge, there is no other work examining specific ways to build a therapeutic relationship in written mental health counseling or crisis counseling. However, the ways that crisis counselors attended to the dynamics of the conversations were generally similar to those documented in in-person counseling [ 48 , 55 , 56 ].

Telehealth approaches to counseling may be particularly important for young people experiencing maltreatment. Other formal resources, such as law enforcement, schools, and child protection systems, often fail to respond adequately [ 53 , 69 , 70 ]. Further, internet-based approaches, particularly written approaches, are highly acceptable to young people experiencing maltreatment [ 69 ]. In our sample, as in past research, children shared that they could not call resources because an audible conversation would alert parents that they were seeking help. In work conducted with Crisis Text Line, it was common for young people disclosing child maltreatment to report that the abuse escalated when parents discovered their attempts to seek help. Written, anonymous communication that is available 24/7 may therefore be a safer way to seek help, making it particularly important for children in unsafe homes.

There is also limited evidence on how to respond when young people share maltreatment experiences. Regardless of whether they can affect or end the maltreatment, individuals who receive a child maltreatment disclosure need to respond in an appropriate, supportive way [ 71 - 74 ]. Supportive responses encourage the young person experiencing maltreatment to reframe their experience, which substantially reduces the likelihood of the poor outcomes otherwise associated with maltreatment [ 75 ]. Conversely, unsupportive experiences often have long-lasting consequences [ 74 , 76 ]. Receiving a hurtful or unsupportive response increases the likelihood that the young person will experience more significant physical and mental health issues [ 74 , 76 , 77 ]. Unfortunately, many young people receive unsupportive responses to their disclosures [ 53 , 70 ]. Often, they report that others, particularly adults, do not believe them and are unwilling to help [ 70 , 78 ]. These experiences reduce their willingness to seek help or share their experiences in the future [ 70 ]. Our work suggests that it is possible to respond adequately to these disclosures in written conversations.

Our work also contributes to a small body of literature on using text and chat hotlines to provide services to people experiencing violence more generally. Michigan State University added chat services to its existing sexual assault support and advocacy hotline. The evaluation identified many of the benefits and limitations seen in other forms of written counseling, including challenges with nuance, misunderstanding of written language, and communicating empathy [ 3 ]. However, the format also gave help-seekers a greater sense of control [ 3 ]. Another study focused on agencies providing digital violence-related support and advocacy services [ 32 ]. This work also emphasized the importance of clear communication and building rapport, although help-seeker perceptions of these factors were not assessed [ 32 ].

Conclusions

Building therapeutic relationships and social presence are important components of digital interventions involving mental health professionals. Prior research suggests that both can be challenging to develop in written conversations. Our work identifies characteristics of conversations associated with greater satisfaction among help-seekers. These findings may be adopted by other organizations building mental health or support interventions that include written communication. However, additional research is needed to identify how to train providers to adopt these strategies while also tailoring their approach to the help-seeker. Our findings may also inform future work with large language models, including how such models could contribute to these interventions; however, future research is needed to understand how help-seekers would interact with these tools and to ensure that the models consistently convey appropriate, supportive information.

Acknowledgments

This project was supported by the Children’s Bureau (CB) and Administration for Children and Families (ACF) of the US Department of Health and Human Services (HHS) as part of a financial assistance award in the amount of US $6,000,000 that was 100% funded by the CB and ACF of the HHS. The contents are those of the authors and do not necessarily represent the official views of, nor an endorsement by, the CB and ACF of the HHS or the US government. For more information, please refer to the ACF administrative and national policy requirements.

Conflicts of Interest

None declared.

  • Murphy LJ, Mitchell DL. When writing helps to heal: e-mail as therapy. Br J Guid Counc. 1998;26(1):21-32. [ CrossRef ]
  • King R, Bambling M, Lloyd C, Gomurra R, Smith S, Reid W, et al. Online counselling: the motives and experiences of young people who choose the internet instead of face to face or telephone counselling. Couns Psychother Res. 2006;6(3):169-174. [ CrossRef ]
  • Moylan CA, Carlson ML, Campbell R, Fedewa T. "It's hard to show empathy in a text": developing a web-based sexual assault hotline in a college setting. J Interpers Violence. 2022;37(17-18):NP16037-NP16059. [ CrossRef ] [ Medline ]
  • Norwood C, Moghaddam NG, Malins S, Sabin-Farrell R. Working alliance and outcome effectiveness in videoconferencing psychotherapy: a systematic review and noninferiority meta-analysis. Clin Psychol Psychother. 2018;25(6):797-808. [ CrossRef ] [ Medline ]
  • Cheng N, Mohiuddin S. Addressing the nationwide shortage of child and adolescent psychiatrists: determining factors that influence the decision for psychiatry residents to pursue child and adolescent psychiatry training. Acad Psychiatry. 2022;46(1):18-24. [ CrossRef ] [ Medline ]
  • Morales DA, Barksdale CL, Beckel-Mitchener AC. A call to action to address rural mental health disparities. J Clin Transl Sci. 2020;4(5):463-467. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Richards D, Richardson T. Computer-based psychological treatments for depression: a systematic review and meta-analysis. Clin Psychol Rev. 2012;32(4):329-342. [ CrossRef ] [ Medline ]
  • Spek V, Cuijpers P, Nyklícek I, Riper H, Keyzer J, Pop V. Internet-based cognitive behaviour therapy for symptoms of depression and anxiety: a meta-analysis. Psychol Med. 2007;37(3):319-328. [ CrossRef ] [ Medline ]
  • Torous J, Hsin H. Empowering the digital therapeutic relationship: virtual clinics for digital health interventions. NPJ Digit Med. 2018;1:16. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Karver MS, Handelsman JB, Fields S, Bickman L. Meta-analysis of therapeutic relationship variables in youth and family therapy: the evidence for different relationship variables in the child and adolescent treatment outcome literature. Clin Psychol Rev. 2006;26(1):50-65. [ CrossRef ] [ Medline ]
  • Finsrud I, Nissen-Lie HA, Vrabel K, Høstmælingen A, Wampold BE, Ulvenes PG. It's the therapist and the treatment: the structure of common therapeutic relationship factors. Psychother Res. 2022;32(2):139-150. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Elliott R, Bohart AC, Watson JC, Murphy D. Therapist empathy and client outcome: an updated meta-analysis. Psychotherapy (Chic). 2018;55(4):399-410. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nienhuis JB, Owen J, Valentine JC, Black SW, Halford TC, Parazak SE, et al. Therapeutic alliance, empathy, and genuineness in individual adult psychotherapy: a meta-analytic review. Psychother Res. 2018;28(4):593-605. [ CrossRef ] [ Medline ]
  • Short J, Williams E, Christie B. The Social Psychology of Telecommunications. Hoboken, NJ. John Wiley & Sons; 1976.
  • Gunawardena CN. Social presence theory and implications for interaction and collaborative learning in computer conferences. Int J Educ Telecommun. 1995;1(2):147-166. [ FREE Full text ]
  • Batish R. Voicebot and Chatbot Design: Flexible Conversational Interfaces with Amazon Alexa, Google Home, and Facebook Messenger. Birmingham, UK. Packt Publishing Ltd; 2018.
  • Warwick K, Shah H. The importance of a human viewpoint on computer natural language capabilities: a Turing test perspective. AI Soc. 2016;31(2):207-221. [ CrossRef ]
  • Lopez A. An investigation of the use of internet based resources in support of the therapeutic alliance. Clin Soc Work J. 2014;43(2):189-200. [ CrossRef ]
  • Holmes C, Foster V. A preliminary comparison study of online and face-to-face counseling: client perceptions of three factors. J Technol Hum Serv. 2012;30(1):14-31. [ CrossRef ]
  • Bantjes J, Slabbert P. The digital therapeutic relationship: retaining humanity in the digital age. In: Stein DJ, Fineberg NA, Chamberlain SR, editors. Mental Health in a Digital World. Amsterdam. Elsevier; 2022;223-237.
  • Berger T. The therapeutic alliance in internet interventions: a narrative review and suggestions for future research. Psychother Res. 2017;27(5):511-524. [ CrossRef ] [ Medline ]
  • Richards D, Viganó N. Online counseling: a narrative and critical review of the literature. J Clin Psychol. 2013;69(9):994-1011. [ CrossRef ] [ Medline ]
  • Chechele PJ, Stofle G. Individual therapy online via email and internet relay chat. In: Anthony K, editor. Technology in Counselling and Psychotherapy: A Practitioner's Guide. London. Palgrave Macmillan; 2003;39-58.
  • Stoll J, Müller JA, Trachsel M. Ethical issues in online psychotherapy: a narrative review. Front Psychiatry. 2019;10:993. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kessler D, Lewis G, Kaur S, Wiles N, King M, Weich S, et al. Therapist-delivered internet psychotherapy for depression in primary care: a randomised controlled trial. Lancet. 2009;374(9690):628-634. [ CrossRef ] [ Medline ]
  • Beattie A, Shaw A, Kaur S, Kessler D. Primary-care patients' expectations and experiences of online cognitive behavioural therapy for depression: a qualitative study. Health Expect. 2009;12(1):45-59. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vernmark K, Lenndin J, Bjärehed J, Carlsson M, Karlsson J, Oberg J, et al. Internet administered guided self-help versus individualized e-mail therapy: a randomized trial of two versions of CBT for major depression. Behav Res Ther. 2010;48(5):368-376. [ CrossRef ] [ Medline ]
  • Andersson G, Paxling B, Roch-Norlund P, Östman G, Norgren A, Almlöv J, et al. Internet-based psychodynamic versus cognitive behavioral guided self-help for generalized anxiety disorder: a randomized controlled trial. Psychother Psychosom. 2012;81(6):344-355. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Dowling M, Rickwood D. Investigating individual online synchronous chat counselling processes and treatment outcomes for young people. Adv Ment Health. 2015;12(3):216-224. [ CrossRef ]
  • King R, Bambling M, Reid W, Thomas I. Telephone and online counselling for young people: a naturalistic comparison of session outcome, session impact and therapeutic alliance. Couns Psychother Res. 2006;6(3):175-181. [ CrossRef ]
  • Francis-Smith C. Email counselling and the therapeutic relationship: a grounded theory analysis of therapists' experiences [dissertation]. University of the West of England. 2014. URL: https://uwe-repository.worktribe.com/index.php/preview/806312/Thesis%20amended%20for%20repository.pdf [accessed 2024-04-17]
  • Wood L, Hairston D, Schrag RV, Clark E, Parra-Cardona R, Temple JR. Creating a digital trauma informed space: chat and text advocacy for survivors of violence. J Interpers Violence. 2022;37(19-20):NP18960-NP18987. [ CrossRef ] [ Medline ]
  • Gould MS, Chowdhury S, Lake AM, Galfalvy H, Kleinman M, Kuchuk M, et al. National suicide prevention lifeline crisis chat interventions: evaluation of chatters' perceptions of effectiveness. Suicide Life Threat Behav. 2021;51(6):1126-1137. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gibson K, Cartwright C. Young people's experiences of mobile phone text counselling: balancing connection and control. Child Youth Serv Rev. 2014;43:96-104. [ CrossRef ]
  • Evans WP, Davidson L, Sicafuse L. Someone to listen: increasing youth help-seeking behavior through a text-based crisis line for youth. J Community Psychol. 2013;41(4):471-487. [ CrossRef ]
  • Predmore Z, Ramchand R, Ayer L, Kotzias V, Engel C, Ebener P, et al. Expanding suicide crisis services to text and chat. Crisis. 2017;38(4):255-260. [ CrossRef ] [ Medline ]
  • Chardon L, Bagraith KS, King RJ. Counseling activity in single-session online counseling with adolescents: an adherence study. Psychother Res. 2011;21(5):583-592. [ CrossRef ] [ Medline ]
  • Bambling M, King R, Reid W, Wegner K. Online counselling: the experience of counsellors providing synchronous single-session counselling to young people. Couns Psychother Res. 2008;8(2):110-116. [ CrossRef ]
  • Rodda SN, Lubman DI, Cheetham A, Dowling NA, Jackson AC. Single session web-based counselling: a thematic analysis of content from the perspective of the client. Br J Guid Counc. 2015;43(1):117-130. [ CrossRef ]
  • Fukkink RG, Hermanns JMA. Children's experiences with chat support and telephone support. J Child Psychol Psychiatry. 2009;50(6):759-766. [ CrossRef ] [ Medline ]
  • Fukkink R, Hermanns J. Counseling children at a helpline: chatting or calling? J Community Psychol. 2009;37(8):939-948. [ CrossRef ]
  • Sindahl TN, van Dolen W. Texting at a child helpline: how text volume, session length and duration, response latency, and waiting time are associated with counseling impact. Cyberpsychol Behav Soc Netw. 2020;23(4):210-217. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • van Dolen W, Weinberg CB. Child helplines: how social support and controllability influence service quality and well-being. J Serv Mark. 2017;31(4/5):385-396. [ CrossRef ]
  • van Dolen W, Weinberg CB. An empirical investigation of factors affecting perceived quality and well-being of children using an online child helpline. Int J Environ Res Public Health. 2019;16(12):2193. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Navarro P, Bambling M, Sheffield J, Edirippulige S. Exploring young people's perceptions of the effectiveness of text-based online counseling: mixed methods pilot study. JMIR Ment Health. 2019;6(7):e13152. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Thompson LK, Sugg MM, Runkle JR. Adolescents in crisis: a geographic exploration of help-seeking behavior using data from crisis text line. Soc Sci Med. 2018;215:69-79. [ CrossRef ] [ Medline ]
  • Fildes D, Williams K, Bradford S, Grootemaat P, Kobel C, Gordon R. Implementation of a pilot SMS-based crisis support service in Australia. Crisis. 2022;43(1):46-52. [ CrossRef ] [ Medline ]
  • Ivey AE, Packard NG, Ivey MB. Basic Attending Skills. San Diego, CA. Cognella; 2018.
  • The Childhelp National Child Abuse Hotline. Childhelp. 2020. URL: https://www.childhelp.org/hotline/ [accessed 2024-04-17]
  • Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health. 2015;42(5):533-544. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cash SJ, Murfree L, Schwab-Reese L. "I'm here to listen and want you to know I am a mandated reporter": understanding how text message-based crisis counselors facilitate child maltreatment disclosures. Child Abuse Negl. 2020;102:104414. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Schwab-Reese L, Kanuri N, Cash S. Child maltreatment disclosure to a text messaging-based crisis service: content analysis. JMIR Mhealth Uhealth. 2019;7(3):e11306. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Schwab-Reese LM, Cash SJ, Lambert NJ, Lansford JE. "They aren't going to do jack shit": text-based crisis service users' perceptions of seeking child maltreatment-related support from formal systems. J Interpers Violence. 2022;37(19-20):NP19066-NP19083. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Schreier M. Qualitative Content Analysis in Practice. Thousand Oaks, CA. Sage; 2012.
  • Linehan MM. Validation and psychotherapy. In: Bohart AC, Greenberg LS, editors. Empathy Reconsidered: New Directions in Psychotherapy. Washington, DC. American Psychological Association; 1997;353-392.
  • Wilkins P. Unconditional positive regard reconsidered. Br J Guid Counc. 2010;28(1):23-36. [ CrossRef ]
  • Brummelman E, Crocker J, Bushman BJ. The praise paradox: when and why praise backfires in children with low self-esteem. Child Dev Perspect. 2016;10(2):111-115. [ CrossRef ]
  • Wolfersteig W, Moreland D, Diaz M, Gotlieb E. Prevent Abuse of Children Text and Chat Hotline (PACTECH) project: semi-annual data report. Childhelp. Scottsdale, AZ; 2022. URL: https://www.childhelphotline.org/wp-content/uploads/2022/05/PACTECH-Data-Report-April-2022.pdf [accessed 2024-04-17]
  • Hotline impact report. Childhelp. 2022. URL: https://www.childhelphotline.org/wp-content/uploads/2022/10/Hotline-Impact-Report-FY22.pdf [accessed 2024-04-17]
  • Nicholas A, Pirkis J, Reavley N. What responses do people at risk of suicide find most helpful and unhelpful from professionals and non-professionals? J Ment Health. 2022;31(4):496-505. [ CrossRef ] [ Medline ]
  • Rogers CR. A Way of Being. Boston, MA. Houghton Mifflin Harcourt; 1980.
  • Brummelman E, Nelemans SA, Thomaes S, de Castro BO. When parents' praise inflates, children's self-esteem deflates. Child Dev. 2017;88(6):1799-1809. [ CrossRef ] [ Medline ]
  • Kelsey J. The negative impact of rewards and ineffective praise on student motivation. ESSAI. 2011;8(1):24. [ FREE Full text ]
  • Kakinuma K, Nishiguti F, Sonoda K, Tajiri H, Tanaka A. The negative effect of ability-focused praise on the "praiser's" intrinsic motivation: face-to-face interaction. Front Psychol. 2020;11:562081. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Pellecchia M, Nuske HJ, Straiton D, Hassrick ME, Gulsrud A, Iadarola S, et al. Strategies to engage underrepresented parents in child intervention services: a review of effectiveness and co-occurring use. J Child Fam Stud. 2018;27(10):3141-3154. [ CrossRef ]
  • Landrum RE, Gurung RA, Nolan SA, McCarthy MA, Dunn DS. Everyday Applications of Psychological Science: Hacks to Happiness and Health. Milton Park, UK. Routledge; 2022.
  • McCarthy L. A wellness chatbot is offline after its 'harmful' focus on weight loss. The New York Times. 2023. URL: https://www.nytimes.com/2023/06/08/us/ai-chatbot-tessa-eating-disorders-association.html [accessed 2024-04-17]
  • McDonnell K, Nagaraj N, Fuerst M. Short-term outcomes following contact with the national domestic violence hotline and loveisrespect. U.S. Department of Health & Human Services. 2020. URL: https://www.acf.hhs.gov/opre/report/short-term-outcomes-following-contact-national-domestic-violence-hotline-and [accessed 2024-04-17]
  • Al-Eissa MA. Utilization of child helpline (CHL) among adolescents in Saudi Arabia: results from a national survey. Child Fam Soc Work. 2019;24(1):84-89. [ CrossRef ]
  • Tucker S. Listening and believing: an examination of young people's perceptions of why they are not believed by professionals when they report abuse and neglect. Child Soc. 2011;25(6):458-469. [ CrossRef ]
  • Collin-Vézina D, De La Sablonnière-Griffin M, Palmer AM, Milne L. A preliminary mapping of individual, relational, and social factors that impede disclosure of childhood sexual abuse. Child Abuse Negl. 2015;43:123-134. [ CrossRef ] [ Medline ]
  • Goodman-Brown TB, Edelstein RS, Goodman GS, Jones DPH, Gordon DS. Why children tell: a model of children's disclosure of sexual abuse. Child Abuse Negl. 2003;27(5):525-540. [ CrossRef ] [ Medline ]
  • Jensen TK, Gulbrandsen W, Mossige S, Reichelt S, Tjersland OA. Reporting possible sexual abuse: a qualitative study on children's perspectives and the context for disclosure. Child Abuse Negl. 2005;29(12):1395-1413. [ CrossRef ] [ Medline ]
  • Palmer SE, Brown RA, Rae-Grant NI, Loughlin MJ. Responding to children's disclosure of familial abuse: what survivors tell us. Child Welfare. 1999;78(2):259-282. [ Medline ]
  • Briere J, Jordan CE. Violence against women: outcome complexity and implications for assessment and treatment. J Interpers Violence. 2004;19(11):1252-1276. [ CrossRef ] [ Medline ]
  • Arata CM. To tell or not to tell: current functioning of child sexual abuse survivors who disclosed their victimization. Child Maltreatment. 1998;3(1):63-71. [ CrossRef ]
  • Palo AD, Gilbert BO. The relationship between perceptions of response to disclosure of childhood sexual abuse and later outcomes. J Child Sex Abus. 2015;24(5):445-463. [ CrossRef ] [ Medline ]
  • Cossar J, Belderson P, Brandon M. Recognition, telling and getting help with abuse and neglect: young people's perspectives. Child Youth Serv Rev. 2019;106:104469. [ CrossRef ]

Edited by T de Azevedo Cardoso; submitted 19.08.22; peer-reviewed by K Zhang, V Franzoni, B Li, Z Aghaei; comments to author 28.03.23; revised version received 21.08.23; accepted 26.03.24; published 15.05.24.

©Laura Schwab-Reese, Caitlyn Short, Larel Jacobs, Michelle Fingerman. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 15.05.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
