
Research Recommendations – Guiding policy-makers for evidence-based decision making


Research recommendations play a crucial role in guiding scholars and researchers toward fruitful avenues of exploration. In an era marked by rapid technological advancements and an ever-expanding knowledge base, refining the process of generating research recommendations becomes imperative.

But what is a research recommendation?

Research recommendations are suggestions or advice provided to researchers to guide their study on a specific topic. They are typically given by experts in the field. Research recommendations are action-oriented and provide specific guidance for decision-makers, whereas implications are broader and focus on the significance and consequences of the research findings. However, both are crucial components of a research study.

Difference Between Research Recommendations and Implications

Although research recommendations and implications are distinct components of a research study, they are closely related. The differences between them are as follows:

[Figure: Difference between research recommendations and implications]

Types of Research Recommendations

Recommendations in research can take various forms, depending on their purpose and intended audience. In each case, they aim to assist researchers in navigating the vast landscape of academic knowledge.

Let us dive deeper into the key components of research recommendations and the steps to write an impactful one.

Key Components of Research Recommendations

The key components of research recommendations include defining the research question or objective, specifying research methods, outlining data collection and analysis processes, presenting results and conclusions, addressing limitations, and suggesting areas for future research. Here are some characteristics of research recommendations:

[Figure: Characteristics of a research recommendation]

Research recommendations offer various advantages and play a crucial role in ensuring that research findings contribute to positive outcomes in various fields. However, they also have a few limitations, which underscores the importance of a well-crafted research recommendation in delivering those advantages.

[Figure: Advantages and limitations of a research recommendation]

The importance of research recommendations extends across various fields, influencing policy-making, program development, product development, marketing strategies, medical practice, and scientific research. Their purpose is to transfer knowledge from researchers to practitioners, policymakers, or stakeholders, facilitating informed decision-making and improving outcomes in different domains.

How to Write Research Recommendations?

Research recommendations can be generated through various means, including algorithmic approaches, expert opinions, or collaborative filtering techniques. Here is a step-wise guide to build your understanding of how research recommendations are developed.

1. Understand the Research Question:

Understand the research question and objectives before writing recommendations. Also, ensure that your recommendations are relevant and directly address the goals of the study.

2. Review Existing Literature:

Familiarize yourself with the relevant existing literature to identify gaps and offer informed recommendations that contribute to the existing body of research.

3. Consider Research Methods:

Evaluate the appropriateness of different research methods in addressing the research question. Also, consider the nature of the data, the study design, and the specific objectives.

4. Identify Data Collection Techniques:

Gather data from diverse, authentic sources. Include information such as keywords, abstracts, authors, publication dates, and citation metrics to provide a rich foundation for analysis.
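As a purely illustrative sketch (not part of the original guide), the bibliographic information described above can be organized into a simple structured record before analysis; every field name and value below is an invented placeholder.

```python
# Illustrative only: one way to structure the bibliographic information
# described above. All field names and values are invented placeholders.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ArticleRecord:
    title: str
    authors: List[str]
    abstract: str
    keywords: List[str] = field(default_factory=list)
    publication_year: Optional[int] = None
    citation_count: int = 0

record = ArticleRecord(
    title="Integrated learning platforms in secondary mathematics",
    authors=["A. Example", "B. Sample"],
    abstract="A hypothetical study of platform use in high school classes.",
    keywords=["education technology", "mathematics", "student engagement"],
    publication_year=2021,
    citation_count=14,
)
print(record.title, record.publication_year)
```

Keeping the dataset in a uniform structure like this makes the later analysis and recommendation steps easier to reproduce.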

5. Propose Data Analysis Methods:

Suggest appropriate data analysis methods based on the type of data collected. Consider whether statistical analysis, qualitative analysis, or a mixed-methods approach is most suitable.

6. Consider Limitations and Ethical Considerations:

Acknowledge any limitations and potential ethical considerations of the study. Furthermore, suggest how these limitations can be addressed or ethical concerns mitigated to ensure responsible research.

7. Justify Recommendations:

Explain how your recommendation contributes to addressing the research question or objective. Provide a strong rationale to help researchers understand the importance of following your suggestions.

8. Summarize Recommendations:

Provide a concise summary at the end of the report to emphasize how following these recommendations will contribute to the overall success of the research project.

By following these steps, you can create research recommendations that are actionable and contribute meaningfully to the success of the research project.


Example of a Research Recommendation

Here is an example of a research recommendation based on a hypothetical study to improve your understanding.

Research Recommendation: Enhancing Student Learning through Integrated Learning Platforms

Background:

The research study investigated the impact of an integrated learning platform on student learning outcomes in high school mathematics classes. The findings revealed a statistically significant improvement in student performance and engagement when compared to traditional teaching methods.

Recommendation:

In light of the research findings, it is recommended that educational institutions consider adopting and integrating the identified learning platform into their mathematics curriculum. The following specific recommendations are provided:

  • Implementation of the Integrated Learning Platform:

Schools are encouraged to adopt the integrated learning platform in mathematics classrooms, ensuring proper training for teachers on its effective utilization.

  • Professional Development for Educators:

Develop and implement professional development programs to train educators in the effective use of the integrated learning platform and to address any challenges teachers may face during the transition.

  • Monitoring and Evaluation:

Establish a monitoring and evaluation system to track the impact of the integrated learning platform on student performance over time.

  • Resource Allocation:

Allocate sufficient resources, both financial and technical, to support the widespread implementation of the integrated learning platform.

By implementing these recommendations, educational institutions can harness the potential of the integrated learning platform and enhance student learning experiences and academic achievements in mathematics.

This example covers the components of a research recommendation, providing specific actions based on the research findings, identifying the target audience, and outlining practical steps for implementation.

Using AI in Research Recommendation Writing

Enhancing research recommendations is an ongoing endeavor that requires the integration of cutting-edge technologies, collaborative efforts, and ethical considerations. By embracing data-driven approaches and leveraging advanced technologies, the research community can create more effective and personalized recommendation systems. However, the use of AI comes with several limitations. Therefore, it is essential to approach AI in research with a critical mindset and to complement its capabilities with human expertise and judgment.

Here are some limitations of integrating AI into research recommendation writing, along with ways to counter them.

1. Data Bias:

AI systems rely heavily on data for training. If the training data is biased or incomplete, the AI model may produce biased results or recommendations.

How to tackle: Audit the model’s performance regularly to identify any discrepancies, and adjust the training data and algorithms accordingly.
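For illustration only (this sketch is not from the original article), such an audit can be as simple as comparing the recommender’s accuracy across subgroups of the evaluation data; the group labels, records, and the 0.05 gap threshold below are assumptions.

```python
# Minimal sketch of a periodic bias audit: compare accuracy across subgroups
# (e.g., research field or region). Data and the 0.05 threshold are invented.
from collections import defaultdict

def audit_by_group(records, max_gap=0.05):
    """records: iterable of dicts with keys 'group', 'predicted', 'actual'."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["predicted"] == r["actual"])

    accuracy = {g: hits[g] / totals[g] for g in totals}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap > max_gap  # a large gap suggests biased performance

records = [
    {"group": "social_sciences", "predicted": "relevant", "actual": "relevant"},
    {"group": "social_sciences", "predicted": "relevant", "actual": "irrelevant"},
    {"group": "engineering", "predicted": "relevant", "actual": "relevant"},
    {"group": "engineering", "predicted": "irrelevant", "actual": "irrelevant"},
]
per_group_accuracy, needs_review = audit_by_group(records)
print(per_group_accuracy, "needs review:", needs_review)
```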

2. Lack of Understanding of Context:

AI models may struggle to understand the nuanced context of a particular research problem. They may misinterpret information, leading to inaccurate recommendations.

How to tackle: Use AI to characterize research articles and topics, employing it to extract features such as keywords, authorship patterns, and content-based details.
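As one hedged example of what such feature extraction might look like (not prescribed by the article), TF-IDF keyword features can be used to relate a research question to candidate articles; the article titles and question below are invented, and scikit-learn is assumed to be available.

```python
# Sketch: characterize articles by TF-IDF keyword features and rank them
# against a stated research question. Titles and the question are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Integrated learning platforms and student engagement in mathematics",
    "Teacher professional development for classroom technology adoption",
    "Dietary habits of university students: a longitudinal survey",
]
question = "How do integrated learning platforms affect student performance?"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(articles + [question])

# Similarity of each article to the question (the question is the last row).
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for title, score in sorted(zip(articles, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {title}")
```

Grounding recommendations in explicit, inspectable features like these makes it easier for a human reviewer to check whether the model has understood the context.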

3. Ethical Considerations:

AI models might stereotype certain concepts or generate recommendations that could have negative consequences for certain individuals or groups.

How to tackle: Incorporate user feedback mechanisms so that harmful or stereotyped outputs can be reported, and establish an ethics review process for AI models used in research recommendation writing.

4. Lack of Creativity and Intuition:

AI may struggle with tasks that require a deep understanding of the underlying principles or the ability to think outside the box.

How to tackle: Employ hybrid approaches that use AI for data analysis and pattern identification to accelerate interpretation, while leaving conceptual and creative work to human researchers.

5. Interpretability:

Many AI models, especially complex deep learning models, lack transparency about how they arrive at a particular recommendation.

How to tackle: Where possible, use interpretable models such as decision trees or linear models, and provide a clear explanation of the model architecture, training process, and decision-making criteria.
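As a hedged illustration of this advice (not taken from the article), a shallow decision tree trained on a few named features can be printed as human-readable rules; the features, data, and labels below are hypothetical, and scikit-learn is assumed.

```python
# Sketch: an interpretable model whose decision rules can be inspected.
# Features (citations, age, keyword overlap) and labels are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [
    [120, 1, 0.9],   # [citations, years_since_publication, keyword_overlap]
    [5,   8, 0.2],
    [60,  2, 0.7],
    [2,  10, 0.1],
    [200, 3, 0.8],
    [10,  6, 0.3],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = recommend, 0 = do not recommend

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The printed rules show exactly how each recommendation is reached.
print(export_text(model, feature_names=["citations", "age_years", "keyword_overlap"]))
```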

6. Dynamic Nature of Research:

Research fields are dynamic, and new information is constantly emerging. AI models may struggle to keep up with the rapidly changing landscape and may not be able to adapt to new developments.

How to tackle: Establish a feedback loop for continuous improvement. Regularly update the recommendation system based on user feedback and emerging research trends.

The integration of AI in research recommendation writing holds great promise for advancing knowledge and streamlining the research process. However, navigating these concerns is pivotal in ensuring the responsible deployment of these technologies. Researchers need to understand the responsible use of AI in research and must be aware of the ethical considerations involved.

Exploring research recommendations plays a critical role in shaping the trajectory of scientific inquiry. It serves as a compass, guiding researchers toward more robust methodologies, collaborative endeavors, and innovative approaches. Embracing these suggestions not only enhances the quality of individual studies but also contributes to the collective advancement of human understanding.

Frequently Asked Questions

What is the purpose of recommendations in research?

The purpose of recommendations in research is to provide practical and actionable suggestions based on the study's findings, guiding future actions, policies, or interventions in a specific field or context. Recommendations bridge the gap between research outcomes and their real-world application.

How do you make a research recommendation?

To make a research recommendation, analyze your findings, identify key insights, and propose specific, evidence-based actions. Explain the relevance of the recommendations to the study's objectives and provide practical steps for implementation.

How do you begin writing a research recommendation?

Begin a recommendation by succinctly summarizing the key findings of the research. Clearly state the purpose of the recommendation and its intended impact. Use direct and actionable language to convey the suggested course of action.


Criteria for Good Qualitative Research: A Comprehensive Review

  • Regular Article
  • Open access
  • Published: 18 September 2021
  • Volume 31, pages 679–689 (2022)


  • Drishti Yadav, ORCID: orcid.org/0000-0002-2974-0323


Abstract

This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features in qualitative research that can permit the readers to pass judgment on its quality and to commend it as good research when objectively and adequately utilized. Overall, this review underlines the crux of qualitative research and accentuates the necessity to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the aftereffect of socio-institutional procedures and existing paradigmatic conducts. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor anticipated. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with applicable norms and decisive factors to evaluate qualitative research from within its theoretical and methodological framework of origin.


Introduction

“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010, p. 837)

To decide what represents good qualitative research is highly debatable. There are numerous methods that are contained within qualitative research and that are established on diverse philosophical perspectives. Bryman et al., ( 2008 , p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well‐known and widely agreed, this is not the case for qualitative research.” Hence, the question “how to evaluate the quality of qualitative research” has been continuously debated. There are many areas of science and technology wherein these debates on the assessment of qualitative research have taken place. Examples include various areas of psychology: general psychology (Madill et al., 2000 ); counseling psychology (Morrow, 2005 ); and clinical psychology (Barker & Pistrang, 2005 ), and other disciplines of social sciences: social policy (Bryman et al., 2008 ); health research (Sparkes, 2001 ); business and management research (Johnson et al., 2006 ); information systems (Klein & Myers, 1999 ); and environmental studies (Reid & Gough, 2000 ). In the literature, these debates are enthused by the impression that the blanket application of criteria for good qualitative research developed around the positivist paradigm is improper. Such debates are based on the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000 ; Schwandt, 1996 ). The existence of methodological diversity led to the formulation of different sets of criteria applicable to qualitative research.

Among qualitative researchers, the dilemma of governing the measures to assess the quality of research is not a new phenomenon, especially when the virtuous triad of objectivity, reliability, and validity (Spencer et al., 2004 ) are not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008 ; Lather, 2004 ). Indeed, Howe ( 2004 ) claims that the prevailing paradigm in educational research is scientifically based experimental research. Hypotheses and conjectures about the preeminence of quantitative research can weaken the worth and usefulness of qualitative research by neglecting the prominence of harmonizing match for purpose on research paradigm, the epistemological stance of the researcher, and the choice of methodology. Researchers have been reprimanded concerning this in “paradigmatic controversies, contradictions, and emerging confluences” (Lincoln & Guba, 2000 ).

In general, qualitative research tends to come from a very different paradigmatic stance and intrinsically demands distinctive and out-of-the-ordinary criteria for evaluating good research and varieties of research contributions that can be made. This review attempts to present a series of evaluative criteria for qualitative researchers, arguing that their choice of criteria needs to be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). This review aims to assist researchers in identifying some of the indispensable features or markers of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research based on the concept of diverse paradigmatic stances. Contrary to the existing reviews, this review also suggests some critical directions to follow to improve the quality of qualitative research in different epistemological and ontological perspectives. This review is also intended to provide guidelines for the acceleration of future developments and dialogues among qualitative researchers in the context of assessing the qualitative research.

The rest of this review article is structured in the following fashion: Sect.  Methods describes the method followed for performing this review. Section Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies. This section is followed by a summary of the strategies to improve the quality of qualitative research in Sect.  Improving Quality: Strategies . Section  How to Assess the Quality of the Research Findings? provides details on how to assess the quality of the research findings. After that, some of the quality checklists (as tools to evaluate quality) are discussed in Sect.  Quality Checklists: Tools for Assessing the Quality . At last, the review ends with the concluding remarks presented in Sect.  Conclusions, Future Directions and Outlook . Some prospects in qualitative research for enhancing its quality and usefulness in the social and techno-scientific research community are also presented in Sect.  Conclusions, Future Directions and Outlook .

Methods

For this review, a comprehensive literature search was performed from many databases using generic search terms such as Qualitative Research, Criteria, etc. The following databases were chosen for the literature search based on the high number of results: IEEE Xplore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. The synonyms for these keywords were collected and arranged in a logical structure (see Table 1). All publications in journals and conference proceedings from 1950 to 2021 were considered for the search. Other articles extracted from the references of the papers identified in the electronic search were also included. A large number of publications on qualitative research were retrieved during the initial screening. Hence, to include the searches with the main focus on criteria for good qualitative research, an inclusion criterion was utilized in the search string.

From the selected databases, the search retrieved a total of 765 publications. Then, the duplicate records were removed. After that, based on the title and abstract, the remaining 426 publications were screened for their relevance by using the following inclusion and exclusion criteria (see Table 2 ). Publications focusing on evaluation criteria for good qualitative research were included, whereas those works which delivered theoretical concepts on qualitative research were excluded. Based on the screening and eligibility, 45 research articles were identified that offered explicit criteria for evaluating the quality of qualitative research and were found to be relevant to this review.

Figure  1 illustrates the complete review process in the form of PRISMA flow diagram. PRISMA, i.e., “preferred reporting items for systematic reviews and meta-analyses” is employed in systematic reviews to refine the quality of reporting.

Figure 1: PRISMA flow diagram illustrating the search and inclusion process. N represents the number of records.

Criteria for Evaluating Qualitative Studies

Fundamental Criteria: General Research Quality

Various researchers have put forward criteria for evaluating qualitative research, which have been summarized in Table 3 . Also, the criteria outlined in Table 4 effectively deliver the various approaches to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy’s “Eight big‐tent criteria for excellent qualitative research” (Tracy, 2010 ). Tracy argues that high-quality qualitative work should formulate criteria focusing on the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles to assess the quality of a qualitative study (Mays & Pope, 2020 ). Nassaji ( 2020 ) argues that good qualitative research should be robust, well informed, and thoroughly documented.

Qualitative Research: Interpretive Paradigms

All qualitative researchers follow highly abstract principles which bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net, which encompasses the researcher’s epistemological, ontological, and methodological premises, is referred to as a paradigm, or an interpretive structure, a “Basic set of beliefs that guides action” (Guba, 1990 ). Four major interpretive paradigms structure the qualitative research: positivist and postpositivist, constructivist interpretive, critical (Marxist, emancipatory), and feminist poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research, and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, for evaluating qualitative research, quantitative conceptualizations of reliability and validity are proven to be incompatible (Horsburgh, 2003 ). In addition, a series of questions have been put forward in the literature to assist a reviewer (who is proficient in qualitative methods) for meticulous assessment and endorsement of qualitative research (Morse, 2003 ). Hammersley ( 2007 ) also suggests that guiding principles for qualitative research are advantageous, but methodological pluralism should not be simply acknowledged for all qualitative approaches. Seale ( 1999 ) also points out the significance of methodological cognizance in research studies.

Table 5 reflects that criteria for assessing the quality of qualitative research are the aftermath of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, the researchers must be reflexive about the criteria they use in the various roles they play within their research community.

Improving Quality: Strategies

Another critical question is “How can qualitative researchers ensure that the abovementioned quality criteria can be met?” Lincoln and Guba (1986) delineated several strategies to strengthen each criterion of trustworthiness. Other researchers (Merriam & Tisdell, 2016; Shenton, 2004) also presented such strategies. A brief description of these strategies is shown in Table 6.

It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021 ). In general, the guiding principle pertaining to generalizability speaks about inducing and comprehending knowledge to synthesize interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to ascertain generalizability in qualitative research.

Figure 2 reflects the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and applications of results to future thinking, study, and practice (Johnson et al., 2020). The synergy and interrelationship of these components signify their relevance to the different stances of a qualitative research study.

Figure 2: Essential elements of a conceptual framework

In a nutshell, to assess the rationale of a study, its conceptual framework and research question(s), quality criteria must take account of the following: lucid context for the problem statement in the introduction; well-articulated research problems and questions; precise conceptual framework; distinct research purpose; and clear presentation and investigation of the paradigms. These criteria would expedite the quality of qualitative research.

How to Assess the Quality of the Research Findings?

The inclusion of quotes or similar research data enhances the confirmability of the write-up of the findings. The use of expressions (for instance, “80% of all respondents agreed that” or “only one of the interviewees mentioned that”) may also quantify qualitative findings (Stenfors et al., 2020). On the other hand, a persuasive argument for why such quantification may not help in strengthening the research has also been provided (Monrouxe & Rees, 2020). Further, the Discussion and Conclusion sections of an article also serve as robust markers of high-quality qualitative research, as elucidated in Table 8.

Quality Checklists: Tools for Assessing the Quality

Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically and recklessly concerning the research context, these checklists may be counterproductive. I recommend that such lists and guiding principles may assist in pinpointing the markers of high-quality qualitative research. However, considering enormous variations in the authors’ theoretical and philosophical contexts, I would emphasize that high dependability on such checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below:

The most commonly used framework is Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007 ). This framework is recommended by some journals to be followed by the authors during article submission.

Standards for Reporting Qualitative Research (SRQR) is another checklist that has been created particularly for medical education (O’Brien et al., 2014 ).

Also, Tracy ( 2010 ) and Critical Appraisal Skills Programme (CASP, 2021 ) offer criteria for qualitative research relevant across methods and approaches.

Further, researchers have also outlined different criteria as hallmarks of high-quality qualitative research. For instance, the “Road Trip Checklist” (Epp & Otnes, 2021 ) provides a quick reference to specific questions to address different elements of high-quality qualitative research.

Conclusions, Future Directions, and Outlook

This work presents a broad review of the criteria for good qualitative research. In addition, this article presents an exploratory analysis of the essential elements in qualitative research that can enable the readers of qualitative work to judge it as good research when objectively and adequately utilized. In this review, some of the essential markers that indicate high-quality qualitative research have been highlighted. I scope them narrowly to achieve rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal and versatile one-size-fits-all guideline for evaluating the quality of qualitative research does not exist. In other words, this review also emphasizes the non-existence of a set of common guidelines among qualitative researchers. In unison, this review reinforces that each qualitative approach should be treated uniquely on account of its own distinctive features for different epistemological and disciplinary positions. Owing to the sensitivity of the worth of qualitative research towards the specific context and the type of paradigmatic stance, researchers should themselves analyze what approaches can be and must be tailored to ensemble the distinct characteristics of the phenomenon under investigation. Although this article does not assert to put forward a magic bullet and to provide a one-stop solution for dealing with dilemmas about how, why, or whether to evaluate the “goodness” of qualitative research, it offers a platform to assist the researchers in improving their qualitative studies. This work provides an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to look at, when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. Bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, this review strives to equip the researchers as well as reviewers to make well-versed judgment about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of research to the research objectives, research questions and design, speculative foundations, and from approaches of collecting data to analyzing the results, to deriving inferences) frequently proliferates the quality of a qualitative research.

Prospects: A Road Ahead for Qualitative Research

Irrefutably, qualitative research is a vivacious and evolving discipline wherein different epistemological and disciplinary positions have their own characteristics and importance. In addition, not surprisingly, owing to the sprouting and varied features of qualitative research, no consensus has been reached to date. Researchers have reflected various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021; McGinley et al., 2021). Following are some prospects and a few recommendations put forward towards the maturation of qualitative research and its quality evaluation:

In general, most of the manuscript and grant reviewers are not qualitative experts. Hence, it is more likely that they would prefer to adopt a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to utilize the same approaches and conducts among all qualitative research. Therefore, future work needs to focus on educating researchers and reviewers about the criteria to evaluate qualitative research from within the suitable theoretical and methodological context.

There is an urgent need to refurbish and augment critical assessment of some well-known and widely accepted tools (including checklists such as COREQ, SRQR) to interrogate their applicability on different aspects (along with their epistemological ramifications).

Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.

Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.

It is pivotal to highlight the expressions and considerations of qualitative researchers and bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political rooms.

Frequent debates on the use of evaluative criteria are required to solve some potentially resolved issues (including the applicability of a single set of criteria in multi-disciplinary aspects). Such debates would not only benefit the group of qualitative researchers themselves, but primarily assist in augmenting the well-being and vivacity of the entire discipline.

To conclude, I speculate that the criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialog and debate – about criteria for excellent qualitative research and the underpinnings of the discipline more broadly – and, therefore, help improve the quality of a qualitative study. Further, I anticipate that this review will assist the researchers to contemplate on the quality of their own research, to substantiate research design and help the reviewers to review qualitative research for journals. On a final note, I pinpoint the need to formulate a framework (encompassing the prerequisites of a qualitative study) by the cohesive efforts of qualitative researchers of different disciplines with different theoretic-paradigmatic origins. I believe that tailoring such a framework (of guiding principles) paves the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open science debate. Dialogue on this issue across different approaches is crucial for the impending prospects of socio-techno-educational research.

References

Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.


Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.

Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.

CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.

Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.


Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.

Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445

Guba, E. G. (1990). The paradigm dialog. In Alternative paradigms conference, mar, 1989, Indiana u, school of education, San Francisco, ca, us . Sage Publications, Inc.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.

Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.

Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.

Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.

Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.

Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.

Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.

Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15

McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.

Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation. San Francisco, US.

Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26 (1), 87–97.

Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54 (3), 186–187.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52 (2), 250.

Morse, J. M. (2003). A review committee’s guide for evaluating qualitative proposals. Qualitative Health Research, 13 (6), 833–851.

Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24 (4), 427–431.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89 (9), 1245–1251.

O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19 , 1609406919899220.

Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6 (1), 59–91.

Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International . https://doi.org/10.1080/13678868.2010.501959

Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43 (1), 9–25.

Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2 (1), 58–72.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5 (4), 465–478.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 (2), 63–75.

Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11 (4), 538–552.

Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence.

Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17 (6), 596–599.

Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33 (2), 163–179.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19 (6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16 (10), 837–851.


Funding: Open access funding provided by TU Wien (TUW).

Author information

Authors and Affiliations

Faculty of Informatics, Technische Universität Wien, 1040, Vienna, Austria

Drishti Yadav


Corresponding author

Correspondence to Drishti Yadav.

Ethics declarations

Conflict of interest.

The author declares no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res 31 , 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0


Accepted: 28 August 2021

Published: 18 September 2021

Issue Date: December 2022

DOI: https://doi.org/10.1007/s40299-021-00619-0


Keywords: Qualitative research · Evaluative criteria


What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari. Revised on June 22, 2023.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.

Examples of qualitative research questions include:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?

Table of contents

  • Approaches to qualitative research
  • Qualitative research methods
  • Qualitative data analysis
  • Advantages of qualitative research
  • Disadvantages of qualitative research
  • Other interesting articles
  • Frequently asked questions about qualitative research

Approaches to qualitative research

Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography , action research , phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Note that qualitative research is at risk for certain research biases including the Hawthorne effect , observer bias , recall bias , and social desirability bias . While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Qualitative research methods

Each of the research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews:  personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys : distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.
For example, to research the culture of a company, you might combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data analysis

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Advantages of qualitative research

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.

Disadvantages of qualitative research

Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated . The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population .

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Frequently asked questions about qualitative research

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

Cite this Scribbr article


Bhandari, P. (2023, June 22). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved April 13, 2024, from https://www.scribbr.com/methodology/qualitative-research/



Chapter 21. Conclusion: The Value of Qualitative Research

Qualitative research is engaging research, in the best sense of the word.

A few of the meanings of engage = to attract or hold by influence or power; to hold the attention of; to induce to participate; to enter into contest with; to bring together or interlock; to deal with at length; to pledge oneself; to begin and carry on an enterprise; to take part or participate; to come together; engaged = to be actively involved in or committed; to greatly interest; to be embedded with. ( Merriam-Webster Unabridged Dictionary )

There really is no “cookbook” for conducting qualitative research. Each study is unique because the social world is rich and full of wonders, and those of us who are curious about it have our own position in that world and our own understandings and experiences we bring with us when we seek to explore it. And yet even though our reports may be subjective, we can do what we can to make them honest and intelligible to everyone else. Learning how to do that is learning how to be a qualitative researcher rather than simply an amateur observer. Helping you understand that and getting you ready for doing so have been the goal of this book.


According to Lareau ( 2021:36 ), excellent qualitative work must include all the following elements: a clear contribution to new knowledge, a succinct assessment of previous literature that shows the holes in the literature, a research question that can be answered with the data in hand, a breadth and depth in the data collection, a clear exposition of the results, a deep analysis that links the evidence to the interpretation, an acknowledgment of disconfirming evidence, a discussion that uses the case as a springboard to reflect on more general concerns, and a full discussion of implications for ideas and practices. The emphasis on rigor, the clear contribution to new knowledge, and the reflection on more general concerns place qualitative research within the “scientific” camp vis-à-vis the “humanistic inquiry” camp of pure description or ideographic approaches. The attention to previous literature and filling the holes in what we know about a phenomenon or case or situation set qualitative research apart from otherwise excellent journalism, which makes no pretensions of writing to or for a larger body of knowledge.

In the magnificently engaging untextbook Rocking Qualitative Social Science, Ashley Rubin (2021) notes, “Rigorous research does not have to be rigid” (3). I agree with her claim that there are many ways to get to the top of the mountain, and you can have fun doing so. An ardent rock climber, Rubin calls her approach the Dirtbagger approach, a way of climbing the mountain that is creative, flexible, and definitely outside prescribed methods. Here are eleven lessons offered by Rubin in paraphrase form, with commentary and direct quotes noted:

  • There is no right way to do qualitative social science, “and people should choose the approach that works for them, for the particular project at hand, given whatever constraints and opportunities are happening in their life at the time” (252).
  • Disagreements about what is proper qualitative research are distracting and misleading.
  • Even though research questions are very important, they can and most likely will change during data collection or even data analysis—don’t worry about this.
  • Your findings will have a bigger impact if you’ve connected them to previous literature; this shows that you are part of the larger conversation. This “anchor” can be a policy issue or a theoretical debate in the literature, but it need not be either. Sometimes what we do is really novel (but rarely—so always poke around and check before proceeding as if you are inventing the wheel).
  • Although there are some rules you really must follow when designing your study (e.g., obtaining informed consent, defining a sample), unexpected things often happen in the course of data collection that make a mockery of your original plans. Be flexible.
  • Sometimes you have chosen a topic for some reason you can’t yet articulate to yourself—the subject or site just calls to you in some way. That’s fine. But you will still need to justify your choice in some way (hint: see number 4 above).
  • Pay close attention to your sample: “Think about what you are leaving out, what your data allow you to observe, and what you can do to fill in some of those blanks” (252).  And when you can’t fill them in, be honest about this when writing about the limitations of your study.
  • Even if you are doing interviews, archival research, focus groups, or any other method of data collection that does not actually require “going into the field,” you can still approach your work as fieldwork. This means taking fieldnotes or memos about what you are observing and how you are reacting and processing those observations or interviews or interactions or documents. Remember that you yourself are the instrument of data collection, so keep a reflective eye on yourself throughout.
  • Memo, memo, memo. There is no magic about how data become findings. It takes a lot of work, a lot of reflection, a lot of writing. Analytic memos are the helpful bridge between all that raw data and the presented findings.
  • Rubin strongly rejects the idea that qualitative research cannot make causal claims. I would agree, but only to a point. We don’t make the kinds of predictive causal claims you see in quantitative research, and it can confuse you and lead you down some unpromising paths if you think you can. That said, qualitative research can help demonstrate the causal mechanisms by which something happens. Qualitative research is also helpful in exploring alternative explanations and counterfactuals. If you want to know more about qualitative research and causality, I encourage you to read chapter 10 of Rubin’s text.
  • Some people are still skeptical about the value of qualitative research because they don’t understand the rigor required of it and confuse it with journalism or even fiction writing. You are just going to have to deal with this—maybe even people sitting on your committee are going to question your research. So be prepared to defend qualitative research by knowing the common misconceptions and criticisms and how to respond to them. We’ve talked a bit about these in chapter 20, and I also encourage you to read chapter 10 of Rubin’s text for more.


Hopefully, by the time you have reached the end of this book, you will have done a bit of your own qualitative research—maybe you’ve conducted an interview or practiced taking fieldnotes. You may have read some examples of excellent qualitative research and have (hopefully!) come to appreciate the value of this approach. This is a good time, then, to take a step back and think about the ways that qualitative research is valuable, distinct and different from both quantitative methods and humanistic (nonscientific) inquiry.

Researcher Note

Why do you employ qualitative research methods in your area of study?

Across all Western countries, we can observe a strong statistical relationship between young people’s educational attainment and their parent’s level of education. If you have at least one parent who went to university, your own chances of going to and graduating from university are much higher compared to not having university-educated parents. Why this happens is much less clear… This is where qualitative research becomes important: to help us get a clearer understanding of the dynamics that lead to this observed statistical relationship.

In my own research, I go a step further and look at young men and women who have crossed this barrier: they have become the first in their family to go to university. I am interested in finding out why and how first-in-family university students made it to university and how being at university is experienced. In-depth interviews allow me to learn about hopes, aspirations, fears, struggles, resilience and success. Interviews give participants an opportunity to tell their stories in their own words while also validating their experiences.

I often ask the young people I interview what being in my studies means to them. As one of my participants told me, it is good to know that “people like me are worth studying.” I cannot think of a better way to explain why qualitative research is important.

-Wolfgang Lehman, author of Education and Society: Canadian Perspectives

For me personally, the real value of the qualitative approach is that it helps me address the concerns I have about the social world—how people make sense of their lives, how they create strategies to deal with unfair circumstances or systems of oppression, and why they are motivated to act in some situations but not others. Surveys and other forms of large impersonal data collection simply do not allow me to get at these concerns. I appreciate other forms of research for other kinds of questions. This ecumenical approach has served me well in my own career as a sociologist—I’ve used surveys of students to help me describe classed pathways through college and into the workforce, supplemented by interviews and focus groups that help me explain and understand the patterns uncovered by quantitative methods ( Hurst 2019 ). My goal for this book has not been to convince you to become a qualitative researcher exclusively but rather to understand and appreciate its value under the right circumstances (e.g., with the right questions and concerns).

In the same way that we would not use a screwdriver to hammer a nail into the wall, we don’t want to misuse the tools we have at hand. Nor should we critique the screwdriver for its failure to do the hammer’s job. Qualitative research is not about generating predictions or demonstrating causality. We can never statistically generalize our findings from a small sample of people in a particular context to the world at large. But that doesn’t mean we can’t generate better understandings of how the world works, despite “small” samples. Excellent qualitative research does a great job describing (whether through “thick description” or illustrative quotes) a phenomenon, case, or setting and generates deeper insight into the social world through the development of new concepts or identification of patterns and relationships that were previously unknown to us. The two components—accurate description and theoretical insight—are generated together through the iterative process of data analysis, which itself is based on a solid foundation of data collection. And along the way, we can have some fun and meet some interesting people!


Supplement: Twenty Great (engaging, insightful) Books Based on Qualitative Research

Armstrong, Elizabeth A. and Laura T. Hamilton. 2015. Paying for the Party: How College Maintains Inequality . Cambridge: Harvard University Press.

Bourgois, Phillipe and Jeffrey Schonberg. 2009. Righteous Dopefiend . Berkeley, CA: University of California Press.

DiTomaso, Nancy. 2013. The American Non-dilemma: Racial Inequality without Racism . Thousand Oaks, CA: SAGE.

Ehrenreich, Barbara. 2010. Nickel and Dimed: On (Not) Getting By in America . New York: Metropolitan Books.

Fine, Gary Alan. 2018. Talking Art: The Culture of Practice and the Practice of Culture in MFA Education . Chicago: University of Chicago Press.

Ghodsee, Kristen Rogheh. 2011. Lost in Transition: Ethnographies of Everyday Life after Communism . Durham, NC: Duke University Press.

Gowan, Teresa. 2010. Hobos, Hustlers, and Backsliders: Homeless in San Francisco . Minneapolis: University of Minnesota Press.

Graeber, David. 2013. The Democracy Project: A History, a Crisis, a Movement . New York: Spiegel & Grau.

Grazian, David. 2015. American Zoo: A Sociological Safari . Princeton, NJ: Princeton University Press.

Hartigan, John. 1999. Racial Situations: Class Predicaments of Whiteness in Detroit . Princeton, N.J.: Princeton University Press.

Ho, Karen Zouwen. 2009. Liquidated: An Ethnography of Wall Street. Durham, NC: Duke University Press.

Hochschild, Arlie Russell. 2018. Strangers in Their Own Land: Anger and Mourning on the American Right . New York: New Press.

Lamont, Michèle. 1994. Money, Morals, and Manners: The Culture of the French and the American Upper-Middle Class . Chicago: University of Chicago Press.

Lareau, Annette. 2011. Unequal Childhoods: Class, Race, and Family Life. 2nd ed with an Update a Decade Later. Berkeley, CA: University of California Press.

Leondar-Wright, Betsy. 2014. Missing Class: Strengthening Social Movement Groups by Seeing Class Cultures . Ithaca, NY: ILR Press.

Macleod, Jay. 2008. Ain’t No Makin’ It: Aspirations and Attainment in a Low-Income Neighborhood . 3rd ed. New York: Routledge.

Newman, Katherine T. 2000. No Shame in My Game: The Working Poor in the Inner City . 3rd ed. New York: Vintage Press.

Sherman, Rachel. 2006. Class Acts: Service and Inequality in Luxury Hotels . Berkeley: University of California Press.

Streib, Jessi. 2015. The Power of the Past: Understanding Cross-Class Marriages . Oxford: Oxford University Press.

Stuber, Jenny M. 2011. Inside the College Gates: How Class and Culture Matter in Higher Education . Lanham, Md.: Lexington Books.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Evans D, Coad J, Cottrell K, et al. Public involvement in research: assessing impact through a realist evaluation. Southampton (UK): NIHR Journals Library; 2014 Oct. (Health Services and Delivery Research, No. 2.36.)


Chapter 9. Conclusions and recommendations for future research

How well have we achieved our original aim and objectives?

The initially stated overarching aim of this research was to identify the contextual factors and mechanisms that are regularly associated with effective and cost-effective public involvement in research. While recognising the limitations of our analysis, we believe we have largely achieved this in our revised theory of public involvement in research set out in Chapter 8 . We have developed and tested this theory of public involvement in research in eight diverse case studies; this has highlighted important contextual factors, in particular PI leadership, which had not previously been prominent in the literature. We have identified how this critical contextual factor shapes key mechanisms of public involvement, including the identification of a senior lead for involvement, resource allocation for involvement and facilitation of research partners. These mechanisms then lead to specific outcomes in improving the quality of research, notably recruitment strategies and materials and data collection tools and methods. We have identified a ‘virtuous circle’ of feedback to research partners on their contribution leading to their improved confidence and motivation, which facilitates their continued contribution. Following feedback from the HS&DR Board on our original application we did not seek to assess the cost-effectiveness of different mechanisms of public involvement but we did cost the different types of public involvement as discussed in Chapter 7 . A key finding is that many research projects undercost public involvement.

In our original proposal we emphasised our desire to include case studies involving young people and families with children in the research process. We recruited two studies involving parents of young children aged under 5 years, and two projects involving ‘older’ young people in the 18- to 25-years age group. We recognise that in doing this we missed studies involving children and young people aged under 18 years; in principle we would have liked to have included studies involving such children and young people, but, given the resources at our disposal and the additional resource, ethical and governance issues this would have entailed, we regretfully concluded that this would not be feasible for our study. In terms of the four studies with parental and young persons’ involvement that we did include, we have not done a separate analysis of their data, but the themes emerging from those case studies were consistent with our other case studies and contributed to our overall analysis.

In terms of the initial objectives, we successfully recruited the sample of eight diverse case studies and collected and analysed data from them (objective 1). As intended, we identified the outcomes of involvement from multiple stakeholders’ perspectives, although we did not get as many research partners’ perspectives as we would have liked – see limitations below (objective 2). It was more difficult than expected to track the impact of public involvement from project inception through to completion (objective 3), as all of our projects turned out to have longer time scales than our own. Even to track involvement over a stage of a case study research project proved difficult, as the research usually did not fall into neatly staged time periods and one study had no involvement activity over the study period.

Nevertheless, we were able to track seven of the eight case studies prospectively and in real time over time periods of up to 9 months, giving us an unusual window on involvement processes that have previously mainly been observed retrospectively. We were successful in comparing the contextual factors, mechanisms and outcomes associated with public involvement from different stakeholders’ perspectives and costing the different mechanisms for public involvement (objective 4). We only partly achieved our final objective of undertaking a consensus exercise among stakeholders to assess the merits of the realist evaluation approach and our approach to the measurement and valuation of economic costs of public involvement in research (objective 5). A final consensus event was held, where very useful discussion and amendment of our theory of public involvement took place, and the economic approach was discussed and helpfully critiqued by participants. However, as our earlier discussions developed more fully than expected, we decided to let them continue rather than interrupt them in order to run the final exercise to assess the merits of the realist evaluation approach. We did, however, test our analysis with all our case study participants by sending a draft of this final report for comment. We received a number of helpful comments and corrections but no disagreement with our overall analysis.

What were the limitations of our study?

Realist evaluation is a relatively new approach and we recognise that there were a number of limitations to our study. We sought to follow the approach recommended by Pawson, but we acknowledge that we were not always able to do so. In particular, our theory of public involvement in research evolved over time and initially was not as tightly framed in terms of a testable hypothesis as Pawson recommends. In his latest book Pawson strongly recommends that outcomes should be measured with quantitative data, 17 but we did not do so; we were not aware of the existence of quantitative data or tools that would enable us to collect such data to answer our research questions. Even in terms of qualitative data, we did not capture as much information on outcomes as we initially envisaged. There were several reasons for this. The most important was that capturing outcomes in public involvement is easier the more operational the focus of involvement, and more difficult the more strategic the involvement. Thus, it was relatively easy to see the impact of a patient panel on the redesign of a recruitment leaflet but harder to capture the impact of research partners in a multidisciplinary team discussion of research design.

We also found it was sometimes more difficult to engage research partners as participants in our research than researchers or research managers. On reflection this is not surprising. Research partners are generally motivated to take part in research relevant to their lived experience of a health condition or situation, whereas our research was quite detached from their lived experience; in addition people had many constraints on their time, so getting involved in our research as well as their own was likely to be a burden too far for some. Researchers clearly also face significant time pressures but they had a more direct interest in our research, as they are obliged to engage with public involvement to satisfy research funders such as the NIHR. Moreover, researchers were being paid by their employers for their time during interviews with us, while research partners were not paid by us and usually not paid by their research teams. Whatever the reasons, we had less response from research partners than researchers or research managers, particularly for the third round of data collection; thus we have fewer data on outcomes from research partners’ perspectives and we need to be aware of a possible selection bias towards more engaged research partners. Such a bias could have implications for our findings; for example payment might have been a more important motivating factor for less engaged advisory group members.

There were a number of practical difficulties we encountered. One challenge was when to recruit the case studies. We recruited four of our eight case studies prior to the full application, but this was more than 1 year before our project started and 15 months or more before data collection began. In this intervening period, we found that the time scales of some of the case studies were no longer ideal for our project and we faced the choice of whether to continue with them, although this timing was not ideal, or seek at a late moment to recruit alternative ones. One of our case studies ultimately undertook no involvement activity over the study period, so we obtained fewer data from it, and it contributed relatively little to our analysis. Similarly, one of the four case studies we recruited later experienced some delays itself in beginning and so we had a more limited period for data collection than initially envisaged. Research governance approvals took much longer than expected, particularly as we had to take three of our research partners, who were going to collect data within NHS projects, through the research passport process, which essentially truncated our data collection period from 1 year to 9 months. Even if we had had the full year initially envisaged for data collection, our conclusion with hindsight was that this was insufficiently long. To compare initial plans and intentions for involvement with the reality of what actually happened required a longer time period than a year for most of our case studies.

In the light of the importance we have placed on the commitment of PIs, there is an issue of potential selection bias in the recruitment of our sample. As our sampling strategy explicitly involved a networking approach to PIs of projects where we thought some significant public involvement was taking place, we were likely (as we did) to recruit enthusiasts and, at worst, those non-committed who were at least open to the potential value of public involvement. There were, unsurprisingly, no highly sceptical PIs in our sample. We have no data therefore on how public involvement may work in research where the PI is sceptical but may feel compelled to undertake involvement because of funder requirements or other factors.

What would we do differently next time?

If we were to design this study again, there are a number of changes we would make. Most importantly we would go for a longer time period to be able to capture involvement through the whole research process from initial design through to dissemination. We would seek to recruit far more potential case studies in principle, so that we had greater choice of which to proceed with once our study began in earnest. We would include case studies from the application stage to capture the important early involvement of research partners in the initial design period. It might be preferable to research a smaller number of case studies, allowing a more in-depth ethnographic approach. Although challenging, it would be very informative to seek to sample sceptical PIs. This might require a brief screening exercise of a larger group of PIs on their attitudes to and experience of public involvement.

The economic evaluation was challenging in a number of ways, particularly in seeking to obtain completed resource logs from case study research partners. Having a 2-week data collection period was also problematic in a field such as public involvement, where activity may be very episodic and infrequent. Thus, collecting economic data alongside other case study data in a more integrated way, and particularly with interviews and more ethnographic observation of case study activities, might be advantageous. The new budgeting tool developed by INVOLVE and the MHRN may provide a useful resource for future economic evaluations. 23

We have learned much from the involvement of research partners in our research team and, although many aspects of our approach worked well, there are some things we would do differently in future. Even though we included substantial resources for research partner involvement in all aspects of our study, we underestimated how time-consuming such full involvement would be. We were perhaps overambitious in trying to ensure such full involvement with the number of research partners and the number and complexity of the case studies. We were also perhaps naive in expecting all the research partners to play the same role in the team; different research partners came with different experiences and skills, and, like most of our case studies, we might have been better to be less prescriptive and allow the roles to develop more organically within the project.

Implications for research practice and funding

If one of the objectives of R&D policy is to increase the extent and effectiveness of public involvement in research, then a key implication of this research is the importance of influencing PIs to value public involvement in research, or to delegate leadership of involvement to other senior colleagues. Training is unlikely to be the key mechanism here; senior researchers are much more likely to be influenced by peers or by their personal experience of the benefits of public involvement. Early career researchers may be shaped by training, but again peer learning and culture may be more influential. For those researchers sceptical or agnostic about public involvement, the requirement of funders is a key factor that is likely to make them engage with the involvement agenda. Therefore, funders need to scrutinise the track record of research teams on public involvement to ascertain whether there is any evidence of commitment or leadership on involvement.

One of the findings of the economic analysis was that PIs have consistently underestimated the costs of public involvement in their grant applications. Clearly the field will benefit from the guidance and budgeting tool recently disseminated by MHRN and INVOLVE. It was also notable that there was a degree of variation in the real costs of public involvement and that effective involvement is not necessarily costly. Different models of involvement incur different costs and researchers need to be made aware of the costs and benefits of these different options.

One methodological lesson we learned was the impact that conducting this research had on some participants’ reflection on the impact of public involvement. Particularly for research staff, the questions we asked sometimes made them reflect upon what they were doing and change aspects of their approach to involvement. Thus, the more the NIHR and other funders can build reporting, audit and other forms of evaluation on the impact of public involvement directly into their processes with PIs, the more likely such questioning might stimulate similar reflection.

Recommendations for further research

There are a number of gaps in our knowledge around public involvement in research that follow from our findings, and would benefit from further research, including realist evaluation to extend and further test the theory we have developed here:

  • In-depth exploration of how PIs become committed to public involvement and how to influence agnostic or sceptical PIs would be very helpful. Further research might compare, for example, training with peer-influencing strategies in engendering PI commitment. Research could explore the leadership role of other research team members, including research partners, and how collective leadership might support effective public involvement.
  • More methodological work is needed on how to robustly capture the impact and outcomes of public involvement in research (building as well on the PiiAF work of Popay et al. 51 ), including further economic analysis and exploration of impact when research partners are integral to research teams.
  • Research to develop approaches and carry out a full cost–benefit analysis of public involvement in research would be beneficial. Although methodologically challenging, it would be very useful to conduct some longer-term studies which sought to quantify the impact of public involvement on such key indicators as participant recruitment and retention in clinical trials.
  • It would also be helpful to capture qualitatively the experiences and perspectives of research partners who have had mixed or negative experiences, since they may be less likely than enthusiasts to volunteer to participate in studies of involvement in research such as ours. Similarly, further research might explore the (relatively rare) experiences of marginalised and seldom-heard groups involved in research.
  • Payment for public involvement in research remains a contested issue with strongly held positions for and against; it would be helpful to further explore the value research partners and researchers place on payment and its effectiveness for enhancing involvement in and impact on research.
  • A final relatively narrow but important question that we identified after data collection had finished is: what is the impact of the long periods of relative non-involvement following initial periods of more intense involvement for research partners in some types of research, particularly clinical trials?

Included under terms of UK Non-commercial Government License.


Turn your research insights into actionable recommendations


At the end of one presentation, my colleague approached me and asked what I recommended based on the research. I was a bit puzzled. I didn’t expect anyone to ask me this kind of question. By that point in my career, I wasn’t aware that I had to make recommendations based on the research insights. I could talk about the next steps regarding what other research we had to conduct. I could also relay the information that something wasn’t working in a prototype, but I had no idea what to suggest. 


How to move from qualitative data to actionable insights

Over time, more and more colleagues asked for these recommendations. Finally, I realized that one of the key pieces I was missing in my reports was the “so what?” The prototype isn’t working, so what do we do next? Because I didn’t include suggestions, my colleagues had a difficult time marrying actions to my insights. Sure, the team could see the noticeable changes, but the next steps were a struggle, especially for generative research. 

Without these suggestions, my insights started to fall flat. My colleagues were excited about them and loved seeing the video clips, but they weren’t working with the findings. With this, I set out to experiment on how to write recommendations within a user research report. 

How to write recommendations

For a while, I wasn’t sure how to write recommendations. And, even now, I believe there is no one right way. When I first started looking into this, I began with two main questions:

What do recommendations mean to stakeholders?

How prescriptive should recommendations be?

When people asked me for recommendations, I had no idea what they were looking for. I was nervous I would step on people’s toes and give the impression I thought I knew more than I did. I wasn’t a designer and didn’t want to make whacky design recommendations or impractical suggestions that would get developers rolling their eyes. 

When in doubt, I dusted off my internal research cap and sat with stakeholders to understand what they meant by recommendations. I asked them for examples of what they expected and what made a suggestion “helpful” or “actionable.” I walked away with a list of “must-haves” for my recommendations. They had to be:

Flexible. Just because I made an initial recommendation did not mean it was the only path forward. Once I presented the recommendations, we could talk through other ideas and consider new information. There were a few times when I revised my recommendations based on conversations I had with colleagues.

Feasible. At first, I started presenting my recommendations without any prior feedback. My worst nightmare came true. The designer and developer sat back, arms crossed, and said, “A lot of this is impossible.” I quickly learned to review any recommendations I was uncertain about with them beforehand. Alternatively, I came up with several recommendations for one solution to help combat this problem.

Prioritized (to the best of my ability). Since I am not entirely sure of the effort each recommendation requires, I use a chart of impact and reach to prioritize suggestions (see the short sketch after this list). Then, once I present this list, it may get reprioritized depending on effort levels from the team (hey, flexibility!).

Detailed.  This point helped me a lot with my second question regarding how in-depth I should make my recommendations. Some of the best detail comes from photos, videos, or screenshots, and colleagues appreciated when I linked recommendations with this media. They also told me to put in as much detail as possible to avoid vagueness, misinterpretation, and endless debate. 

Think MVP. Think about the solution with the fewest changes instead of recommending complex changes to a feature or product. What are some minor changes that the team can make to improve the experience or product?

Justified. This part was the hardest for me. When my research findings didn’t align with expectations or business goals, I had no idea what to say. When I receive results that highlight we are going in the wrong direction, my recommendations become even more critical. Instead of telling the team that the new product or feature sucks and we should stop working on it, I offer alternatives. I follow the concept of “no, but...” So, “no, this isn’t working, but we found that users value X and Y, which could lead to increased retention” (or whatever metric we were looking at).
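As a rough illustration of the impact-and-reach prioritization mentioned above, here is a small Python sketch. The recommendations and the 1–5 scores are invented for illustration, and any ordering it produces is only a starting point for discussion with the team.

```python
# Hypothetical example: rank recommendations by impact x reach, each scored 1-5.
# The entries and scores below are made up; real scores come from the team.
recommendations = [
    {"text": "Mark required fields in the checkout form", "impact": 5, "reach": 4},
    {"text": "Keep entered data when submission fails", "impact": 4, "reach": 4},
    {"text": "Explore address autocomplete", "impact": 3, "reach": 2},
]

# Sort by the combined score, highest first, and print a simple ranked list.
for rec in sorted(recommendations, key=lambda r: r["impact"] * r["reach"], reverse=True):
    print(f'{rec["impact"] * rec["reach"]:>2}  {rec["text"]}')
```

The point is not the arithmetic but the conversation it starts: a transparent score makes it easier for the team to challenge and reprioritize.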

Let’s look at some examples

Although this list was beneficial in guiding my recommendations, I still wasn’t well-versed in how to write them. So, after some time, I created a formula for writing recommendations:

Observed problem/pain point/unmet need + consequence + potential solution

Evaluative research

Let’s imagine we are testing a check-out page, and we found that users were having a hard time filling out the shipping and billing forms, especially when there were two different addresses.

A non-specific and unhelpful recommendation might look like this:

Users get frustrated when filling out the shipping and billing form.

The reasons this recommendation is not ideal are:

  • It provides no context or detail of the problem
  • There is no proposed solution
  • It sounds a bit judgemental (focus on the problem!)
  • There is no immediate movement forward with this

A redesign recommendation about the same problem might look like this:

Users overlook the mandatory fields in the shipping and billing form, causing them to go back and fill out the form again. With this, they become frustrated. Include markers of required fields and avoid deleting information when users submit if they haven’t filled out all required fields.

Let’s take another example:

We tested an entirely new concept for our travel company, allowing people to pay to become “prime” travel members. In our user base, no one found any value in having or paying for a membership. However, they did find value in several of the features, such as sharing trips with family members or splitting costs but could not justify paying for them.

A suboptimal recommendation could look like this:

Users would not sign up or pay for a prime membership.

Again, there is a considerable lack of context and understanding here, as well as action. Instead, we could try something like:

Users do not find enough value in the prime membership to sign up or pay for it. Therefore, they do not see themselves using the feature. However, they did find value in two features: sharing trips with friends and splitting the trip costs. Focusing instead on these features could bring more people to our platform and increase retention.

Generative research

Generative research can look a bit trickier because there isn’t always an inherent problem you are solving. For example, you might not be able to point to a usability issue, so you have to look more broadly at pain points or unmet needs. 

For example, in our generative research, we found that people often forget to buy gifts for loved ones, making them feel guilty as they scramble at the last minute to find something meaningful.

This finding is extremely broad and could go in so many directions. With suggestions, we don’t necessarily want to lead our teams down only one path (flexibility!), but we also don’t want to leave the recommendation too vague (detailed). I use How Might We statements to help me build generative research recommendations.

Just reporting the above wouldn’t entirely be enough for a recommendation, so let’s try to put it in a more actionable format:

People struggled to remember to buy gifts for loved ones’ birthdays or special days. By the time their calendar notified them, it was too late to get a gift, leaving them filled with guilt and rushing to purchase a meaningful gift that would arrive on time. How might we help people remember birthdays early enough to find meaningful gifts for their loved ones?

A great follow-up to generative research recommendations can be running an ideation workshop!


How to format recommendations in your report

I always end with recommendations because people leave a presentation with their minds buzzing and next steps top of mind (hopefully!). My favorite way to format suggestions is in a chart. That way, I can link the recommendation back to the insight and priority. My recommendations look like this:

An example of recommendation formatting. Link your recommendation to evidence and prioritize it for your team (but remember to be flexible!).
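The original chart image is not reproduced here, so as a purely hypothetical illustration of the format it describes, a row-per-recommendation chart might look like the following (every row is invented):

Recommendation | Linked insight (evidence) | Priority
Mark required fields in the checkout form | Most participants missed mandatory fields (see linked clips) | High
Keep entered data when submission fails | Participants re-typed their addresses and voiced frustration | High
Explore address autocomplete | Two participants suggested it unprompted | Low

Keeping the evidence column next to each recommendation makes it easy for the team to trace a suggestion back to what was actually observed.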

Overall, play around with the recommendations that you give to your teams. The best thing you can do is ask for what they expect and then ask for feedback. By catering and iterating to your colleagues’ needs, you will help them make better decisions based on your research insights!

Written by Nikki Anderson, User Research Lead & Instructor. Nikki is a User Research Lead and Instructor with over eight years of experience. She has worked in all different sizes of companies, ranging from a tiny start-up called ALICE to large corporation Zalando, and also as a freelancer. During this time, she has led a diverse range of end-to-end research projects across the world, specializing in generative user research. Nikki also owns her own company, User Research Academy, a community and education platform designed to help people get into the field of user research, or learn more about how user research impacts their current role. User Research Academy hosts online classes, content, as well as personalized mentorship opportunities with Nikki. She is extremely passionate about teaching and supporting others throughout their journey in user research. To spread the word of research and help others transition and grow in the field, she writes as a writer at dscout and Dovetail. Outside of the world of user research, you can find Nikki (happily) surrounded by animals, including her dog and two cats, reading on her Kindle, playing old-school video games like Pokemon and World of Warcraft, and writing fiction novels.


Final Report


Dr Hilary Cass has submitted her final report and recommendations to NHS England in her role as Chair of the Independent Review of gender identity services for children and young people.

The Review was commissioned by NHS England to make recommendations on how to improve NHS gender identity services and to ensure that children and young people who are questioning their gender identity or experiencing gender dysphoria receive a high standard of care that meets their needs and is safe, holistic, and effective.

The report describes what is known about the young people who are seeking NHS support around their gender identity and sets out the recommended clinical approach to care and support they should expect, the interventions that should be available, and how services should be organised across the country.

It also makes recommendations on the quality improvement and research infrastructure required to ensure that the evidence base underpinning care is strengthened.

In making her recommendations, Dr Cass has had to rely on the currently available evidence and think about how the NHS can respond safely, effectively, and compassionately, leaving some issues for wider societal debate.


Exploration of identity is a completely natural process during childhood and adolescence and rarely requires clinical input. However, over the past five to ten years, the number of children and young people being referred for NHS support around their gender identity has increased rapidly.

As a result, young people are waiting several years to receive clinical support and during this time they and their families are left to make sense of their individual situations, often dealing with considerable challenges and upheaval.

There has been a similar pattern in other Western countries, with clinicians noting not only the rising number but also a change in the case mix of the young people seeking support.

There have been many more birth-registered females being referred in adolescence, marking a shift from the cohort that these services have traditionally seen; that is, birth-registered males presenting in childhood, on whom the previous clinical approach to care was based.

Clinicians also noted that these young people often had other issues that they were having to manage alongside their gender-related distress.

The Independent Review set out to understand the reasons for the growth in referrals and the change in case-mix, and to identify the clinical approach and service model that would best serve this population. 

To provide an evidence base upon which to make its recommendations, the Review commissioned the University of York to conduct a series of independent systematic reviews of existing evidence and new qualitative and quantitative research to build on the evidence base.

Dr Cass also conducted an extensive programme of engagement with young people, parents, clinicians and other associated professionals.

Overview of key findings

  • There is no simple explanation for the increase in the numbers of predominantly young people and young adults who have a trans or gender diverse identity, but there is broad agreement that it is a result of a complex interplay between biological, psychological and social factors. This balance of factors will be different in each individual.
  • There are conflicting views about the clinical approach, with expectations of care at times being far from usual clinical practice. This has made some clinicians fearful of working with gender-questioning young people, despite their presentation being similar to many children and young people presenting to other NHS services.
  • An appraisal of international guidelines for care and treatment of children and young people with gender incongruence found that no single guideline could be applied in its entirety to the NHS in England.
  • While a considerable amount of research has been published in this field, systematic evidence reviews demonstrated the poor quality of the published studies, meaning there is not a reliable evidence base upon which to make clinical decisions, or for children and their families to make informed choices. 
  • The strengths and weaknesses of the evidence base on the care of children and young people are often misrepresented and overstated, both in scientific publications and social debate.
  • The controversy surrounding the use of medical treatments has taken focus away from what the individualised care and treatment is intended to achieve for individuals seeking support from NHS gender services.
  • The rationale for early puberty suppression remains unclear, with weak evidence regarding the impact on gender dysphoria, mental or psychosocial health. The effect on cognitive and psychosexual development remains unknown.
  • The use of masculinising / feminising hormones in those under the age of 18 also presents many unknowns, despite their longstanding use in the adult transgender population. The lack of long-term follow-up data on those commencing treatment at an earlier age means we have inadequate information about the range of outcomes for this group.
  • Clinicians are unable to determine with any certainty which children and young people will go on to have an enduring trans identity.
  • For most young people, a medical pathway will not be the best way to manage their gender-related distress. For those young people for whom a medical pathway is clinically indicated, it is not enough to provide this without also addressing wider mental health and/or psychosocially challenging problems.
  • Innovation is important if medicine is to move forward, but there must be a proportionate level of monitoring, oversight and regulation that does not stifle progress, while preventing creep of unproven approaches into clinical practice. Innovation must draw from and contribute to the evidence base.

Overview of Recommendations

The recommendations set out a different approach to healthcare, more closely aligned with usual NHS clinical practice that considers the young person holistically and not solely in terms of their gender-related distress. The central aim of assessment should be to help young people to thrive and achieve their life goals.

  • Services must operate to the same standards as other services seeing children and young people with complex presentations and/or additional risk factors.
  • Expand capacity through a distributed service model, based in paediatric services and with stronger links between secondary and specialist services.
  • Children/young people referred to NHS gender services must receive a holistic assessment of their needs to inform an individualised care plan. This should include screening for neurodevelopmental conditions, including autism spectrum disorder, and a mental health assessment.
  • Standard evidence-based psychological and psychopharmacological treatment approaches should be used to support the management of the associated distress from gender incongruence and co-occurring conditions, including support for parents/carers and siblings as appropriate.
  • Services should establish a separate pathway for pre-pubertal children and their families, ensuring that they are prioritised for early discussion about how parents can best support their child in a balanced and non-judgemental way. When families/carers are making decisions about social transition of pre-pubertal children, services should ensure that they can be seen as early as possible by a clinical professional with relevant experience.
  • NHS England should ensure that each Regional Centre has a follow-through service for 17–25-year-olds, either by extending the range of the regional children and young people’s service or through linked services, to ensure continuity of care and support at a potentially vulnerable stage in their journey. This will also allow clinical and research follow-up data to be collected.
  • There needs to be provision for people considering detransition, recognising that they may not wish to re-engage with the services whose care they were previously under.
  • A full programme of research should be established to look at the characteristics, interventions and outcomes of every young person presenting to the NHS gender services.
  • The puberty blocker trial previously announced by NHS England should be part of a programme of research which also evaluates outcomes of psychosocial interventions and masculinising/ feminising hormones.
  • The option to provide masculinising/feminising hormones from age 16 is available, but the Review recommends extreme caution. There should be a clear clinical rationale for providing hormones at this stage rather than waiting until an individual reaches 18. Every case considered for medical treatment should be discussed at a national Multi-Disciplinary Team (MDT).
  • The implications of private healthcare for any future requests to the NHS for treatment, monitoring and/or involvement in research, and the dispensing responsibilities of pharmacists for private prescriptions, need to be clearly communicated.


Published on 11.4.2024 in Vol 26 (2024)

Patients’ Experiences With Digitalization in the Health Care System: Qualitative Interview Study

Authors of this article:


Original Paper

  • Christian Gybel Jensen 1 * , MA   ; 
  • Frederik Gybel Jensen 1 * , MA   ; 
  • Mia Ingerslev Loft 1, 2 * , MSc, PhD  

1 Department of Neurology, Rigshospitalet, Copenhagen, Denmark

2 Institute for People and Technology, Roskilde University, Roskilde, Denmark

*all authors contributed equally

Corresponding Author:

Mia Ingerslev Loft, MSc, PhD

Department of Neurology

Rigshospitalet

Inge Lehmanns Vej 8

Phone: 45 35457076

Email: [email protected]

Background: The digitalization of public and health sectors worldwide is fundamentally changing health systems. With the implementation of digital health services in health institutions, a focus on digital health literacy and the use of digital health services have become more evident. In Denmark, public institutions use digital tools for different purposes, aiming to create a universal public digital sector for everyone. However, this digitalization risks reducing equity in health and further marginalizing citizens who are disadvantaged. Therefore, more knowledge is needed regarding patients’ digital practices and experiences with digital health services.

Objective: This study aims to examine digital practices and experiences with public digital health services and digital tools from the perspective of patients in the neurology field and address the following research questions: (1) How do patients use digital services and digital tools? (2) How do they experience them?

Methods: We used a qualitative design with a hermeneutic approach. We conducted 31 semistructured interviews with patients who were hospitalized or formerly hospitalized at the department of neurology in a hospital in Denmark. The interviews were audio recorded and subsequently transcribed. The text from each transcribed interview was analyzed using manifest content analysis.

Results: The analysis provided insights into 4 different categories regarding digital practices and experiences of using digital tools and services in health care systems: social resources as a digital lifeline, possessing the necessary capabilities, big feelings as facilitators or barriers, and life without digital tools. Our findings show that digital tools were experienced differently, and specific conditions were important for the possibility of engaging in digital practices, including having access to social resources; possessing physical, cognitive, and communicative capabilities; and feeling motivated, secure, and comfortable. These prerequisites were necessary for participants to have positive experiences using digital tools in the health care system. Those who did not have these prerequisites experienced challenges and, in some cases, felt left out.

Conclusions: Experiences with digital practices and digital health services are complex and multifaceted. Engagement in digital practices for the examined population requires access to continuous assistance from their social network. If patients do not meet requirements, digital health services can be experienced as exclusionary and a source of concern. Physical, cognitive, and communicative difficulties might make it impossible to use digital tools or create more challenges. To ensure that digitalization does not create inequities in health, it is necessary for developers and institutions to be aware of the differences in digital health literacy, focus on simplifying communication with patients and next of kin, and find flexible solutions for citizens who are disadvantaged.

Introduction

In 2022, the fourth most googled question in Denmark was, “Why does MitID not work?” [ 1 ]. MitID (My ID) is a digital access tool that Danes use to enter several different private and public digital services, from bank accounts to mail from their municipality or the state. MitID is a part of many Danish citizens’ everyday lives because the public sector in Denmark is digitalized in many areas. In recent decades, digitalization has changed how governments and people interact and has demonstrated the potential to change the core functions of public sectors and delivery of public policies and services [ 2 ]. When public sectors worldwide become increasingly digitalized, this transformation extends to the public health sectors as well, and some studies argue that we are moving toward a “digital public health era” that is already impacting the health systems and will fundamentally change the future of health systems [ 3 ]. While health systems are becoming more digitalized, it is important that both patients and digitalized systems adapt to changes in accordance with each other. Digital practices of people can be understood as what people do with and through digital technologies and how people relate to technology [ 4 ]. Therefore, it is relevant to investigate digital practices and how patients perceive and experience their own use of digital tools and services, especially in relation to existing digital health services. In our study, we highlight a broad perspective on experiences with digital practices and particularly add insight into the challenges with digital practices faced by patients who have acute or chronic illness, with some of them also experiencing physical, communicative, or cognitive difficulties.

An international Organization for Economic Cooperation and Development report indicates that countries are digitalized to different extents and in different ways; however, this does not mean that countries do not share common challenges and insights into the implementation of digital services [ 2 ].

In its global Digital Government Index, Denmark is presented as one of the leading countries when it comes to public digitalization [ 2 ]. Recent statistics indicate that approximately 97% of Danish families have access to the internet at home [ 5 ]. The Danish health sector already offers many different digital services, including web-based delivery of medicine, e-consultations, patient-related outcome questionnaires, and seeking one’s own health journal or getting test results through; “Sundhed” [ 6 ] (the national health portal) and “Sundhedsjournalen” (the electronic patient record); or the apps “Medicinkortet” (the shared medication record), “Minlæge” (My Doctor, consisting of, eg, communication with the general practitioner), or “MinSP” (My Health Platform, consisting of, eg, communication with health care staff in hospitals) [ 6 - 8 ].

The Danish Digital Health Strategy from 2018 aims to create a coherent and user-friendly digital public sector for everyone [ 9 ], but statistics indicate that certain groups in society are not as digitalized as others. In particular, the older population uses digital services the least: in 2020, 5% of people aged 65 to 75 years and 18% of those aged 75 to 89 years had never used the internet [ 5 ]. Parts of the literature have problematized how the digitalization of the welfare state contributes to the marginalization of socially disadvantaged older citizens [ 10 ]. However, statistics also indicate that the probability of using digital tools increases significantly as a person’s experience with them grows, regardless of age or education level [ 5 ].

Understanding the digital practices of patients is important because patients can use digital tools to engage with the health system and follow their own health course. Researching experiences with digital practices can be a way to better understand potential possibilities and barriers when patients use digital health services. With patients becoming more involved in their own health course and treatment, the importance of patients’ health literacy is increasingly recognized [ 11 ]. The World Health Organization defines health literacy as the “achievement of a level of knowledge, personal skills and confidence to take action to improve personal and community health by changing personal lifestyles and living conditions” [ 12 ]. Furthermore, health literacy can be described as “a person’s knowledge and competencies to meet complex demands of health in modern society,” and it is viewed as a critical step toward patient empowerment [ 11 , 12 ]. In a digitalized health care system, this also includes the knowledge, capabilities, and resources that individuals require to use and benefit from eHealth services, that is, “digital health literacy (eHealth literacy)” [ 13 ]. An eHealth literacy framework created by Norgaard et al [ 13 ] identified different aspects, for example, the ability to process information and to actively engage with digital services, as important facets of digital health literacy. This argument is supported by studies that demonstrate how patients with cognitive and communicative challenges experience barriers to the use of digital tools and require different approaches in the design of digital solutions in the health sector [ 14 , 15 ]. Access to digital services and digital literacy are becoming increasingly important determinants of health, as they enable people to improve their health and become involved in their own health course [ 16 ].

The need for a better understanding of eHealth literacy and of patients’ capabilities to meet the demands of public digital services and engage in their own health calls for a deeper investigation into digital practices and the use of digital tools and services from the perspective of patients with varying digital capabilities. Important focus areas for better understanding digital practices and related challenges have already been highlighted in various studies. These studies indicate that social support, assessment of the value of digital services, and systemic assessment of digital capabilities are important in the use and implementation of digital tools, and they call for better insight into complex experiences with digital services [ 13 , 17 , 18 ]. Therefore, we aimed to examine digital practices and experiences with public digital health services and digital tools from the perspective of patients, addressing the following research question: how do patients use digital services and digital tools, and how do they experience them?

Methods

We aimed to investigate digital practices and experiences with digital health services and digital tools; therefore, we used a qualitative design and adopted a hermeneutic approach as the point of departure, which means including preexisting knowledge of digital practices while also providing room for new comprehension [ 19 ]. Our interpretive approach is underpinned by the philosophical hermeneutics of Gadamer et al [ 19 ], who described the interpretation process as a “hermeneutic circle,” in which the researcher enters the interpretation process with an open mind and a historical awareness of the phenomenon (preknowledge). We conducted semistructured interviews using an interview guide. This study followed the COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist [ 20 ].

Setting and Participants

To gain a broad understanding of experiences with public digital health services, a purposive sampling strategy was used. All 31 participants were hospitalized or formerly hospitalized patients in a large neurological department in the capital of Denmark ( Table 1 ). We considered that including patients from the neurological field would provide broad insight into experiences of digital practices from different perspectives. The department comprised, among other units, 8 inpatient units covering, for example, acute neurology and stroke, from which the patients were recruited. Patients admitted to a neurological department can have acute and transient neurological diseases, such as infections in the brain, stroke, or blood clot in the brain, from which they can recover completely or be left with persistent physical and mental difficulties, or they can have chronic and progressive neurological disorders such as Parkinson disease and dementia. Some patients hospitalized in neurological care have communicative and cognitive difficulties because of their neurological disorders. Nursing staff from the respective units helped the researchers (CGJ, FGJ, and MIL) identify patients who differed in terms of gender, age, and severity of neurological illness. Some patients (6/31, 19%) had language difficulties; however, a speech therapist assessed them as suitable participants. We excluded patients with severe cognitive difficulties and those who could not speak Danish. Hence, the sampling strategy enabled the identification and selection of information-rich participants relevant to this study [ 21 ], in line with the aim of qualitative research. The participants were invited to participate by either the first (CGJ) or the last author (MIL), and all invited participants (31/31, 100%) chose to participate.

The 31 participants were aged between 40 and 99 years, with an average age of 71.75 years ( Table 1 ). Of the 31 participants, 10 (32%) had physical disabilities or cognitive or communicative difficulties due to sequelae of neurological illness or other physical conditions.

Data Collection

The 31 patient interviews were conducted over a 2-month period between September and November 2022. Of the 31 patients, 20 (65%) were interviewed face-to-face in their patient room at the hospital during admission, and 11 (35%) were interviewed by phone after discharge. The interviews had a mean length of 20.48 minutes.

We developed a semistructured interview guide ( Table 2 ). The interview questions were developed based on the research aim, findings from our preliminary review of the literature presented in the Introduction section, and identified gaps that we needed to elaborate on to answer our research question [ 22 ]. The semistructured interview guide was designed to support the development of a trusting relationship and to ensure the relevance of the interviews’ content [ 22 ]. The questions served as prompts for the participants and were supported throughout the interview by follow-up questions such as “please tell me more” and “please elaborate,” both to heighten the level of detail and to verify our understanding of the issues at play. If a participant had cognitive or communicative difficulties, communication was supported during the interview using the method Supported Conversation for Adults with Aphasia [ 23 ].

The interviews were conducted individually by the authors (CGJ, FGJ, and MIL), who were skilled in conducting interviews and qualitative research. The interviewers are not part of daily clinical practice but are employed in the department of neurology from which the patients were recruited. All interviews were audio recorded and subsequently transcribed verbatim by the 3 authors individually.

(Table 2 footnote: PRO: patient-related outcome)

Data Analysis

The text from each transcribed interview was analyzed using manifest content analysis, as described by Graneheim and Lundman [ 24 ]. Content analysis is a method for analyzing written, verbal, and visual communication in a systematic way [ 25 ]. Qualitative content analysis is a structured but nonlinear process that requires researchers to move back and forth between the original text and parts of the text during the analysis. Manifest analysis is the descriptive level at which the surface structure of the text, central to the phenomenon and the research question, is described. The analysis was conducted as a collaborative effort between the first (CGJ) and last (MIL) authors; in this inductive, circular process, the researchers continually discussed and reflected on the text to achieve consistency in its interpretation. The transcriptions were first read several times to gain a sense of the whole context, and each interview was then analyzed. The text was initially divided into domains reflecting the lowest degree of interpretation, creating a rough structure in which text passages shared a specific area; this structure roughly mirrored the themes of the interview guide, as guided by Graneheim and Lundman [ 24 ]. Thereafter, the text was divided into meaning units, condensed into text-near descriptions, and then abstracted and labeled with codes. The codes were categorized based on similarities and differences. During this process, we discussed the findings to reach a consensus on the content, resulting in the final 4 categories presented in this paper.

Ethical Considerations

The interviewees received oral and written information about the study and its voluntary nature before the interviews. Written informed consent was obtained from all participants, and participants were able to opt out of the study at any time. Data were anonymized and stored electronically on locked and secured servers. The Ethics Committee of the Capital Region of Denmark was contacted before the start of the study; the study was approved by the ethics committee and registered with the Danish Data Protection Agency (number P2021-839). Furthermore, the ethical principles of the Declaration of Helsinki were followed for this study.

Results

The analysis provided insights into 4 categories regarding digital practices and experiences of using digital tools and services in health care systems: social resources as a digital lifeline, possessing the necessary capabilities, big feelings as facilitators or barriers, and life without digital tools.

Social Resources as a Digital Lifeline

Throughout the analysis, it became evident that access to both material and social resources was of great importance when using digital tools. Most participants already possessed and had easy access to a computer, smartphone, or tablet. The few participants who did not own the necessary digital tools told us that they did not have the skills needed to use these tools. For these participants, the lack of material resources was tied particularly to a lack of knowledge and know-how, as they expressed that they would not know where to start after buying a computer—how to set it up, connect it to the internet, and use its many systems.

However, possessing the necessary material resources did not mean that the participants possessed the knowledge and skill to use digital tools. Furthermore, access to material resources was also a question of having access to assistance when needed. Some participants who had access to a computer, smartphone, and tablet and knew how to use these tools still had to obtain help when setting up hardware, updating software, or getting a new device. These participants were confident in their own ability to use digital devices but also relied on family, friends, and neighbors in their everyday use of these tools. Certain participants were explicitly aware of their own use of social resources when expressing their thoughts on digital services in health care systems:

I think it is a blessing and a curse. I think it is both. I would say that if I did not have someone around me in my family who was almost born into the digital world, then I think I would be in trouble. But I feel sorry for those who do not have that opportunity, and I know quite a few who do not. They get upset, and it’s really frustrating. [Woman, age 82 years]

The participants’ use of social resources indicates that learning skills and using digital tools are not solely individual tasks but rather continuously involve engagement with other people, particularly whenever a new unforeseen problem arises or when the participants want a deeper understanding of the tools they are using:

If tomorrow I have to get a new ipad...and it was like that when I got this one, then I had to get XXX to come and help me move stuff and he was sweet to help with all the practical stuff. I think I would have cursed a couple of times (if he hadn’t been there), but he is always helpful, but at the same time he is also pedagogic so I hope that next time he showed me something I will be able to do it. [Man, age 71 years]

For some participants, obtaining assistance from a more experienced family member was experienced as an opportunity to learn, whereas for other participants, their use of public digital services was even tied directly to assistance from a spouse or family member:

My wife, she has access to mine, so if something comes up, she can just go in and read, and we can talk about it afterwards what (it is). [Man, age 85 years]

The participants used social resources to navigate digital systems and understand and interpret communication from the health care system through digital devices. Another example of this was the participants who needed assistance to find, answer, and understand questionnaires from the health care department. Furthermore, social resources were viewed as a support system that made participants feel more comfortable and safer when operating digital tools. The social resources were particularly important when overcoming unforeseen and new challenges and when learning new skills related to the use of digital tools. Participants with physical, cognitive, and communicative challenges also explained how social resources were of great importance in their ability to use digital tools.

Possessing the Necessary Capabilities

The findings indicated that possessing the desire and knowing how to use digital tools are not always enough to engage successfully with digital services. Different health issues can affect motor skills and mobility. Some of these consequences visibly affected how our participants interacted with digital devices, and such challenges were relatively easy to discover. However, our participants also revealed hidden challenges. In some cases, cognitive and communicative difficulties can make it difficult to use digital tools, and this might not become clear until the individual tries to use a device’s more complex functions. For example, some participants found it easy to turn on a computer and use it to write but difficult to go through security measures on digital services or to interpret and understand digital language. Remembering passwords and logging on to systems created challenges, particularly for those with health issues that directly affect memory and cognitive abilities, who expressed concerns about what they were able to do through digital tools:

I think it is very challenging because I would like to use it how I used to before my stroke; (I) wish that everything (digital skills) was transferred, but it just isn’t. [Man, age 80 years]

Despite these challenges, the participants demonstrated great interest in using digital tools, particularly regarding health care services and their own well-being. However, the challenges they experienced could not always be overcome merely by motivation and good intentions. Another aspect of these challenges was the amount of extra time and energy that the participants had to spend on digital services. A patient diagnosed with Parkinson disease described how her symptoms created challenges that changed her digital practices:

Well it could for example be something like following a line in the device. And right now it is very limited what I can do with this (iPhone). Now I am almost only using it as a phone, and that is a little sad because I also like to text and stuff, but I also find that difficult (...) I think it is difficult to get an overview. [Woman, age 62 years]

Some participants said that after they were discharged from the hospital, they did not use the computer anymore because it was too difficult and too exhausting, which contributed to them giving up. Using digital tools already demanded a certain amount of concentration and awareness, and some diseases and health conditions affected these abilities further.

Big Feelings as Facilitators or Barriers

The findings revealed a wide range of digital practices in which digital tools were used as communication devices, as entertainment devices, and as practical and informative tools for ordering medicine, booking consultations, asking health-related questions, or receiving email from public institutions. Despite these different digital practices, repeating patterns and arguments appeared when the participants were asked why they had learned to use digital tools or wanted to improve their skills. A recurring argument was that they wanted to “follow the times,” or, as a participant who was still not satisfied with her digital skills stated:

We should not go against the future. [Woman, age 89 years]

The participants expressed a positive view of the technological developments and possibilities that digital devices offered, and they wanted to improve their knowledge and skills related to digital practice. For some participants, this was challenging, and they expressed frustration over how technological developments “moved too fast,” but some participants interpreted these challenges as a way to “keep their mind sharp.”

Another recurring pattern was that the participants expressed great interest in using digital services related to the health care system and other public institutions. The importance of being able to navigate digital services was particularly clear when participants talked about finding test results, written electronic messages, and questionnaires from the hospital or other public institutions. Keeping up with developments, communicating with public institutions, and taking an interest in their own health and well-being were described as good reasons to learn to use digital tools.

However, other aspects also affected these learning facilitators. Some participants felt alienated while using digital tools and described the practice as something associated with feelings of anxiety, fear, and stupidity, as well as something that demanded “a certain amount of courage.” Some participants felt frustrated with the digital challenges they experienced, especially when the challenges were difficult to overcome because of their physical conditions:

I get sad because of it (digital challenges) and I get very frustrated and it takes a lot of time because I have difficulty seeing when I look away from the computer and have to turn back again to find out where I was and continue there (...) It pains me that I have to use so much time on it. [Man, age 71 years]

Fear of making mistakes, particularly when communicating with public institutions such as the health care system, was a common pattern. Another was the fear of misinterpreting the sender and the need to ensure that written electronic messages were actually from the stated sender. Some participants felt forced to learn about digital tools because they cared a great deal about the services. Furthermore, fear that digital services would replace human interaction was a recurring concern among the participants. Despite these initial and recurring feelings, some participants learned how to navigate the digital services that they deemed relevant. Another recurring pattern in this learning process was repetition, practice of digital skills, and consistent assistance from other people. One participant expressed the need to use the services often to remember the necessary skills:

Now I can figure it out because now I’ve had it shown 10 times. But then three months still pass... and then I think...how was it now? Then I get sweat on my forehead (feel nervous) and think; I’m not an idiot. [Woman, age 82 years]

For some participants, learning how to use digital tools demanded time and patience, as challenges had to be overcome more than once because they reappeared until the use of digital tools had become more automatic in their everyday lives. Using digital tools and health services was viewed as easier and less stressful when it was part of everyday routines.

Life Without Digital Tools: Not a Free Choice

Even though some participants used digital tools daily, others expressed that it was “too late for them.” These participants did not view this as a free choice but as something they had to accept that they could not do. They wished that they could have learned it earlier in life but did not see it as a possibility in the future. They saw potential in digital services, including digital health care services, but did not know exactly which services they were missing out on. Despite this lack of knowledge, they still felt sad about the position they were in. One participant expressed what she thought regarding the use of digital tools in public institutions:

Well, I feel alright about it, but it is very, very difficult for those of us who do not have it. Sometimes you can feel left out—outside of society. And when you do not have one of those (computers)...A reference is always made to w and w (www.) and then you can read on. But you cannot do that. [Woman, age 94 years]

The feeling of being left out of society was consistent among the participants who did not use digital tools. To them, digital systems seemed to provide unfair treatment based on something beyond their own control. Participants who were heavily affected by their medical conditions and could not use digital services also felt left out because they saw the advantages of using digital tools. Furthermore, a participant described the feelings connected to the use of digital tools in public institutions:

It is more annoying that it does not seem to work out in my favour. [Woman, age 62 years]

These statements indicated that it is possible for individuals to want to use digital tools and simultaneously find them too challenging. These participants were aware that there are consequences of not using digital tools, and this saddened them, as they felt that they were not receiving the same treatment as other people in society and the health care system.

Discussion

Principal Findings

The insights from our findings demonstrated that our participants had different digital practices and different experiences with digital tools and services; however, the analysis also highlighted patterns in how digital services and tools were used. Specific conditions were important for the possibility of digital practice: having access to social resources; possessing the necessary capabilities; and feeling motivated, secure, and comfortable. These prerequisites were necessary for positive experiences of using digital tools in the health care system, although some participants who met these prerequisites were still skeptical toward digital solutions. Others who did not meet them experienced challenges, and even though they were aware of the opportunities, this awareness made them feel left out. A few participants even viewed digital tools as a threat to their participation in society. This supports the argument of Norgaard et al [ 13 ] that attention to the capability demands that eHealth systems place on users is very important. Furthermore, our findings support the argument of Hjelholt and Papazu [ 17 ] that it is important to better understand experiences related to digital services. In our study, we address this call and bring forth a broad perspective on experiences with digital practices; in particular, we add insight into the challenges with digital practices faced by patients who have acute or chronic illness, some of whom also experience physical, communicative, and cognitive difficulties. To our knowledge, there is limited existing literature on digital practices that does not have a narrow scope, for example, a focus on perspectives on eHealth literacy in the use of apps [ 26 ] or intervention studies focusing on experiences with digital solutions, for example, telemedicine during the COVID-19 pandemic [ 27 ].

As mentioned by Hjelholt et al [ 10 ], certain citizens are dependent on their own social networks in the process of using and learning digital tools. Rasi et al [ 28 ] and Airola et al [ 29 ] argued that digital health literacy is situated and should include the capabilities of the individual’s social network. Our findings support these arguments that access to social resources is an important condition; however, they also highlight that these resources can be particularly crucial in the use of digital health services, for example, when interpreting and understanding digital and written electronic messages related to one’s own health course or when dealing with physical, cognitive, and communicative disadvantages. Therefore, we argue that awareness of these disadvantages is important if we want to understand patients’ digital capabilities, and that including the next of kin can be important in unveiling challenges that are unknown and not easily visible, or in reaching patients with digital challenges through digital means.

Studies by Kayser et al [ 30 ] and Karnoe et al [ 31 ] indicated that patients’ abilities to interpret and understand digital health–related services and their benefits are important for the successful implementation of eHealth services, an argument that our findings support. Health literacy in both digital and physical contexts is important if we want to understand how to better design and implement services. Our participants’ statements support the argument that communication through digital means cannot be viewed as equivalent to face-to-face communication and that an emphasis on digital health literacy demonstrates how health systems demand different capabilities from patients [ 13 ]. We argue that it is important to communicate the purposes of digital services so that both patients and their next of kin know why they participate and how it can benefit them. It is therefore important to make it as clear as possible that digital health services can benefit the patient and that these services are developed to support information, communication, and dialogue between patients and health professionals. However, our findings suggest that even after interpreting and understanding the purposes of digital health services, some patients may still experience challenges when using digital tools.

Therefore, it is important to understand how and why patients learn digital skills, particularly because both experience with digital devices and the perceived value of digital tools have been highlighted as key factors for digital practices [ 5 , 18 ]. Our findings indicate that a combination of these factors is important, as recognizing the value of digital tools was not enough to facilitate the necessary learning process for some of our participants. Instead, our participants described the use of digital tools as a complex and continuous process in which automation of skills, assistance from others, and time to relearn forgotten knowledge were necessary and important facilitators for learning and understanding digital tools as well as for becoming more comfortable and confident in the use of digital health services. This was particularly important, as our participants were more encouraged to learn digital tools when they felt secure rather than afraid and anxious, a point that Bailey and Sheehan [ 18 ] also highlighted. The value of digital solutions and the will to learn were greater when challenges were viewed as something to overcome and learn from rather than something that created a feeling of being stupid. This calls for attention to how digital tools and services can be simplified and explained so that users do not feel alienated. Our findings also support the argument that digital health literacy should take into account emotional well-being related to digital practice [ 32 ].

The various perspectives that our participants provided regarding the use of digital tools in the health care system indicate that patients are affected by the use of digital health services and by their own capabilities to use digital tools. Murray et al [ 33 ] argued that the use of digital tools in health sectors has the potential to improve health and health delivery by improving efficacy, efficiency, accessibility, safety, and personalization, and our participants also highlighted these positive aspects. However, different studies have found that some patients, particularly older adults considered socially vulnerable, have lower digital health literacy [ 10 , 34 , 35 ], which is an important determinant of health and may widen disparities and inequity in health care [ 16 ]. Studies on older adults’ adoption of information and communication technology show that engagement with this technology can be limited by the usability of the technology, feelings of anxiety and concern, self-perception of technology use, and the need for assistance and inclusive design [ 36 ]. Our participants’ experiences with digital practices support the importance of these focus areas, especially in settings where the patients admitted to hospitals are primarily older. Furthermore, our findings indicate that some older patients who used to view themselves as engaged in their own health care felt more distanced from the health care system because of digital services, and some who did not have the capabilities to use digital tools felt that they were treated differently from the rest of society. They did not necessarily view themselves as vulnerable but felt vulnerable in the specific experience of trying to use digital services because they wished that they were more capable. This was also the case for patients with physical and cognitive difficulties, as they were not necessarily aware of the challenges before experiencing them. Drawing on the phenomenological and feminist approach of Ahmed [ 37 ], the challenges that make patients feel vulnerable are not necessarily visible to others but can instead be viewed as invisible institutional “walls” that do not present themselves before the patient runs into them. Some participants had to experience how their physical, cognitive, or communicative difficulties affected their digital practice to realize that they were not as digitally capable as they once were or as others in society. Viewed from this perspective, our findings could be used to argue that digital capabilities should be viewed as a privilege tied to users’ physical bodies and that digital services in the health care system are indirectly making patients without this privilege vulnerable. This calls for more attention to the inequities that digital tools and services create in health care systems and for awareness that those who do not use digital tools are not necessarily indifferent to the consequences. Particularly in a context such as the Danish one, in which the digital strategy is to create a coherent and user-friendly public digital sector for everyone, it needs to be understood that patients have different digital capabilities and needs. Although some have not yet had a challenging experience that made them feel vulnerable, others are very aware that they receive different treatment and feel that they are on their own or that the rest of society does not care about them.
Inequities in digital health care, such as these, can and should be mitigated or prevented, and our investigation into the experiences with digital practices can help to show that we are creating standards and infrastructures that deliberately exclude the perspectives of those who are most in need of the services offered by the digital health care system [ 8 ]. Therefore, our findings support the notions that flexibility is important in the implementation of universal public digital services [ 17 ]; that it is important to adjust systems in accordance with patients’ eHealth literacy and not only improve the capabilities of individuals [ 38 ]; and that the development and improvement of digital health literacy are not solely an individual responsibility but are also tied to ways in which institutions organize, design, and implement digital tools and services [ 39 ].

Limitations

This qualitative study provided novel insights into experiences with public digital health services from the perspective of patients in the Danish context, enabling a deeper understanding of how digital health services and digital tools are experienced and used. This helps build a solid foundation for future digital health interventions and efforts aimed at digital health literacy. However, this study has some limitations. First, the study was conducted in a country where digitalization is progressing quickly and where people are therefore accustomed to this pace; readers should keep this context in mind. Second, the study included patients with different neurological conditions; some of their digital challenges were caused or worsened by these conditions and are therefore not applicable to all patients in the health system. However, the findings also provided insights into the patients’ digital practices before their conditions, as well as into challenges not connected to neurological conditions. Third, the study was broad in scope; although a large number of informants was included from a qualitative research perspective, we recommend additional research in this field to develop interventions that target digital health literacy and the use of digital health services.

Conclusions

Experiences with digital tools and digital health services are complex and multifaceted. The advantages of digital tools and digital health services for communication, finding information, and navigating one’s own health course act as facilitators for engaging with them; however, this is not enough on its own. Feeling secure and motivated and having time to relearn and practice skills are also important facilitators. Engagement in digital practices for the examined population requires access to continuous assistance from their social network. If patients do not meet these requirements, digital health services can be experienced as exclusionary and a source of concern. Physical, cognitive, and communicative difficulties might make it impossible to use digital tools or might create additional challenges that require assistance. Digitalization of the health care system means that patients cannot opt out of using digital services without consequences, which results in their receiving different treatment from others. To ensure that digitalization does not create inequities in health, developers and the health institutions that create, design, and implement digital services must be aware of differences in digital health literacy and focus on simplifying communication with patients and next of kin through and about digital services. It is important to focus on helping individuals meet the necessary conditions and on finding flexible solutions for those who do not have the same privileges as others if the public digital sector is to work for everyone.

Acknowledgments

The authors would like to thank all the people who gave their time to be interviewed for the study, the clinical nurse specialists who facilitated interviewing patients, and the other nurses on shift who assisted in recruiting participants.

Conflicts of Interest

None declared.

  • Year in search 2022. Google Trends. URL: https://trends.google.com/trends/yis/2022/DK/ [accessed 2024-04-02]
  • Digital government index: 2019. Organisation for Economic Cooperation and Development. URL: https://www.oecd-ilibrary.org/content/paper/4de9f5bb-en [accessed 2024-04-02]
  • Azzopardi-Muscat N, Sørensen K. Towards an equitable digital public health era: promoting equity through a health literacy perspective. Eur J Public Health. Oct 01, 2019;29(Supplement_3):13-17. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Digital practices. Umeå University. URL: https://www.umu.se/en/humlab/research/digital-practice/ [accessed 2024-04-02]
  • It-anvendelse i befolkningen 2020. Danmarks Statistik. URL: https://www.dst.dk/da/Statistik/nyheder-analyser-publ/Publikationer/VisPub?cid=29450 [accessed 2024-04-02]
  • Sundhed.dk homepage. Sundhed.dk. URL: https://www.sundhed.dk/borger/ [accessed 2024-04-02]
  • Nøhr C, Bertelsen P, Vingtoft S, Andersen SK. Digitalisering af Det Danske Sundhedsvæsen. Odense, Denmark. Syddansk Universitetsforlag; 2019.
  • Eriksen J, Ebbesen M, Eriksen KT, Hjermitslev C, Knudsen C, Bertelsen P, et al. Equity in digital healthcare - the case of Denmark. Front Public Health. Sep 6, 2023;11:1225222. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Digital health strategy. Sundhedsdatastyrelsen. URL: https://sundhedsdatastyrelsen.dk/da/english/digital_health_solutions/digital_health_strategy [accessed 2024-04-02]
  • Hjelholt M, Schou J, Bojsen LB, Yndigegn SL. Digital marginalisering af udsatte ældre: arbejdsrapport 2. IT-Universitetet i København. 2018. URL: https://egv.dk/images/Projekter/Projekter_2018/EGV_arbejdsrapport_2.pdf [accessed 2024-04-02]
  • Sørensen K, Van den Broucke S, Fullam J, Doyle G, Pelikan J, Slonska Z, et al. Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health. Jan 25, 2012;12(1):80. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Improving health literacy. World Health Organization. URL: https://www.who.int/activities/improving-health-literacy [accessed 2024-04-02]
  • Norgaard O, Furstrand D, Klokker L, Karnoe KA, Batterham R, Kayser L, et al. The e-health literacy framework: a conceptual framework for characterizing e-health users and their interaction with e-health systems. Knowl Manag E Learn. 2015;7(4). [ CrossRef ]
  • Kramer JM, Schwartz A. Reducing barriers to patient-reported outcome measures for people with cognitive impairments. Arch Phys Med Rehabil. Aug 2017;98(8):1705-1715. [ CrossRef ] [ Medline ]
  • Menger F, Morris J, Salis C. Aphasia in an internet age: wider perspectives on digital inclusion. Aphasiology. 2016;30(2-3):112-132. [ CrossRef ]
  • Richardson S, Lawrence K, Schoenthaler AM, Mann D. A framework for digital health equity. NPJ Digit Med. Aug 18, 2022;5(1):119. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hjelholt M, Papazu I. "De har fået NemID, men det er ikke nemt for mig” - Digital rum(me)lighed i den danske velfærdsstat. Social Kritik. 2021;2021-2(163). [ FREE Full text ]
  • Bailey C, Sheehan C. Technology, older persons’ perspectives and the anthropological ethnographic lens. Alter. 2009;3(2):96-109. [ CrossRef ]
  • Gadamer HG, Weinsheimer HG, Marshall DG. Truth and Method. New York, NY. Crossroad Publishing Company; 1991.
  • Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. Dec 16, 2007;19(6):349-357. [ CrossRef ] [ Medline ]
  • Polit DF, Beck CT. Nursing Research: Generating and Assessing Evidence for Nursing Practice. Philadelphia, PA. Lippincott Williams & Wilkins, Inc; 2012.
  • Kvale S, Brinkmann S. InterViews: Learning the Craft of Qualitative Research Interviewing. Thousand Oaks, CA. SAGE Publications; 2009.
  • Kagan A. Supported conversation for adults with aphasia: methods and resources for training conversation partners. Aphasiology. Sep 1998;12(9):816-830. [ CrossRef ]
  • Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. Feb 2004;24(2):105-112. [ CrossRef ] [ Medline ]
  • Krippendorff K. Content Analysis: An Introduction to Its Methodology. Thousand Oaks, CA. SAGE Publications; 1980.
  • Klösch M, Sari-Kundt F, Reibnitz C, Osterbrink J. Patients' attitudes toward their health literacy and the use of digital apps in health and disease management. Br J Nurs. Nov 25, 2021;30(21):1242-1249. [ CrossRef ] [ Medline ]
  • Datta P, Eiland L, Samson K, Donovan A, Anzalone AJ, McAdam-Marx C. Telemedicine and health access inequalities during the COVID-19 pandemic. J Glob Health. Dec 03, 2022;12:05051. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rasi P, Lindberg J, Airola E. Older service users’ experiences of learning to use eHealth applications in sparsely populated healthcare settings in Northern Sweden and Finland. Educ Gerontol. Nov 24, 2020;47(1):25-35. [ CrossRef ]
  • Airola E, Rasi P, Outila M. Older people as users and non-users of a video conferencing service for promoting social connectedness and well-being – a case study from Finnish Lapland. Educ Gerontol. Mar 29, 2020;46(5):258-269. [ CrossRef ]
  • Kayser L, Kushniruk A, Osborne RH, Norgaard O, Turner P. Enhancing the effectiveness of consumer-focused health information technology systems through eHealth literacy: a framework for understanding users' needs. JMIR Hum Factors. May 20, 2015;2(1):e9. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Karnoe A, Furstrand D, Christensen KB, Norgaard O, Kayser L. Assessing competencies needed to engage with digital health services: development of the ehealth literacy assessment toolkit. J Med Internet Res. May 10, 2018;20(5):e178. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nielsen AS, Hanna L, Larsen BF, Appel CW, Osborne RH, Kayser L. Readiness, acceptance and use of digital patient reported outcome in an outpatient clinic. Health Informatics J. Jun 03, 2022;28(2):14604582221106000. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Murray E, Hekler EB, Andersson G, Collins LM, Doherty A, Hollis C, et al. Evaluating digital health interventions: key questions and approaches. Am J Prev Med. Nov 2016;51(5):843-851. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chesser A, Burke A, Reyes J, Rohrberg T. Navigating the digital divide: a systematic review of eHealth literacy in underserved populations in the United States. Inform Health Soc Care. Feb 24, 2016;41(1):1-19. [ CrossRef ] [ Medline ]
  • Chesser AK, Keene Woods N, Smothers K, Rogers N. Health literacy and older adults: a systematic review. Gerontol Geriatr Med. Mar 15, 2016;2:2333721416630492. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mitra S, Singh A, Rajendran Deepam S, Asthana MK. Information and communication technology adoption among the older people: a qualitative approach. Health Soc Care Community. Nov 21, 2022;30(6):e6428-e6437. [ CrossRef ] [ Medline ]
  • Ahmed S. How not to do things with words. Wagadu. 2016. URL: https://sites.cortland.edu/wagadu/wp-content/uploads/sites/3/2017/02/v16-how-not-to-do-ahmed.pdf [accessed 2024-04-02]
  • Monkman H, Kushniruk AW. eHealth literacy issues, constructs, models, and methods for health information technology design and evaluation. Knowl Manag E Learn. 2015;7(4). [ CrossRef ]
  • Brørs G, Norman CD, Norekvål TM. Accelerated importance of eHealth literacy in the COVID-19 outbreak and beyond. Eur J Cardiovasc Nurs. Aug 15, 2020;19(6):458-461. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

COREQ: Consolidated Criteria for Reporting Qualitative Research
PRO: patient-related outcome

Edited by A Mavragani; submitted 14.03.23; peer-reviewed by G Myreteg, J Eriksen, M Siermann; comments to author 18.09.23; revised version received 09.10.23; accepted 27.02.24; published 11.04.24.

©Christian Gybel Jensen, Frederik Gybel Jensen, Mia Ingerslev Loft. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 11.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
