  • Review Article
  • Published: 20 January 2009

How to critically appraise an article

  • Jane M Young &
  • Michael J Solomon

Nature Clinical Practice Gastroenterology & Hepatology, volume 6, pages 82–91 (2009)


Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article in order to assess the usefulness and validity of research findings. The most important components of a critical appraisal are an evaluation of the appropriateness of the study design for the research question and a careful assessment of the key methodological features of this design. Other factors that also should be considered include the suitability of the statistical methods used and their subsequent interpretation, potential conflicts of interest and the relevance of the research to one's own practice. This Review presents a 10-step guide to critical appraisal that aims to assist clinicians to identify the most relevant high-quality studies available to guide their clinical practice.

Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article

Critical appraisal provides a basis for decisions on whether to use the results of a study in clinical practice

Different study designs are prone to various sources of systematic bias

Design-specific, critical-appraisal checklists are useful tools to help assess study quality

Assessments of other factors, including the importance of the research question, the appropriateness of statistical analysis, the legitimacy of conclusions, and potential conflicts of interest, are an important part of the critical appraisal process




Author information

Authors and affiliations

JM Young is an Associate Professor of Public Health and the Executive Director of the Surgical Outcomes Research Centre at the University of Sydney and Sydney South-West Area Health Service, Sydney, Australia.

Jane M Young

MJ Solomon is Head of the Surgical Outcomes Research Centre and Director of Colorectal Research at the University of Sydney and Sydney South-West Area Health Service, Sydney, Australia.

Michael J Solomon


Corresponding author

Correspondence to Jane M Young.

Ethics declarations

Competing interests.

The authors declare no competing financial interests.


About this article

Cite this article.

Young, J., Solomon, M. How to critically appraise an article. Nat Rev Gastroenterol Hepatol 6, 82–91 (2009). https://doi.org/10.1038/ncpgasthep1331


Received: 10 August 2008

Accepted: 03 November 2008

Published: 20 January 2009

Issue Date: February 2009

DOI: https://doi.org/10.1038/ncpgasthep1331




Authority: Critical Evaluation


Critical Evaluation of Information Sources

After initial evaluation of a source, the next step is to go deeper. This includes a wide variety of techniques and may depend on the type of source. In the case of research, it will include evaluating the methodology used in the study and requires you to have knowledge of those discipline-specific methods. If you are just beginning your academic career or have just entered a new field, you will likely need to learn more about the methodologies used in order to fully understand and evaluate this part of a study.

Lateral reading is a technique that can, and should, be applied to any source type. In the case of a research study, looking for the older articles that influenced the one you selected can give you a better understanding of the issues and context. Reading articles that were published afterwards can give you an idea of how scholars are pushing that research to the next step. This can also help you understand how scholars engage with each other in conversation through research, and even how the academic system privileges certain voices and established authorities in that conversation. You might also find articles that respond directly to a study, which can provide insight into how evaluation and critique work within that discipline.

Evaluation at this level is central to developing a better understanding of your own research question by learning from these scholarly conversations and how authority is tested.

Check out the resources below to help you with this stage of evaluation.

Scientific Method/Methodologies

Here is a general overview of how the scientific method works and how scholars evaluate their work using critical thinking. This same process is used when scholars write up their scholarly work. 

The Steps of the Scientific Method

  • Question something that was observed
  • Do background research to better understand it
  • Formulate a hypothesis (research question)
  • Create an experiment or method for studying the question
  • Run the experiment and record the results
  • Think critically about what the results mean
  • Suggest conclusions and report back

Critical Thinking

Thinking critically about the information you encounter is central to how you develop your own conclusions, judgement, and position. This analysis is what will allow you to make a valuable contribution of your own to the scholarly conversation.

  • TEDEd: Dig Deeper on the 5 Tips to Improve Your Critical Thinking
  • The Foundation for Critical Thinking: College and University Students
  • Stanford Encyclopedia of Philosophy: Critical Thinking

Scholarship as Conversation

It sounds pretty bad to say an article was retracted, but is it always? As with most things, it depends on the context. Someone retracting a statement that was based on false information or misinformation is one thing; it happens fairly often on social media, with deleted tweets or Instagram posts, for example.

In scholarship, there are a number of reasons an article might be retracted. These range from errors in the methods used, experiment structure, data, etc. to issues of fraud or misrepresentation. Central to scholarship is the community of scholars actively participating in the scholarly conversation even after the peer review process. Careful analysis of published research by other scholars is vital to course correction.

In science research, it's a central part of the process! An inherent part of discovery is basing conclusions on the information at hand and repeating the process to gather more information. If further research is done that provides new information and insight, that might mean an older conclusion gets corrected. Uncertainty is unsettling, but trust in the process means understanding the important role of retraction.



Critically appraising qualitative research

  • Ayelet Kuper, assistant professor 1,
  • Lorelei Lingard, associate professor 2,
  • Wendy Levinson, Sir John and Lady Eaton professor and chair 3
  • 1 Department of Medicine, Sunnybrook Health Sciences Centre, and Wilson Centre for Research in Education, University of Toronto, 2075 Bayview Avenue, Room HG 08, Toronto, ON, Canada M4N 3M5
  • 2 Department of Paediatrics and Wilson Centre for Research in Education, University of Toronto and SickKids Learning Institute; BMO Financial Group Professor in Health Professions Education Research, University Health Network, 200 Elizabeth Street, Eaton South 1-565, Toronto
  • 3 Department of Medicine, Sunnybrook Health Sciences Centre
  • Correspondence to: A Kuper ayelet94{at}post.harvard.edu

Six key questions will help readers to assess qualitative research

Summary points

Appraising qualitative research is different from appraising quantitative research

Qualitative research papers should show appropriate sampling, data collection, and data analysis

Transferability of qualitative research depends on context and may be enhanced by using theory

Ethics in qualitative research goes beyond review boards’ requirements to involve complex issues of confidentiality, reflexivity, and power

Over the past decade, readers of medical journals have gained skills in critically appraising studies to determine whether the results can be trusted and applied to their own practice settings. Criteria have been designed to assess studies that use quantitative methods, and these are now in common use.

In this article we offer guidance for readers on how to assess a study that uses qualitative research methods by providing six key questions to ask when reading qualitative research (box 1). However, the thorough assessment of qualitative research is an interpretive act and requires informed reflective thought rather than the simple application of a scoring system.

Box 1 Key questions to ask when reading qualitative research studies

  • Was the sample used in the study appropriate to its research question?
  • Were the data collected appropriately?
  • Were the data analysed appropriately?
  • Can I transfer the results of this study to my own setting?
  • Does the study adequately address potential ethical issues, including reflexivity?
  • Overall: is what the researchers did clear?

One of the critical decisions in a qualitative study is whom or what to include in the sample—whom to interview, whom to observe, what texts to analyse. An understanding that qualitative research is based in experience and in the construction of meaning, combined with the specific research question, should guide the sampling process. For example, a study of the experience of survivors of domestic violence that examined their reasons for not seeking help from healthcare providers might focus on interviewing a sample of such survivors (rather than, for example, healthcare providers, social services workers, or academics in the field). The sample should be broad enough to capture the many facets of a phenomenon, and limitations to the sample should be clearly justified. Since the answers to questions of experience and meaning also relate to people’s social affiliations (culture, religion, socioeconomic group, profession, etc), it is also important that the researcher acknowledges these contexts in the selection of a study sample.

In contrast with quantitative approaches, qualitative studies do not usually have predetermined sample sizes. Sampling stops when a thorough understanding of the phenomenon under study has been reached, an end point that is often called saturation. Researchers consider samples to be saturated when encounters (interviews, observations, etc) with new participants no longer elicit trends or themes not already raised by previous participants. Thus, to sample to saturation, data analysis has to happen while new data are still being collected. Multiple sampling methods may be used to broaden the understanding achieved in a study (box 2). These sampling issues should be clearly articulated in the methods section.
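A minimal sketch of this idea follows, assuming interviews are coded into sets of free-text theme labels as analysis proceeds. The three-interview window, the theme labels, and the function name are hypothetical illustrations, not criteria from the article; in practice, judging saturation is an analytic decision made by the research team rather than a fixed rule.

```python
# Minimal sketch, assuming a team codes each interview into a set of
# free-text theme labels. The 3-interview window is a hypothetical
# operationalization of "recent participants raised nothing new",
# not a rule from the article.

def is_saturated(coded_interviews, window=3):
    """True if the most recent `window` interviews added no new themes."""
    if len(coded_interviews) <= window:
        return False  # too little data to judge saturation
    earlier_themes = set().union(*coded_interviews[:-window])
    recent_themes = set().union(*coded_interviews[-window:])
    return recent_themes.issubset(earlier_themes)

# Themes coded after each successive interview (hypothetical data).
interviews = [
    {"cost of travel", "lost work time"},
    {"cost of travel", "medication costs"},
    {"insurance paperwork", "lost work time"},
    {"cost of travel"},
    {"medication costs", "lost work time"},
    {"insurance paperwork"},
]
print(is_saturated(interviews))  # True: the last 3 interviews raised nothing new
```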

Box 2 Qualitative sampling methods for interviews and focus groups 9

Examples are for a hypothetical study of financial concerns among adult patients with chronic renal failure receiving ongoing haemodialysis in a single hospital outpatient unit.

Typical case sampling —sampling the most ordinary, usual cases of a phenomenon

The sample would include patients likely to have had typical experiences for that haemodialysis unit and patients who fit the profile of patients in the unit for factors found on literature review. Other typical cases could be found via snowball sampling (see below)

Deviant case sampling —sampling the most extreme cases of a phenomenon

The sample would include patients likely to have had different experiences of relevant aspects of haemodialysis. For example, if most patients in the unit are 60-70 years old and recently began haemodialysis for diabetic nephropathy, researchers might sample the unmarried university student in his 20s on haemodialysis since childhood, the 32 year old woman with lupus who is now trying to get pregnant, and the 90 year old who newly started haemodialysis due to an adverse reaction to radio-opaque contrast dye. Other deviant cases could be found via theoretical and/or snowball sampling (see below)

Critical case sampling —sampling cases that are predicted (based on theoretical models or previous research) to be especially information-rich and thus particularly illuminating

The nature of this sample depends on previous research. For example, if research showed that marital status was a major determinant of financial concerns for haemodialysis patients, then critical cases might include patients whose marital status changed while on haemodialysis

Maximum-variation sampling —sampling as wide a range of perspectives as possible to capture the broadest set of information and experiences (see the illustrative sketch after this box)

The sample would include typical, deviant, and critical cases (as above), plus any other perspectives identified

Confirming-disconfirming sampling —sampling both individuals or texts whose perspectives are likely to confirm the researcher’s developing understanding of the phenomenon under study and those whose perspectives are likely to challenge that understanding

The sample would include patients whose experiences would likely either confirm or disconfirm what the researchers had already learnt (from other patients) about financial concerns among patients in the haemodialysis unit. This could be accomplished via theoretical and/or snowball sampling (see below)

Snowball sampling —sampling participants found by asking current participants in a study to recommend others whose experiences would be relevant to the study

Current participants could be asked to provide the names of others in the unit who they thought, when asked about financial concerns, would either share their views (confirming), disagree with their views (disconfirming), have views typical of patients on their unit (typical cases), or have views different from most other patients on their unit (deviant cases)

Theoretical sampling —sampling individuals or texts whom the researchers predict (based on theoretical models or previous research) would add new perspectives to those already represented in the sample

Researchers could use their understanding of known issues for haemodialysis patients that would, in theory, relate to financial concerns to ensure that the relevant perspectives were represented in the study. For example, if, as the research progressed, it turned out that none of the patients in the sample had had to change or leave a job in order to accommodate haemodialysis scheduling, the researchers might (based on previous research) choose to intentionally sample patients who had left their jobs because of the time commitment of haemodialysis (but who could not do peritoneal dialysis) and others who had switched to jobs with more flexible scheduling because of their need for haemodialysis
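To make the contrast between these strategies more concrete, here is a small, purely illustrative Python sketch of how maximum-variation sampling could be approximated from a recruitment list by greedily choosing candidates whose attributes add the most values not yet covered. All participant IDs, attribute names, and values are hypothetical, and real purposive sampling is an analytic judgement informed by the research question, not an algorithm.

```python
# Illustrative sketch only: a greedy approximation of maximum-variation
# sampling, assuming each candidate is described by a few categorical
# attributes. Names, values, and IDs are hypothetical.

candidates = [
    {"id": "P01", "age_band": "60-70", "employment": "retired",  "marital": "married"},
    {"id": "P02", "age_band": "20-30", "employment": "student",  "marital": "single"},
    {"id": "P03", "age_band": "30-40", "employment": "employed", "marital": "married"},
    {"id": "P04", "age_band": "60-70", "employment": "retired",  "marital": "widowed"},
    {"id": "P05", "age_band": "80+",   "employment": "retired",  "marital": "widowed"},
]

def max_variation_sample(pool, attributes, k):
    """Greedily pick k candidates covering the widest spread of attribute values."""
    sample, covered, remaining = [], set(), list(pool)
    for _ in range(min(k, len(remaining))):
        # Pick the candidate that adds the most attribute values not yet covered.
        best = max(remaining, key=lambda c: len({(a, c[a]) for a in attributes} - covered))
        sample.append(best)
        covered |= {(a, best[a]) for a in attributes}
        remaining.remove(best)
    return sample

chosen = max_variation_sample(candidates, ["age_band", "employment", "marital"], k=3)
print([c["id"] for c in chosen])  # -> ['P01', 'P02', 'P03'] with this toy pool
```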

It is important that a qualitative study carefully describes the methods used in collecting data. The appropriateness of the method(s) selected for the specific research question should be justified, ideally with reference to the research literature. It should be clear that methods were used systematically and in an organised manner. Attention should be paid to specific methodological challenges such as the Hawthorne effect, 1 whereby the presence of an observer may influence participants’ behaviours. By using a technique called thick description, qualitative studies often aim to include enough contextual information to provide readers with a sense of what it was like to have been in the research setting.

Another technique that is often used is triangulation, with which a researcher uses multiple methods or perspectives to help produce a more comprehensive set of findings. A study can triangulate data, using different sources of data to examine a phenomenon in different contexts (for example, interviewing palliative patients who are at home, those who are in acute care hospitals, and those who are in specialist palliative care units); it can also triangulate methods, collecting different types of data (for example, interviews, focus groups, observations) to increase insight into a phenomenon.

Another common technique is the use of an iterative process, whereby concurrent data analysis is used to inform data collection. For example, concurrent analysis of an interview study about lack of adherence to medications among a particular social group might show that early participants seem to be dismissive of the efforts of their local pharmacists; the interview script might then be changed to include an exploration of this phenomenon. The iterative process constitutes a distinctive qualitative tradition, in contrast to the tradition of stable processes and measures in quantitative studies. Iterations should be explicit and justified with reference to the research question and sampling techniques so that the reader understands how data collection shaped the resulting insights.

Qualitative studies should include a clear description of a systematic form of data analysis. Many legitimate analytical approaches exist; regardless of which is used, the study should report what was done, how, and by whom. If an iterative process was used, it should be clearly delineated. If more than one researcher analysed the data (which depends on the methodology used) it should be clear how differences between analyses were negotiated. Many studies make reference to a technique called member checking, wherein the researcher shows all or part of the study’s findings to participants to determine if they are in accord with their experiences. 2 Studies may also describe an audit trail, which might include researchers’ analysis notes, minutes of researchers’ meetings, and other materials that could be used to follow the research process.

The contextual nature of qualitative research means that careful thought must be given to the potential transferability of its results to other sociocultural settings. Though the study should discuss the extent of the findings’ resonance with the published literature, 3 much of the onus of assessing transferability is left to readers, who must decide if the setting of the study is sufficiently similar for its results to be transferable to their own context. In doing so, the reader looks for resonance—the extent that research findings have meaning for the reader.

Transferability may be helped by the study’s discussion of how its results advance theoretical understandings that are relevant to multiple situations. For example, a study of patients’ preferences in palliative care may contribute to theories of ethics and humanity in medicine, thus suggesting relevance to other clinical situations such as the informed consent exchange before treatment. We have explained elsewhere in this series the importance of theory in qualitative research, and there are many who believe that a key indicator of quality in qualitative research is its contribution to advancing theoretical understanding as well as useful knowledge. This debate continues in the literature, 4 but from a pragmatic perspective most qualitative studies in health professions journals emphasise results that relate to practice; theoretical discussions tend to be published elsewhere.

Reflexivity is particularly important within the qualitative paradigm. Reflexivity refers to recognition of the influence a researcher brings to the research process. It highlights potential power relationships between the researcher and research participants that might shape the data being collected, particularly when the researcher is a healthcare professional or educator and the participant is a patient, client, or student. 5 It also acknowledges how a researcher’s gender, ethnic background, profession, and social status influence the choices made within the study, such as the research question itself and the methods of data collection. 6 7

Research articles written in the qualitative paradigm should show evidence both of reflexive practice and of consideration of other relevant ethical issues. Ethics in qualitative research should extend beyond prescriptive guidelines and research ethics boards into a thorough exploration of the ethical consequences of collecting personal experiences and opening those experiences to public scrutiny (a detailed discussion of this problem within a research report may, however, be limited by the practicalities of word count limitations). 8 Issues of confidentiality and anonymity can become quite complex when data constitute personal reports of experience or perception; the need to minimise harm may involve not only protection from external scrutiny but also mechanisms to mitigate potential distress to participants from sharing their personal stories.

In conclusion: is what the researchers did clear?

The qualitative paradigm includes a wide range of theoretical and methodological options, and qualitative studies must include clear descriptions of how they were conducted, including the selection of the study sample, the data collection methods, and the analysis process. The list of key questions for beginning readers to ask when reading qualitative research articles (see box 1) is intended not as a finite checklist, but rather as a beginner’s guide to a complex topic. Critical appraisal of particular qualitative articles may differ according to the theories and methodologies used, and achieving a nuanced understanding in this area is fairly complex.

Further reading

Crabtree F, Miller WL, eds. Doing qualitative research . 2nd ed. Thousand Oaks, CA: Sage, 1999.

Denzin NK, Lincoln YS, eds. Handbook of qualitative research . 2nd ed. Thousand Oaks, CA: Sage, 2000.

Finlay L, Ballinger C, eds. Qualitative research for allied health professionals: challenging choices . Chichester: Wiley, 2006.

Flick U. An introduction to qualitative research . 2nd ed. London: Sage, 2002.

Green J, Thorogood N. Qualitative methods for health research . London: Sage, 2004.

Lingard L, Kennedy TJ. Qualitative research in medical education . Edinburgh: Association for the Study of Medical Education, 2007.

Mauthner M, Birch M, Jessop J, Miller T, eds. Ethics in Qualitative Research . Thousand Oaks, CA: Sage, 2002.

Seale C. The quality of qualitative research . London: Sage, 1999.

Silverman D. Doing qualitative research . Thousand Oaks, CA: Sage, 2000.

Journal articles

Greenhalgh T. How to read a paper: papers that go beyond numbers. BMJ 1997;315:740-3.

Mays N, Pope C. Qualitative research: Rigour and qualitative research. BMJ 1995;311:109-12.

Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ 2000;320:50-2.

Popay J, Rogers A, Williams G. Rationale and standards for the systematic review of qualitative literature in health services research. Qual Health Res 1998;8:341-51.

Internet resources

National Health Service Public Health Resource Unit. Critical appraisal skills programme: qualitative research appraisal tool . 2006. www.phru.nhs.uk/Doc_Links/Qualitative%20Appraisal%20Tool.pdf

Cite this as: BMJ 2008;337:a1035

  • Related to doi:10.1136/bmj.a288
  • doi:10.1136/bmj.39602.690162.47
  • doi:10.1136/bmj.a1020
  • doi:10.1136/bmj.a879
  • doi:10.1136/bmj.a949

This is the last in a series of six articles that aim to help readers to critically appraise the increasing number of qualitative research articles in clinical journals. The series editors are Ayelet Kuper and Scott Reeves.

For a definition of general terms relating to qualitative research, see the first article in this series.

Contributors: AK wrote the first draft of the article and collated comments for subsequent iterations. LL and WL made substantial contributions to the structure and content, provided examples, and gave feedback on successive drafts. AK is the guarantor.

Funding: None.

Competing interests: None declared.

Provenance and peer review: Commissioned; externally peer reviewed.

  • 1. Holden JD. Hawthorne effects and research into professional practice. J Evaluation Clin Pract 2001;7:65-70.
  • 2. Hammersley M, Atkinson P. Ethnography: principles in practice. 2nd ed. London: Routledge, 1995.
  • 3. Silverman D. Doing qualitative research. Thousand Oaks, CA: Sage, 2000.
  • 4. Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ 2000;320:50-2.
  • 5. Lingard L, Kennedy TJ. Qualitative research in medical education. Edinburgh: Association for the Study of Medical Education, 2007.
  • 6. Seale C. The quality of qualitative research. London: Sage, 1999.
  • 7. Wallerstein N. Power between evaluator and community: research relationships within New Mexico’s healthier communities. Soc Sci Med 1999;49:39-54.
  • 8. Mauthner M, Birch M, Jessop J, Miller T, eds. Ethics in qualitative research. Thousand Oaks, CA: Sage, 2002.
  • 9. Kuzel AJ. Sampling in qualitative inquiry. In: Crabtree F, Miller WL, eds. Doing qualitative research. 2nd ed. Thousand Oaks, CA: Sage, 1999:33-45.



The Oxford Handbook of Qualitative Research


9 Critical Approaches to Qualitative Research

Kum-Kum Bhavnani, Department of Sociology, University of California at Santa Barbara

Peter Chua, Department of Sociology, San José State University

Dana Collins, Department of Sociology, California State University, Fullerton

  • Published: 04 August 2014

This chapter reflects on critical strategies in qualitative research. It examines the meanings and debates associated with the term “critical,” in particular, contrasting liberal and dialectical notions and practices in relation to social analysis and qualitative research. The chapter also explores how critical social research may be synonymous with critical ethnography in relation to issues of power, positionality, representation, and the production of situated knowledges. It uses Bhavnani’s framework to draw on Dana Collins’ research as a specific case to suggest how the notion of the “critical” relates to ethnographic research practices: ensuring feminist and queer accountability, resisting reinscription, and integrating lived experience.

Qualitative research is now ubiquitous and fairly well-respected throughout the human sciences. That Oxford University Press is producing this much-needed volume is further testament to that notion, and one which we applaud. However, although there are different approaches to conducting qualitative research, the philosophical notions underlying such research are often not addressed. And that is where the “critical” enters. Indeed, “critical,” used as an adjective and applied, within the academy, to methods of research, is also a familiar term. The question is, therefore: what does “critical” mean, and how might it be translated such that present and future researchers could draw on some of its fundamentals as they plan their research studies in relation to progressive political activism?

The popularity of critical research has not been constant. Although the 1960s and early 1970s did offer a number of publications that engaged with critical research traditions (e.g., Gouldner, 1970 ), and the 1990s also saw a resurgence of interest in this area (e.g., Harvey, 1990 ; Thomas, 1993 ), it is now two decades since explicit discussions of critical research were widespread within the social sciences (see Smith, 1999 ; Madison, 2012 , as exceptions).

In this chapter, we first outline meanings associated with “critical.” We then suggest that the narratives of critical ethnography are best suited for an overview chapter such as this. We consider critical ethnography to be virtually synonymous with critical social research as we discuss it in this chapter. In the final section of our chapter, we discuss Dana Collins’ specific research studies to suggest how her approach embraces the notion of “critical” ( Collins, 2005 ; 2007 ; 2009 ).

The “Critical” in Critical Approaches

“Critical” is used in many ways. In everyday use, the term can refer, among other definitions, to an assessment that points out flaws and mistakes (“a critical approach to the design”), or to being close to a crisis (“a critical illness”). On the positive side, it can refer to a close reading (“a critical assessment of Rosa Luxemburg’s writings”) or to being essential (“critical for effective educational strategies”). A final definition is that the word can be used either to denote considerable praise (“the playwright’s work was critically acclaimed”) or to indicate a particular turning point (“this is a critical time to vote”). It is this last definition that is closest to our approach as we reflect on “critical” in the context of qualitative research. That is, drawing from the writings of Marx, the Frankfurt School, and others (see Delanty, 2005 ; Marx, 1845/1976 ; Strydom, 2011 ), we suggest that critical approaches to qualitative methods do not signify only a particular way of thinking about the methods we use in our research studies, but that “critical approaches” also signify a turning point in how we think about the conduct of research across the human sciences, including its dialectical relations to the progressive and systematic transformation of social relations and social institutions.

The most straightforward notion of “critical” in this context is that it suggests (at the least) or insists (at its strongest) that research—and all ways by which knowledge is created—is firmly grounded within an understanding of social structures (social inequalities), power relationships (power inequalities), and the agency of human beings (an engagement with the fact that human beings actively think about their worlds). Critical approaches are most frequently associated with Marxist, feminist, antiracist, indigenous, and Third World perspectives. At its most succinct, therefore, we argue that “critical” in this context refers to issues of epistemology, power, micropolitics, and resistance.

What does this mean, both theoretically and for how we conduct our research? Most would agree that whereas qualitative research does not, by definition, insist on a nonpositivist way of examining the social world, for critical approaches to be truly critical, an antipositivist approach is the sine qua non. Furthermore, it is evident as we survey critical empirical research that reflexive and subjective techniques in data collection and the researcher’s relationship with research subjects also frame both the practices and the theories associated with such research.

The following section begins by drawing attention to developments and debates involving the more restricted use of the term “critical” as related to Marxism and then explores the ramifications for varying attempts to conduct critical qualitative research.

The Critical Debates

Karl Marx, Friedrich Engels, and their contemporaries (see Engels, 1877/1969 ; Harvey, 1996 ; Lenin, 1915/1977 ; Mao, 1990 ; Ollman, 2003 ) developed dialectical materialist notions of critique and “critical” that were substantively different from prior notions. They incorporated these dialectical materialist notions to develop Marxist theories and politics.

Dialectical materialism refers to an outlook on reality that emphasizes the importance of process and change that are inherent to things (such as objects, phenomena, and situations), as well as the importance of human practices in making change. Significantly, human struggle over existing conditions and contradictions in things creates not only new conditions, but also new contradictions. This outlook serves as an analytical advance over idealist and old-fashioned materialist worldviews and as a source of strength for exploited peoples in their struggle against ruling elites and classes. It emphasizes that correct ideas, knowledge, and theoretical abstractions are established initially, and perhaps inevitably, through practice.

Dialectical materialism may be used to examine two aspects of the research process and the production of academic knowledge. The first aspect involves the writing process as it is carried out among multiple authors. At the drafting phase, the authors craft their distinct ideas into textual form. Contradictions in ideas are bound to exist in the draft. In doing revisions, some contradictions may become intensified and remain unresolved, yet, most frequently (and hopefully!), many are addressed in the form of clearer, more solid, and coherent arguments, thus resolving the earlier contradictions in the text. Yet, new struggles and contradictions emerge. The synthesis of ideas and argument in the final manuscript may again, however, engage in new struggles with the prevailing arguments being discussed.

The second aspect involves the relationship and interaction between the researcher and the interviewee. As their relationship begins, contradictions and differences usually exist between them, for instance, in terms of their prior experiences and knowledge, their material interests in the research project, and their communication skills in being persuasive and forging consent. The struggle of these initial contradictions could result in new conditions and contradictions. For example, this could lead to

the establishment of quality rapport between them, allowing the interview to be completed while the researcher maintains control over the situation;

the abrupt end of the interview due to the interviewee refusing and asserting her or his right not to comply with the interview process; or

an explicit set of negotiations that address the unevenness in power relations between them, along with an invitation for both to be part of the research team and to collaborate in the collection and analysis of data and in the forging of new theories and knowledges.

In the first possibility, the prevailing power relations in interviews remain but shift to beneath the surface of the relationship, under the guise of “rapport.” In the second possibility, power relations in the interview process and initial contradictions are heightened, resulting in new conditions and contradictions that the researcher and research participant have to address, jointly and singly. In the third possibility, the research subject is transformed into a researcher as well, and the relationship between the two is transformed into a more active co-learning and co-teaching relationship. Still, new conflicts and contradictions may emerge as the research process continues to unfold. 1 In short, dialectical materialism stresses the analysis of change in the essence (1), practice (2), and struggle (3). Such analyses are at the root of how change may be imagined within the practices of social research.

Dialectical materialism, which forms the basis of the concept of “critical,” emphasizes the need to engage with power, inequality, and social relations in the arenas of the social, political, economic, cultural, and ideological. On this basis, it is argued that an analysis of societies and ways of life demands a more comprehensive approach, one that does not view society and social institutions merely as a singular unit of analysis but rather as ones that are replete with history. Dialectical materialism directs its criticism against prevailing views or hegemonies, and, within the context of academic endeavors, engages in debates against positivism and neo-Kantian forms of social inquiry. It is this basis of “critical” that defines it in the context of research as a deep questioning of science, objectivity, and rationality. Thus, the meaning of the term “critical,” based on the idea of “critique,” emerges from the practice and application of dialectical materialism.

Historical materialism emerges from and is based on dialectical materialism. That is, any application of the dialectic to material realities is historical materialism. For example, any study of human society, its history, its development, and its process of change demands a dialectical approach rooted in historical materialism. This involves delving deeper into past and present social phenomena to thereby determine how people change the essence of social phenomena, and, simultaneously, transform their contradictions.

Dialectical materialism regards positivism as a crude and naïve endeavor to seek knowledge and explain phenomena and as one that assumes it is the task of social researchers to determine the laws of social relationships by relying solely on observations (i.e., by assuming there is a primacy of external conditions and actions). In addition, positivism separates the subject (the seemingly unbiased, detached observer) and object (the phenomenon/a under consideration) of study. Dialectical materialism overcomes the shortcomings of positivism by offering a holistic understanding of (a) the essence of phenomena; (b) the processes of internal changes, the handling of contradictions, and the development of knowledge; (c) the unity of the subject and object in the making of correct ideas; and (d) the role of practice and politics in knowledge creation.

Dialectical materialism directs its criticism against dominant standpoints. These standpoints can offer a simplistic form of idealism and philosophical materialism. Within the context of academic endeavors, the methods of dialectical materialism engage in debates against positivism and neo-Kantian forms of social inquiry. This approach challenges assertions that science, objectivity, and rationality are the sine qua non of research and that skepticism and liberalism are the only appropriate analytical positionings by which a research project can be defined as “critical.”

For instance, Auguste Comte and Emile Durkheim, in developing sociological positivism, argued for a new science to study society, one that adopted the methods of the natural sciences, such as skeptical empiricism and the practices of induction. In adopting these methods, approaches relying on early positivism sought to craft knowledge based on seemingly affirmative verification rather than on judgmental evaluation and transformative distinctions.

Positivism and dialectical materialism were both developed in response to Kantian and idealist philosophy. In the context of the European Enlightenment, in the late 1700s, Immanuel Kant inaugurated the philosophy of critique. Positivism challenged Kant’s philosophy of critique as the basis for the theory of knowledge.

Kant developed his notion of critique to highlight the workings of human reason and judgment, to illuminate its limitations, and to consolidate its application in order to secure a stable foundation for morality, religion, and metaphysical concerns. Politically, Kantian philosophy provided justification for both a traditionalism derived from earlier periods and a liberalism developed during the ascendance of the Enlightenment.

Kant sought to settle philosophical disputes between a narrow notion of empiricism (that relies on pure observation, perception, and experience as the basis for knowledge) and a narrow notion of rationalism (that relies on pure reason and concepts as the basis for knowledge). He argued that the essence (termed “thing-in-itself”) is unknowable, countering David Hume’s skeptical empiricism, and he was convinced that there is no knowledge outside of innate conceptual categories. For Kant, “concepts without perceptions are empty; perceptions without concepts are blind” (1781/1965, pp. A 51/B 75).

The method of dialectical materialism challenges Kant’s idealism for (what is claimed to be) its faulty assertion that correct ideas and knowing about the “thing-in-itself” can only emerge from innate conceptual categories, ones that are universal and transcendental. In Kantian philosophy, there is no reality (out there) to be known. Rather, it is the experience of reality itself that provides for human reason and consciousness.

Dialectical materialism overcomes Kant’s idealism with its recognition of the existence of concrete phenomena, outside and independent of human reason. Dialectical materialism stresses that social reality and concrete phenomena are reflected in and determine the content of human consciousness (and also, we would argue, vice versa). Dialectical materialism also emphasizes the role of practice and politics in knowledge development, instead of merely centering the primacy of ideas and the meanings of objects.

In sum, the core debate against positivism centers on the practices of science. Dialectical materialism regards positivist approaches as crude and naïve endeavors that seek to determine unchangeable laws of nature, rely solely on observations and “sense experience” of phenomena as the basis for knowledge, highlight the primacy of external conditions and actions to explain phenomena, and separate the subject from the object of study. That is, dialectical materialism views positivism as a form of mechanical, as distinct from historical, materialism.

This abridged account of dialectical materialism and the critiques it offers of Kantian idealism and sociological positivism can allow for the formation of a preliminary set of criteria for what may constitute the “critical.” We argue that qualitative research may be critical if it makes clear conceptually and analytically:

The essence and root cause of any social phenomena (e.g., youth and politics);

The relationship between the essence of the social phenomena under consideration to the general social totality (such as how youth and their views of politics are related to wider systems within society, such as education, age, exploitation);

The contradictions within this social phenomenon (such as how young people are expressing their discontent),

and, therefore,

How to conduct more reflexive practices that interrelate data generation, data analysis, and political engagement that challenge existing relations of power.

Contemporary debates between neo-Kantian idealists and dialectical materialists about how to carve out what is meant by a critical project in qualitative social research have often been friendly ones. These debates bring to the fore issues of politics, ethics, research design, and the collection and analysis of data. They have also prompted a variety of ways in which “critical” may be used in relation to qualitative research. For the purposes of this chapter, we suggest four substantial ways in which “critical” is used in the context of qualitative research: (a) critical as a form of liberalism, (b) critical as a counterdisciplinary perspective, (c) critical as an expansion of politics, and (d) critical as a professionalized research endeavor and perspective.

Critical as a form of Kantian liberalism is one of the more conventional uses of the term in qualitative research. This use of critical is generally contrasted against the dogmatism of positivist approaches within social scientific research. Yet, to use critical in this way means that we embrace a liberalism that ends up promoting idealism in outlook and pluralism in practice. That is, Kantian liberalism presents itself as a “critical” and novel analysis by combining eclectic ideas and theories while not making known its political stand and its material interests. As a result, it supports prevailing modes of thinking that emphasize abstraction over concrete reality, and it succumbs to relativist and pragmatist practices in research, such as “anything goes” in collecting data. In terms of methods, this use of “critical” promotes looseness and leniency in ethics and data collection and analysis, often without a structured accountability to the many constituencies that underlie all social research. Furthermore, the use of phrases such as “critical spaces,” when applied to social research, may be better understood as a celebration of method above theory and meta-theory and an engagement with some (often rather excessive) approaches to reflexivity and meta-reflexivity. In sum, this understanding of “critical” lacks appropriate structures of ethics and accountability and often tends to reject dialectical materialism.

The second use of “critical” with regard to qualitative research proposes a more analytical disagreement with conventional scholarly disciplines and, in so doing, seeks to take up counterdisciplinary positions ( Burawoy, 1998 ; 2003 ; Carroll, 2004 ; Smith, 2007 ). There are two main strands in this use of “critical.” One strand argues that “critical” is a means of exposing the weaknesses of conventional academic disciplines such as anthropology, political science, psychology, and sociology. At the same time, this strand maintains the viability of these core social science disciplines. For instance, academic feminists have continually highlighted the masculinist and heterosexist bias in what is considered top-tier scholarship and the need for these disciplines to be more inclusive in terms of perspectives and methodological techniques (e.g., Fonow & Cook, 1991 ; Harding, 1991 ; Ray, 2006 ). Yet such an approach does not necessarily focus on the fundamental problems, such as a neglect of the study of power inequalities (e.g., Boserup, 1970 ; and see examples in Reinharz & Davidman, 1992 ). The second strand seeks to carve out interdisciplinary and multidisciplinary fields such as women’s studies, cultural studies, and area studies to overcome the paradigmatic and fundamental crises within core disciplines ( Bhavnani, Foran, & Kurian, 2003 ; March, 1995 ; Mohanty, 2003 ). Many of these interdisciplinary and multidisciplinary fields have often been more historical and qualitative in their approaches, seeking to go beyond positivist limitations and present a more nuanced and thorough analysis. However, even these multi-, inter-, and antidisciplinary fields have an uneven impact on dominant and conventional knowledge.

Moreover, neither strand has been able to overcome the increasing corporatization and neoliberalization of academic institutions. This refers to the increasing restructuring of public education into a private domain, one that relies on privatized practices and funding of both teaching and research. The neoliberalization of the academy is found in the ties of academic research to corporate grants, individualized career advancement, excessive publishing demands and citation indices, and the use of outsourcing for transcription, interviewing, online education, and private research spaces that are “rented” by public institutions, to name a few. These neoliberal conditions of research usually push out those critical researchers who attempt to avoid such exploitative avenues for research, writing, and collaboration. This use of “critical,” however, does expose that critical research is taking shape within contemporary processes of neoliberalism and the increasing privatization of the academy ( Giroux, 2009 ; Greenwood, 2012 ; Pavlidis, 2012 ).

The third and less familiar approach is to view “critical” as invigorating politics through the practices of feminist, antiracist, and participatory action research. This approach, for example, highlights the importance of analyzing power in research, in terms of the conduct of inquiry, its political usefulness, and its effects on relations of power and material relations. Yet this view of “critical” is dogmatic because it demands that every research study meet all criteria of criticality comprehensively and perfectly.

A final use of “critical” emerges from the many scholarly and professionalized approaches that engage with the politics of academic knowledge construction while making visible the limits of positivism. “Critical” is used here as a means to focus primarily on revitalizing scholarship and research endeavors. However, we argue that even this use of “critical” ossifies the separation of the making of specialized knowledge from an active engagement to transform social life. Such a separation is antithetical to dialectical materialism. Often, this fourth form of the term “critical” is based on the logics of the Frankfurt School of critical theory (such as that of Adorno [1973] , Habermas [1985] , and Marcuse [1968] ) and other Western neo-Marxisms (from Lukacs [1971] and Gramsci [1971] to Negri [1999] ). Critical ethnographers and other critical social researchers, drawing from this tradition, often develop public intellectual personae by writing and talking about politics through scholarly and popular forms of publishing and public speaking, and are even seen to take part in political mobilizations. Yet they can also shy away from infusing their research with a deep engagement in political processes outside the academy.

Later in this chapter, we discuss how to avoid some of the pitfalls of these four types of “critical,” but suffice it to say, in short, that it is the politics and the explicit situatedness of research projects that can permit research to remain “critical.”

Is Critical Ethnography the Same as Critical Research?

George Marcus (1998) argues that the ethnographer is a midwife who, through words, gives birth to what is happening in the lives of the oppressed. Beverley Skeggs (1994) has proposed that ethnography is, in itself, “a theory of the research process,” and Asad (1973) offered the now-classic critique of anthropology as the colonial encounter. However, although many approaches to and definitions of ethnography abound, they all agree on one aspect: namely, that ethnographies offer an “insider’s” perspective on the social phenomena under consideration. It is often suggested that the best ethnographies, whether defined as critical or not, offer detailed descriptions of how people see, and inhabit, their social worlds and cultures (e.g., Behar, 1993 ; Ho, 2009 ; Kondo, 1990 ; Zinn, 1979 ).

It is evident from our argument so far that we do not think of ethnographic approaches to knowledge construction as being, in and of themselves, critical. This is because an ethnographic study, although not in opposition to critical ethnography or to critical research in general, has practices rooted in social anthropology. Therefore, its assumptions are often in line with anthropological assumptions (see Harvey [1990] for a recounting of some of these assumptions). Concepts such as “insider” versus “outsider,” “going native,” “gaining access,” and even conceptualizations of a homogenized and/or exoticized “field” that is out there ready to be examined by research remain as significant lenses of methodological conceptualization in much ethnographic research.

Despite, or perhaps because of, the move to reflexivity in ethnographic research, there remain enduring assumptions about best practices. As a result, a certain fetishization of research methods emerges, one that is often epitomized as reflexivity. In this instance, ethnographic and qualitative research become an ideal set of practices for extracting information. In sum, “best research practices,” as ways to extract information, reproduce core power dynamics of racism, gender, class, imperialism, and heteronormativity, which, in turn, reproduce the oppressive dynamics of noncritical qualitative research.

Furthermore, when research is presented merely as reflexive, the researcher can lose sight of the broader social structural and historical materialist context. In addition, a static notion of reflexivity can lead to the researcher not looking outward to assess the wider interconnections among the micropolitics of the research. That is, reflexivity is a dialectic among the researcher, the research process, and the analysis ( Jordan & Yeomans, 1995 ), but it is often presented simply as a series of apparently unchangeable/essential facets of the researcher. Our final point is that for theory to be critical in the development of research paradigms, it has to engage explicitly with lived experiences and cultures for, without that engagement, it remains mere formalism (see, e.g., the work of Guenther [2009] and Kang [2010] as examples of critical qualitative research). We are very much in tune with Hesse-Biber and Leavy, who have suggested that (grounded) theory building is a “dynamic dance routine” in which “there is no one right dance, no set routine to follow. One must be open to discovery” (2006, p. 76).

An example of the limitation of conventionally reflexive research is in the area of lesbian and gay research methods that focus on the experiences of gay men and lesbians conducting qualitative research. It also offers a commentary on the role that non-normative sexuality plays in social research. By looking inward (see the earlier comment on “reflexivity”), these methodological frameworks focus on the researcher’s and participants’ lesbian/gay identifications. In so doing, this can fabricate a shared social structural positionality with research participants who have been labeled “gay” or “lesbian.” Such an approach to reflexivity overlooks the fabricated nature of positionalities and ignores the sometimes more significant divisions between researchers and participants that are expressed along the lines of race, class, gender, and nationality. Reflexivity is used only as a way to forge a connection for the exchange of information. A grave mistake is made in this rush to force similarity along the lines of how people practice non-normative sexualities ( Lewin & Leap, 1996 ; for a more successful engagement with queer intersectionality in research, see Browne & Nash, 2010 ).

The point to be made is that critical researchers should not merely ask “how does this knowledge engage with social structure?” Critical researchers, when contemplating the question “What is this?” as they set up and analyze their research, could also ask, “What could this be?” ( Carspecken, 1996 ; Degiuli, 2007 ; Denzin, 2001 ; Noblit, Flores, & Murillo, 2004 , all cited in Degiuli, 2007 ). Perhaps, borrowing from Karen O’Reilly’s thoughts on critical ethnography, one may think of critical research as “an approach that is overtly political and critical, exposing inequalities in an effort to effect change” ( O’Reilly, 2009 , p. 51). That is, in order for qualitative research to be critical, it must be grounded in the material relationships of history, as may be seen in the work of Carruyo (2011) , Chua (2001 ; 2006 ; 2007 ; 2012 ), Collins (2005 ; 2007 ; 2009 ), Lodhia (2010) , and Talcott (2010) .

Quantz (1992) , in his discussion of critical ethnography, suggests that five aspects are central to the discussion of critical research/ethnography: knowledge, values, society, history, and culture. So far in this chapter, we have discussed knowledge and its production, values/reflexivity and qualitative research/ethnography, society and unequal social relationships, and history as a method of historical and dialectical materialism in order to better understand social and institutional structures. What we have not discussed, however, is the notion of culture, nor, indeed, the predicament of culture ( Clifford, 1998 ): “Culture is an ongoing political struggle around the meaning given to actions of people located within unbounded asymmetrical power relations” ( Quantz, 1992 , p. 483).

Quantz elaborates by stating that culture develops as people struggle together to name their experiences (see Comaroff & Comaroff, 2012 , for a sophisticated and elegant discussion of this thinking). For example, one key task of critical research is to tease out how disempowerment is achieved, undermined, or resisted. That is, the job of the researcher is to see how the disempowerment—economic, political, cultural—of subordinated groups manifests itself within culture, and, indeed, whether the subordinated groups even recognize their disempowerment. The saying “the hand that rocks the cradle rules the world” is one example of how the material disempowerment of many groups of women is presented, in fact, as a strength of women, and yet it takes the gaze away from seeing the subordination of women by ostensibly emphasizing women’s hidden social power.

It is critical qualitative research that has to simultaneously analyze how our research can identify processes and expressions of disempowerment and can then lead to a restructuring of these relationships of disempowerment. At times, critical social researchers engage in long-term projects that involve policy advocacy and community solidarity to link community-driven research with social empowerment and community change (see Bonacich, 1998 ; Bonacich & Wilson, 2008 ; Hondagneu-Sotelo, 2007 ; Stoecker, 2012 ).

The key point is that critical qualitative research parts company with positivistic approaches because it is argued that positivism is only able to offer a superficial set of findings. Critical qualitative research hones research concepts, practices, and analyses into finer points of reference not only so that societal relationships may be understood, but also so that social power inequalities can be undermined. In short, critical social research has a Foucauldian notion of power at its very core and may thus be thought of as offering insights into people’s lived experiences ( Williams, 1976 ) as they negotiate asymmetrical societal power relations (see e.g., Novelli, 2006 ).

The Practices of Critical Qualitative Research

Within our current era of enduring global inequalities, what could constitute a truly critical approach to qualitative research? More than twenty years ago, in “Tracing the Contours” ( Bhavnani, 1993 ), it was argued that if all knowledge is historically contingent and, therefore, that the processes of knowledge production are situated, then this must apply to all research practices as well. 2 This argument was based on Haraway’s (1988) idea that the particularities of knowledge production do not lie in the characteristics of individuals. Rather, knowledge production is “about communities, not about isolated individuals” (p. 590). Building on this, Haraway discussed the significance of partiality and its relationship to objectivity. She suggested that it is the researcher’s knowledge of her own “limited location” that creates objectivity. In other words, knowing the limitations of one’s structural position as a researcher contributes to objective research because there is no objectivity that is omniscient, one from which all can be revealed (Haraway discusses this as the “god trick,” which is like “seeing everything from nowhere,” p. 582).

It is from Haraway’s insights that we develop our argument that situated knowledges are not synonymous with the static reflexivity we describe earlier. This is because, in this latter scenario, the researcher implies that all research knowledge is based on and derives from an individual’s personal historical and biographical perspectives. That is, researchers note their racial/ethnic identity, sex/gender, sexuality, age, class, and ability (i.e., biographical aspects of themselves), which are presented as essential and unchanging factors that determine the knowledge created by the research. This has also been called “absolute relativism” ( Bhavnani, 1993 ) or “extreme relativism” ( Alcoff & Potter, 1993 ).

We suggest that the three elements central to research being “critical” are partiality, positionality, and accountability. Partiality leads to critical research interrogating prevailing representations as the research is conducted, and this builds on difference. Positionality is not about being reflexive, but about understanding the sociohistorical/political context from which research is created and thus engages with the micropolitics of a research endeavor. Accountability makes it evident that there are many constituencies to which all academic researchers are accountable—for example, their discipline, intellectual integrity, their institution and academic colleagues, the idea of rigorous scientific research, and academic freedom in research—as well as being accountable to the people with whom the research is being conducted. It is accountability that leads to a critical research project interrogating how the lived experiences and cultures of the research participants are inscribed within the research (see Stoecker, 2012 ).

What might the necessary elements be for ensuring that our research practices retain the criticality we have discussed earlier? We offer four possibilities that could form a filter through which one could decide if research is critical, using our definition of the term. First, all critical qualitative researchers should interrogate the history of ethnographic research that has led to the systematic domination of the poor; working classes; ethnic, racialized, sexual Others; women; and colonized peoples. That is, critical qualitative researchers must begin research with an understanding of how previous research, including their own, may continue to play a part in the subordination of peoples around the world, for example, by reinscribing them into predictable and stereotypical roles. Second, critical qualitative researchers should work to develop a consciousness of what might constitute critical research practices—without fetishizing methods—that challenge the system of domination often present in social research. Third, researchers who embrace critical qualitative approaches must develop comfort with the notion that they are conducting research with a purpose; that is, they grapple with and comprehend that critical research demands that they conduct research into inequalities in order to undo those inequalities. Finally, critical qualitative researchers comprehend that this comfort can extend to the idea that research does not simply capture social realities; rather, the critical research approach is generative of narratives and knowledges. Once this last idea is accepted—namely, that knowledge is created in a research project and not merely captured—it is then a comparatively straightforward task to see the need for a researcher’s accountability for the narratives and knowledges he or she ultimately produces. In so doing, it is possible to recognize that all representations have a life of their own outside of any intentions and that representations can contribute to histories of oppression and subordination.

We propose that it is the actual practice of research, and, perhaps, even the idea of researcher as witness ( Fernandes, 2003 ), and not a notion of “best practices,” that keeps the politics of research at the center of the work we do. This includes insights into the redistribution of power, representation, and knowledge production. We suggest that critical research is work that shifts research away from the production of knowledge for knowledge’s sake and edges or nudges it toward a more transformative vision of social justice (see Burawoy, 1998 ; Choudry, 2011 ; D’Souza, 2009 ; Hussey, 2012 ; Hunter, Emerald, & Martin, 2013 ).

Thoughts from the Field

Here, based on Collins’s fieldwork, we highlight a set of critical methodological lessons that became prominent while she was conducting her field research in Malate, in the city of Manila, the Philippines, currently a tourist destination but once famous as a sex district. We define her work as a critical research practice.

Since 1999, Dana Collins has conducted urban ethnographic work in Malate, exploring gay men’s production of urban sexual place. She has been interested in the role of “desire” in urban renewal, and, in particular, how informal sexual laborers (whom she terms “gay hospitality workers,” a nomenclature drawn from their own understandings of their labor and lives) use “desire” to forge their place in a gentrifying district that is also displacing them. This displacement has involved analyzing urban tourism development, city-directed urban renewal, and gay-led gentrification, as well as informal sexual labor.

The research has involved her precarious immersion in an urban sexual field. She undertook participant observation of gay night life in the streets, as well as in private business establishments, and conducted in-depth and in-field interviews with gay business owners, city officials, conservationists, gay tourists, and gay-identified sexual laborers. In addition, she drew on insights from visual sociology and also completed extensive archival work and oral history interviewing. In all of this, she explored the collective memories of Malate as a freeing urban sexual space.

There exist multiple and shifting positionalities of power, knowledge, exchange, and resistance in her research. For one, she points out that she occupies multiple social locations as a white, lesbian-identified feminist ethnographer from a US university, one who forges complicated relationships with urban sexual space, sex workers, and both gay Filipino men and gay tourists.

A critical research practice at heart involves the shifting of epistemological foundations of social science research by addressing core questions of how we know what we know, how power shapes the practices of research, how we can better integrate research participants and communities as central producers of knowledge in our research, and how we can better conceptualize the relationship between the research we do and the social justice we are working toward in this world. 3 Such questions function as a call to action for critical researchers not only to examine the power relations present in research, but to generate new ways of researching that can confront the realities of racism, gender and class oppression, imperialism, and homophobia. This is about not only becoming better researchers, but also about seeking ways to shift the very paradigm of qualitative research and ensuring its service to social change. We have learned to use these questions as a central and ongoing part of the research we do.

Feminist and Queer Accountability to the Micropolitics of the Field

One of the primary tenets of critical qualitative research is that researchers must work with a wider understanding and application of the politics of research. For Kum-Kum Bhavnani (1993) , this means that one needs to be accountable to the micropolitics of research because such accountability destabilizes the tendency to conduct and present research from a transcendent position—the “all knowing” ethnographer, the “outsider” going in to understand the point of view of “insiders,” the attempt to (avoid) “go(ing) native,” and the researcher who aims to “gain access” at all costs and in the interests of furthering research. Micropolitics is not only the axis of inequality that shapes contemporary field relations; it is also the historical materialist relationship that constitutes the field and informs the basis of critical qualitative research. Micropolitics therefore is a critical framework that questions the essentializing and power-laden perceptions of research spaces and people because it both encourages a reflexive inquiry into the limited locations of research and involves the more critical practice of the researcher turning outward, to comprehend what Bhavnani calls the “interconnections” among researcher, research participants, and the social structural spaces of “the field.”

Micropolitics illuminates how all research is conducted from the limited locations of gender, race, class, sexual identification, and nationality, as well as illuminating the interconnections among all of these locations. This is not a simplistic reflexive practice of taking a moment in research to account for one’s positionality and then moving on to conduct normative field work; Bhavnani has been critical of such moments of inward inspection that lack substantial accountability to the wider micropolitics of the field. Rather, this move requires an ongoing interrogation of the limited locations of research that show how knowledge is not transcendent. Furthermore, when used reflexively, limited locations offer a more critical framework from which to practice research.

Micropolitics encouraged Collins’ attention to the limited location of a global feminist ethnographer doing research on gay male urban sexual space in Manila. For one, she moved among different positionalities throughout her research—of woman, queer-identified, white, US academic, tourist, ate (Tagalog term for older sister)—and none of these positions was either a transcendent or more authentic standpoint from which to conduct ethnographic work. So, for instance, as a white tourist, she moved easily among the gentrifying gay spaces because these spaces were increasingly designed to encourage her movement around Malate. This limited location showed the increasing establishment of white consumer space, which encouraged the movement of consumers like herself yet dissuaded the movement of the informal sexual laborers with whom she was also spending time—the gay hosts. Her limited location as a white woman researcher from a major US university meant that gay hosts sometimes shared their spaces and meanings of urban gay life with her, yet many times those particular spaces and dialogues were closed—she was not allowed into the many public sexual spaces (parks and avenues for cruising and sex late at night), yet gay hosts treated her as an audience for their many romantic stories about the boyfriends they met in the neighborhood.

Hosts emphasized that they gained much from hosting foreigners in terms of friendship, love, desire, and cultural capital. Yet they monitored the information they shared because she remained to them a US researcher who wielded the power of representation over their lives, despite her closeness with a group of five gay hosts. Hence, gay hosts often chose to remain silent about their difficult memories of sex work or any information that could frame them as one-dimensional “money boys,” as distinct from the “gay”-identified Filipino men who migrated to Malate to take part in a gay urban community.

Micropolitics challenges the authenticity of any one positionality over another; it was Collins’ movement among all of them, as well as her ongoing consideration of their social structural places, that provided her with a more critical orientation to the research. She suggests that she was not essentially a better “positioned” researcher to study “gay” life in Manila because she too is gay. Rather she found that differences of race, class, gender, and nationality tended to serve as more enduring, limited locations that influenced relationships within this research and that required ongoing critical reflexive engagement.

We want to add that a queer micropolitics of the field also offers critical insight into how identities are not stagnant but rather can be fabricated and performative during the research process. This moves researchers away from an essentialist take on their standpoint because an essentialist mind-set can lead to a search for the authentic insider and outsider. It can also lead to the assumption that an essentialist social positionality is more conducive to research. Queer micropolitics show that research is made up of a collection of productive relations and identities. So, for example, her lesbian identification did not create a more authentic connection with gay hosts in Manila; rather, she often fabricated a shared “gay” positionality. This was a performance that served as a point of departure for her many conversations, from which she could proceed to share meanings of what it meant to be “gay” in Manila and the United States.

Some of the productive relations that arise in research are the continuum of intimacies that develop while doing research. So, like feminists before her, she chose to develop close friendships with hosts in which they genuinely loved one another (in a familial way) as they spoke of love. While learning about gay life in Malate, she stroked egos, offered advice, cried over broken hearts and life struggles, and built and maintained familial relations. Queer micropolitics shows, however, the limitations of such intimacies because intimacy does not equal similarity—the differing social locations of class, race, gender, and nationality meant that the experiences of urban gay life varied immensely. Thus, building such intimacies across these differences requires both recognition of and respect for the boundaries that hosts constructed. She had to learn to see and know that when hosts became quiet and pulled away, these were acts of self-preservation as well as acts of defiance against the many misrepresentations of their lives that had taken shape in academic research and journalistic renderings of their place in “exotic” sex districts.

A queer micropolitics also shows how research is an embodied practice: researchers are gendered, racialized, classed, and sexualized in the field. This became most apparent as she walked alone at night in the “field” and developed a keen awareness of the deeply gendered aspects of Malate’s urban spaces. For one, her embodiment was a peculiar presence because women in Manila do not walk alone at night. This includes women sex workers who publicly congregate in groups or with clients and escorts; otherwise, they are subject to police harassment. Hence, her very movement in the field as a sole woman felt like a transgression into masculine urban space because her feminine body was treated as “out-of-place” in the public spaces of the streets at night—she was flirted with, called names, followed, and sexually handled as she walked to gay bars for her research. As much as her queer location afforded her an understanding of how gender is a discursive production on the body, replete with the possibility of her being able to transcend and destabilize the gendered body as a biological “reality,” she confronted the discomfort of being read as a real woman in what became predominantly men’s spaces at night.

Yet this gendered embodiment, in part, shaped her knowledge of the district as she developed quick and knowledgeable movement through the streets, a queer micropolitical reading of urban space that arose out of this limited gender location. She was aware of the spacing of blocks, the alleys, the street lighting, and the time of night when crowds spilled out from the bars and onto the streets, allowing her to realize that a socially vibrant street life actually facilitated her movement. This queer micropolitical reading of urban space showed how both researchers and research participants do not simply exist in a neutral way in city space; rather, gender shapes our use and misuse of urban space. She has juxtaposed her experience with those of research participants in her study. The latter spoke at length about their exploratory and liberatory experiences of urban space, replete with their access to masculine sexual spaces—parks for cruising and sex, city blocks for meeting clients or picking up male sex workers, and alleys, movie theaters, and mall bathrooms for anonymous sex.

This queer micropolitical reading of Malate’s gentrified space showed how different her access was to the newly opening bars, restaurants, cafés, and lifestyle stores. Her whiteness signaled assumptions of her class location and positioned her as part of the international presence that this gentrifying space was targeting and whose movement among establishments was encouraged. She received free entry, free drinks, exceptional hospitality, and invitations to private parties, and her movements were closely monitored as she entered and exited establishments for the sake of “protecting a foreign tourist from street harassment” (interview with bar owner).

Overall, she experienced whiteness and class as equally embodied because these locations signaled her power as a “legitimate” consumer, allowing access to urban consumer sites and a privileged movement among gentrified spaces. This embodied experience of gentrified space differed from that of her gay hosts, who were often denied access to these establishments for being Filipino, young, working class, gay, and interested in foreigners. By contrast, their bodies were constructed as a “threat” to urban renewal in the district.

Resisting Reinscription

Critical qualitative research is also concerned with the politics of representation in research. This requires a hard look at the implicit imperialisms of ethnographic work, including the tendency to go in and get out with abundant factual information, as well as the lasting impact of objectificatory research practices on fields of study. Such practices are evident in the now global rhetoric about the so-called Third World prostitute, who in both academic and journalistic renderings tends to be sensationalized and sexually Othered. This rendering is part of a long history of exoticization that has denied subjectivity and rendered invisible the lived experiences of sexual laborers around the world.

Such failed representations are part of what Kum-Kum Bhavnani (1993) has called “reinscription”—the tendency in research to freeze research participants and sites in time and space, thus rendering them both exotic and silenced. Reinscription denies agency to research participants and renders invisible the dynamic lived experiences of those same research participants. Doing research in both postcolonial and sexual spaces means that researchers must grapple with how our research participates in histories of reinscription—we both enter into and potentially contribute to a field that has been already “examined,” overstudied, and often exoticized. Thus, a critical qualitative approach is one that begins with a thorough understanding of these histories of representation so that we are not entering fields naïvely, as spaces only of exploration. Rather, we enter with knowledge of how the field has already been constituted for us through reinscription. A critical orientation has a core objective of understanding how our representations of research at all levels of the research process could contribute to exoticization by reinscribing participants and sites.

The issue of reinscription became particularly apparent when Dana Collins interviewed gay hosts and grappled with what appeared to be their elaboration of a contradictory picture of their sexual labor, as well as of their lives. In short, hosts tended to “lie,” remain silent, embellish “truths,” and articulate contradictory allusions to their life and labor in Malate. When Collins began her interviewing, she held the implicit objective of obtaining the “truth” about hosts’ lives, which she believed resided in “what they do” in the tourism industry. She was concerned with the “facts” about their lives, even though gay hosts were more likely to express their desire—desire for relations with foreigners, desire to migrate to a “gay” urban district, desire for rewarding work, and desire for community and social change. She struggled with many uncertainties about the discussions: how could they hold a range of “jobs” and attend school, yet spend most of their days and nights in Malate? How could they understand gay tourists as both boyfriends and clients? Why resist the label “sex worker” yet refer to themselves as “working boys” and claim to have “clients?” She struggled to make sense of the meanings that hosts offered even as she simultaneously felt misled concerning the “real” relations of hospitality.

Interviewing hosts about sexualized labor—as a way to produce a representation of sex work—did not facilitate the flow of candid information; hosts later expressed their view that sex work and their lives were already “overstudied.” Many researchers had previously descended on Malate to study sex work, and the district was a prime location for the outreach of HIV/AIDS organizations, some of which had breached the confidence of the gay host community. In short, Dana mistakenly started her research without the knowledge of Malate as a hyperrepresented field, and her research risked reinscribing gay hosts’ lives within that field as static and unchanging.

Importantly, those gay hosts who resisted becoming the “good research subjects” who give accurate and bountiful information prompted a radical shift in her research framework. They told her stories about their imagined social lives, which encouraged her to rethink her commitment to researching sex work because the transformation of the discourses offered another view of the district, their work, and their lives, one that offered a more visionary perspective. She began to focus less on “misinformation” and instead followed how hosts framed their lives. She treated these framings as social imaginings in which Malate features prominently in their understandings of gay identity, community, belonging, and change. Their social imaginings thus functioned as counternarratives to reinscription and offered their lived experience of urban gay place. Such imaginations expressed hope, fear, critique, and desire—in short, they presented a utopic vision of identity, community, and urban change.

Integrating Lived Experience

Finally, critical qualitative research is a call to study lived experience—a messy, contradictory realm, but a deeply important one if we as critical researchers are truly interested in working against a history of research that has silenced those “under study” (see Weis & Fine, 2012 ). Paying attention to lived experience allows us to better engage with the contradictions mentioned earlier, because lived experience is about understanding the meanings that research participants choose to share with researchers, and it is also about respecting their silences. As Kum-Kum Bhavnani (1993) has argued, silences can be as eloquent as words. Moreover, integrating lived experience can take a critical qualitative project further, because it allows researchers to explore the epistemological relationship of the meanings and imaginings offered by research participants and to be explicit about the project of knowledge production. In other words, a central guiding question of critical qualitative research is how research participants can speak and shape epistemology, rather than solely being spoken about or being the subjects of epistemology.

Collins used hosts’ social imaginings as an epistemological contribution because their imaginings showed how hosts draw from experiences of urban gay community to articulate their desires for change, despite their simultaneous experiences of inequality and exclusion. We read social imaginings as a subjective rendering of urban place—the hosts’ social imaginings expressed their history, identity, subversive uses of urban space, and, ultimately, the symbolic reconstitution of that urban space. In this way, hosts were refiguring transnational urban space by writing themselves and their labor back into the district’s meaning, even as the global forces of tourism and urban renewal threatened to displace them.

In conclusion, we seek to highlight how critical research insists on the interplay of reflexivity, process, and practice. In particular, we encourage critical researchers to be mindful of the multiple meanings and usages of the term “critical” so that we can make more explicit our political interests and stance within our disciplines, the academy, our community, and the world. We offer dialectical materialism as a distinct mode of critical analysis that emphasizes change in essence, practice, and struggle. We also suggest that, for researchers to be critical in their research, they should strive to take up research questions and projects that study change, contradictions, struggle, and practice in order to counter dominant interests and advance the well-being of the world’s majority. We should strive to build new research relationships—for example, by overcoming the faulty divides between researchers and research participants and by promoting systems of community accountability—that dialectically fuse research, political activism, and progressive social change.

Furthermore, we suggest that critical research can agitate against the homogeneity of ethnographic representation, allowing the realities of people’s lives to come into view. Critical researchers recognize the contested fields of research; yet this recognition requires critical engagement with the research process as a reflexive, empathetic, collective, self-altering, socially transformative, and embedded exercise in knowledge production. Therefore, critical research can resist imperialist research practices that are disembodied and that assume a singular social positioning. We use an imperative here to say that we must conduct research as embodied subjects who shift between multiple and limited locations. We also have to find more ways to remain accountable to our communities of research as a way to undo implicit imperialisms in social research. Critical research can work against the remnants of an objectivist, truth-seeking method that supports prevailing interests, classes, and groups, while embracing research from social locations that offer situated knowledges and the possibility of greater shared understandings. Finally, critical research can engage the micropolitics of research and foreground the need for researchers to be accountable so as to resist reproducing epistemic violence.

This last is an idealist imagining of what should happen. However, a number of research projects have come close to achieving these goals.

Parts of our argument have appeared in some of our earlier work (e.g., Bhavnani & Talcott, 2011 ; Collins, 2009 ; 2002 ; Chua, 2001 ).

Although we, as the chapter’s three authors, do not usually use “we” in our writing as a general pronoun, it is the most direct way to offer our insights in this section.

Adorno, T. W. ( 1973 ). Negative dialectics . London: Routledge and Kegan Paul.

Alcoff, L. , & Potter, E. (Eds.). ( 1993 ). Feminist epistemologies . New York: Routledge.

Asad, T. (Ed.). ( 1973 ). Anthropology and the colonial encounter . London: Ithaca Press.

Behar, R. ( 1993 ). Translated women: Crossing the border with Esperanza’s story . Boston: Beacon Press.

Bhavnani, K.-K. (1993). Tracing the contours: Feminist research and feminist objectivity. Women’s Studies International Forum, 16(2), 95–104.

Bhavnani, K. -K. , Foran, J. , & Kurian, P. (Eds.). ( 2003 ). Feminist futures: Re-imagining women, culture, and development . London: Zed Books.

Bhavnani, K. -K. , & Talcott, M. ( 2011 ). Interconnections and configurations: Towards a global feminist ethnography. In S. N. Hesse-Biber (Ed.), Handbook of feminist research: Theory and praxis (pp. 176–186). Thousand Oaks, CA: Sage Publications.

Bonacich, E. ( 1998 ). Reflections on union activism.   Contemporary Sociology , 27 (2), 129–132.

Bonacich, E., & Wilson, J. B. (2008). Getting the goods: Ports, labor, and the logistics revolution. Ithaca, NY: Cornell University Press.

Boserup, E. (1970). Women’s role in economic development. New York: St. Martin’s Press.

Browne, K., & Nash, C. J. (Eds.). (2010). Queer methods and methodologies: Intersecting queer theories and social science research. Burlington, VT: Ashgate Publishing Company.

Burawoy, M. ( 1998 ). The extended case method.   Sociological Theory , 16 (1), 4–33.

Burawoy, M. ( 2003 ). Revisits: Outline of a theory of reflexive ethnography.   American Sociological Review , 68 , 645–679.

Carroll, W. (Ed.). ( 2004 ). Critical strategies for social research . Ontario: Canadian Scholars Press.

Carspecken, P. F. (1996). Critical ethnography in educational research: A theoretical and practical guide. New York: Routledge.

Carruyo, L. ( 2011 ). Producing knowledge, protecting forests: Rural encounters with gender, ecotourism, and international aid in the Dominican Republic . Philadelphia: Penn State University Press.

Choudry, A. ( 2011 ). On knowledge, learning and research in struggle. In C. Fanelli & P. Lefebvre (Eds.), Uniting struggles: Critical social research in critical times (pp. 175– 194). Ontario: Red Quill Books.

Clifford, J. (1988). The predicament of culture: Twentieth-century ethnography, literature, and art. Cambridge, MA: Harvard University Press.

Chua, P. (2001). Condom matters and social inequalities: Inquiries into condom production, exchange, and advocacy practices . PhD. dissertation, Department of Sociology, University of California, Santa Barbara, CA.

Chua, P. ( 2006 ). Bloodshed and coercive communal peace negotiations. In R. Tolentino & S. Raymundo (Eds.), Kontra-gahum: Academics against political killings (pp. 42–57). Quezon City, Philippines: IBON Publishing.

Chua, P. ( 2007 ). Corporatizing public education in the Philippines: The case of USAID and the Ayala Foundation. In B. Lumbera , R. Guillermo , & A. Alamon (Eds.), Mula Tore Patungong Palengke: Neoliberal education in the Philippines (pp. 115–125). Quezon City, Philippines: IBON Publishing.

Chua, P. ( 2012 ). National schooling in crisis: Neoliberal policies and the 2011 justice campaign for the PGCPS Filipino overseas contract teachers.   Pingkian , 1 (1), 45–58.

Collins, D. M. (2002). Laboring districts, pleasuring sites: Hospitality, “gay” life, and the production of urban sexual space in Manila . Ph.D. dissertation, Department of Sociology, University of California, Santa Barbara, CA.

Collins, D. ( 2005 ). Identity, mobility, and urban placemaking: Exploring gay life in Manila.   Gender and Society , 19 (2), 180–198.

Collins, D. ( 2007 ). When sex work isn’t “work”: Hospitality, gay life, and the production of desiring labor.   Tourist Studies , 7 (2), 115–139.

Collins, D. ( 2009 ). “ We’re there and queer”: Homonormative mobility and lived experience among gay expatriates in Manila.   Gender and Society , 23 (4), 465–493.

Comaroff, J. , & Comaroff, J. L. ( 2012 ). Theory from the South: How Euro-America is evolving towards Africa . Boulder, CO: Paradigm Publishers.

D’Souza, R. ( 2009 ). The prison houses of knowledge: Activist scholarship and revolution in the era of globalization.   McGill Journal of Education , 44 (1), 19–38.

Degiuli, F. ( 2007 ). A job with no boundaries: Home eldercare work in Italy.   European Journal of Women’s Studies , 14 (3), 193–207.

Delanty, G. ( 2005 ). Social science: Philosophical and methodological foundations (2nd ed.). Berkshire, UK: Open University Press.

Denzin, N. K. ( 2001 ). Interpretive interactionism . Thousand Oaks, CA. Sage Publications.

Engels, F. (1877/ 1969 ). Anti-Dühring . London: Lawrence and Wishart.

Fernandes, L. ( 2003 ). Transforming feminist practice: Non-violence, social justice, and the possibilities of a spiritualized feminism . San Francisco: Aunt Lute Books.

Fonow, M. M. , & Cook, J. A. (Eds.). ( 1991 ). Beyond methodology: Feminist scholarship as lived research . Bloomington: Indiana University Press.

Giroux, H. A. ( 2009 ). Democracy’s nemesis. The rise of the corporate university.   Cultural Studies ↔ Critical Methodologies , 9 (5), 669–695.

Gouldner, A. ( 1970 ). The coming crisis of American sociology . New York: Basic Books.

Gramsci, A. ( 1971 ). Selections from the prison notebooks of Antonio Gramsci . ( Q. Hoare & G. N. Smith , Eds. & Trans.). New York: International Publishers.

Greenwood, D. J. ( 2012 ). Doing and learning action research in the neo-liberal world of contemporary higher education.   Action Research , 10 (2), 115–132.

Guenther, K. M. ( 2009 ). The impact of emotional opportunities on the emotion cultures of feminist organizations.   Gender and Society , 23 (3), 337–362.

Habermas, J. (1985). The theory of communicative action: Vol. 2. Lifeworld and system: A critique of functionalist reason. Boston: Beacon Press.

Haraway, D. ( 1988 ). Situated knowledges: The science question in feminism and the privilege of partial perspective.   Feminist Studies , 14 (3), 575–599.

Harding, S. ( 1991 ). Whose science? Whose knowledge? Ithaca: Cornell University Press.

Harvey, D. ( 1996 ). Justice, nature, and the geography of difference . Cambridge, MA: Blackwell Publisher.

Harvey, L. ( 1990 ). Critical social research . London: Routledge.

Hesse-Biber, S. N. , & Leavy, P. ( 2006 ). The practice of qualitative research . Thousand Oaks, CA: Sage Publications.

Ho, K. (2009). Liquidated: An ethnography of Wall Street. Durham, NC: Duke University Press.

Hondagneu-Sotelo, P. ( 2007 ). Domestica: Immigrant workers cleaning and caring in the shadows of affluence (2nd ed.). Berkeley: University of California Press.

Hunter, L. , Emerald, E. , & Martin, G. ( 2013 ). Participatory activist research in the globalised world . Düsseldorf: Springer.

Hussey, I. ( 2012 ). “Political activist as ethnographer” revisited.   Canadian Journal of Sociology , 37 (1), 1–23.

Jordan, S. , & Yeomans, D. ( 1995 ). Critical ethnography: Problems in contemporary theory and practice.   British Journal of Sociology of Education , 16 (3), 389–408.

Kant, I. (1781/1965). The critique of pure reason. New York: St. Martin’s Press.

Kang, M. ( 2010 ). The managed hand: Race, gender, and the body in beauty service work . Berkeley: University of California Press.

Kondo, D. K. ( 1990 ). Crafting selves: Power, gender, and discourses of identity in a Japanese workplace . Chicago: University of Chicago Press.

Lenin, V. I. (1915/ 1977 ). On the question of dialectics. In Collected works (vol. 38, pp. 357–361). Moscow: Progress Publishers.

Lewin, E. , & Leap, W. L. (Eds.). ( 1996 ). Out in the field: Reflections of lesbian and gay anthropologists . Champaign, IL: University of Illinois Press.

Lodhia, S. ( 2010 ). Constructing an imperfect citizen-subject: Globalization, national “security,” and violence against South Asian women.   Women’s Studies Quarterly , 38 (1), 161–177.

Lukacs, G. (1971). History and class consciousness (R. Livingstone, Trans.). Cambridge, MA: MIT Press.

Mao, Z. ( 1990 ). Lecture notes on dialectical materialism . In N. Knight (Ed.), Mao Zedong on dialectical materialism: Writings on philosophy, 1937 (pp. 84–131). Armonk, NY: M. E. Sharpe.

Madison, D. S. ( 2012 ). Critical ethnography: Method, ethics, and performance (2nd ed.). Thousand Oaks, CA: Sage Publications.

Marchand, M. H. ( 1995 ). Latin American women speak on development: Are we listening yet? In M. H. Marchand & J. L. Parpart (Eds.), Feminism/postmodernism/development (pp. 56–72). London: Routledge.

Marcus, G. E. ( 1998 ). Ethnography through thick and thin . Princeton, NJ: Princeton University Press.

Marcuse, H. ( 1968 ). Negations: Essays in critical theory . Boston: Beacon Press.

Marx, K. (1845/ 1976 ). Thesis on Feuerbach. In Marx and Engels collected works (vol. 5, pp. 3–9). New York: International Publishers.

Mohanty, C. T. ( 2003 ). Feminism without borders: Decolonizing theory, practicing solidarity . Durham, NC: Duke University Press.

Reinharz, S. , & Davidman, L. (Eds.). ( 1992 ). Feminist methods in social research . New York: Oxford University Press.

Negri, A. ( 1999 ). Insurgencies: Constituent power and the modern state . Minneapolis, MN: University of Minnesota Press.

Noblit, G. W. , Flores, S. Y. , & Murillo, E. G. ( 2004 ). Postcritical ethnography: Reinscribing critique . New York: Hampton Press.

Novelli, M. ( 2006 ). Imagining research as solidarity and grassroots globalisation: A response to Appadurai (2001).   Globalisation, Societies and Education , 4 (2), 275–286.

Ollman, B. ( 2003 ). Dance of the dialectic: Steps in Marx’s method . Champaign, IL: University of Illinois Press.

Pavlidis, P. ( 2012 ). The antinomic condition of the university: “Universal labour” beyond “academic capitalism.”   Journal for Critical Education Policy Studies , 10 (2), 139–159.

Ray, R. ( 2006 ). Is the revolution missing or are we looking in the wrong places?   Social Problems , 53 (4), 459–465.

O’Reilly, K. (2009). Key concepts in ethnography. London: Sage Publications.

Quantz, R. A. ( 1992 ). On critical ethnography (with some postmodern considerations). In M. S. Lecompte , W. L. Millroy , & J. Preissle (Eds.), Handbook of qualitative research in education (pp. 447–505). San Diego, CA: Academic Press.

Skeggs, B. ( 1994 ). Situating the production of feminist ethnography. In M. Maynard (Ed.), Researching women’s lives from a feminist perspective (pp. 72–81). London: Routledge.

Smith, D. ( 2007 ). Institutional ethnography: From a sociology for women to a sociology of the people. In S. N. Hesse-Biber (Ed.), Handbook of feminist research: Theory and praxis (pp. 409–416). Thousand Oaks, CA: Sage Publications.

Smith, L. T. ( 1999 ). Decolonizing methodologies . London: Zed Books.

Stoecker, R. ( 2012 ). Research methods for community change: A project based approach (2nd ed.). Thousand Oaks, CA: Sage Publications.

Strydom, P. ( 2011 ). Contemporary critical theory and methodology . London: Routledge.

Talcott, M. ( 2010 ). As neoliberal crises persist, indigenous-led movements resist: Examining the current social and political-economic conjuncture in southern Mexico. In R. A. Dello Buono & D. Fasenfest (Eds.), Social change, resistance and social practices (pp. 131–148). Boston: Brill Publishers.

Thomas, J. ( 1993 ). Doing critical ethnography . Thousand Oaks, CA: Sage Publications.

Weis, L. , & Fine, M. ( 2012 ). Critical bifocality and circuits of privilege: Expanding critical ethnographic theory and design.   Harvard Educational Review , 82 (2), 173–201.

Williams, R. ( 1976 ). Keywords: A vocabulary of culture and society . New York: Oxford University Press.

Zinn, M. B. ( 1979 ). Field research in minority communities: Ethical, methodological and political observations by an insider.   Social Problems , 27 (2), 209–219.

Criteria for Good Qualitative Research: A Comprehensive Review

  • Regular Article
  • Open access
  • Published: 18 September 2021
  • Volume 31 , pages 679–689, ( 2022 )

Cite this article

  • Drishti Yadav   ORCID: orcid.org/0000-0002-2974-0323 1  

78k Accesses

28 Citations

71 Altmetric

Explore all metrics

This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, the references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features in qualitative research that permit readers to pass judgment on its quality and to recognize it as good research when those features are objectively and adequately utilized. Overall, this review underlines the crux of qualitative research and accentuates the necessity to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the product of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor anticipated. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with applicable norms and decisive factors to evaluate qualitative research from within its theoretical and methodological framework of origin.

Introduction

“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010 , p. 837)

What counts as good qualitative research is highly debatable. Qualitative research encompasses numerous methods grounded in diverse philosophical perspectives. Bryman et al. (2008, p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well‐known and widely agreed, this is not the case for qualitative research.” Hence, the question “how to evaluate the quality of qualitative research” has been continuously debated. These debates on the assessment of qualitative research have taken place across many areas of science and technology. Examples include various areas of psychology: general psychology (Madill et al., 2000); counseling psychology (Morrow, 2005); and clinical psychology (Barker & Pistrang, 2005), and other disciplines of social sciences: social policy (Bryman et al., 2008); health research (Sparkes, 2001); business and management research (Johnson et al., 2006); information systems (Klein & Myers, 1999); and environmental studies (Reid & Gough, 2000). In the literature, these debates are driven by the impression that the blanket application of criteria for good qualitative research developed around the positivist paradigm is improper. Such debates are based on the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000; Schwandt, 1996). This methodological diversity led to the formulation of different sets of criteria applicable to qualitative research.

Among qualitative researchers, the dilemma of deciding on the measures used to assess the quality of research is not a new phenomenon, especially when the virtuous triad of objectivity, reliability, and validity (Spencer et al., 2004) is not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008; Lather, 2004). Indeed, Howe (2004) claims that the prevailing paradigm in educational research is scientifically based experimental research. Hypotheses and conjectures about the preeminence of quantitative research can weaken the worth and usefulness of qualitative research by neglecting the importance of matching the research paradigm, the epistemological stance of the researcher, and the choice of methodology to the purpose of the research. Researchers have been cautioned about this in “paradigmatic controversies, contradictions, and emerging confluences” (Lincoln & Guba, 2000).

In general, qualitative research comes from a very different paradigmatic stance and intrinsically demands distinctive, out-of-the-ordinary criteria for evaluating good research and the varieties of research contributions that can be made. This review attempts to present a series of evaluative criteria for qualitative researchers, arguing that their choice of criteria needs to be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). This review aims to assist researchers in identifying some of the indispensable features or markers of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research across diverse paradigmatic stances. Unlike existing reviews, this review also suggests some critical directions to follow to improve the quality of qualitative research from different epistemological and ontological perspectives. This review is also intended to provide guidelines for accelerating future developments and dialogues among qualitative researchers in the context of assessing qualitative research.

The rest of this review article is structured in the following fashion: Sect.  Methods describes the method followed for performing this review. Section Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies. This section is followed by a summary of the strategies to improve the quality of qualitative research in Sect.  Improving Quality: Strategies . Section  How to Assess the Quality of the Research Findings? provides details on how to assess the quality of the research findings. After that, some of the quality checklists (as tools to evaluate quality) are discussed in Sect.  Quality Checklists: Tools for Assessing the Quality . At last, the review ends with the concluding remarks presented in Sect.  Conclusions, Future Directions and Outlook . Some prospects in qualitative research for enhancing its quality and usefulness in the social and techno-scientific research community are also presented in Sect.  Conclusions, Future Directions and Outlook .

Methods

For this review, a comprehensive literature search was performed across several databases using generic search terms such as Qualitative Research , Criteria , etc . The following databases were chosen for the literature search based on the high number of results: IEEE Xplore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. The synonyms for these keywords were collected and arranged in a logical structure (see Table 1 ). All publications in journals and conference proceedings from 1950 to 2021 were considered for the search. Other articles extracted from the references of the papers identified in the electronic search were also included. A large number of publications on qualitative research were retrieved during the initial screening. Hence, to restrict the search to works whose main focus was criteria for good qualitative research, an inclusion criterion was incorporated into the search string.

From the selected databases, the search retrieved a total of 765 publications. Then, the duplicate records were removed. After that, based on the title and abstract, the remaining 426 publications were screened for their relevance by using the following inclusion and exclusion criteria (see Table 2 ). Publications focusing on evaluation criteria for good qualitative research were included, whereas those works which delivered theoretical concepts on qualitative research were excluded. Based on the screening and eligibility, 45 research articles were identified that offered explicit criteria for evaluating the quality of qualitative research and were found to be relevant to this review.
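
To make the search-and-screening logic concrete, the short sketch below shows one way a Boolean query and a PRISMA-style deduplication and title/abstract screening step could be expressed in code. It is a minimal illustration only, not the procedure actually used for this review: the keyword groups are paraphrased from the text, while the record fields, function names, and screening rules are hypothetical.

```python
# A minimal sketch, NOT the review's actual pipeline: it illustrates how the
# Boolean search string and a simple title/abstract screening step described
# above could be expressed programmatically. Keyword groups are paraphrased
# from the text; the record data and screening rules are hypothetical.

from dataclasses import dataclass

# Synonyms are OR-ed within a group; the groups are AND-ed together.
KEYWORD_GROUPS = [
    ["qualitative research", "qualitative study"],
    ["criteria", "quality", "assessment", "validity", "rigor"],
]

def build_query(groups):
    """Combine synonym groups into a single Boolean search string."""
    return " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in group) + ")"
        for group in groups
    )

@dataclass(frozen=True)
class Record:
    doi: str
    title: str
    abstract: str

def screen(records, focus_terms):
    """Drop duplicate DOIs, then keep records whose title/abstract mention the focus terms."""
    unique = list({r.doi: r for r in records}.values())
    kept = [
        r for r in unique
        if any(t in (r.title + " " + r.abstract).lower() for t in focus_terms)
    ]
    return unique, kept

if __name__ == "__main__":
    print(build_query(KEYWORD_GROUPS))
    records = [
        Record("10.1/a", "Criteria for rigorous qualitative research", "..."),
        Record("10.1/a", "Criteria for rigorous qualitative research", "..."),  # duplicate
        Record("10.1/b", "A theory of something else", "..."),
    ]
    unique, kept = screen(records, focus_terms=["criteria", "quality"])
    print(len(records), len(unique), len(kept))  # 3 retrieved, 2 unique, 1 kept
```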

Figure 1 illustrates the complete review process in the form of a PRISMA flow diagram. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is employed in systematic reviews to refine the quality of reporting.

Figure 1. PRISMA flow diagram illustrating the search and inclusion process; N represents the number of records.

Criteria for Evaluating Qualitative Studies

Fundamental Criteria: General Research Quality

Various researchers have put forward criteria for evaluating qualitative research, which are summarized in Table 3 . The criteria outlined in Table 4 likewise capture the various approaches used to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy’s “Eight big‐tent criteria for excellent qualitative research” (Tracy, 2010 ). Tracy argues that high-quality qualitative work should attend to the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, as well as to the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles to assess the quality of a qualitative study (Mays & Pope, 2020 ). Nassaji ( 2020 ) argues that good qualitative research should be robust, well informed, and thoroughly documented.

Qualitative Research: Interpretive Paradigms

All qualitative researchers follow highly abstract principles that bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net that encompasses the researcher’s epistemological, ontological, and methodological premises is referred to as a paradigm, or an interpretive structure—a “basic set of beliefs that guides action” (Guba, 1990 ). Four major interpretive paradigms structure qualitative research: positivist and postpositivist, constructivist interpretive, critical (Marxist, emancipatory), and feminist poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research, and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, for evaluating qualitative research, quantitative conceptualizations of reliability and validity have been shown to be incompatible (Horsburgh, 2003 ). In addition, a series of questions has been put forward in the literature to assist a reviewer (who is proficient in qualitative methods) in the meticulous assessment and endorsement of qualitative research (Morse, 2003 ). Hammersley ( 2007 ) also suggests that guiding principles for qualitative research are advantageous, but that methodological pluralism should not be simply acknowledged for all qualitative approaches. Seale ( 1999 ) also points out the significance of methodological cognizance in research studies.

Table 5 reflects that criteria for assessing the quality of qualitative research are the aftermath of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, the researchers must be reflexive about the criteria they use in the various roles they play within their research community.

Improving Quality: Strategies

Another critical question is “How can qualitative researchers ensure that the abovementioned quality criteria are met?” Lincoln and Guba ( 1986 ) delineated several strategies to strengthen each criterion of trustworthiness. Other researchers (Merriam & Tisdell, 2016 ; Shenton, 2004 ) have also presented such strategies. A brief description of these strategies is shown in Table 6 .

It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021 ). In general, the guiding principle pertaining to generalizability concerns inducing and comprehending knowledge in order to synthesize the interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to ascertain generalizability in qualitative research.

Figure 2 reflects the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and the application of results to future thinking, study, and practice (Johnson et al., 2020 ). The synergy and interrelationship of these components underscore their role at different stages of a qualitative research study.

Figure 2. Essential elements of a conceptual framework.

In a nutshell, to assess the rationale of a study, its conceptual framework and research question(s), quality criteria must take account of the following: lucid context for the problem statement in the introduction; well-articulated research problems and questions; precise conceptual framework; distinct research purpose; and clear presentation and investigation of the paradigms. These criteria would expedite the quality of qualitative research.

How to Assess the Quality of the Research Findings?

The inclusion of quotes or similar research data enhances confirmability in the write-up of the findings. The use of expressions (for instance, “80% of all respondents agreed that” or “only one of the interviewees mentioned that”) may also quantify qualitative findings (Stenfors et al., 2020 ). On the other hand, persuasive reasons why such quantification may not strengthen the research have also been provided (Monrouxe & Rees, 2020 ). Further, the Discussion and Conclusion sections of an article also provide robust markers of high-quality qualitative research, as elucidated in Table 8 .
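
As a concrete illustration of the kind of quantification mentioned above, the sketch below tallies coded interview data into reporting phrases of that form. The respondent IDs, code labels, and data are invented for illustration and are not drawn from any cited study.

```python
# A minimal sketch with hypothetical data: it shows how coded interview
# responses could be tallied into the kind of reporting phrases mentioned
# above. Respondent IDs and code labels are invented for illustration.

coded_responses = {
    "P01": {"agrees_with_policy", "mentions_cost"},
    "P02": {"agrees_with_policy"},
    "P03": {"mentions_cost"},
    "P04": {"agrees_with_policy", "mentions_access"},
    "P05": {"agrees_with_policy"},
}

def summarize(code: str) -> str:
    """Return a reporting phrase for the share of respondents assigned a code."""
    total = len(coded_responses)
    count = sum(code in codes for codes in coded_responses.values())
    if count == 1:
        return f"only one of the {total} respondents was coded as '{code}'"
    share = round(100 * count / total)
    return f"{share}% of all respondents ({count}/{total}) were coded as '{code}'"

print(summarize("agrees_with_policy"))  # 80% of all respondents (4/5) ...
print(summarize("mentions_access"))     # only one of the 5 respondents ...
```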

Quality Checklists: Tools for Assessing the Quality

Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically and without regard to the research context, these checklists may be counterproductive. Such lists and guiding principles may assist in pinpointing the markers of high-quality qualitative research. However, considering the enormous variation in authors’ theoretical and philosophical contexts, I would emphasize that heavy reliance on such checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers; a sketch of how checklist-style items could be recorded and tracked appears after the list below. Some of these checklists are listed below:

The most commonly used framework is the Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007 ). Some journals recommend that authors follow this framework when submitting articles.

Standards for Reporting Qualitative Research (SRQR) is another checklist that has been created particularly for medical education (O’Brien et al., 2014 ).

Also, Tracy ( 2010 ) and the Critical Appraisal Skills Programme (CASP, 2021 ) offer criteria for qualitative research that are relevant across methods and approaches.

Further, researchers have also outlined different criteria as hallmarks of high-quality qualitative research. For instance, the “Road Trip Checklist” (Epp & Otnes, 2021 ) provides a quick reference to specific questions to address different elements of high-quality qualitative research.
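
To show how such checklist-style appraisal might be recorded in practice, the sketch below represents a reviewer’s responses as a small data structure. The item wordings and the yes/no/unclear scheme are illustrative placeholders, not the actual COREQ, SRQR, or CASP items.

```python
# A hypothetical sketch of recording responses to checklist-style appraisal
# items. The questions below are illustrative placeholders, not the wording of
# COREQ, SRQR, or CASP; any real tool should be consulted and applied critically.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChecklistItem:
    question: str
    answer: Optional[str] = None  # "yes", "no", or "unclear"

@dataclass
class Appraisal:
    study_id: str
    items: List[ChecklistItem] = field(default_factory=list)

    def record(self, index: int, value: str) -> None:
        """Record a reviewer's answer for one checklist item."""
        self.items[index].answer = value

    def summary(self) -> str:
        """Summarise how many items were answered 'yes' and how many remain open."""
        answered = [i for i in self.items if i.answer is not None]
        yes = sum(1 for i in answered if i.answer == "yes")
        open_items = len(self.items) - len(answered)
        return f"{self.study_id}: {yes}/{len(self.items)} 'yes', {open_items} unanswered"

appraisal = Appraisal(
    study_id="Study-A",
    items=[
        ChecklistItem("Is the research question clearly stated?"),
        ChecklistItem("Is the methodological approach justified?"),
        ChecklistItem("Is the analysis process transparent?"),
    ],
)
appraisal.record(0, "yes")
appraisal.record(1, "unclear")
print(appraisal.summary())  # Study-A: 1/3 'yes', 1 unanswered
```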

Conclusions, Future Directions, and Outlook

This work presents a broad review of the criteria for good qualitative research. In addition, this article presents an exploratory analysis of the essential elements in qualitative research that can enable readers of qualitative work to judge it as good research when objectively and adequately utilized. In this review, some of the essential markers that indicate high-quality qualitative research have been highlighted. I scope them narrowly to achieve rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal, one-size-fits-all guideline for evaluating the quality of qualitative research does not exist. In other words, this review also emphasizes the non-existence of a set of common guidelines among qualitative researchers. At the same time, this review reinforces that each qualitative approach should be treated uniquely on account of its own distinctive features for different epistemological and disciplinary positions. Because the worth of qualitative research depends on the specific context and the type of paradigmatic stance, researchers should themselves analyze which approaches can, and must, be tailored to suit the distinct characteristics of the phenomenon under investigation. Although this article does not claim to put forward a magic bullet or to provide a one-stop solution for dealing with dilemmas about how, why, or whether to evaluate the “goodness” of qualitative research, it offers a platform to assist researchers in improving their qualitative studies. This work provides an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to look at when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. By bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, this review strives to equip researchers as well as reviewers to make well-versed judgments about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of research to the research objectives, research questions and design, theoretical foundations, and from approaches to collecting data to analyzing the results and deriving inferences) generally improves the quality of a qualitative research report.

Prospects: A Road Ahead for Qualitative Research

Irrefutably, qualitative research is a vibrant and evolving discipline wherein different epistemological and disciplinary positions have their own characteristics and importance. In addition, not surprisingly, owing to the evolving and varied features of qualitative research, no consensus has been reached to date. Researchers have voiced various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021 ; McGinley et al., 2021 ). The following are some prospects and recommendations put forward towards the maturation of qualitative research and its quality evaluation:

In general, most manuscript and grant reviewers are not qualitative experts. Hence, they are more likely to prefer a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to apply the same approaches and standards to all qualitative research. Therefore, future work needs to focus on educating researchers and reviewers about the criteria for evaluating qualitative research from within the appropriate theoretical and methodological context.

There is an urgent need to refurbish and augment the critical assessment of some well-known and widely accepted tools (including checklists such as COREQ and SRQR) to interrogate their applicability to different aspects of qualitative research (along with their epistemological ramifications).

Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.

Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.

It is pivotal to highlight the expressions and considerations of qualitative researchers and bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political arenas.

Frequent debates on the use of evaluative criteria are required to resolve some potentially resolvable issues (including the applicability of a single set of criteria in multi-disciplinary contexts). Such debates would not only benefit the group of qualitative researchers themselves, but primarily assist in augmenting the well-being and vivacity of the entire discipline.

To conclude, I speculate that the criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialogue and debate – about criteria for excellent qualitative research and the underpinnings of the discipline more broadly – and, therefore, help improve the quality of a qualitative study. Further, I anticipate that this review will assist researchers to reflect on the quality of their own research, to substantiate their research design, and will help reviewers to review qualitative research for journals. On a final note, I pinpoint the need to formulate a framework (encompassing the prerequisites of a qualitative study) through the cohesive efforts of qualitative researchers of different disciplines with different theoretic-paradigmatic origins. I believe that tailoring such a framework (of guiding principles) paves the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open science debate. Dialogue on this issue across different approaches is crucial for the impending prospects of socio-techno-educational research.

Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.

Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.

Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.

CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.

Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.

Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.

Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445

Guba, E. G. (Ed.). (1990). The paradigm dialog (Alternative Paradigms Conference, March 1989, Indiana University, School of Education). Sage Publications, Inc.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.

Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.

Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.

Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.

Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.

Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.

Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.

Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15

McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.

Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.

Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26 (1), 87–97.

Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54 (3), 186–187.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52 (2), 250.

Morse, J. M. (2003). A review committee’s guide for evaluating qualitative proposals. Qualitative Health Research, 13 (6), 833–851.

Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24 (4), 427–431.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89 (9), 1245–1251.

O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19 , 1609406919899220.

Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6 (1), 59–91.

Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International . https://doi.org/10.1080/13678868.2010.501959

Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43 (1), 9–25.

Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2 (1), 58–72.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5 (4), 465–478.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 (2), 63–75.

Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11 (4), 538–552.

Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence.

Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17 (6), 596–599.

Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33 (2), 163–179.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19 (6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16 (10), 837–851.

Open access funding provided by TU Wien (TUW).

Author information

Authors and affiliations

Faculty of Informatics, Technische Universität Wien, 1040, Vienna, Austria

Drishti Yadav

Corresponding author

Correspondence to Drishti Yadav .

Ethics declarations

Conflict of interest

The author declares no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res 31 , 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0

Accepted: 28 August 2021

Published: 18 September 2021

Issue Date: December 2022

DOI: https://doi.org/10.1007/s40299-021-00619-0

  • Qualitative research
  • Evaluative criteria
Critical appraisal of qualitative research: necessity, partialities and the issue of bias

Volume 25, Issue 1

  • http://orcid.org/0000-0001-5660-8224 Veronika Williams ,
  • Anne-Marie Boylan ,
  • http://orcid.org/0000-0003-4597-1276 David Nunan
  • Nuffield Department of Primary Care Health Sciences , University of Oxford, Radcliffe Observatory Quarter , Oxford , UK
  • Correspondence to Dr Veronika Williams, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG, UK; veronika.williams{at}phc.ox.ac.uk

https://doi.org/10.1136/bmjebm-2018-111132

  • qualitative research

Introduction

Qualitative evidence allows researchers to analyse human experience and provides useful exploratory insights into experiential matters and meaning, often explaining the ‘how’ and ‘why’. As we have argued previously,1 qualitative research has an important place within evidence-based healthcare, contributing to, among other things, policy on patient safety,2 prescribing,3 4 and understanding of chronic illness.5 Equally, it offers additional insight into quantitative studies, explaining contextual factors surrounding a successful intervention or why an intervention might have ‘failed’ or ‘succeeded’ where effect sizes cannot. It is for these reasons that the MRC strongly recommends including qualitative evaluations when developing and evaluating complex interventions.6

Critical appraisal of qualitative research

Is it necessary?

Although the importance of qualitative research to improving health services and care is now increasingly widely supported (discussed in paper 1), the role of appraising the quality of qualitative health research is still debated.8 10 Despite a large body of literature focusing on appraisal and rigour,9 11–15 often referred to as ‘trustworthiness’16 in qualitative research, there remains debate about how to—and even whether to—critically appraise qualitative research.8–10 17–19 However, if we are to make a case for qualitative research as integral to evidence-based healthcare, then any argument to omit a crucial element of evidence-based practice is difficult to justify. That being said, simply applying the standards of rigour used to appraise studies based on the positivist paradigm would be misplaced given the different epistemological underpinnings of the two types of data. (Positivism depends on quantifiable observations to test hypotheses and assumes that the researcher is independent of the study. Research situated within a positivist paradigm is based purely on facts, considers the world to be external and objective, and is concerned with validity, reliability and generalisability as measures of rigour.)

Given its scope and its place within health research, the robust and systematic appraisal of qualitative research to assess its trustworthiness is as paramount to its implementation in clinical practice as it is for any other type of research. It is important to appraise different qualitative studies in relation to the specific methodology used, because the methodological approach is linked to the ‘outcome’ of the research (eg, theory development, phenomenological understandings and credibility of findings). Moreover, appraisal needs to go beyond merely describing the specific details of the methods used (eg, how data were collected and analysed), with additional focus needed on the overarching research design and its appropriateness in accordance with the study remit and objectives.

Poorly conducted qualitative research has been described as ‘worthless, becomes fiction and loses its utility’. 20 However, without a deep understanding of concepts of quality in qualitative research or at least an appropriate means to assess its quality, good qualitative research also risks being dismissed, particularly in the context of evidence-based healthcare where end users may not be well versed in this paradigm.

How is appraisal currently performed?

Appraising the quality of qualitative research is not a new concept—there are a number of published appraisal tools, frameworks and checklists in existence.21–23 An important and often overlooked point is the confusion between tools designed for appraising methodological quality and reporting guidelines designed to assess the quality of methods reporting. An example is the Consolidated Criteria for Reporting Qualitative Research (COREQ)24 checklist, which was designed to provide standards for authors when reporting qualitative research but is often mistaken for a methods appraisal tool.10

Broadly speaking, there are two types of critical appraisal approaches for qualitative research: checklists and frameworks. Checklists have often been criticised for confusing quality in qualitative research with ‘technical fixes’,21 25 resulting in the erroneous prioritisation of particular aspects of methodological processes over others (eg, multiple coding and triangulation). It could be argued that a checklist approach adopts the positivist paradigm, where the focus is on objectively assessing ‘quality’ and the assumption is that the researcher is independent of the research conducted. This may result in the application of quantitative understandings of bias in order to judge aspects of recruitment, sampling, data collection and analysis in qualitative research papers. One of the most widely used appraisal tools is the Critical Appraisal Skills Programme (CASP)26 tool, which, along with the JBI QARI (Joanna Briggs Institute Qualitative Assessment and Review Instrument),27 presents an example that tends to mimic the quantitative approach to appraisal. The CASP qualitative tool follows that of other CASP appraisal tools for quantitative research designs developed in the 1990s. The similarities are therefore unsurprising given the status of qualitative research at that time.

Frameworks focus on the overarching concepts of quality in qualitative research, including transparency, reflexivity, dependability and transferability (see box 1).11–13 15 16 20 28 However, unless the reader is familiar with these concepts—their meaning and impact, and how to interpret them—they will have difficulty applying them when critically appraising a paper.

The main issue with currently available checklist and framework appraisal methods is that they take a broad-brush approach to ‘qualitative’ research as a whole, with few, if any, sufficiently differentiating between the different methodological approaches (eg, Grounded Theory, Interpretative Phenomenology, Discourse Analysis) or between different methods of data collection (interviewing, focus groups and observations). In this sense, it is akin to taking the entire field of ‘quantitative’ study designs and applying a single method or tool for their quality appraisal. For qualitative research, checklists therefore offer only a blunt and arguably ineffective tool and potentially promote an incomplete understanding of good ‘quality’ in qualitative research. Likewise, current framework methods do not take into account how concepts differ in their application across the variety of qualitative approaches and, like checklists, they do not differentiate between different qualitative methodologies.

On the need for specific appraisal tools

Current approaches to the appraisal of the methodological rigour of the differing types of qualitative research converge towards checklists or frameworks. More importantly, the current tools do not explicitly acknowledge the prejudices that may be present in the different types of qualitative research.

Concepts of rigour or trustworthiness within qualitative research 31

Transferability: the extent to which the presented study allows readers to make connections between the study’s data and wider community settings, ie, transfer conceptual findings to other contexts.

Credibility: extent to which a research account is believable and appropriate, particularly in relation to the stories told by participants and the interpretations made by the researcher.

Reflexivity: refers to the researchers’ continuous examination and explanation of how they have influenced the research project, from choosing a research question to sampling, data collection, analysis and interpretation of data.

Transparency: making explicit the whole research process from sampling strategies, data collection to analysis. The rationale for decisions made is as important as the decisions themselves.

However, we often talk about these concepts in general terms, and it might be helpful to give some explicit examples of how the ‘technical processes’ affect these, for example, partialities related to:

Selection: recruiting participants via gatekeepers, such as healthcare professionals or clinicians, who may select them based on whether they believe them to be ‘good’ participants for interviews/focus groups.

Data collection: a poor interview guide with closed questions that encourage yes/no answers and/or leading questions.

Reflexivity and transparency: where researchers may focus their analysis on preconceived ideas rather than ground their analysis in the data and do not reflect on the impact of this in a transparent way.

The lack of tailored, method-specific appraisal tools has potentially contributed to the poor uptake and use of qualitative research for informing evidence-based decision making. To improve this situation, we propose the need for more robust quality appraisal tools that explicitly encompass the core design aspects of all qualitative research (sampling, data collection, analysis) while also considering the specific partialities that can arise with different methodological approaches. Such tools might draw on the strengths of current frameworks and checklists while providing users with sufficient understanding of concepts of rigour in relation to the different types of qualitative methods. We provide an outline of such tools in the third and final paper in this series.

As qualitative research becomes ever more embedded in health science research, and in order for that research to have better impact on healthcare decisions, we need to rethink critical appraisal and develop tools that allow differentiated evaluations of the myriad of qualitative methodological approaches rather than continuing to treat qualitative research as a single unified approach.

  • CASP (Critical Appraisal Skills Programme) [date unknown]. http://www.phru.nhs.uk/Pages/PHD/CASP.htm
  • The Joanna Briggs Institute. JBI QARI critical appraisal checklist for interpretive & critical research. Adelaide: The Joanna Briggs Institute, 2014.

Contributors VW and DN conceived the idea for this article. VW wrote the first draft. AMB and DN contributed to the final draft. All authors approved the submitted article.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Correction notice This article has been updated since its original publication to include a new reference (reference 1.)



Evaluating Research – Process, Examples and Methods

Evaluating Research

Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.
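
To make ‘adequate sample size’ concrete, the following minimal Python sketch (assuming a simple random sample and a normal approximation for a proportion) shows how the margin of error shrinks as the sample grows; the numbers are purely illustrative.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# How precisely does a sample of a given size estimate a population proportion?
for n in (100, 500, 1000):
    print(f"n={n:5d}  margin of error = +/-{margin_of_error(n):.3f}")
# n=  100  margin of error = +/-0.098
# n=  500  margin of error = +/-0.044
# n= 1000  margin of error = +/-0.031
```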

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures and the procedures used to collect the data.

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.
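
As a minimal illustration of matching the test to the data, the sketch below (with invented values) checks an assumption of normality before choosing between a parametric and a non-parametric comparison of two groups.

```python
from scipy.stats import shapiro, ttest_ind, mannwhitneyu

# Hypothetical outcome scores for two groups
group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]
group_b = [4.2, 4.5, 4.0, 4.4, 4.6, 4.1, 4.3, 4.8]

# Shapiro-Wilk tests the null hypothesis that the data are normally distributed
normal_a = shapiro(group_a).pvalue > 0.05
normal_b = shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    stat, p = ttest_ind(group_a, group_b)     # parametric comparison of means
    test_used = "independent-samples t-test"
else:
    stat, p = mannwhitneyu(group_a, group_b)  # non-parametric alternative
    test_used = "Mann-Whitney U test"

print(f"{test_used}: statistic = {stat:.2f}, p = {p:.4f}")
```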

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Evaluating Research Methods are as follows:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal : Critical appraisal involves systematically evaluating a study based on specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication : Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis : Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies (a minimal pooling sketch is shown after this list).
  • Consultation with experts : Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.
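
As a rough illustration of the pooling idea behind meta-analysis, the sketch below applies inverse-variance (fixed-effect) weighting to three hypothetical study estimates; a real meta-analysis would also assess heterogeneity and may use a random-effects model.

```python
import math

def fixed_effect_pool(estimates, standard_errors):
    """Inverse-variance (fixed-effect) pooling of study estimates,
    e.g., mean differences or log odds ratios."""
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Three hypothetical studies reporting mean differences and their standard errors
estimates = [0.30, 0.10, 0.25]
standard_errors = [0.12, 0.20, 0.15]

pooled, ci = fixed_effect_pool(estimates, standard_errors)
print(f"pooled estimate = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```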

Example of Evaluating Research

Example of Evaluating Research sample for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique : Convenience sampling

  • Sample Size: The sample size of 500 college students is moderate and may provide reasonable statistical precision. However, representativeness depends less on size than on how participants were selected; a larger sample drawn with a random sampling technique would strengthen generalizability.
  • Sampling Technique : Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study since the participants are self-selected and may not be representative of the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis is used to examine the relationship between two or more variables. However, the study did not provide information about the statistical significance of the results or the effect sizes.

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.
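
To illustrate the kind of reporting the appraisal above found missing, the hedged sketch below fits a regression on entirely hypothetical, simulated data and reports an effect size with its 95% confidence interval, p-value and R²; the variable names and numbers are invented for demonstration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
hours_social_media = rng.uniform(0, 6, 500)                              # hypothetical predictor
anxiety_score = 40 + 2.0 * hours_social_media + rng.normal(0, 10, 500)   # hypothetical outcome

X = sm.add_constant(hours_social_media)
fit = sm.OLS(anxiety_score, X).fit()

slope = fit.params[1]
low, high = fit.conf_int()[1]
print(f"effect size = {slope:.2f} points per hour of use, "
      f"95% CI ({low:.2f}, {high:.2f}), p = {fit.pvalues[1]:.3g}, R^2 = {fit.rsquared:.2f}")
```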

Note: The above example is only a sample for students. Do not copy and paste it directly into your assignment; do your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources : By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality : Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making : Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education : Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings : By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps : By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity : Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics of Evaluating Research

The characteristics to consider when evaluating research are as follows:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling : The sample should be representative of the population of interest and the sampling method should be appropriate for the research question and study design.
  • Data collection : The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results : The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results : The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field : The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity : By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge : Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research : Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field : Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

Limitations of Evaluating Research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity : Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias : Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.




Critical Appraisal of Clinical Research

Azzam Al-Jundi

1 Professor, Department of Orthodontics, King Saud bin Abdul Aziz University for Health Sciences-College of Dentistry, Riyadh, Kingdom of Saudi Arabia.

Salah Sakka

2 Associate Professor, Department of Oral and Maxillofacial Surgery, Al Farabi Dental College, Riyadh, KSA.

Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research and the patient’s values and expectations into the decision-making process for patient care. It is a fundamental skill to be able to identify and appraise the best available evidence in order to integrate it with your own clinical experience and patients’ values. The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to your clinical practice.

Introduction

Decisions related to patient care are carefully made through an essential process of integrating the best existing evidence, clinical experience and patient preferences. Critical appraisal is the course of action for carefully and systematically examining research to assess its reliability, value and relevance in order to direct professionals in their vital clinical decision making [ 1 ].

Critical appraisal is essential to:

  • Combat information overload;
  • Identify papers that are clinically relevant;
  • Support Continuing Professional Development (CPD).

Carrying out Critical Appraisal:

Assessing the research methods used in the study is a prime step in its critical appraisal. This is done using checklists which are specific to the study design.

Standard Common Questions:

  • What is the research question?
  • What is the study type (design)?
  • Selection issues.
  • What are the outcome factors and how are they measured?
  • What are the study factors and how are they measured?
  • What important potential confounders are considered?
  • What is the statistical method used in the study?
  • Statistical results.
  • What conclusions did the authors reach about the research question?
  • Are ethical issues considered?

The Critical Appraisal starts by double checking the following main sections:

I. Overview of the paper:

  • The publishing journal and the year
  • The article title: Does it state key trial objectives?
  • The author (s) and their institution (s)

The presence of a peer review process in journal acceptance protocols also adds robustness to the assessment criteria for research papers and hence would indicate a reduced likelihood of publication of poor quality research. Other areas to consider may include authors’ declarations of interest and potential market bias. Attention should be paid to any declared funding or the issue of a research grant, in order to check for a conflict of interest [ 2 ].

II. ABSTRACT: Reading the abstract is a quick way of getting to know the article and its purpose, major procedures and methods, main findings, and conclusions.

  • Aim of the study: It should be well and clearly written.
  • Materials and Methods: The study design, the type of groups, the randomization process, the sample size, the gender and age of participants, the procedure rendered to each group and the measuring tool(s) should be clearly stated.
  • Results: The measured variables with their statistical analysis and significance.
  • Conclusion: It must clearly answer the question of interest.

III. Introduction/Background section:

An excellent introduction will thoroughly reference earlier work related to the area under discussion and convey the importance and limitations of what is already known [ 2 ].

-Why is this study considered necessary? What is the purpose of this study? Was the purpose identified before the study, or was a chance result revealed as part of ‘data searching’?

-What has already been achieved, and how does this study differ?

-Does the scientific approach outline the advantages along with possible drawbacks associated with the intervention or observations?

IV. Methods and Materials section : Full details of how the study was actually carried out should be given. Precise information is provided on the study design, the population, the sample size and the interventions presented. All measurement approaches should be clearly stated [ 3 ].

V. Results section : This section should clearly reveal what actually happened to the subjects. The results may contain raw data and should explain the statistical analysis. These can be shown in related tables, diagrams and graphs.

VI. Discussion section : This section should compare what is already known on the topic of interest with the clinical relevance of what has been newly established. Possible limitations and the need for further studies should also be discussed.

Does it summarize the main findings of the study and relate them to any deficiencies in the study design or problems in the conduct of the study?

  • Does it address any source of potential bias?
  • Are interpretations consistent with the results?
  • How are null findings interpreted?
  • Does it mention how do the findings of this study relate to previous work in the area?
  • Can they be generalized (external validity)?
  • Does it mention their clinical implications/applicability?
  • What are the results/outcomes/findings applicable to and will they affect a clinical practice?
  • Does the conclusion answer the study question?
  • Is the conclusion convincing?
  • Does the paper indicate ethics approval?
  • Can you identify potential ethical issues?
  • Do the results apply to the population in which you are interested?
  • Will you use the results of the study?

Once you have answered the preliminary and key questions and identified the research method used, you can incorporate specific questions related to each method into your appraisal process or checklist.

1-What is the research question?

For a study to gain value, it should address a significant problem within healthcare and provide new or meaningful results. A useful structure for assessing the problem addressed in the article is the Patient/Problem, Intervention, Comparison, Outcome (PICO) method [ 3 ].

P = Patient/Problem/Population: It involves identifying whether the research has a focused question. What is the chief complaint? For example, disease status, previous ailments, current medications.

I = Intervention: An appropriately and clearly stated management strategy, for example, a new diagnostic test, treatment or adjunctive therapy.

C = Comparison: A suitable control or alternative, for example, specific and limited to one alternative choice.

O = Outcomes: The desired results or patient-related consequences have to be identified, for example, eliminating symptoms, improving function or esthetics.

The clinical question determines which study designs are appropriate. There are five broad categories of clinical questions, as shown in [ Table/Fig-1 ].

[Table/Fig-1]:

Categories of clinical questions and the related study designs.

2- What is the study type (design)?

The study design of the research is fundamental to the usefulness of the study.

In a clinical paper the methodology employed to generate the results is fully explained. In general, all questions about the related clinical query, the study design, the subjects and the correlated measures to reduce bias and confounding should be adequately and thoroughly explored and answered.

Participants/Sample Population:

Researchers identify the target population they are interested in. A sample population is therefore taken and results from this sample are then generalized to the target population.

The sample should be representative of the target population from which it came. Knowing the baseline characteristics of the sample population is important because this allows researchers to see how closely the subjects match their own patients [ 4 ].

Sample size calculation (Power calculation): A trial should be large enough to have a high chance of detecting a worthwhile effect if it exists. Statisticians can work out before the trial begins how large the sample size should be in order to have a good chance of detecting a true difference between the intervention and control groups [ 5 ].
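
As a minimal sketch of how such a calculation works, the snippet below uses the standard normal-approximation formula for comparing two means, n = 2(z_{1-α/2} + z_{1-β})²σ²/Δ² per group; the effect size, standard deviation, significance level and power are illustrative assumptions.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Participants needed per arm to detect a true mean difference `delta`
    with a two-sided test at significance `alpha` and the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
    return math.ceil(n)

# e.g., detecting a 5-point difference when the standard deviation is 10
print(n_per_group(delta=5, sd=10))   # -> 63 participants per group
```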

  • Is the sample defined? Human, Animals (type); what population does it represent?
  • Does it mention eligibility criteria with reasons?
  • Does it mention where and how the sample were recruited, selected and assessed?
  • Does it mention where was the study carried out?
  • Is the sample size justified? Rightly calculated? Is it adequate to detect statistical and clinical significant results?
  • Does it mention a suitable study design/type?
  • Is the study type appropriate to the research question?
  • Is the study adequately controlled? Does it mention type of randomization process? Does it mention the presence of control group or explain lack of it?
  • Are the samples similar at baseline? Is sample attrition mentioned?
  • All studies report the number of participants/specimens at the start of a study, together with details of how many of them completed the study and reasons for incomplete follow up if there is any.
  • Does it mention who was blinded? Are the assessors and participants blind to the interventions received?
  • Is it mentioned how was the data analysed?
  • Are any measurements taken likely to be valid?

Researchers use measuring techniques and instruments that have been shown to be valid and reliable.

Validity refers to the extent to which a test measures what it is supposed to measure.

(the extent to which the value obtained represents the object of interest.)

  • Soundness and effectiveness of the measuring instrument.
  • What does the test measure?
  • Does it measure what it is supposed to measure?
  • How well and how accurately does it measure?

Reliability: In research, the term reliability means “repeatability” or “consistency”.

Reliability refers to how consistent a test is on repeated measurements. It is important especially if assessments are made on different occasions and/or by different examiners. Studies should state the method for assessing the reliability of any measurements taken and what the intra-examiner reliability was [ 6 ].
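
One common way of quantifying such reliability for categorical ratings is Cohen’s kappa; the sketch below, using invented ratings from the same examiner on two occasions, shows the idea (for continuous measurements an intraclass correlation coefficient would typically be used instead).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical categorical ratings of 12 radiographs scored twice by one examiner
first_pass  = ["caries", "sound", "caries", "sound", "sound", "caries",
               "sound", "caries", "sound", "sound", "caries", "sound"]
second_pass = ["caries", "sound", "caries", "sound", "caries", "caries",
               "sound", "caries", "sound", "sound", "sound", "sound"]

kappa = cohen_kappa_score(first_pass, second_pass)
print(f"intra-examiner agreement (Cohen's kappa) = {kappa:.2f}")
```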

3-Selection issues:

The following questions should be raised:

  • - How were subjects chosen or recruited? If not random, are they representative of the population?
  • - Types of Blinding (Masking) Single, Double, Triple?
  • - Is there a control group? How was it chosen?
  • - How are patients followed up? Who are the dropouts? Why and how many are there?
  • - Are the independent (predictor) and dependent (outcome) variables in the study clearly identified, defined, and measured?
  • - Is there a statement about sample size issues or statistical power (especially important in negative studies)?
  • - If a multicenter study, what quality assurance measures were employed to obtain consistency across sites?
  • - Are there selection biases?
  • • In a case-control study, if exercise habits to be compared:
  • - Are the controls appropriate?
  • - Were records of cases and controls reviewed blindly?
  • - How were possible selection biases controlled (Prevalence bias, Admission Rate bias, Volunteer bias, Recall bias, Lead Time bias, Detection bias, etc.,)?
  • • Cross Sectional Studies:
  • - Was the sample selected in an appropriate manner (random, convenience, etc.,)?
  • - Were efforts made to ensure a good response rate or to minimize the occurrence of missing data?
  • - Were reliability (reproducibility) and validity reported?
  • • In an intervention study, how were subjects recruited and assigned to groups?
  • • In a cohort study, how many reached final follow-up?
  • - Are the subjects representative of the population to which the findings are applied?
  • - Is there evidence of volunteer bias? Was there adequate follow-up time?
  • - What was the drop-out rate?
  • - Any shortcoming in the methodology can lead to results that do not reflect the truth. If clinical practice is changed on the basis of these results, patients could be harmed.

Researchers employ a variety of techniques to make the methodology more robust, such as matching, restriction, randomization, and blinding [ 7 ].

Bias is the term used to describe an error at any stage of the study that was not due to chance. Bias leads to results that deviate systematically from the truth. As bias cannot be measured, researchers need to rely on good research design to minimize bias [ 8 ]. To minimize bias within a study, the sample should be representative of the population. It is also imperative to consider the sample size and to identify whether the study is adequately powered to detect a clinically important difference at the chosen significance level (conventionally, p<0.05) [ 9 ].

4-What are the outcome factors and how are they measured?

  • -Are all relevant outcomes assessed?
  • -Is measurement error an important source of bias?

5-What are the study factors and how are they measured?

  • -Are all the relevant study factors included in the study?
  • -Have the factors been measured using appropriate tools?

Data Analysis and Results:

- Were the tests appropriate for the data?

- Are confidence intervals or p-values given?

  • How strong is the association between intervention and outcome?
  • How precise is the estimate of the risk?
  • Does it clearly mention the main finding(s) and does the data support them?
  • Does it mention the clinical significance of the result?
  • Is adverse event or lack of it mentioned?
  • Are all relevant outcomes assessed?
  • Was the sample size adequate to detect a clinically/socially significant result?
  • Are the results presented in a way to help in health policy decisions?
  • Is there measurement error?
  • Is measurement error an important source of bias?

Confounding Factors:

A confounder has a triangular relationship with both the exposure and the outcome. However, it is not on the causal pathway. It makes it appear as if there is a direct relationship between the exposure and the outcome or it might even mask an association that would otherwise have been present [ 9 ].
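
A small simulation with invented probabilities can make this concrete: below, smoking drives both coffee drinking (the exposure) and heart disease (the outcome), so the crude analysis suggests coffee is associated with disease even though the stratum-specific odds ratios are close to 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
smoker = rng.random(n) < 0.3
coffee = rng.random(n) < np.where(smoker, 0.7, 0.3)     # smokers drink more coffee
disease = rng.random(n) < np.where(smoker, 0.15, 0.05)  # smoking causes disease; coffee does not

def odds_ratio(exposed, outcome):
    a = np.sum(exposed & outcome); b = np.sum(exposed & ~outcome)
    c = np.sum(~exposed & outcome); d = np.sum(~exposed & ~outcome)
    return (a * d) / (b * c)

print(f"crude OR (coffee vs disease): {odds_ratio(coffee, disease):.2f}")
print(f"OR among smokers only:        {odds_ratio(coffee[smoker], disease[smoker]):.2f}")
print(f"OR among non-smokers only:    {odds_ratio(coffee[~smoker], disease[~smoker]):.2f}")
# The crude odds ratio is well above 1; the stratum-specific odds ratios are ~1.
```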

6- What important potential confounders are considered?

  • -Are potential confounders examined and controlled for?
  • -Is confounding an important source of bias?

7- What is the statistical method in the study?

  • -Are the statistical methods described appropriate to compare participants for primary and secondary outcomes?
  • -Are statistical methods specified in sufficient detail (if I had access to the raw data, could I reproduce the analysis)?
  • -Were the tests appropriate for the data?
  • -Are confidence intervals or p-values given?
  • -Are results presented as absolute risk reduction as well as relative risk reduction?

Interpretation of p-value:

The p-value is the probability of obtaining a result at least as extreme as the one observed if the null hypothesis were true. By convention, a p-value of less than 1 in 20 (p<0.05) is considered statistically significant.

  • When the p-value is less than the significance level, which is usually 0.05, we reject the null hypothesis and the result is considered statistically significant. Conversely, when the p-value is greater than 0.05, we conclude that the result is not statistically significant and we fail to reject the null hypothesis.
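
A minimal sketch of this decision rule, using a two-sample t-test on hypothetical outcome scores:

```python
from scipy.stats import ttest_ind

# Hypothetical outcome scores for an intervention group and a control group
intervention = [12.1, 14.3, 13.8, 15.0, 12.9, 14.7, 13.5, 14.1]
control      = [11.0, 12.2, 11.8, 12.5, 13.0, 11.4, 12.1, 11.9]

t_stat, p_value = ttest_ind(intervention, control)
alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("statistically significant" if p_value < alpha else "not statistically significant")
```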

Confidence interval:

Multiple repetitions of the same trial would not yield exactly the same results every time. However, on average the results would fall within a certain range. A 95% confidence interval means that if the study were repeated many times, about 95% of the intervals calculated in this way would contain the true effect size.
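
This repeated-sampling interpretation can be demonstrated with a short simulation on hypothetical data: across many simulated trials, roughly 95% of the intervals calculated this way contain the true effect size.

```python
import numpy as np

rng = np.random.default_rng(42)
true_difference, n, trials, covered = 2.0, 50, 2000, 0

for _ in range(trials):
    treatment = rng.normal(10 + true_difference, 5, n)
    control = rng.normal(10, 5, n)
    diff = treatment.mean() - control.mean()
    se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
    low, high = diff - 1.96 * se, diff + 1.96 * se
    covered += (low <= true_difference <= high)

print(f"coverage = {covered / trials:.1%}")   # close to 95%
```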

8- Statistical results:

  • -Do statistical tests answer the research question?

Are statistical tests performed and comparisons made (data searching)?

Correct statistical analysis of results is crucial to the reliability of the conclusions drawn from the research paper. Depending on the study design and sample selection method employed, descriptive or inferential statistical analysis may be carried out on the results of the study.

It is important to identify if this is appropriate for the study [ 9 ].

  • -Was the sample size adequate to detect a clinically/socially significant result?
  • -Are the results presented in a way to help in health policy decisions?

Clinical significance:

Statistical significance as shown by the p-value is not the same as clinical significance. Statistical significance judges whether treatment effects are explicable as chance findings, whereas clinical significance assesses whether treatment effects are worthwhile in real life. Small improvements that are statistically significant might not result in any meaningful improvement clinically. The following questions should always be kept in mind:

  • -If the results are statistically significant, do they also have clinical significance?
  • -If the results are not statistically significant, was the sample size sufficiently large to detect a meaningful difference or effect?
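
A small worked example with hypothetical event rates shows why a statistically significant relative effect may still be of limited clinical importance.

```python
# Hypothetical event rates from a large trial
control_event_rate = 0.020     # 2.0% of control patients have the event
treatment_event_rate = 0.015   # 1.5% of treated patients have the event

arr = control_event_rate - treatment_event_rate   # absolute risk reduction
rrr = arr / control_event_rate                     # relative risk reduction
nnt = 1 / arr                                      # number needed to treat

print(f"RRR = {rrr:.0%}, ARR = {arr:.1%}, NNT = {nnt:.0f}")
# RRR = 25%, ARR = 0.5%, NNT = 200: a 25% relative reduction that requires
# treating about 200 patients to prevent one event.
```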

9- What conclusions did the authors reach about the study question?

Conclusions should ensure that the recommendations made are supported by the results obtained and remain within the scope of the study. The authors should also address the limitations of the study, their effects on the outcomes, and propose suggestions for future studies [ 10 ].

  • -Are the questions posed in the study adequately addressed?
  • -Are the conclusions justified by the data?
  • -Do the authors extrapolate beyond the data?
  • -Are shortcomings of the study addressed and constructive suggestions given for future research?
  • -Bibliography/References:

Do the citations follow one of the Council of Biology Editors’ (CBE) standard formats?

10- Are ethical issues considered?

If a study involves human subjects, human tissues, or animals, was approval from appropriate institutional or governmental entities obtained? [ 10 , 11 ].

Critical appraisal of RCTs: Factors to look for:

  • Allocation (randomization, stratification, confounders).
  • Follow up of participants (intention to treat).
  • Data collection (bias).
  • Sample size (power calculation).
  • Presentation of results (clear, precise).
  • Applicability to local population.

[ Table/Fig-2 ] summarizes the guidelines for Consolidated Standards of Reporting Trials CONSORT [ 12 ].

[Table/Fig-2]:

Summary of the CONSORT guidelines.

Critical appraisal of systematic reviews: Systematic reviews provide an overview of all primary studies on a topic and try to obtain an overall picture of the results.

In a systematic review, all the primary studies identified are critically appraised and only the best ones are selected. A meta-analysis (i.e., a statistical analysis) of the results from selected studies may be included. Factors to look for:

  • Literature search (did it include published and unpublished materials as well as non-English language studies? Was personal contact with experts sought?).
  • Quality-control of studies included (type of study; scoring system used to rate studies; analysis performed by at least two experts).
  • Homogeneity of studies.

[ Table/Fig-3 ] summarizes the guidelines for Preferred Reporting Items for Systematic reviews and Meta-Analyses PRISMA [ 13 ].

[Table/Fig-3]:

Summary of PRISMA guidelines.

Critical appraisal is a fundamental skill in modern practice for assessing the value of clinical research and providing an indication of its relevance to the profession. It is a skill set developed throughout a professional career that facilitates this and, through integration with clinical experience and patient preference, permits the practice of evidence-based medicine and dentistry. By following a systematic approach, such evidence can be considered and applied to clinical practice.

Financial or other Competing Interests
