Front Psychol

A Phenomenological Paradigm for Empirical Research in Psychiatry and Psychology: Open Questions

Leonor Irarrázaval

1 Section Phenomenological Psychopathology and Psychotherapy, Psychiatric Department, University Clinic Heidelberg, Heidelberg, Germany

2 Centro de Atención Psicológica, Facultad de Ciencias Sociales y Humanidades Sede Talca, Universidad Autónoma de Chile, Talca, Chile

This article seeks to clarify the way in which phenomenology is conceptualized and applied in empirical research in psychiatry and psychology, emphasizing the suitability of qualitative research. It addresses the “What,” “Why,” and “How” of phenomenological interviews, providing not only preliminary answers but also a critical analysis, and pointing to future directions for research. The questions it asks are: First, what makes an interview phenomenological? What are phenomenological interviews used for in empirical research in psychiatry and psychology? Second, why do we carry out phenomenological interviews with patients? Is merely contrasting phenomenological hypotheses or concepts enough to do justice to the patients’ involvement? Third, how should we conduct phenomenological interviews with patients? How can we properly perform analysis in empirical phenomenological research in psychiatry and psychology? In its conclusion, the article attempts to go a step beyond these methodological questions, highlighting the “bigger picture”: namely, the phenomenological scientific paradigm and its core philosophical claim that reality is mind-dependent.

Introduction

An initial proposal in favor of “naturalizing phenomenology” was presented in the article “First-person methodologies: What, Why, How?” published by Varela and Shear (1999) in the Journal of Consciousness Studies. The authors were concerned not only with the need for a method in the cognitive sciences to obtain empirically based descriptions of the subject, but also with providing the basis for a “science of consciousness.” “Neurophenomenology” was proposed by Varela (1996) as a means of linking first- and third-person perspectives through a systematic examination of subjective experience within experimental settings. An important requirement of neurophenomenology was that both experimenter and experimental subject must learn the Husserlian phenomenological method. The notion “phenomenology” was employed in the etymological sense of the term, that is, “the study of that which appears” (from Greek phainómenon, “that which appears,” and lógos, “study”). Additionally, Varela (1990) coined the term “enactive,” meaning not to act out or to perform as on a stage, but to “enact,” that is, to “bring forth” or to “emerge” (hervorbringen, in German), as it is used in the phenomenological tradition. Accordingly, the phenomenological method was conceived and applied as a form of training one’s attention to that which “appears” in the subject’s conscious experience, making it similar to a meditation technique. Examples of neurophenomenology are the experiments led by Lutz et al. (2002), which analyzed subjective reports, reaction times, and brain activity. However, a different approach was proposed by Gallagher (2003), who claimed that a “phenomenologically enlightened experimental science” means incorporating concepts and distinctions from phenomenological analysis into the actual design of an experiment. In contrast to neurophenomenology, this approach does not require learning the Husserlian phenomenological method or even making first-person reports in the experiments.
Examples of “front-loaded phenomenology” are neuroimaging experiments employing the phenomenological distinction between “sense of agency” and “sense of ownership” in involuntary movement (Ruby and Decety, 2001; Chaminade and Decety, 2002; Farrer and Frith, 2002).

However, experimental designs are normally not classified as part of qualitative research methodologies (Fischer, 2006; Maxwell, 2011, 2012; Patton, 2015; Creswell and Poth, 2018). One of the clearest differences between qualitative and quantitative approaches is that qualitative research is carried out in everyday natural conditions, rather than in experimental settings. Concerning the qualitative/quantitative distinction, there is an ongoing debate not only around the differences between the two approaches (Morgan, 2018; Maxwell, 2019), but also around whether they are actually distinguishable at all (Hammersley, 2018). Whatever their differences or similarities, qualitative and quantitative approaches are commonly conceived as compatible, and their integration – in the form of mixed-methods research designs – as valuable (Tashakkori and Teddlie, 2010). The incorporation of phenomenological interviews into experimental designs is thus one kind of mixed-methods research design; neurophenomenology is one example, where the qualitative component is provided by phenomenology. Broadly speaking, qualitative research is used in many social sciences and humanities disciplines, including psychology, sociology, political science, and anthropology. A range of techniques are employed in qualitative research to gather experiential data, such as open-ended interviews, direct observation, focus groups, and document analysis (e.g., clinical records and personal diaries), and different methods are used for the associated qualitative data analysis, including phenomenology, ethnography, narrative analysis (e.g., biographical and life story studies), case studies, and grounded theory.
In contrast to the large sample sizes needed in quantitative research to accomplish statistical validation of the results, qualitative research is characterized by an in-depth approach, which means working with few cases, with representativeness not being of such key importance (Barbour and Barbour, 2003). The use of less structured methods allows for the emergence of idiographic descriptions, personal beliefs, and meanings, thus addressing the experiential processes of the subjects being studied (Schwartz and Jacobs, 1979; Barbour, 2000; Maxwell, 2011, 2012, 2019).

This article shall not focus on experimental phenomenology. However, this is in no way meant to discredit this form of research design. Indeed, mention has already been made of the precursors of the experimental application of phenomenology to acknowledge the important contribution this research tradition has made – and continues to make – in ensuring that phenomenology acquires a scientific status. For instance, the project of “cardiophenomenology” has recently been proposed by Depraz and Desmidt (2019) as a refinement of Varela’s neurophenomenology and performed in experimental studies of surprise in depression (Depraz et al., 2017). In addition, it is worth mentioning Martiny’s (2017) transdisciplinary research on the phenomenological and neurological aspects of living with brain damage, specifically cerebral palsy. Martiny’s work has not only been influenced by, but also seeks to revitalize, Varela’s “radical” proposal, reminding us of the importance of working with openness and a change of mindset in cognitive science. Usually framed as “embodied cognition,” this proposal approaches the mind as embodied, embedded, enacted, and extended (4E cognition), implying an awareness that the “embodied” notion applies not only to the mind of the experimental subject but also to the cognitive scientist carrying out the research (Depraz et al., 2003). Indeed, phenomenology has breached the frontiers of the philosophical discipline to influence the development of interdisciplinary fields of study bridging the biomedical sciences and the humanities. Besides its application in the cognitive sciences, phenomenology is currently being widely applied in empirical research in healthcare-related disciplines, mostly in psychiatry and psychology.
The most influential empirical application of phenomenology has been in the field of psychopathology, with the development of phenomenological interviews for the investigation of schizophrenia spectrum disorders (Parnas et al., 2005; Sass et al., 2017). However, the extent of phenomenology’s applicability outside the strict domain of philosophy is currently a topic of intense debate and controversy (Zahavi and Martiny, 2019). The conceptualization of phenomenology in the qualitative research literature, which has been mostly developed in North America, is not always in line with that of the continental European philosophical tradition. Recent years have seen the start of a dialogue bridging the two traditions, qualitative research and philosophical phenomenology, promising fruitful collaboration in the future.

This article will address the “What,” “Why,” and “How” of phenomenological interviews, reviewing recent empirical research in the field of phenomenological psychopathology and psychotherapy. It is important to note that qualitative research, as described above, refers to empirical research, not to basic or theoretical investigations. Phenomenological qualitative research in psychology has been developed using Husserlian concepts such as the “epoché” and the “phenomenological reduction,” and it is precisely on the use of such conceptualizations that most of the current discussion has centered. The article, therefore, will not attempt to provide a broad understanding of the phenomenological tradition. Instead, it will focus on a more specific discussion of methodological issues concerning the empirical application of phenomenology in qualitative research in psychiatry and psychology, and Husserl’s methodology in particular. To do so, we first need to agree that the application of phenomenology to empirical research in psychiatry and psychology employing interviews is qualitative, not quantitative. In a strict sense, quantitative methodology based on the frequency and severity of the patients’ anomalous experience, although necessary for the statistical validation of the interviews, goes beyond the scope of phenomenology. According to the phenomenological approach, mental disorders cannot be reduced to a cerebral organic basis, nor to numbers, as they are not entities per se but psychopathological configurations that can be identified in the diagnostic process of interaction between a clinician and a patient (Fuchs, 2010a; Pallagrosi et al., 2014; Pallagrosi and Fonzi, 2018; Gozé et al., 2019). Consequently, phenomenological interviews are designed to address not objective, but subjective data, namely the what it is like of patients’ anomalous experiences.
In this way, the patients’ descriptions of their subjective experiences are not conceived as “static” entities but, rather, as part of dynamic, open-ended developing processes and interpretations (Martiny, 2017).

What makes an interview “phenomenological”? What are phenomenological interviews used for in empirical research in psychiatry and psychology?

Medical psychiatric diagnosis relies on standardized manuals providing a description of the apparent symptomatology and mostly excludes any assessment of subjective experience (Mishara, 1994; Parnas and Zahavi, 2002; Fuchs, 2010a). Under this approach, research in psychiatry has mainly developed from a third-person perspective, using the methods of the physical and natural sciences. Biomedical psychiatry has prioritized the use of quantitative methods and statistical analysis, whereas the value of qualitative in-depth analysis has been underestimated. The preferred experimental design has been the randomized controlled trial, used to demonstrate the efficacy of treatments involving psychoactive drugs (Deacon, 2013; Deacon and McKay, 2015). An alternative conceptual model comes from the phenomenological tradition of psychopathology. In order to understand and conceptualize the anomalous experience of a given mental illness, phenomenological diagnosis highlights the importance of assessing patients’ subjectivity. Over the last two decades, phenomenological interviews have been developed to complement standardized diagnostic systems such as the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (American Psychiatric Association, 2013) and the International Statistical Classification of Diseases and Related Health Problems (ICD-10) (World Health Organization, 2012). The most important phenomenological interviews are the Examination of Anomalous Self-experience (EASE; Parnas et al., 2005) and its supplement, the Examination of Anomalous World Experience (EAWE; Sass et al., 2017). These interviews have been inspired by the Husserlian tradition and have incorporated classical descriptions from phenomenological psychopathology (particularly from Blankenburg, Conrad, and Minkowski, among other authors).
Their semi-structured design allows for an in-depth examination of the patients’ subjective experiences within formal structures, such as corporeality, temporality, spatiality, and intersubjectivity. In this way, the descriptive task is not carried out on a totally random basis, as the interviews have specific domains and items that have already been established to guide the examination of the patient’s experience. EASE and EAWE were developed with the chief purpose of exploring and better understanding patients’ experiential and behavioral manifestations of schizophrenia spectrum disorders. These interviews offer comprehensive descriptions of disorders of the pre-reflexive self or ipseity (Sass, 1992; Parnas and Handest, 2003; Sass and Parnas, 2003; Parnas and Sass, 2008; Raballo et al., 2009; Fuchs, 2010b, 2013a; Sass et al., 2018). Indeed, EASE and EAWE have had a great international impact on clinical practice and empirical research in psychiatry and psychology, and EASE has been translated into more than 10 languages, among them German, Danish, Spanish, Italian, and French.

EASE and EAWE describe aspects of the patients’ anomalous experience that are relevant not only for diagnostic but also for psychotherapeutic purposes, as they can be useful as tools both in psychotherapeutic settings and in psychotherapy research. However, phenomenological psychopathology has focused primarily on the issue of psychiatric diagnosis, while the treatment of mental illness has remained less developed. Only in recent years has the treatment of mental illness become the focus of stronger research interest, directly involving the practice of psychotherapy (Fuchs et al., 2019). For its part, although not rooted in phenomenology, body-oriented therapy has been linked to a phenomenological framework, as it provides empirical evidence for embodiment-approach conceptualizations (Fuchs, 2005; Fuchs and Schlimme, 2009; Koch and Fuchs, 2011; Fuchs and Koch, 2014). The embodiment approach regards schizophrenia as a fundamental disturbance of embodiment, namely a “disembodiment,” that entails a diminishment of the basic sense of self, a disruption of implicit bodily functioning and, as a result, a disconnection from intercorporeality with others. A range of empirical research into body-oriented therapy has been carried out in the field of phenomenological psychopathology. Empirical evidence of the effectiveness of body-oriented therapy for schizophrenia has been obtained from quantitative research carried out with manualized interventions (Röhricht and Papadopoulos, 2010) and using randomized controlled trials to measure outcomes (Martin et al., 2016). Recent research has incorporated phenomenological interviews to describe therapeutic change processes in body-oriented therapy for schizophrenia, thus explaining the relationship between processes and outcomes (Galbusera et al., 2018).
Unsurprisingly, the phenomenological interviews revealed an understanding of change as a recovery of a “sense of self” in patients with schizophrenia (Galbusera et al., 2019).

The conceptualization of schizophrenia as a disorder of the self is shared by a number of philosophical and clinical approaches: it is not exclusive to phenomenological psychiatry (Parnas and Henriksen, 2014). So, in much the same way as body therapy has been “converted” to phenomenology, any other psychotherapeutic approach might well incorporate “front-loaded phenomenology,” in the sense of the possibility of being linked to the phenomenological framework. This is especially the case given that the effectiveness of psychotherapy has been widely evidenced and recognized independently of its theoretical framework (Campbell et al., 2013). For instance, narrative/dialogical psychotherapy addressing schizophrenia as a disorder of the self might be consistent with the phenomenological conceptualization and could even serve as a complement to body-oriented therapy. In fact, EASE’s and EAWE’s rich descriptions provide evidence that patients with schizophrenia are able to communicate their experience in a comprehensive narrative form, quite contrary to Martin et al.’s (2016) claim that verbal dialogue can be difficult for patients with severe mental disorders. A suitable alternative might be the “metacognitive model” (Lysaker et al., 2018a). Under this model, deficits in metacognition undermine the availability of a sense of self, others, and the world, making it difficult to provide an adequate response to everyday-life situations. To deal with this, the so-called metacognitive reflection and insight therapy (MERIT) has been designed to target metacognition and recover the availability of a sense of self in the patients’ experience (Lysaker et al., 2018b).
Precisely because contemporary phenomenological psychiatry places particular emphasis on the bodily and pre-reflective level of experience, the use of phenomenological interviews to explore change processes in MERIT might reveal interesting relationships between pre-reflexive and reflective forms of self-experience.

Does psychotherapy need to be rooted in the phenomenological tradition in order to be called “phenomenological”? Here we are talking about enterprises such as Freud’s psychoanalysis or Binswanger’s existential analysis/Daseinsanalysis. Such an enterprise requires a well-achieved and comprehensive conceptualization of phenomenological psychopathology as well as a consequent psychotherapeutic intervention rooted in the same phenomenological conceptualization. Certainly, psychotherapy does not need to be rooted in phenomenology, although this enterprise, not a minor one, might be worth undertaking. Yet the very essence of phenomenological psychotherapy is to remain faithful to the patient’s self-experience and their constitutive vulnerability (Fuchs, 2013b; Irarrázaval, 2013, 2018; Irarrázaval and Sharim, 2014; Škodlar and Henriksen, 2019). Consequently, the development of integrative models of psychotherapy, both bodily and narrative/dialogical, addressing the patients’ experience of vulnerability is definitely a future challenge.

Why do we carry out phenomenological interviews with patients? Is merely contrasting phenomenological hypotheses or concepts enough to justify the patients’ involvement?

The justification for empirical research employing phenomenological interviews is extremely important, especially when persons with mental illnesses are involved. It is not only a matter of gathering data on the patients’ experience but also of what to do with these data and, in the end, what for. It is an ethical issue concerning the impact phenomenological interviews might have on the patients interviewed. Any interview aimed at exploring the experience of a patient always involves some kind of intervention, so even when applied by accredited, experienced clinicians, an ethical justification is required. Arguments before ethics committees that phenomenological interviews are beneficial and do not worsen patients’ instability need to be convincing. Asking patients to recall and re-enact the disturbing experiences we aim to grasp is certainly an intervention that needs justification. Obviously, phenomenological interviews are not psychotherapeutic interventions in themselves – that is, the dialogue in psychotherapy is not an interview – but they can be justified on grounds similar to those usually employed by psychotherapy: the possibility of sharing anomalous experiences through accepting and understanding communication helps patients to recover a sense of familiarity with their experience, thus reducing their sense of self-alienation. Furthermore, by means of the descriptive tasks called for in the semi-structured interviews, patients improve their articulation of anomalous experiences, which might otherwise have been overlooked, neglected, or even have remained ineffable for them (Zahavi and Martiny, 2019).

Phenomenological interviews have been simply defined as falling within the framework of an interview “which is informed by insights and concepts from the phenomenological tradition and (which) in turn informs a phenomenological investigation” (Høffding and Martiny, 2016, p. 540). However, phenomenological interviews involving patients with mental illness should not only be consistent with insights and concepts from the phenomenological tradition of philosophy and psychopathology but, most importantly, must make explicit their contribution to both diagnosis and psychotherapy. While a biomedical psychiatric diagnosis is ultimately oriented toward finding a suitable pharmacological treatment, a phenomenological diagnosis is ultimately oriented toward providing a treatment based on the experiential dimension of a given mental illness. The interest of a psychotherapist goes beyond the psychiatric diagnostic emphasis by approaching the patient as a whole person, aiming to understand the anomalies of experience within his/her social, cultural, and historical context. This broader, psychological approach enables an understanding not only of how patients make sense of their anomalous experiences but also of how symptoms manifest themselves within the patients’ immediate life context, as well as how a certain mental illness configures itself along the patients’ history of meaningful interactions with others (Irarrázaval and Sharim, 2014; Irarrázaval, 2018). However, in spite of the importance given to the analysis of the patients’ biography by several authors from the phenomenological tradition of psychopathology (Jaspers, Binswanger, and Blankenburg, among others), “biographical methods,” originally developed for sociological research in the influential “Chicago School” (Bornat, 2008), have not been sufficiently incorporated into current phenomenological empirical research in psychiatry and psychology.

How should we conduct phenomenological interviews with patients? How can we properly perform analysis in empirical phenomenological research in psychiatry and psychology?

A phenomenological interview involves a second-person situation, in which the dialogical communication with the patient is crucial. No matter how strange or unrealistic the patients’ anomalous experiences might appear to the interviewer, an attitude of professional competence and familiarity is necessary (Nordgaard et al., 2013). For the patient, anomalous experiences are actually lived experiences despite their lack of commonsensical validity. Hallucinations and delusions are, like nonpsychotic experiences, first-personally given, which means that they have a solipsistic validity. This is one of the reasons why it is difficult, especially in psychotic phases, for patients to come to terms with the fact that what they actually experience is not credible or real in the eyes of others, and even abnormal or pathological in the eyes of the clinician. Clearly, the interviewer’s role is not to confront or contradict this lack of commonsensical validity, but simply to grasp the experiences as they appear to the patients. In other words, the interviewer conducts the interview with an attitude of empathetic understanding. Empathy should not be reduced to an attempt to understand the patient in a “representational” manner, in the sense that it does not refer to the interviewer’s own experience of processing (imitating, thinking, or imagining) the patient’s subjectivity (Irarrázaval, 2019). Empathy is the condition of possibility for the “subject-subject” relationship (Zahavi, 2015). That is to say, empathy is a distinct mode of other-directed intentionality that permits the unfolding of the patient’s experience, approached as a unique other person. In this sense, empathic understanding permits the unfolding of the what it is like of the patient’s anomalous experience.

In phenomenological interviews, why-like questions lead patients to respond with causal explanations of the anomalies of their experience or diagnosed mental illness, such as judgments, beliefs, theoretical constructions, etc. How-like questions, for their part, guide patients to describe the way in which they live their experience, that is, the way in which the anomalies actually appear to the patients in their experience. To put it another way, both types of questions lead patients to talk about experiential contents, but in different ways: causal attributions in the former, and appearances in the latter. This is not to say that causal attributions are irrelevant aspects of the patient’s experience, not worth addressing in the interview. The way in which patients attribute causes to their anomalous experience or mental illness can also provide valuable information for both diagnosis and psychotherapy. Moreover, the relationship between causal attributions and appearances is certainly valuable, as it entails a circular, dynamic process in which both orders of experiencing constantly influence one another. However, the gathering of phenomenological data is generally not aimed at obtaining causal explanations or attributional reports, as in the case of cognitive psychology, but mainly at exploring aspects of experience that how-like questions are designed to unfold.

Turning to data analysis, it has been said that phenomenology is interested in describing the formal structure of experience rather than its content (Gallagher and Zahavi, 2008), but what does this actually mean? It seems difficult to imagine an experience as a mere structure without any content. Moreover, it is not possible to establish a category of experience that has not previously been built upon some content analysis. In qualitative studies, categories are built on the basis of prior content analysis; both hypotheses and categories are developed as the study progresses and emerge from the data itself (Morrow, 2005; Maxwell, 2012), the so-called “iterative process” (Barbour and Barbour, 2003). EASE and EAWE were built by collecting first-person descriptions from a significant number of patients (around 100 each), which allowed for their statistical validation. However, only a fairly general description has been provided of how EASE’s domains and items were developed: singular contents of anomalous experience are conceptualized and interconnected within a comprehensive system of meaningful structural wholes or Gestalts, leading to the “core” underlying psychopathological configuration (Nordgaard et al., 2013). A recent qualitative study on the responses to the two scales highlights the specificities of the phenomena described by EASE and EAWE, indicating that disturbances of world experience are fundamentally less unitary, while the experience of the self presents a more coherent and unitary Gestalt (Englebert et al., 2019).

Beyond the statistical validation of the interviews, replication is needed in other clinical samples and cultures to support previous findings and provide added evidence from comparisons with multiple clinical groups and across cultures. However, if the focus of the analysis is placed merely on formal structural aspects, then when applying EASE and EAWE to new patients, we will not find domains or categories different from those already defined. To put it differently, quantitative replication of EASE or EAWE in other samples would barely lead to any new knowledge, because the already established domains and items tend to constrain the patients’ responses. So, particularly in terms of their potential contribution to psychotherapy, the best contribution that could be made from applying EASE and EAWE to new patients would result from a content analysis of the patients’ reports. However, one key question concerning these interviews’ replication remains unanswered: which is the most appropriate qualitative method for analyzing the patients’ descriptions?

The empirical application of Husserl’s phenomenological method outside the strict scope of philosophy is still a topic of ongoing debate in both philosophy and the cognitive sciences. According to Zahavi (2019a, b, c), in philosophy the main goal of phenomenology is not purely descriptive or attentive to how things appear to the subject; it focuses neither on the subject nor on the object, but on the correlation between them. In this context, the term epoché is used to refer to suspending or putting between parentheses a “naïve” or “natural” attitude toward reality in order to reflect upon fundamental ontological questions, thus adopting a critical stance on the conception of reality as mind-independently given. The epoché as usually described in qualitative research – putting “in brackets” the prejudices and theoretical assumptions of the interviewer (Fischer, 2009) in order to access phenomena as they appear in the subject’s experience – has little to do with the original philosophical method. This does not imply that bracketing our prejudices and theoretical assumptions is not desirable to avoid bias when conducting phenomenological interviews or analyzing data (several techniques exist for doing so). Nor is it so important whether we call such bracketing epoché, as long as we have a basic notion of Husserl’s original sense of the term.

Phenomenology has been applied in empirical research not only in psychiatry and psychology, but also in other healthcare-related disciplines, such as nursing studies (Zahavi and Martiny, 2019). Nevertheless, the different forms in which phenomenology has been applied in these disciplines have also been controversial due to their divergence from the original Husserlian philosophical method (Zahavi, 2019b, d). For instance, some have questioned whether the method of analysis proposed by Giorgi (2009, 2012), the “descriptive phenomenological psychological method,” should be considered “phenomenological” or given another label. This method is aimed at the establishment of inclusive categories resulting from the content analysis of subjects’ descriptions. In fact, Giorgi’s method of content analysis seems closer to an adapted form of “eidetic variation” and quite different from the original Husserlian sense of the epoché, because it basically consists of summarizing the content of the interview transcript by deleting its redundancies, in order to reveal invariables or essences in “meaning” (see Irarrázaval, 2015). Eidetic variation is a conceptual analysis that, by imagining a phenomenon as being different from how it currently is, leads to the isolation of its essential features or aspects, in the sense that such features or aspects cannot be varied or deleted without preventing the phenomenon from being the kind of phenomenon that it is (Parnas and Zahavi, 2002). Another example of a so-called applied phenomenological method is “microphenomenology” (Petitmengin et al., 2018; Depraz, 2020). This method, like Giorgi’s, also diverges from the original Husserlian philosophical method. In addition to the method of analysis, microphenomenology includes some “principles” regarding the interview. Microphenomenological analysis seeks to identify generic pre-reflexive structures from descriptions of “singular” lived experiences.
The pre-reflexive aspect of experience is conceived as experientially “unnoticed,” in the sense that it is not immediately accessible to reflective consciousness and verbal description. However, at least as Petitmengin et al. (2018) present it, what results from the analysis seems to be a description of the figurative aspects or features of the object (for example, size, shape, temperature, or color) rather than of the experiential structures of the subject.

Whether the aim is to find evidence supporting already-existing insights and concepts or to allow new insights and concepts to emerge from the data itself, phenomenological empirical research must take on board patients’ accounts of their subjective experience. Studies employing phenomenological interviews should present clear guidelines both on how the interviews are conducted and on the qualitative methods employed in analyzing patients’ subjective experiences. The research report should follow standards for presenting qualitative research (O’Brien et al., 2014). Still, the most challenging aspect of phenomenological empirical research in psychiatry and psychology is finding a proper method for analyzing patients’ reports. Neither the original Husserlian question of phenomenological philosophizing nor the phenomenological method of philosophical analysis appears appropriate for empirical application. There seems to be a gap between the phenomenological philosophical method and its empirical versions.

Phenomenological philosophy, psychiatry, and psychology have different aims and practical implications. This implies that the methods used in each of these research fields are necessarily different, since each serves as a means to achieve the aims pursued by the corresponding discipline. In philosophy, the phenomenological method serves as a means to reflect upon fundamental ontological questions regarding our active subjective involvement in the constitution of the world. In phenomenological psychiatry and psychology, by contrast, the methods serve as a means to achieve more precise, complete, and differential diagnoses, with the aim of improving psychotherapy and, ultimately, patients’ well-being. Nevertheless, regardless of their divergence from the original philosophical method, Giorgi’s method of content analysis (to a greater extent) and microphenomenology (to a lesser extent) have been quite influential, precisely because they attempt to bridge this gap, responding to the need for a phenomenological method for qualitative research.

An entirely different way of dealing with this problem would be neither to seek empirical adaptations of the original phenomenological method inherent in philosophy, nor to limit phenomenology to the mere description of subjective experience, but to make phenomenology a theoretical framework for empirical research and, even more, a transcendental paradigm. Although its method is certainly fundamental to it, phenomenology should not be reduced to its methodology. Phenomenology is a comprehensive theoretical framework that has been developed on the basis of serious conceptual and empirical research into the subject–world correlation (Zahavi, 2019a), including studies of the formal structures of experience (spatiality, temporality, corporeality, intersubjectivity, and historicity), research into the modes of intentionality (perception, agency, phantasy, memory, emotions, and empathy), and psychological analyses of meaning-making processes in social interactions. Additionally, despite the different aims and methods involved, in phenomenological psychiatry and psychology, just as in phenomenological philosophy, the core philosophical commitment to a critical stance on the conception of reality as mind-independently given remains fundamental (Zahavi, 2017, 2019e). Do psychiatry and psychology really need the Husserlian method in order to adopt the phenomenological attitude toward the conception of reality as mind-dependent? No, because this core philosophical commitment already constitutes the basis of a transcendental paradigm in phenomenological psychiatry and psychology.

Mainstream psychiatry has been developed within a natural-scientific paradigm. From the positivist viewpoint of psychiatry, the notion of normality is defined by the degree of correspondence between subjective experience and objective reality. Consequently, abnormality is defined in terms of its degree of deviation from an objective reality that provides the evidence for commonsensical validity. For its part, phenomenological psychopathology approaches mental phenomena through a phenomenological analysis of the patient’s subjectivity, placing the focus on the conditions of possibility of human experience in general, beyond its being diagnosed as abnormal according to common standards of objectivity. For instance, in current diagnostic systems, psychosis is diagnosed by the presence of hallucinations and delusions, as defined by a “natural attitude” that takes for granted the validity of an objectively given reality. In DSM-5, hallucination is defined as a perception without an object (or an error of perception) and delusion as a false belief about reality (American Psychiatric Association, 2013). In contrast, from a phenomenological approach, a disturbance is not addressed in terms of the clinician’s evidence of the nonexistence of the object of perception or the lack of external evidence for the patient’s belief, but rather in terms of an analysis of the particular mode of intentionality that constitutes the hallucination or delusion as such. In other words, the clinician is concerned with a phenomenological analysis of the patient’s subjectivity, addressing with empathic understanding the patient’s “self-evidence” or “solipsistic truth,” correlated with the experience of hallucination or delusion, respectively. Indeed, the nonexistence of the “external” object should provide the clinician with evidence that hallucination is not perception, since it is impossible to have a perception without a directly present object.

Consequently, it would be misleading to conceive of hallucination as having anything to do with perception at all. Instead, hallucinations have more to do with the phenomenology of fantasy, whose distinctive character is to “re-present” an object of perception that is not directly present, but absent from the actual field of perception. According to Cavallaro (2017), it is not the presentation/re-presentation dichotomy but what Husserl calls “ego-splitting” (Ichspaltung) that is crucial for recognizing that one is experiencing the “quasi-perception” produced by fantasy and not a perception as such. Ego-splitting makes possible the “as if” fictive character of self-awareness when fantasizing. When hallucinating, by contrast, the patient experiences his or her own thoughts, anticipations, or imaginations just as in original experiences of perception. It may thus be posited that it is precisely this lack of the “as if” self-awareness of the “quasi-perception” that lies at the core of psychosis. Such a theory would require further phenomenological research to draw finer distinctions between the nature of hallucination and that of fantasy, as well as of other modalities of experiencing that lack a directly present intentional object, such as anticipations, thoughts, memories, and dreams. Still, introducing the concept of “ego-splitting” as non-pathological might challenge traditional psychiatric concepts, especially with regard to schizophrenia.

Finally, the phenomenological attitude should not be conceived of as just another attitude; it is not literally an attitude at all. Rather, it is a paradigmatic commitment to a non-pregiven reality. This core philosophical commitment is particularly important because it entails a quite distinctive approach to mental illness, including different conceptualizations of psychopathology, diagnosis, normality, empathy, and psychotherapy, thus leading qualitative empirical research in psychiatry and psychology toward new horizons. Moreover, the notion of suspending the natural attitude toward reality (including all kinds of phenomena) lies at the heart of the phenomenological framework for anyone claiming to be a phenomenologist, whether conceptual or empirical, and regardless of particular methods and topics of study. In this way, the phenomenological attitude might be conceived of as the basis of a transcendental scientific paradigm for qualitative research in psychiatry and psychology. This latter claim, which holds that phenomenological psychology, in order to be properly phenomenological, must become transcendental, together with the phenomenological conceptualization of hallucination as a pathology of fantasy, provides challenging directions for future research.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Funding

This article has been financially supported by the Chilean National Commission for Scientific and Technological Research, CONICYT PFCHA/POSTDOCTORADO EN EL EXTRANJERO BECAS CHILE/2017 – 74180011.

References

  • American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders DSM-5. 5th Edn. Washington: American Psychiatric Publishing.
  • Barbour R. S. (2000). The role of qualitative research in broadening the “evidence base” for clinical practice. J. Eval. Clin. Pract. 6, 155–163. 10.1046/j.1365-2753.2000.00213.x
  • Barbour R., Barbour M. (2003). Evaluating and synthesizing qualitative research: the need to develop a distinctive approach. J. Eval. Clin. Pract. 9, 179–186. 10.1046/j.1365-2753.2003.00371.x
  • Bornat J. (2008). “Biographical methods” in The sage handbook of social research methods. eds. Alasuutari P., Bickman L., Brannen J. (London: Sage), 344–356.
  • Campbell L. F., Norcross J. C., Vasquez M. J. T., Kaslow N. J. (2013). Recognition of psychotherapy effectiveness: the APA resolution. Psychotherapy 50, 98–101. 10.1037/a0031817
  • Cavallaro M. (2017). The phenomenon of ego-splitting in Husserl’s phenomenology of pure phantasy. J. Br. Soc. Phenomenol. 48, 162–177. 10.1080/00071773.2016.1250436
  • Chaminade T., Decety J. (2002). Leader or follower? Involvement of the inferior parietal lobule in agency. Neuroreport 13, 1975–1978. 10.1097/00001756-200210280-00029
  • Creswell J. W., Poth C. N. (2018). Qualitative inquiry and research design: Choosing among five approaches. 4th Edn. London: Sage.
  • Deacon B. (2013). The biomedical model of mental disorder: a critical analysis of its validity, utility, and effects on psychotherapy research. Clin. Psychol. Rev. 33, 846–861. 10.1016/j.cpr.2012.09.007
  • Deacon B., McKay D. (2015). The biomedical model of psychological problems: a call for critical dialogue. Behav. Ther. 38, 231–235.
  • Depraz N. (2020). “Husserlian phenomenology in the light of microphenomenology” in Husserl, Kant and transcendental phenomenology. eds. Apostolu I., Serban C. (Berlin: De Gruyter).
  • Depraz N., Desmidt T. (2019). Cardiophenomenology: a refinement of neurophenomenology. Phenomenol. Cogn. Sci. 18, 493–507. 10.1007/s11097-018-9590-y
  • Depraz N., Gyemant M., Desmidt T. (2017). A first-person analysis using third-person data as a generative method: a case study of surprise in depression. Constr. Found. 12, 190–203. Available at: http://constructivist.info/12/2/190
  • Depraz N., Varela F. J., Vermersch P. (2003). On becoming aware: A pragmatics of experiencing. Amsterdam: John Benjamins.
  • Englebert J., Monville F., Valentiny C., Mossay F., Pienkos E., Sass L. (2019). Anomalous experience of self and world: administration of the EASE and EAWE scales to four subjects with schizophrenia. Psychopathology 52, 294–303. 10.1159/000503117
  • Farrer C., Frith C. D. (2002). Experiencing oneself vs. another person as being the cause of an action: the neural correlates of the experience of agency. NeuroImage 15, 596–603. 10.1006/nimg.2001.1009
  • Fischer C. T. (2006). Qualitative research methods for psychologists: Introduction through empirical case studies. San Diego, CA: Academic Press.
  • Fischer C. T. (2009). Bracketing in qualitative research: conceptual and practical matters. Psychother. Res. 19, 583–590. 10.1080/10503300902798375
  • Fuchs T. (2005). Corporealized and disembodied minds: a phenomenological view of the body in melancholia and schizophrenia. Philos. Psychiatry Psychol. 12, 95–107. 10.1353/ppp.2005.0040
  • Fuchs T. (2010a). Subjectivity and intersubjectivity in psychiatric diagnosis. Psychopathology 43, 268–274. 10.1159/000315126
  • Fuchs T. (2010b). The psychopathology of hyperreflexivity. J. Specul. Philos. 24, 239–255. 10.1353/jsp.2010.0010
  • Fuchs T. (2013a). “The self in schizophrenia: Jaspers, Schneider and beyond” in One century of Karl Jaspers’ general psychopathology. eds. Stanghellini G., Fuchs T. (Oxford: Oxford University Press), 245–257.
  • Fuchs T. (2013b). Existential vulnerability: toward a psychopathology of limit situations. Psychopathology 46, 1–8. 10.1159/000351838
  • Fuchs T., Koch S. C. (2014). Embodied affectivity: on moving and being moved. Front. Psychol. 5:508. 10.3389/fpsyg.2014.00508
  • Fuchs T., Messas G. P., Stanghellini G. (2019). More than just description: phenomenology and psychotherapy. Psychopathology 52, 63–66. 10.1159/000502266
  • Fuchs T., Schlimme J. E. (2009). Embodiment and psychopathology: a phenomenological perspective. Curr. Opin. Psychiatry 22, 570–575. 10.1097/YCO.0b013e3283318e5c
  • Galbusera L., Fellin L., Fuchs T. (2019). Towards the recovery of a sense of self: an interpretative phenomenological analysis of patients’ experience of body-oriented psychotherapy for schizophrenia. Psychother. Res. 29, 234–250. 10.1080/10503307.2017.1321805
  • Galbusera L., Finn M. T., Fuchs T. (2018). Interactional synchrony and negative symptoms: an outcome study of body-oriented psychotherapy for schizophrenia. Psychother. Res. 28, 457–469. 10.1080/10503307.2016.1216624
  • Gallagher S. (2003). Phenomenology and experimental design: toward a phenomenologically enlightened experimental science. J. Conscious. Stud. 10, 85–99.
  • Gallagher S., Zahavi D. (2008). The phenomenological mind: An introduction to philosophy of mind and cognitive science. 2nd Edn. New York: Routledge.
  • Giorgi A. (2009). The descriptive phenomenological method in psychology: A modified Husserlian approach. Pittsburgh, PA: Duquesne University Press.
  • Giorgi A. (2012). The descriptive phenomenological psychological method. J. Phenomenol. Psychol. 43, 3–12. 10.1163/156916212X632934
  • Gozé T., Moskalewicz M., Schwartz M. A., Naudin J., Micoulaud-Franchi J. A., Cermolacce M. (2019). Reassessing “praecox feeling” in diagnostic decision making in schizophrenia: a critical review. Schizophr. Bull. 45, 966–970. 10.1093/schbul/sby172
  • Hammersley M. (2018). Commentary—on the “indistinguishability thesis”: a response to Morgan. J. Mixed Methods Res. 12, 256–261. 10.1177/1558689818772764
  • Høffding S., Martiny K. (2016). Framing a phenomenological interview: what, why and how. Phenomenol. Cogn. Sci. 15, 539–564. 10.1007/s11097-015-9433-z
  • Irarrázaval L. (2013). Psychotherapeutic implications of self disorders in schizophrenia. Am. J. Psychother. 67, 277–292. 10.1176/appi.psychotherapy.2013.67.3.277
  • Irarrázaval L. (2015). The lived body in schizophrenia: transition from basic self-disorders to full-blown psychosis. Front. Psychol. 6:9. 10.3389/fpsyt.2015.00009
  • Irarrázaval L. (2018). Vulnerability in schizophrenia: a phenomenological anthropological approach. Journal of Intercultural Philosophy 1, 157–167. 10.11588/icp.2018.1.48070
  • Irarrázaval L. (2019). Empathy for the foreign experience: a convergent phenomenological definition. J. Theor. Phil. Psychol. 10.1037/teo0000128
  • Irarrázaval L., Sharim D. (2014). Intersubjectivity in schizophrenia: life story analysis of three cases. Front. Psychol. 5:100. 10.3389/fpsyg.2014.00100
  • Koch S. C., Fuchs T. (2011). Embodied arts therapies. Arts Psychother. 38, 276–280. 10.1016/j.aip.2011.08.007
  • Lutz A., Lachaux J. P., Martinerie J., Varela F. J. (2002). Guiding the study of brain dynamics by using first-person data: synchrony patterns correlate with ongoing conscious states during a simple visual task. Proc. Natl. Acad. Sci. U. S. A. 99, 1586–1591. 10.1073/pnas.032658199
  • Lysaker P. H., Dimaggio G., Hamm J. A., Leonhardt B. L., Hochheiser J., Lysaker J. T. (2018a). Disturbances in self-experience in schizophrenia: metacognition and the development of an integrative recovery-oriented individual psychotherapy. Psychopathology 52, 135–142. 10.1159/000495297
  • Lysaker P. H., Irarrázaval L., Gagen E., Armijo I., Ballerini M., Mancini M., et al. (2018b). Metacognition in schizophrenia disorders: comparisons with community controls and bipolar disorder: replication with a Spanish language Chilean sample. Psychiatry Res. 267, 528–534. 10.1016/j.psychres.2018.06.049
  • Martin L. A., Koch S. C., Hirjak D., Fuchs T. (2016). Overcoming disembodiment: the effect of movement therapy on negative symptoms in schizophrenia—a multicenter randomized controlled trial. Front. Psychol. 7:483. 10.3389/fpsyg.2016.00483
  • Martiny K. M. (2017). Varela’s radical proposal: how to embody and open up cognitive science. Constr. Found. 13, 59–67. Available at: http://constructivist.info/13/1/059
  • Maxwell J. (2011). A realist approach for qualitative research. Thousand Oaks, CA: Sage.
  • Maxwell J. (2012). Qualitative research design: An interactive approach. 3rd Edn. London: Sage.
  • Maxwell J. A. (2019). Distinguishing between quantitative and qualitative research: a response to Morgan. J. Mixed Methods Res. 13, 132–137. 10.1177/1558689819828255
  • Mishara A. L. (1994). “A phenomenological critique of commonsensical assumptions of DSM-III-R: The avoidance of the patient’s subjectivity” in Philosophical perspectives on psychiatric diagnostic classification. eds. Sadler J. Z., Schwartz M. A., Wiggins O. P. (Baltimore: Johns Hopkins Series in Psychiatry and Neuroscience), 129–147.
  • Morgan D. L. (2018). Living within blurry boundaries: the value of distinguishing between qualitative and quantitative research. J. Mixed Methods Res. 12, 268–279. 10.1177/1558689816686433
  • Morrow S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. J. Couns. Psychol. 52, 250–260. 10.1037/0022-0167.52.2.250
  • Nordgaard J., Sass L. A., Parnas J. (2013). The psychiatric interview: validity, structure and subjectivity. Eur. Arch. Psychiatry Clin. Neurosci. 263, 353–364. 10.1007/s00406-012-0366-z
  • O’Brien B. C., Harris I. B., Beckman T. J., Reed D. A., Cook D. A. (2014). Standards for reporting qualitative research: a synthesis of recommendations. Acad. Med. 89, 1245–1251. 10.1097/ACM.0000000000000388
  • Pallagrosi M., Fonzi L. (2018). On the concept of praecox feeling. Psychopathology 51, 353–361. 10.1159/000494088
  • Pallagrosi M., Fonzi L., Picardo A., Biondi M. (2014). Assessing clinician’s subjective experience during the interaction with patients. Psychopathology 47, 111–118. 10.1159/000351589
  • Parnas J., Handest P. (2003). Phenomenology of anomalous self-experience in early schizophrenia. Compr. Psychiatry 44, 121–134. 10.1053/comp.2003.50017
  • Parnas J., Henriksen M. G. (2014). Disordered self in the schizophrenia spectrum: a clinical and research perspective. Harv. Rev. Psychiatry 22, 251–265. 10.1097/HRP.0000000000000040
  • Parnas J., Moeller P., Kircher T., Thalbitzer J., Jannson L., Handest P., et al. (2005). EASE: examination of anomalous self-experience. Psychopathology 38, 236–258. 10.1159/000088441
  • Parnas J., Sass L. (2008). “Varieties of phenomenology: on description, understanding, and explanation in psychiatry” in Philosophical issues in psychiatry: Explanation, phenomenology and nosology. eds. Kendler K., Parnas J. (New York: Johns Hopkins University Press), 239–277.
  • Parnas J., Zahavi D. (2002). “The role of phenomenology in psychiatric diagnosis and classification” in Psychiatric diagnosis and classification. eds. Maj M., Gaebel W., López-Ibor J. J., Sartorius N. (Hoboken, NJ: John Wiley & Sons Inc.), 137–162.
  • Patton M. (2015). Qualitative research and evaluation methods. 4th Edn. Thousand Oaks, CA: Sage.
  • Petitmengin C., Remillieux A., Valenzuela-Moguillansky C. (2018). Discovering the structures of lived experience: towards a micro-phenomenological analysis method. Phenomenol. Cogn. Sci. 18, 691–730. 10.1007/s11097-018-9597-4
  • Raballo A., Sæbye D., Parnas J. (2009). Looking at the schizophrenia spectrum through the prism of self-disorders: an empirical study. Schizophr. Bull. 37, 344–351. 10.1093/schbul/sbp056
  • Röhricht F., Papadopoulos N. (2010). A treatment manual: Body oriented psychological therapy for chronic schizophrenia. London: Newham Centre for Mental Health.
  • Ruby P., Decety J. (2001). Effect of subjective perspective taking during simulation of action: a PET investigation of agency. Nat. Neurosci. 4, 546–550. 10.1038/87510
  • Sass L. (1992). Madness and modernism: Insanity in the light of modern art, literature and thought. New York: Basic Books.
  • Sass L., Borda J. P., Madeira L., Pienkos E., Nelson B. (2018). Varieties of self disorder: a bio-pheno-social model of schizophrenia. Schizophr. Bull. 44, 720–727. 10.1093/schbul/sby001
  • Sass L., Parnas J. (2003). Schizophrenia, consciousness, and the self. Schizophr. Bull. 29, 427–444. 10.1093/oxfordjournals.schbul.a007017
  • Sass L., Pienkos E., Skodlar B., Stanghellini G., Fuchs T., Parnas J., et al. (2017). EAWE: examination of anomalous world experience. Psychopathology 50, 10–54. 10.1159/000454928
  • Schwartz H., Jacobs J. (1979). Qualitative sociology: A method to the madness. New York: Free Press.
  • Škodlar B., Henriksen M. H. (2019). Toward a phenomenological psychotherapy for schizophrenia. Psychopathology 52, 117–125. 10.1159/000500163
  • Tashakkori A., Teddlie C. (2010). SAGE handbook of mixed methods in social & behavioral research. 2nd Edn. Thousand Oaks, CA: SAGE Publications, Inc.
  • Varela F. (1990). Conocer: Las ciencias cognitivas: tendencias y perspectivas. Cartografía de las ideas actuales [To know: Cognitive sciences: Trends and prospects. Cartography of current ideas]. Barcelona: Gedisa.
  • Varela F. J. (1996). Neurophenomenology: a methodological remedy for the hard problem. J. Conscious. Stud. 3, 330–349.
  • Varela F. J., Shear J. (1999). First-person methodologies: what, why, how? J. Conscious. Stud. 6, 1–14.
  • World Health Organization (2012). The ICD-10 classification of mental and behavioral disorders: Clinical descriptions and diagnostic guidelines. Geneva: World Health Organization.
  • Zahavi D. (2015). You, me and we: the sharing of emotional experiences. J. Conscious. Stud. 22, 84–101.
  • Zahavi D. (2017). Husserl’s legacy: Phenomenology, metaphysics, and transcendental philosophy. Oxford, UK: Oxford University Press.
  • Zahavi D. (2019a). Phenomenology: The basics. London: Routledge.
  • Zahavi D. (2019b). Getting it quite wrong: Van Manen and Smith on phenomenology. Qual. Health Res. 29, 900–907. 10.1177/1049732318817547
  • Zahavi D. (2019c). Applied phenomenology: why it is safe to ignore the epoché. Cont. Philos. Rev. 1–15. 10.1007/s11007-019-09463-y
  • Zahavi D. (2019d). The practice of phenomenology: the case of Max van Manen. Nurs. Philos. 21:e12276. 10.1111/nup.12276
  • Zahavi D. (2019e). “Phenomenology as metaphysics” in The Routledge handbook of metametaphysics. eds. Bliss R., Miller J. (London: Routledge).
  • Zahavi D., Martiny K. M. M. (2019). Phenomenology in nursing studies: new perspectives. Int. J. Nurs. Stud. 93, 155–162. 10.1016/j.ijnurstu.2019.02.018
  • Open access
  • Published: 14 July 2014

New directions in evidence-based policy research: a critical analysis of the literature

  • Kathryn Oliver 1 , 2 ,
  • Theo Lorenc 2 &
  • Simon Innvær 3  

Health Research Policy and Systems volume  12 , Article number:  34 ( 2014 ) Cite this article

42k Accesses

263 Citations

118 Altmetric

Metrics details

Despite 40 years of research into evidence-based policy (EBP) and a continued drive from both policymakers and researchers to increase research uptake in policy, barriers to the use of evidence are persistently identified in the literature. However, it is not clear what explains this persistence – whether they represent real factors, or if they are artefacts of approaches used to study EBP. Based on an updated review, this paper analyses this literature to explain persistent barriers and facilitators. We critically describe the literature in terms of its theoretical underpinnings, definitions of ‘evidence’, methods, and underlying assumptions of research in the field, and aim to illuminate the EBP discourse by comparison with approaches from other fields. Much of the research in this area is theoretically naive, focusing primarily on the uptake of research evidence as opposed to evidence defined more broadly, and privileging academics’ research priorities over those of policymakers. Little empirical data analysing the processes or impact of evidence use in policy is available to inform researchers or decision-makers. EBP research often assumes that policymakers do not use evidence and that more evidence – meaning research evidence – use would benefit policymakers and populations. We argue that these assumptions are unsupported, biasing much of EBP research. The agenda of ‘getting evidence into policy’ has side-lined the empirical description and analysis of how research and policy actually interact in vivo . Rather than asking how research evidence can be made more influential, academics should aim to understand what influences and constitutes policy, and produce more critically and theoretically informed studies of decision-making. We question the main assumptions made by EBP researchers, explore the implications of doing so, and propose new directions for EBP research, and health policy.

Peer Review reports

Introduction: the evidence-based policy movement

Although sceptics can be found, few researchers or policymakers would publicly disagree that evidence-based policy (EBP) is a goal for both academe and government. Reports describing the importance – and difficulty of – achieving this goal are published with regularity [ 1 – 3 ]. Perhaps unlike other disciplines, EBP has staunch advocates who contribute to an ever-increasing body of commentary around the subject.

EBP is sometimes said to have derived from evidence-based medicine (EBM), which dates back at least to 1972, with Archie Cochrane’s seminal work on effectiveness and efficiency [ 4 ]. Since the early 1970s, both practitioners and academics have also considered how policy – in the sense of larger-scale decisions about the delivery and management of services at a population level – could be based on, or informed by, evidence [ 5 , 6 ]. For Cochrane and his heirs, the goal of EBM was to bring about the abandonment of harmful and ineffective interventions, and the adoption of interventions shown to be effective for clinical outcomes. This was to be achieved by implementing findings from systematic reviews and meta-analyses of robust outcome evaluations, ideally randomised trials [ 7 – 10 ].

However, this straightforward narrative of evaluation-based EBP modelled on medicine has long existed in a broader landscape of initiatives to foster closer and more effective links between research and policy (or between researchers and policy-makers). In the UK, for example, these ideas can be traced back at least to the Rothschild experiment, evaluated by Kogan and Henkel in 1983 [ 11 ]. The aim of this funding initiative was to enable the health research system to respond to policymakers’ priorities for research [ 12 ]. The experiment was largely abandoned, which Kogan and Henkel ascribed to cultural differences between researchers and policymakers, the need for interaction, and other barriers [ 12 , 13 ]. Recent publications, again in the UK, from the Government Office for Science have provided guides for both academics and policymakers who wish to engage with the other group [ 14 , 15 ], and there is increasing interest within academic and higher education bodies generally about how academics may be able to increase their impact, the nature of this impact, and implications of the ‘impact agenda’ [ 16 – 18 ]. However, the tenor of these publications still tends to focus on promoting the use of academic research, rather than studying the practices of knowledge production and policymaking and implementation.

As a result, specific questions about the use of research within policy have become divorced from a broader perspective on policy-making. Many researchers in what may be termed ‘applied research’ fields use terms like ‘knowledge translation’, ‘knowledge exchange’, or ‘evidence use’ without providing clear definitions of what knowledge is part of which decision-making process [ 19 – 21 ]. The result has been a loss of clarity with respect to how evidence is supposed to improve decision-making, what constitutes and defines ‘evidence’ and ‘policy’, and which processes and outcomes are targeted by political or academic efforts. Nevertheless, both EBM and EBP have achieved substantial financial and political support, and have been given substantial weight by the creation of a collection of organisations dedicated to producing evidence-based policy and practice recommendations, guidelines, and best practice statements, such as the What Works centres [ 22 ]. For health policy and health management in particular, there remains a prominent discourse about moral, ethical, political, and often financial imperatives to use evidence to make the best (value) decisions [ 23 ] – without, ideally, disenfranchising non-expert publics [ 24 ]. The study of the use of evidence in policy varies from negative to positive advocacy, from simplistic to complex understandings of the processes involved, and from uncritical technical approaches to highly cynical commentary. Broadly, practitioners and academics have focused on facets of the use of evidence by policymakers and practitioners, written polemics encouraging colleagues to use evidence [ 23 , 25 – 29 ], identified barriers to and facilitators of evidence use [ 30 , 31 ], and designed interventions to increase the use of evidence by policymakers [ 32 ].
Evidence-based policy and practice, knowledge translation, and related concepts have become touchstones across a vast range of disciplines – almost sub-disciplines in their own right, with canons and conceptual toolkits of their own.

While much of this work remains mainly theoretical (e.g., [ 33 ]), there is a rapidly growing empirical evidence base on barriers and facilitators of evidence use. Some idea of the extent and nature of this literature can be gained by looking at systematic reviews. Three reviews are particularly relevant here [ 30 , 31 , 34 ]; their methods, findings, and conclusions are summarised in Table  1 .

Common findings across all three include the importance of personal relationships and contacts between decision-makers and researchers, and the need for research to be clearly and accessibly presented. Cultural and practical barriers to the use of evidence by policy-makers are identified. Finally, all three make the point that policymakers’ definitions of evidence do not match academic constructions of ‘evidence’. All three reviews also point to gaps in the literature as priorities for further research, but differ in their identification of these gaps. Orton focuses on the need for evaluation research of strategies to increase the uptake of research evidence [ 31 ], while Innvær et al. [ 34 ] and Oliver et al. [ 30 ] are both more circumspect, pointing out that despite the size of the evidence base, much about policy-makers’ attitudes to research evidence remains unclear. Innvær et al. show how the limited available evidence mainly describes policymakers’ beliefs and attitudes, rather than actual behaviours, and hence cannot be used as a basis to make strong recommendations [ 34 ]. Perhaps more importantly, both reviews note that there are few grounds by which to make firm recommendations or conclusions about the process, impact, or effectiveness of research in policy. Only rarely is enough detail known about the policy process to be able to comment usefully: for example, who the main actors are, where decisions are made, and how evidence fits into the process.

The structure of this paper is as follows. First, drawing on a dataset from a recent systematic review of barriers and facilitators [ 30 ], we offer a high-level overview of the literature on evidence use in health policy, identifying broad theoretical and definitional commitments. This paper does not primarily consider the findings of those studies, which are summarised above and set out in more detail elsewhere [ 30 ]. We also draw on similar findings about policymakers in non-health fields [ 35 ]. We set these data in the context of the wider theoretical literature about evidence and policy, drawing insights from the policy sciences. Second, we aim to identify and challenge some of the more normative assumptions which are widely prevalent (if often implicit) in the EBP literature, particularly the following: that the policy-evidence ‘gap’ needs ‘bridging’; that policy is usually not based on any data; that policy requires research evidence, preferably evaluative intervention research evidence; and that if more evidence were used in policy making, policy would be better. Finally, we make suggestions for a new agenda for EBP research.

We are not advocating against EBP. On the contrary, we believe that better policy decisions would be a desirable outcome, and that evidence ought to play a role in those decisions. Our conclusions from this critical review of the literature concern EBP research, rather than EBP itself. Researchers have directed their attention at how to increase the impact of their own outputs, rather than at understanding the processes behind policy change. The support for EBP is not as single-minded or vociferous as it once was. Reflecting on this historical trend, and on the reasons behind any such shift, we present our novel contribution to the literature: an illumination of the discourse around EBP through comparison of its theories, methods, and substantive approaches with those of other fields; on this basis, we propose new directions for EBP research.

Review: evidence-based policy research

Below, we describe some of the underlying concepts and approaches available to researchers attempting to understand the relationship between research and policy processes.

Theoretical underpinnings

Innvær et al. described how the literature at the time fell into two camps [ 34 ]: those supporting the ‘two-communities’ hypothesis, which explores whether barriers to research utilisation are mainly driven by cultural or institutional differences between researchers and decision-makers, and those drawing on Carol Weiss’s typology of ways of using evidence [ 36 , 37 ]. Policy and academic actors were conceptualised as opposing sets of actors, with different priorities, languages, and practices. For proponents of this perspective, ‘bridging’ this gap becomes a priority. It seems likely, however, that insisting on the existence of this ‘gap’ may polarise previously neutral actors; and indeed this debate fails to recognise that this may be a UK-specific problem (cf. Dutch studies).

Models of the research and policy processes are, as noted by previous theorists of EBP, rarely made explicit in evaluations of applied research. Where a model is left implicit, a simple ‘pipeline’ is usually assumed (i.e., the more research is carried out, and the higher its quality, the bigger the effect on policy and practice) [ 38 – 41 ].

A large new theoretical strand within health policy focuses on knowledge brokerage/translation as a framework for understanding the use of evidence [ 42 – 46 ]. This model can be seen as an extension of Weiss’s ‘instrumental use’ model, or of her ‘enlightening’ and ‘strategic’ uses of evidence, in describing the influence of research on policy, and is linked to ideas about ‘coproduction’ and ‘user involvement’ [ 47 ].

Weiss and Caplan remain major influences on the field of EBP research, but the degree to which their contributions are appropriately exploited is debatable. The linear (direct use) model is usually perceived to be superior, quite contrary to Weiss’s finding that policymakers thought enlightening (indirect) use could offer more. Research (and researchers) in the field often overlook this message, which should prompt reflection on the common view that policymakers are interest-oriented and indifferent to evidence. Barriers to the use of research are equated with barriers to its direct use, while the broader concept of enlightening (indirect) use is rarely treated as equally relevant and useful.

While this is still widespread, theoretical learning from other fields is filtering into the debate. Researchers in policy studies have long seen the policy process itself as a contested arena of negotiation [ 48 , 49 ]. The messy, complex, and serendipitous nature of policymaking is described by Kingdon [ 50 ], Weiss [ 51 ], Simon [ 52 ], and Lindblom [ 53 ], amongst others. These scholars have contributed ideas such as punctuated equilibrium [ 54 ], policy ‘windows’ and ‘streams’ [ 50 ] or ‘stages’ [ 48 ], bounded rationality [ 55 ], and incrementalism [ 53 ]. However, the degree to which these models are used in planning and conducting empirical research is debatable. As Lomas notes, one reason why they may have been resisted by EBP researchers is that they offer little practical guidance and no tools for the aspiring policy advisor [ 56 ].

Of course, many of these are theories about policy, rather than analysis tools to assist advisors [ 57 ]. Cairney argues that policy analysts tend to use concepts, such as the policy cycle, which have been rejected by policy scholars. We would qualify this statement by restricting the rejection of such theories to science and technology studies and the policy sciences – these ideas are still common currency within health policy, among other fields [ 58 ].

The influence of these ideas on the health field is perhaps increasing, with applications of ethnographic methods [ 59 ] and actor-network theory [ 60 ] to EBP. However, the majority of studies still use over-simple theoretical models [ 61 ]. Asserting the existence of a ‘research-policy gap’, and describing and prescribing interventions to close it, is a stance which, we argue, is likely to perpetuate and even create gaps between the professions.

Focus of the research

The main focus of much theoretical and empirical work in EBP has traditionally been, implicitly or explicitly, on research evidence uptake, primarily peer-reviewed research carried out by university-based academics [ 30 , 31 ]. However, a third of studies included in our review examined non-research data, for example, public health surveillance data, strategic needs assessments and other impact assessments, geographic information systems [ 62 ], or other non-research evidence [ 63 – 65 ]. This suggests that policymakers interpret and use ‘evidence’ in a broad sense, which is usually not acknowledged by academic commentators [ 7 , 10 ].

The reviews described above included a minority of studies which aimed to describe the policy process in detail, often using a case-study approach. Frequently, these focused on the use of a particular type of evidence such as economic evaluations [ 66 ] or the role of a specific piece of evidence [ 60 , 67 ]. Few attempt a descriptive contextualisation or ethnographic understanding of the policy process, with exceptions, e.g., [ 68 – 71 ]. Virginia Berridge’s seminal work on the NHS and comparative health policy development uses historical methods to understand these processes and develop theory around them [ 72 ]. Others have taken empirical ethnographic approaches to understand, for instance, how health decision-makers conceptualise and use evidence [ 73 , 74 ]. With these exceptions, very few studies have, as yet, taken anthropological or historical approaches to understanding the role of evidence or research in policy.

Research in the area thus focuses primarily on how to increase uptake of research, on designing and evaluating interventions aiming to increase research use, and on identifying barriers and facilitators of research use by policymakers [ 74 – 76 ]. This pattern of research on evidence use skews the debate by focusing on exceptional cases of research use in policy-making, rather than the normal discharging of statutory business. As Kogan and Henkel noted, attempts to improve the use of evidence can “fail to note how in those areas of policy where data are diffuse, and analyses most likely to be strongly influenced by value preferences, problems must be identified collaboratively between policy-maker and scientist. It failed to acknowledge that policy makers have to work hard to identify problems, to specify research that might help solve them, and to receive and use the results of research” [ 11 ]. Although this was a comment evaluating the Rothschild experiment, the suggestion that EBP research does not reflect the range of knowledge-producing or policy-making processes can also be levelled at much of the academic work subsequent to this important study.

Focusing on the use of research evidence also allows researchers to sidestep the rest of the policy process and avoid the wider context of decision-making. Most studies focus on single elements of the policy-making process – dissemination of evidence, sources and types of source, knowledge transfer, and priority setting – rather than trying to characterise the process as a whole. EBP research could draw here on an extensive research literature in policy analysis from political science, starting with studies of what is called ‘the policy cycle’ or ‘a stages approach’ [ 39 , 77 ]. Such studies can analyse who is making which decisions, about what, and when; yet the distinction between practice, management, governance, and policy is rarely spelt out [ 78 ]. Clear definitions of policy, use of research, and decision making have been encouraged for over 60 years, but remain elusive [ 79 ].

Considerable theoretical work has gone into producing taxonomies of factors influencing the utilisation of evidence, e.g., [ 36 , 80 ]. However, these rich theoretical discourses are not reflected in the bulk of the empirical literature, which, despite its breadth with respect to the sectors and categories of evidence types studied, finds fairly consistently that the main factors affecting use of evidence are (a) access to relevant and clear information and (b) good relationships between researchers and research users. This may be due to the methods used in the studies: most use only interviews or surveys to ask researchers and policy-makers about their perceptions of evidence use; very few use methods such as participant observation to observe how evidence is actually used in practice, or attempt to find documentary proof of research use (with exceptions, e.g., [ 76 ]). These lists, while important, cannot on their own lead us to an improved understanding of the role of evidence in the jigsaw of the policy process.

Another noticeable feature of the evidence base is the emphasis given to researchers’ own views of research utilisation. For example, a study aiming to examine everyday working practice, such as that of Taylor-Robinson et al. [ 81 ], samples primarily health inequalities/public health academics and practitioners rather than decision-makers themselves (such as councillors or executives), even though the authors themselves describe lack of contact between academics and policymakers as a barrier to the use of evidence. The majority of academic studies in EBP research are, unsurprisingly, written by and for academics, with little involvement of policy-makers as co-authors, indicating that policy-makers are rarely involved in developing or carrying out relevant research.

Underlying assumptions of this research and critical reflections

This overview of the literature provides a starting point for a more critical engagement with the empirical literature on EBP. We provide a broad-brush characterisation of parts of the literature, which helps to draw out common assumptions across the field as a whole and enables critical reflection on them. We focus on three such assumptions: 1) that the policy-evidence ‘gap’ needs ‘bridging’; 2) that policy is usually not based on any data, and that policy requires research evidence, preferably evaluative intervention research; and 3) that greater use of evidence in policy-making will produce better outcomes at a population level. It is by no means the case that these assumptions are universally shared among EBP researchers, or that we are the first to identify these issues [ 61 , 82 , 83 ]. Nonetheless, a large proportion of the available research still rests on an uncritical acceptance of these assumptions. Below, we describe the effect of these assumptions, justify our rejection of them, and discuss the implications of taking a more critical approach.

Assumption 1: that a policy-evidence ‘gap’ exists

The ‘evidence-policy gap’ is a widely acknowledged construction in policy-related research, asserting the existence of two separate communities with their own ecosystems and languages [ 61 , 83 – 85 ]. Much of the ‘knowledge translation’ literature, which attempts to take a broader perspective on EBP, fails to question the assumption that knowledge and practice are two separate spheres, and that the ‘joining’ or ‘bridging’ of these (depending on the authors’ preferred metaphor) is the task of EBP researchers; see, e.g., [ 20 , 61 , 83 , 84 , 86 ]. However, as Choi recognises, evidence – and those associated with evidence – is just one voice among many [ 87 ]. We do not yet know how to make that voice more helpful or more influential.

Recently there has been a shift away from this dichotomous debate, with concepts such as ‘knowledge translation’ or ‘transfer’ being replaced by ideas about ‘learning’, ‘contribution’, and co-production [ 88 ]. These ideas frame the relationship between research and policy as a two-way negotiation in which both partners learn from the other – pragmatically and politically, a step towards an equality of prioritisation and experience. Certainly, this represents a greater openness to seeing a broader range of data types as relevant – including, for example, contextual, descriptive data as well as evaluations and other forms of research evidence. Too often, however, this debate is hijacked by methodologists from opposing camps wishing to defend their own method in the face of criticism – whether real or perceived, e.g., [ 89 , 90 ]. Without clear definitions of ‘evidence’ and ‘impact’ or ‘learning’, these studies contribute to negative stereotypes on both sides, and perpetuate the very gap they aim to bridge.

Assumption 2: that policy is usually not based on evidence

Despite the growing literature in the area, there is a surprising lack of evidence about how much evidence policymakers use. Studies have largely reported policymakers’ perceptions of their usage (e.g., [ 91 ]), acknowledged that it is impossible to tell how much evidence was used by policy participants [ 92 ], or relied on self-reported measures [ 93 ]. EBP researchers have tended to interpret this absence of contradictory evidence as confirmation of their belief that policymakers do not use evidence. This is grossly unfair to policymakers, who have been shown to draw on a wide range of information sources [ 39 ]; it hugely over-simplifies the relationship between evidence and policy; and, of course, it contradicts the avowed principles of EBP researchers, viz. that beliefs ought to be based on evidence [ 76 ].

Implicit in the ‘barriers and facilitators’ approach is an assumption that, if these factors were alleviated, research uptake would increase. However, this misses the key point: most research in the area studies the use of research evidence by policymakers, not what knowledge or information policymakers actually use. This subtle shift in emphasis opens up new avenues of enquiry. Information other than research evidence might be more relevant and timely – the two factors seen as top facilitators of policymakers’ use of evidence [ 94 ]. Policymakers may prefer to use local information or intelligence, such as patient- or practice-level data, or data held by local councils (e.g., datasets on rent, crime, and transport) [ 95 ]. It seems likely that these sources of information have been undervalued by evaluation methodologists, who often value trial data above other types. A more naturalistic approach, using empirical methods to study policymakers in vivo, would conceive of evidence as one of many influences on a decision.

Assumption 3: that use of more research evidence by policymakers would lead to ‘better’ policy

We are not the first to note that “[t]he assumption that the use of evidence would improve the outcome of the policy process remains relatively untested by any form of empirical analysis” [ 96 – 98 ]. The absence of robust evaluation evidence showing that evidence utilisation actually leads to better outcomes is widely admitted. The bulk of the intervention evidence in EBP uses only research utilisation or uptake as an outcome (or, in some cases, merely attitudes and intentions regarding research use). Nonetheless, it is still widely claimed that decisions made in partnership between “politicians and researchers & lay people are more likely to result in positive health outcomes” [ 86 ] and many researchers continue to advocate for increased use of research evidence [ 99 ].

Such claims, where they are not treated as automatically self-evident, are usually supported either by anecdotal cases of increased evidence use leading to better outcomes, or by studies of the impacts of evidence use on process measures such as transparency of decision-making [ 100 ]. However, the value of process-oriented goals is surely questionable, if they cannot be shown to lead to improved health, wellbeing, social, or other outcomes for the putative beneficiaries of the policy in question. The typologies of ‘research impact’ which have dominated much work in this area (e.g., [ 101 ]) are of limited value without a more open debate about the correct metrics for evaluating EBP, based on a realistic view of the currently existing evidence base [ 102 ]. Moreover, much of the commentary around the ‘impact agenda’ focuses on the aspect of academic performance management, without wider examination of its connection to policy and knowledge practices and theories [ 16 ].

Even in the absence of robust evaluation data, it is clear that many of the existing theoretical rationales for how evidence utilisation is supposed to improve outcomes are inadequate. If the pipeline model of research use were correct, it would be possible to demonstrate the impact of research on policy, and the value of research could be judged on its contribution to policy and its quantifiable impact [ 25 ]. This model “fails the practitioner because the literature on which guidelines are based constitutes an unrepresentative sample of the varied circumstances and populations in which the intervention might be usable or unusable” [ 38 ].

New directions for EBP research

Above, we describe three assumptions commonly found in the EBP literature. We critically discuss the reasons we believe these assumptions are flawed, and show how the existing research conducted on the basis of these assumptions is likely to fail to answer the most pressing problems in EBP. Here, we describe how these assumptions can form the basis for a new programme of research aiming to understand the relationship between science and policy. We explore the implications of re-framing future studies in a new direction, and suggest more explorative perspectives and participative methods.

Firstly, consider the assumption that the policy-evidence ‘gap’ needs ‘bridging’. Most of the studies identified perpetuate the division between the ‘two communities’ in one way or another. Approaching researchers and policy-makers separately and asking them for their accounts of evidence use is likely to produce such conflicting accounts; similarly, asking researchers about their perceptions of what policy-makers do may not be the most sensitive way of exploring policy processes. It would be more interesting, and more novel, to approach policymakers from an unprejudiced stance, to describe their activities, and to identify how they populate policy areas and steer policies through [ 53 ]. Of course, we are not the first to suggest this [ 56 , 97 ] – but such studies are generally the exception rather than the rule.

Secondly, the assumption is often held that policy is usually not based on any data, and that policy requires research evidence, preferably evaluative intervention research. By concerning themselves with questions such as “how [can] the tension between scientific rigour and timely relevance to policy-making be handled” [ 103 ], EBP researchers often fail to acknowledge their lack of knowledge about the forms and models of the impact and contributions of evidence to policy processes, which can lead to the creation of unhelpful straw men.

Finally, the unspoken corollary to both these assumptions is that greater use of evidence in policy-making will produce ‘better policy’ and better outcomes at a population level. Leaving aside the question of what constitutes ‘better’ in a self-evidently political, and therefore value-driven, terrain, for researchers to argue convincingly for the increased use of evidence in policy making, they must be able to demonstrate the benefits of doing so. The growth of the ‘applied research’ sector claims to address this, but often restricts its output to vague and untested policy and research recommendations, about which there is no evidence of effectiveness [ 99 ]. If the effects of these policy and research recommendations are not evaluated, the recommendations could be misguided at best [ 104 ].

Therefore, we argue that the following issues are of outstanding importance and could form the basis for a new agenda of EBP research:

Refocus research on influences on and processes of policy rather than how to increase the amount of evidence used. Researchers in political and policy studies, anthropology and history of policy, and science and technology studies have provided a wealth of insights and rich empirical data on the functioning (or otherwise) of the policy-making process [ 13 , 53 , 56 , 68 – 70 , 88 , 105 – 108 ]. Understanding the daily lives and activities of policy actors can bring fresh insights into how ‘evidence’ is conceptualised, the potential roles it may play, and how it fits with the other drivers and triggers which affect policy [ 95 , 109 ]. Understanding the roles of exceptional individuals, such as policy entrepreneurs, and networks in the policy process is also recognised as a key research area [ 3 , 110 , 111 ].

Dialogue between these fields has not been very extensive or productive so far, largely due to scepticism on the part of policy scientists (e.g., [ 108 ]) and a lack of engagement by EBP researchers with this body of theory and empirical data, with recent exceptions [ 112 , 113 ]. To take only one example, policy researchers observe that policies rarely have a consistent and well-defined goal or aim; rather, “solutions become joined to problems” in a provisional and largely haphazard way, with the same policy taking on different goals at different times or in different contexts [ 50 , 114 ]. If this is the case, the question of how researchers can evaluate whether or not a policy has attained its goal, and use the results of this evaluation to inform future policy development, is largely moot.

Determine what information and evidence is normally used as part of policy processes. As we point out above, the needs and practices of policymakers are rarely the subject of rigorous study, and are certainly more complex and nuanced than can be captured in surveys. Using ‘research’ and ‘policy’ as one-size-fits-all concepts underestimates the variety of activities and outputs involved in each type of process. A legislative manoeuvre is very different from a local tailoring of licensing hours; applied health research aiming to develop interventions for a specific condition is worlds away from contemplative studies of models of theories of change. Elucidation of the relationships at both ends of the spectra ought to be a research priority, in place of the dichotomising of activities as ‘research’ or ‘policy’. Furthermore, attention to context is vital; the pressures faced by researchers and policymakers in low- and middle-income countries may be very different from those in Western settings. Finally, as argued above, although similar barriers and facilitators are often identified across policy settings [ 35 ], there is variation in practices and processes across policy areas. Analysis of these variations would be a whole research agenda in itself. A subtler interpretation of context, ‘policy’, and ‘research’ is necessary to understand processes, influences, and impacts, and indeed to develop any meaningful abstractions and generalisations – should these be possible, which is by no means certain. A compromise between the complexity of policy making and the development of useful frameworks needs to be found.

These questions are likely to require a broader range of methodologies than is usually applied. Experimental, ethnographic, and conceptual studies all need to be brought to bear to understand the impact of evidence on policy, and on policy processes more generally. Novel theoretical approaches could be phenomenological, psychosocial, or interpretive [ 115 – 117 ].

Develop conceptual clarity around, and metrics to evaluate, the ‘impact’ of research on policy and populations. Without clear methods to understand how policy works and how it changes in response to information, it will be impossible for researchers to know whether they have had an effect on policy. Research usage could take many forms, from agenda-setting, to provision of policy options, to challenging debates or refuting arguments. Impact could therefore be a change in policy, consistency in policy, or changes in population-level outcomes. Measures proposed thus far tend to focus on citations or mentions of work by policymakers. We feel this addresses only a narrow aspect of potential impacts, namely awareness of research, which may not translate into action. Furthermore, this metric-led approach tends to ignore the indirect means by which research and evidence of all kinds fit into policy – whether by sustaining the status quo or by leading to decisions for change. More attention to the variety of impacts and effects arising from research may help to develop this debate.

Finally, our analysis of the literature suggests that new methods and organisations aiming to bring the processes of research and policy closer together are likely to further our understanding of the relationship between these two types of activity. Co-creation and co-production of knowledge are lauded as a more democratic, and potentially more useful, type of learning activity than many other knowledge exchange events [ 118 ]. If universities were to provide assistance to local policymakers in the analysis of existing data, a relationship of mutual benefit could start to develop – an end in itself, according to reviews of barriers and facilitators of evidence use [ 30 , 34 ]. Moreover, such organisations would provide natural laboratories for studying the role of institutional and organisational factors in the practices of policy formation and implementation, which our review identified as likely to affect evidence use.

Perhaps due to the political, financial, and ethical pressures on health policymakers to make good decisions, health policy leads the way in forming collaborative organisations to conduct research. Funders in the UK and the Netherlands have developed specific types of collaborative engagement organisations (e.g., Collaborations for Leadership in Applied Health Research and Care, CLAHRCs, or the health funder ZonMw), which aim to bring practitioners and researchers together for mutual benefit. There are examples of institutions which specifically allow researchers and policymakers to learn about each other’s priorities and ways of working [ 56 , 119 ]. In general, however, the literature discussed above suggests that there are insufficient opportunities and incentives for researchers to form links with policymakers directly.

Conclusions

The existing literature on EBP has certainly contributed to the desirable outcome of better policy decisions and to acceptance that evidence ought to play a role in those decisions. Now, we believe it is time for researchers to reflect on their assumptions, develop new perspectives, and use other methods to tackle the problems of EBP. There is a common lack of clarity about what researchers understand by ‘policy’, which can encompass decision making, project implementation and evaluation, and service reconfiguration. There is, in general, little evidence about management and organisation, despite these being potentially major factors affecting the policy process [ 120 ]. The assumptions governing the design and outcomes of research into policy will probably differ significantly across institutional areas. It is often not clear what constitutes ‘a decision’, who is involved in it, or whether research evidence is relevant or timely. This muddies the debate and prevents any engagement with discussions about what constitutes good and bad policy, or indeed how evidence ought to be used.

Rather than attempting to develop a one-size-fits-all (pipeline) model, research in this area should return to observational methods, with in-depth descriptions of practices and their inherited processes, and provide an empirical basis for theoretical development which can inform future activities. Instead of repeating studies of perceived barriers to and facilitators of the use of research evidence, appropriate methods must be used to answer questions about when, why, and how different actors find what type of knowledge sound, timely, and relevant at different stages of the policy cycle.

The position of researchers who wish to influence policy is untenable unless they engage with the questions outlined above. We would also argue that researchers who advocate for change (for example, via policy and practice recommendations) without evaluating the (likely) impact of these changes may have a limited effect at best. At worst, they create distance between policy and research by failing to demonstrate an understanding of the context within which they would like their newly generated knowledge to be used. Without understanding the complex processes of policy and knowledge mobilisation, researchers who make policy and practice recommendations may simply be ignored. Ultimately, the role of researchers is not to judge the ‘quality’ of policy making on the basis of how much of their research is used. This stance is both unhelpful and divisive, blinding researchers to the important questions raised above regarding the types of information used, by whom, for what purpose, and under which circumstances. Rather, our role as scientists ought to be to investigate the processes surrounding the use of evidence and policy activities more widely, and to disseminate findings in order to help others make informed decisions of all kinds.

Abbreviations

EBM: Evidence based medicine

EBP: Evidence based policy

Cooksey D: A Review of UK Health Research Funding. 2006, London: Stationery Office

Fox DM: Evidence of evidence-based health policy: the politics of systematic reviews in coverage decisions. Health Affairs. 2005, 24: 114-122.

Mitton C, Adair C, McKenzie E, Patten S: Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007, 85: 729-768.

Cochrane A: Effectiveness and Efficiency: Random Reflections on Health Services. 1972, London: Nuffield Provincial Hospitals Trust

Champagne F, Lemieux-Charles L: Using Knowledge and Evidence in Health Care: Multidisciplinary Perspectives. 2008, Toronto: University of Toronto Press

Guyatt G, Cairns J, Churchill D: Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA. 1992, 268: 2420-2425.

Barton S: Which clinical studies provide the best evidence?. BMJ. 2000, 321 (7256): 255-256.

Chalmers I: If evidence-informed policy works in practice, does it matter if it doesn’t work in theory?. Evidence Policy. 2005, 1: 227-242.

Glasziou PP, Irwig LM: An evidence based approach to individualising treatment. BMJ. 1995, 311: 1356-1359.

Guyatt GH: Users’ guides to the medical literature. JAMA. 1995, 274: 1800-

Kogan M, Henkel M, Britain G: Government and Research: The Rothschild Experiment in a Government Department. 1983, Portsmouth, NH: Heinemann Educational Books

Hanney SR, González-Block MA: Evidence-informed health policy: are we beginning to get there at last?. Health Res Policy Syst. 2009, 7: 30-

Duffy MP: The Rothschild experience: health science policy and society in Britain. Sci Tech Hum Val. 1986, 11: 68-78.

Government Office for Science: Engaging with Government: Guide for Academics, BIS/11/1360. 2011, [ https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/283118/11-1360-guide-engaging-with-government-for-academics.pdf ]

Government Office for Science: Engaging with Academics: How to Further Strengthen Open Policy Making – A Guide For Policy Makers, BIS/13/581. 2013, [ https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/283129/13-581-engaging-with-academics-open-policy-making.pdf ]

Martin BR: The Research Excellence Framework and the ‘impact agenda’: are we creating a Frankenstein monster?. Res Eval. 2011, 20: 247-254.

Pettigrew AM: Scholarship with impact. Br J Manag. 2011, 22: 347-354.

HEFCE: Joint Statement on Impact by HEFCE, RCUK and UUK. 2011, [ http://www.hefce.ac.uk/news/newsarchive/2011/news62296.html ]

Armstrong R, Waters E, Roberts H, Oliver S, Popay J: The role and theoretical evolution of knowledge translation and exchange in public health. J Public Health. 2006, 28: 384-389.

Dobbins M, Robeson P, Ciliska D, Hanna S, Cameron R, O'Mara L, DeCorby K, Mercer S: A description of a knowledge broker role implemented as part of a randomized controlled trial evaluating three knowledge translation strategies. Implement Sci. 2009, 4: 23-

Ward V, House A, Hamer S: Knowledge brokering: exploring the process of transferring knowledge into action. BMC Health Serv Res. 2009, 9: 12-

Cabinet Office, HM Treasury, Open Public Services, Rt Hon Danny Alexander MP, Rt Hon Oliver Letwin MP: What Works: Evidence Centres for Social Policy. 2013, London: HM Government

Walshe K, Rundall TG: Evidence-based management: from theory to practice in health care. Milbank Q. 2001, 79: 429-457.

Marston G, Watts R: Tampering with the evidence: a critical appraisal of evidence-based policy-making. The Drawing Board: Aust Rev Pub Affairs. 2003, 3: 143-163.

Black N, Donald A: Evidence based policy: proceed with care commentary: research must be taken seriously. BMJ. 2001, 323: 275-279.

Lavis J, Robertson D, Woodside J, McLeod C, Abelson J: How can research organizations more effectively transfer research knowledge to decision makers?. Milbank Q. 2003, 81: 221-248.

Muir Gray J: Evidence-based policy making: is about taking decisions based on evidence and the needs and values of the population. BMJ. 2004, 329: 988-989.

Nutbeam D: Getting evidence into policy and practice to address health inequalities. Health Promot Int. 2004, 19: 137-140.

Smith G: How policy informs the evidence. BMJ. 2001, 322: 184-185.

Oliver K, Innvaer S, Lorenc T, Woodman J, Thomas J: A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014, 14: 2-

Orton L, Lloyd-Williams F, Taylor-Robinson D, O'Flaherty M, Capewell S: The use of research evidence in public health decision making processes: systematic review. PLoS ONE. 2011, 6: e21704-

Boaz A, Baeza J, Fraser A, European Implementation Score Collaborative Group (EIS): Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes. 2011, 4: 212-

Pawson R: Evidence-based Policy: A Realist Perspective. 2006, Thousand Oaks, CA: Sage Publications Ltd

Innvaer S, Vist G, Trommald M, Oxman A: Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002, 7: 239-244.

Lorenc T, Tyner E, Petticrew M, Duffy S, Martineau F, Philips G, Lock K: Cultures of evidence across policy sectors: systematic review of qualitative evidence. Eur J Public Health. 2014, doi:10.1093/eurpub/cku038

Weiss CH: The many meanings of research utilization. Publ Admin Rev. 1979, 39: 426-431.

Weiss CH, Bucuvalas MJ: Social Science Research and Decision-Making. 1980, New York: Columbia University Press

Green LW, Ottoson JM, Garcia C, Hiatt RA: Diffusion theory and knowledge dissemination, utilization, and integration in public health. Annu Rev Public Health. 2009, 30: 151-174.

Hanney S, Gonzalez-Block M, Buxton M, Kogan M: The utilisation of health research in policy-making: concepts, examples and methods of assessment. Health Res Pol Syst. 2003, 1: 2-

Sabatier PA, Jenkins-Smith H: Policy Change and Learning: An Advocacy Coalition Approach. 1993, San Francisco: Westview Press

Dror Y: Public Policy Making Re-Examined. 1983, New Jersey: Transaction Publishers

Aaserud M, Lewin S, Innvaer S, Paulsen E, Dahlgren A, Trommald M, Duley L, Zwarenstein M, Oxman AD: Translating research into policy and practice in developing countries: a case study of magnesium sulphate for pre-eclampsia. BMC Health Serv Res. 2005, 5: 68-

Hivon ML: Use of health technology assessment in decision making: coresponsibility of users and producers?. Int J Tech Assess Health Care. 2005, 21: 268-275.

Jack SM: Knowledge transfer and exchange processes for environmental health issues in Canadian Aboriginal communities. Int J Environ Res Publ Health. 2010, 7: 651-674.

Priest N: Engaging policy makers in action on socially determined health inequities: developing evidence-informed cameos. Evid Policy. 2009, 5: 53-70.

Vingilis E, Hartford K, Schrecker T, Mitchell B, Lent B, Bishop J: Integrating knowledge generation with knowledge diffusion and utilization: a case study analysis of the Consortium for Applied Research and Evaluation in Mental Health. Canadian J Pub Health. 2003, 94: 468-471.

Sebba J: Enhancing impact on policy-making through increasing user engagement in research. Educational Research and Policy-Making: Exploring the Border Country between Research and Policy. 2007, London & NY: Routledge, 127-143.

Sabatier PA: Theories of the Policy Process. 2007, Boulder, CO: Westview Press

Shore C, Wright S, Per D: Policy Worlds: Anthropology and Analysis of Contemporary Power. 2011, Oxford: Berghahn

Kingdon JW: Agendas, Alternatives, and Public Policies. 2010, Boston: Little, Brown, 3

Weiss CH: Nothing as practical as good theory: exploring theory-based evaluation for comprehensive community initiatives for children and families. New Approaches to Evaluating Community Initiatives: Concepts, Methods, and Contexts. 1995, Queenstown, MD: The Aspen Institute, 65-92.

Simon HA: Models of Man; Social and Rational. 1957, New York: John Wiley and Sons, Inc.

Lindblom CE: The science of “muddling through”. Publ Admin Rev. 1959, 19: 79-

True JL, Jones BD, Baumgartner FR: Punctuated-equilibrium theory: explaining stability and change in American policymaking. Theories of the Policy Process. 1999, Colorado: Westview Press Boulder, 97-115.

Simon HA: Models of Bounded Rationality: Empirically Grounded Economic Reason. 1982, Cambridge, MA: MIT press, 3

Lomas J, Brown A: Research and advice giving: a functional view of evidence-informed policy advice in a Canadian ministry of health. Milbank Q. 2009, 87: 903-926.

Cairney P: How can policy theory have an impact on policy making? The role of theory-led academic-practitioner discussions. Teach Pub Adminis. 2014, doi:10.1177/0144739414532284

Buse K, Mays N, Walt G: Making Health Policy. 2012, New York: McGraw-Hill International

Stevens A: Telling policy stories: an ethnographic study of the use of evidence in policy-making in the UK. J Soc Policy. 2011, 40: 237-255.

Burris H, Parkhurst J, Du-Sarkodie Y, Mayaud P: Getting research into policy - Herpes simplex virus type-2 (HSV-2) treatment and HIV infection: international guidelines formulation and the case of Ghana. Health Res Pol Syst. 2011, 9 (Suppl 1): S5-

Smith K, Joyce K: Capturing complex realities: understanding efforts to achieve evidence-based policy and practice in public health. Evid Policy. 2012, 8: 57-78.

Weitkamp G, Van den Berg AE, Bregt AK, Van Lammeren RJA: Evaluation by policy makers of a procedure to describe perceived landscape openness. J Environ Manage. 2012, 95: 17-28.

Aoki-Suzuki C, Bengtsson M, Hotta Y: International comparison and suggestions for capacity development in industrializing countries. J Ind Ecol. 2012, 16: 467-480.

Cerveny LK, Blahna DJ, Stern MJ, Mortimer MJ, Predmore SA, Freeman J: The use of recreation planning tools in U.S. Forest Service NEPA assessments. Environ Manag. 2011, 48: 644-657.

Jenkins RAR: Bridging data and decision making: development of techniques for improving the HIV prevention community planning process. AIDS Behav. 2005, 9: S41-S53.

Williams I, McIver S, Moore D, Bryan S: The use of economic evaluations in NHS decision-making: a review and empirical investigation. Health Tech Assess. 2008, 12 (7): 1-175. iii, ix-x

Colon-Ramos U, Lindsay A, Monge-Rojas R, Greaney M, Campos H, Peterson K: Translating research into action: a case study on trans fatty acid research and nutrition policy in Costa Rica. Health Pol Plann. 2007, 22: 363-374.

Hunsmann M: Limits to evidence-based health policymaking: policy hurdles to structural HIV prevention in Tanzania. Soc Sci Med. 2012, 74: 1477-1485.

Kok MO, Schuit AJ: Contribution mapping: a method for mapping the contribution of research to enhance its impact. Health Res Pol Syst. 2012, 10: 1-16.

Wehrens R, Bekker M, Bal R: The construction of evidence-based local health policy through partnerships: research infrastructure, process, and context in the Rotterdam ‘Healthy in the City’ programme. J Publ Health Pol. 2010, 31: 447-460.

Wehrens R, Bekker M, Bal R: Coordination of research, policy and practice: a case study of collaboration in the field of public health. Sci Public Policy. 2011, 38: 755-766.

Berridge V: Health and Society in Britain since 1939. 1999, Cambridge: Cambridge University Press, 38

Elliott H, Popay J: How are policy makers using evidence? Models of research utilisation and local NHS policy making. J Epidemiol Community Health. 2000, 54: 461-468.

McDonough JE: Experiencing Politics: A Legislator’s Stories of Government and Health Care. 2000, Oakland, CA: University of California Press, 2

Haynes A, Derrick G, Redman S, Hall W, Gillespie J: Identifying trustworthy experts: how do policymakers find and assess public health researchers worth consulting or collaborating with?. PLoS ONE. 2012, 7: e32665-

Innvaer S: The use of evidence in public governmental reports on health policy: an analysis of 17 Norwegian official reports (NOU). BMC Health Serv Res. 2009, 9: 177-

Buse K, Mays N, Walt G: Making Health Policy. 2005, Maidenhead: Open University Press

Oliver K, Kislov R: Policy, evidence and theory: contextualising a glossary of policymaking. 2012. Response to: Smith K, Katikireddi SV: A glossary of theories for understanding policymaking. J Epidemiol Community Health. 67 (2): 198-202. doi:10.1136/jech-2012-200990

Lasswell HD: Politics: Who Gets What, When, How. 1950, New York: P. Smith

Gold M: Pathways to the use of health services research in policy. Health Serv Res. 2009, 44: 1111-1136.

Taylor-Robinson D, Milton B, Lloyd-Williams F, O'Flaherty M, Capewell S: Planning ahead in public health? A qualitative study of the time horizons used in public health decision-making. BMC Public Health. 2008, 8: 415-

Exworthy M, Hunter D: The Challenge of joined-up government in tackling health inequalities. Int J Publ Admin. 2011, 34: 201-212.

Sanderson I: Evaluation, policy learning and evidence-based policy making. Public Administration. 2002, 80: 1-22.

Dopson S, Locock L, Gabbay J, Ferlie E, FitzGerald L: Evidence-based medicine and the implementation gap. Health (London). 2003, 7: 311-330.

Gorissen WHM, Schulpen TWJ, Kerkhoff AHM, van Heffen O: Bridging the gap between doctors and policymakers: The use of scientific knowledge in local school health care policy in the Netherlands. Eur J Public Health. 2005, 15: 133-139.

Armstrong R, Doyle J, Lamb C, Waters E: Multi-sectoral health promotion and public health: the role of evidence. J Public Health. 2006, 28: 168-172.

Choi BCK, Pang T, Lin V, Puska P, Sherman G, Goddard M, Ackland MJ, Sainsbury P, Stachenko S, Morrison H, Clottey C: Can scientists and policy makers work together?. J Epidemiol Community Health. 2005, 59: 632-637.

Jasanoff S: States of Knowledge: The Co-Production of Science and the Social Order. 2013, London: Routledge

Farrington: What works in preventing crime? Systematic reviews of experimental and quasi-experimental research. Ann Am Acad Pol Soc Sci. 2001, 578: 8-

Ritchie J, Spencer L: Qualitative data analysis for applied policy research. Analyzing Qualitative Data. Edited by: Bryman A, Burgess R. 1999, London: Routledge

Petticrew M, Whitehead M, Macintyre SJ, Graham H, Egan M: Evidence for public health policy on inequalities: 1: The reality according to policymakers. J Epidemiol Community Health. 2004, 58: 811-816.

Ettelt S, Mays N: Health services research in Europe and its use for informing policy. J Health Serv Res Policy. 2011, 16 (Suppl 2): 48-60.

Coleman PN: Influence of evidence-based guidance on health policy and clinical practice in England. Qual Health Care. 2001, 10: 229-237.

Ritter A: How do drug policy makers access research evidence?. Int J Drug Pol. 2009, 20: 70-75.

Oliver K, de Vocht F: Defining ‘evidence’: a survey of public health policy makers’ needs and preferences. Eur J Public Health. 2013, In press

Bullock H, Mountford J, Stanley R, Britain G: Better policy-making. 2001, London: Cabinet Office

Boaz A, Grayson L, Levitt R, Solesbury W: Does evidence-based policy work? Learning from the UK experience. Evid Policy. 2008, 4: 233-253.

Cameron A, Lart R, Salisbury C, Purdy S, Thorp H, Stewart K, Peckham S, Calnan M, Purdy S, Thorp H: Policy makers’ perceptions on the use of evidence from evaluations. Evid Policy. 2011, 7: 429-448.

Macintyre S, Chalmers I, Horton R, Smith R: Using evidence to inform health policy: case study. BMJ. 2001, 322: 222-225.

Banks G: Evidence-Based Policy Making: What is it? How do we get it?. 2009, Canberra: Australian Government Productivity Commission

Davies TO, Nutley S, Walter I: Approaches to Assessing Research Impact: Report of the ESRC Symposium on Assessing the Non-Academic Impact of Research. 2005, [ http://www.esrc.ac.uk/_images/non-academic_impact_symposium_report_tcm8-3813.pdf ]

Penfield T, Baker MJ, Scoble R, Wykes MC: Assessment, evaluations, and definitions of research impact: A review. Res Eval. 2014, 23: 21-32.

Van Kammen J, de Savigny D, Sewankambo N: Using knowledge brokering to promote evidence-based policy-making: the need for support structures. Bull World Health Organ. 2006, 84: 608-612.

Coe R, Fitz-Gibbon C, Tymms P: Promoting Evidence-Based Education: The Role of Practitioners. 2000, Cardiff, Wales: Roundtable presentation to the British Educational Research Association’s Annual Conference

Berridge V: Passive smoking and its pre-history in Britain: policy speaks to science?. Soc Sci Med. 1999, 49: 1183-1195.

Bijker WE, Bal R, Hendriks R: The Paradox of Scientific Authority: The Role of Scientific Advice in Democracies. 2009, Cambridge, MA: MIT Press

Cairney P: A ‘multiple lenses’ approach to policy change: the case of tobacco policy in the UK. British Politics. 2007, 2: 45-68.

Jasanoff S: The Fifth Branch: Science Advisers as Policymakers. 1994, Cambridge, MA: Harvard University Press

Lipsky M: Street-Level Bureaucracy. 2010, New York: Russell Sage Foundation

Checkland K, Snow S, McDermott I, Harrison S, Coleman A: Animateurs and animation: what makes a good commissioning manager?. J Health Serv Res Policy. 2012, 17: 11-17.

Oliver K, de Vocht F, Money A, Everett MG: Who runs public health? A mixed-methods study combining network and qualitative analyses. J Public Health. 2013, 35: 453-459.

Head BW: Three lenses of evidence-based policy. Aust J Publ Admin. 2008, 67: 1-11.

Head BW: Reconsidering evidence-based policy: key issues and challenges. Pol Soc. 2010, 29: 77-94.

Cohen M, March J, Olsen J: A garbage can model of organisational choice. Admin Sci Q. 1972, 17: 1-25.

Fotaki M: Why do public policies fail so often? Exploring health policy-making as an imaginary and symbolic construction. Organization. 2010, 17: 703-720.

Yanow D: A Policy ethnographer’s reading of policy anthropology. Policy Worlds: Anthropology and the Analysis of Contemporary Power. 2011, New York: Berghahn Books, 300-313.

Yin RK: Case Study Research: Design and Methods. 2014, Thousand Oaks, CA: Sage Publications

Bourke A: Universities, civil society and the global agenda of community-engaged research. Globalisation, Soc Educ. 2013, 11: 498-519.

Dobrow MJ, Goel V, Lemieux-Charles L, Black NA: The impact of context on evidence utilization: a framework for expert groups developing health policy recommendations. Soc Sci Med. 2006, 63: 1811-1824.

Wendt C: Mapping European healthcare systems: a comparative analysis of financing, service provision and access to healthcare. J Eur Soc Pol. 2009, 19: 432-445.

Acknowledgements

We thank colleagues who have read and commented on the draft version, and the editor and reviewers for their extremely helpful comments.

Author information

Authors and affiliations

School of Social Sciences, University of Manchester, Bridgeford Street, Manchester, M13 9PL, UK

Kathryn Oliver

Department of Science, Technology, Engineering and Public Policy (STEaPP), University College London, 66-72 Gower Street, London, WC1E 6BT, UK

Kathryn Oliver & Theo Lorenc

Faculty of Social Sciences, Oslo University College, P.O Box 1084, Blindern, 0317, OSLO, Norway

Simon Innvær

Corresponding author

Correspondence to Kathryn Oliver.

Additional information

Competing interests

The authors declare no competing interests. This study was not supported by any funding agency.

Authors’ contributions

All authors contributed to the conception, design, and analysis of the data which underpins this critical review. All authors contributed to the development of the argument and the final manuscript. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/4.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Oliver, K., Lorenc, T. & Innvær, S. New directions in evidence-based policy research: a critical analysis of the literature. Health Res Policy Sys 12, 34 (2014). https://doi.org/10.1186/1478-4505-12-34

Received : 17 March 2014

Accepted : 30 June 2014

Published : 14 July 2014

DOI : https://doi.org/10.1186/1478-4505-12-34

Keywords

  • Critical analysis
  • Evidence-based policy
  • Knowledge utilization
  • Science and technology studies

Health Research Policy and Systems

ISSN: 1478-4505

Empirical Research: Quantitative & Qualitative

Introduction: What is Empirical Research?


Empirical research is based on phenomena that can be observed and measured; it derives knowledge from actual experience rather than from theory or belief.

Key characteristics of empirical research include:

  • Specific research questions to be answered;
  • Definitions of the population, behavior, or phenomena being studied;
  • Description of the methodology or research design used to study this population or phenomena, including selection criteria, controls, and testing instruments (such as surveys);
  • Two basic research processes or methods in empirical research: quantitative methods and qualitative methods (see the rest of the guide for more about these methods).

(based on the original from the Connelly Library of La Salle University)


Quantitative Research

A quantitative research project is characterized by having a population about which the researcher wants to draw conclusions, even though it is not possible to collect data on the entire population.

  • For an observational study, it is necessary to select a proper statistical random sample and to use methods of statistical inference to draw conclusions about the population.
  • For an experimental study, it is necessary to randomly assign subjects to experimental and control groups in order to use methods of statistical inference.

Statistical methods are used in all three stages of a quantitative research project.

For observational studies, the data are collected using statistical sampling theory. Then, the sample data are analyzed using descriptive statistical analysis. Finally, generalizations are made from the sample data to the entire population using statistical inference.

For experimental studies, the subjects are allocated to experimental and control groups using randomization methods. Then, the experimental data are analyzed using descriptive statistical analysis. Finally, just as for observational data, generalizations are made to a larger population.

Iversen, G. (2004). Quantitative research . In M. Lewis-Beck, A. Bryman, & T. Liao (Eds.), Encyclopedia of social science research methods . (pp. 897-898). Thousand Oaks, CA: SAGE Publications, Inc.
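The stages described above can be sketched in a few lines of Python. This is a minimal illustration using only the standard library; the population, sample size, and the normal-approximation confidence interval are invented for the example, not prescribed by the guide.

```python
import math
import random
import statistics

# Hypothetical population of 10,000 values; in a real study only the
# sample drawn below would be observable.
random.seed(42)
population = [random.gauss(50, 10) for _ in range(10_000)]

# Stage 1 (observational study): statistical sampling -- a simple random sample.
sample = random.sample(population, k=200)

# Stage 2: descriptive statistical analysis of the sample data.
sample_mean = statistics.mean(sample)
sample_sd = statistics.stdev(sample)

# Stage 3: statistical inference -- generalize to the population via a
# 95% confidence interval for the population mean (normal approximation).
margin = 1.96 * sample_sd / math.sqrt(len(sample))
low, high = sample_mean - margin, sample_mean + margin
print(f"sample mean = {sample_mean:.1f}, 95% CI = ({low:.1f}, {high:.1f})")

# For an experimental study, the analogous first stage is random
# assignment of subjects to experimental and control groups.
subjects = list(range(100))
random.shuffle(subjects)
experimental, control = subjects[:50], subjects[50:]
```

The same three-part structure (collect, describe, infer) applies whether the first stage is random sampling or random assignment; only the source of randomness differs.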

Qualitative Research

What makes a work deserving of the label qualitative research is the demonstrable effort to produce richly and relevantly detailed descriptions and particularized interpretations of people and the social, linguistic, material, and other practices and events that shape and are shaped by them.

Qualitative research typically includes, but is not limited to, discerning the perspectives of these people, or what is often referred to as the actor’s point of view. Although both philosophically and methodologically a highly diverse entity, qualitative research is marked by certain defining imperatives that include its case (as opposed to its variable) orientation, sensitivity to cultural and historical context, and reflexivity. 

In its many guises, qualitative research is a form of empirical inquiry that typically entails some form of purposive sampling for information-rich cases; in-depth, open-ended interviews, lengthy participant/field observations, and/or document or artifact study; and techniques for analysis and interpretation of data that move beyond the data generated and their surface appearances.

Sandelowski, M. (2004).  Qualitative research . In M. Lewis-Beck, A. Bryman, & T. Liao (Eds.),  Encyclopedia of social science research methods . (pp. 893-894). Thousand Oaks, CA: SAGE Publications, Inc.


Organizing Your Social Sciences Research Assignments


Definition and Introduction

Journal article analysis assignments require you to summarize and critically assess the quality of an empirical research study published in a scholarly [a.k.a., academic, peer-reviewed] journal. The article may be assigned by the professor, chosen from course readings listed in the syllabus, or located on your own, usually with the requirement that you search a reputable library database, such as JSTOR or ProQuest. The article chosen is expected to relate to the overall discipline of the course, specific course content, or key concepts discussed in class. In some cases, the purpose of the assignment is to analyze an article that is part of the literature review for a future research project.

Analysis of an article can be assigned to students individually or as part of a small group project. The final product is usually a short paper [typically 1-6 double-spaced pages] that addresses key questions the professor uses to guide your analysis or that assesses specific parts of a scholarly research study [e.g., the research problem, methodology, discussion, conclusions, or findings]. The analysis paper may be shared on a digital course management platform and/or presented to the class to promote a wider discussion about the topic of the study. Although it can be assigned at any level of undergraduate and graduate coursework in the social and behavioral sciences, professors frequently include this assignment in upper-division courses to help students learn how to effectively identify, read, and analyze empirical research within their major.

Franco, Josue. “Introducing the Analysis of Journal Articles.” Prepared for presentation at the American Political Science Association’s 2020 Teaching and Learning Conference, February 7-9, 2020, Albuquerque, New Mexico; Sego, Sandra A. and Anne E. Stuart. "Learning to Read Empirical Articles in General Psychology." Teaching of Psychology 43 (2016): 38-42; Kershaw, Trina C., Jordan P. Lippman, and Jennifer Fugate. "Practice Makes Proficient: Teaching Undergraduate Students to Understand Published Research." Instructional Science 46 (2018): 921-946; Woodward-Kron, Robyn. "Critical Analysis and the Journal Article Review Assignment." Prospect 18 (August 2003): 20-36; MacMillan, Margy and Allison MacKenzie. "Strategies for Integrating Information Literacy and Academic Literacy: Helping Undergraduate Students make the most of Scholarly Articles." Library Management 33 (2012): 525-535.

Benefits of Journal Article Analysis Assignments

Analyzing and synthesizing a scholarly journal article is intended to help students obtain the reading and critical thinking skills needed to develop and write their own research papers. This assignment also supports workplace skills where you could be asked to summarize a report or other type of document and present it, for example, during a staff meeting or for a presentation.

There are two broadly defined ways that analyzing a scholarly journal article supports student learning:

Improve Reading Skills

Conducting research requires an ability to review, evaluate, and synthesize prior research studies. Reading prior research requires an understanding of the academic writing style, the type of epistemological beliefs or practices underpinning the research design, and the specific vocabulary and technical terminology [i.e., jargon] used within a discipline. Reading scholarly articles is important because academic writing is unfamiliar to most students: they have had limited exposure to peer-reviewed journal articles before entering college, or have yet to encounter the specific academic writing style of their disciplinary major. Learning how to read scholarly articles also requires careful and deliberate concentration on how authors use specific language and phrasing to convey their research, the problem it addresses, its relationship to prior research, its significance, its limitations, and how authors connect methods of data gathering to the results so as to develop recommended solutions derived from the overall research process.

Improve Comprehension Skills

In addition to knowing how to read scholarly journal articles, students must learn how to effectively interpret what the scholar(s) are trying to convey. Academic writing can be dense, multi-layered, and non-linear in how information is presented. In addition, scholarly articles contain footnotes or endnotes, references to sources, multiple appendices, and, in some cases, non-textual elements [e.g., graphs, charts] that can break up the reader’s experience with the narrative flow of the study. Analyzing articles helps students practice comprehending these elements of writing, critiquing the arguments being made, reflecting upon the significance of the research, and how it relates to building new knowledge and understanding or applying new approaches to practice. Comprehending scholarly writing also involves thinking critically about where you fit within the overall dialogue among scholars concerning the research problem, finding possible gaps in the research that require further analysis, or identifying where the author(s) has failed to examine fully any specific elements of the study.

In addition, journal article analysis assignments are used by professors to strengthen discipline-specific information literacy skills, either alone or in relation to other tasks, such as giving a class presentation or participating in a group project. These benefits can include the ability to:

  • Effectively paraphrase text, which leads to a more thorough understanding of the overall study;
  • Identify and describe strengths and weaknesses of the study and their implications;
  • Relate the article to other course readings and in relation to particular research concepts or ideas discussed during class;
  • Think critically about the research and summarize complex ideas contained within;
  • Plan, organize, and write an effective inquiry-based paper that investigates a research study, evaluates evidence, expounds on the author’s main ideas, and presents an argument concerning the significance and impact of the research in a clear and concise manner;
  • Model the type of source summary and critique you should do for any college-level research paper; and,
  • Increase interest and engagement with the research problem of the study as well as with the discipline.

Kershaw, Trina C., Jennifer Fugate, and Aminda J. O'Hare. "Teaching Undergraduates to Understand Published Research through Structured Practice in Identifying Key Research Concepts." Scholarship of Teaching and Learning in Psychology. Advance online publication, 2020; Franco, Josue. “Introducing the Analysis of Journal Articles.” Prepared for presentation at the American Political Science Association’s 2020 Teaching and Learning Conference, February 7-9, 2020, Albuquerque, New Mexico; Sego, Sandra A. and Anne E. Stuart. "Learning to Read Empirical Articles in General Psychology." Teaching of Psychology 43 (2016): 38-42; Woodward-Kron, Robyn. "Critical Analysis and the Journal Article Review Assignment." Prospect 18 (August 2003): 20-36; MacMillan, Margy and Allison MacKenzie. "Strategies for Integrating Information Literacy and Academic Literacy: Helping Undergraduate Students make the most of Scholarly Articles." Library Management 33 (2012): 525-535; Kershaw, Trina C., Jordan P. Lippman, and Jennifer Fugate. "Practice Makes Proficient: Teaching Undergraduate Students to Understand Published Research." Instructional Science 46 (2018): 921-946.

Structure and Organization

A journal article analysis paper should be written in paragraph format and include an introduction to the study, your analysis of the research, and a conclusion that provides an overall assessment of the author's work, along with an explanation of what you believe is the study's overall impact and significance. Unless the purpose of the assignment is to examine foundational studies published many years ago, you should select articles that have been published relatively recently [e.g., within the past few years].

Since the research has been completed, reference to the study in your paper should be written in the past tense, with your analysis stated in the present tense [e.g., “The author portrayed access to health care services in rural areas as primarily a problem of having reliable transportation. However, I believe the author is overgeneralizing this issue because...”].

Introduction Section

The first section of a journal analysis paper should describe the topic of the article and highlight the author’s main points. This includes describing the research problem and theoretical framework, the rationale for the research, the methods of data gathering and analysis, the key findings, and the author’s final conclusions and recommendations. The narrative should focus on the act of describing rather than analyzing. Think of the introduction as a more comprehensive and detailed descriptive abstract of the study.

Possible questions to help guide your writing of the introduction section may include:

  • Who are the authors and what credentials do they hold that contribute to the validity of the study?
  • What was the research problem being investigated?
  • What type of research design was used to investigate the research problem?
  • What theoretical idea(s) and/or research questions were used to address the problem?
  • What was the source of the data or information used as evidence for analysis?
  • What methods were applied to investigate this evidence?
  • What were the author's overall conclusions and key findings?

Critical Analysis Section

The second section of a journal analysis paper should describe the strengths and weaknesses of the study and analyze its significance and impact. This section is where you shift the narrative from describing to analyzing. Think critically about the research in relation to other course readings, what has been discussed in class, or based on your own life experiences. If you are struggling to identify any weaknesses, explain why you believe this to be true. However, no study is perfect, regardless of how laudable its design may be. Given this, think about the repercussions of the choices made by the author(s) and how you might have conducted the study differently. Examples can include contemplating the choice of what sources were included or excluded in support of examining the research problem, the choice of the method used to analyze the data, or the choice to highlight specific recommended courses of action and/or implications for practice over others. Another strategy is to place yourself within the research study itself by thinking reflectively about what may be missing if you had been a participant in the study or if the recommended courses of action specifically targeted you or your community.

Possible questions to help guide your writing of the analysis section may include:

Introduction

  • Did the author clearly state the problem being investigated?
  • What was your reaction to and perspective on the research problem?
  • Was the study’s objective clearly stated? Did the author clearly explain why the study was necessary?
  • How well did the introduction frame the scope of the study?
  • Did the introduction conclude with a clear purpose statement?

Literature Review

  • Did the literature review lay a foundation for understanding the significance of the research problem?
  • Did the literature review provide enough background information to understand the problem in relation to relevant contexts [e.g., historical, economic, social, cultural, etc.]?
  • Did the literature review effectively place the study within the domain of prior research? Is anything missing?
  • Was the literature review organized by conceptual categories or did the author simply list and describe sources?

Methods

  • Did the author accurately explain how the data or information were collected?
  • Was the data used sufficient in supporting the study of the research problem?
  • Was there another methodological approach that could have been more illuminating?
  • Give your overall evaluation of the methods used in this article. How much trust would you put in the findings they generated?

Results and Discussion

  • Were the results clearly presented?
  • Did you feel that the results support the theoretical and interpretive claims of the author? Why?
  • What did the author(s) do especially well in describing or analyzing their results?
  • Was the author's evaluation of the findings clearly stated?
  • How well did the discussion of the results relate to what is already known about the research problem?
  • Was the discussion of the results free of repetition and redundancies?
  • What interpretations did the authors make that you think are incomplete, unwarranted, or overstated?
  • Did the conclusion effectively capture the main points of the study?
  • Did the conclusion address the research questions posed? Do they seem reasonable?
  • Were the author’s conclusions consistent with the evidence and arguments presented?
  • Has the author explained how the research added new knowledge or understanding?

Overall Writing Style

  • If the article included tables, figures, or other non-textual elements, did they contribute to understanding the study?
  • Were ideas developed and related in a logical sequence?
  • Were transitions between sections of the article smooth and easy to follow?

Overall Evaluation Section

The final section of a journal analysis paper should bring your thoughts together into a coherent assessment of the value of the research study. This section is where the narrative flow transitions from analyzing specific elements of the article to critically evaluating the overall study. Explain what you view as the significance of the research in relation to the overall course content and any relevant discussions that occurred during class. Think about how the article contributes to understanding the overall research problem, how it fits within existing literature on the topic, how it relates to the course, and what it means to you as a student researcher. In some cases, your professor will also ask you to describe your experiences writing the journal article analysis paper as part of a reflective learning exercise.

Possible questions to help guide your writing of the conclusion and evaluation section may include:

  • Was the structure of the article clear and well organized?
  • Was the topic of current or enduring interest to you?
  • What were the main weaknesses of the article? [this does not refer to limitations stated by the author, but what you believe are potential flaws]
  • Was any of the information in the article unclear or ambiguous?
  • What did you learn from the research? If nothing stood out to you, explain why.
  • Assess the originality of the research. Did you believe it contributed new understanding of the research problem?
  • Were you persuaded by the author’s arguments?
  • If the author made any final recommendations, will they be impactful if applied to practice?
  • In what ways could future research build off of this study?
  • What implications does the study have for daily life?
  • Was the use of non-textual elements, footnotes or endnotes, and/or appendices helpful in understanding the research?
  • What lingering questions do you have after analyzing the article?

NOTE: Avoid using quotes. One of the main purposes of writing an article analysis paper is to learn how to effectively paraphrase and use your own words to summarize a scholarly research study and to explain what the research means to you. Using and citing a direct quote from the article should only be done to help emphasize a key point or to underscore an important concept or idea.

Business: The Article Analysis. Fred Meijer Center for Writing, Grand Valley State University; Bachiochi, Peter et al. "Using Empirical Article Analysis to Assess Research Methods Courses." Teaching of Psychology 38 (2011): 5-9; Brosowsky, Nicholaus P. et al. “Teaching Undergraduate Students to Read Empirical Articles: An Evaluation and Revision of the QALMRI Method.” PsyArXiv Preprints, 2020; Holster, Kristin. “Article Evaluation Assignment”. TRAILS: Teaching Resources and Innovations Library for Sociology. Washington DC: American Sociological Association, 2016; Kershaw, Trina C., Jennifer Fugate, and Aminda J. O'Hare. "Teaching Undergraduates to Understand Published Research through Structured Practice in Identifying Key Research Concepts." Scholarship of Teaching and Learning in Psychology. Advance online publication, 2020; Franco, Josue. “Introducing the Analysis of Journal Articles.” Prepared for presentation at the American Political Science Association’s 2020 Teaching and Learning Conference, February 7-9, 2020, Albuquerque, New Mexico; Reviewer's Guide. SAGE Reviewer Gateway, SAGE Journals; Sego, Sandra A. and Anne E. Stuart. "Learning to Read Empirical Articles in General Psychology." Teaching of Psychology 43 (2016): 38-42; Kershaw, Trina C., Jordan P. Lippman, and Jennifer Fugate. "Practice Makes Proficient: Teaching Undergraduate Students to Understand Published Research." Instructional Science 46 (2018): 921-946; Gyuris, Emma, and Laura Castell. "To Tell Them or Show Them? How to Improve Science Students’ Skills of Critical Reading." International Journal of Innovation in Science and Mathematics Education 21 (2013): 70-80; Woodward-Kron, Robyn. "Critical Analysis and the Journal Article Review Assignment." Prospect 18 (August 2003): 20-36; MacMillan, Margy and Allison MacKenzie. "Strategies for Integrating Information Literacy and Academic Literacy: Helping Undergraduate Students Make the Most of Scholarly Articles." Library Management 33 (2012): 525-535.

Writing Tip

Not All Scholarly Journal Articles Can Be Critically Analyzed

There are a variety of articles published in scholarly journals that do not fit within the guidelines of an article analysis assignment. This is because the work cannot be empirically examined or it does not generate new knowledge in a way that can be critically analyzed.

If you are required to locate a research study on your own, avoid selecting these types of journal articles:

  • Theoretical essays which discuss concepts, assumptions, and propositions, but report no empirical research;
  • Statistical or methodological papers that may analyze data, but the bulk of the work is devoted to refining a new measurement, statistical technique, or modeling procedure;
  • Articles that review, analyze, critique, and synthesize prior research, but do not report any original research;
  • Brief essays devoted to research methods and findings;
  • Articles written by scholars in popular magazines or industry trade journals;
  • Pre-print articles that have been posted online, but may undergo further editing and revision by the journal's editorial staff before final publication; and
  • Academic commentary that discusses research trends or emerging concepts and ideas, but does not contain citations to sources.

Journal Analysis Assignment - Myers . Writing@CSU, Colorado State University; Franco, Josue. “Introducing the Analysis of Journal Articles.” Prepared for presentation at the American Political Science Association’s 2020 Teaching and Learning Conference, February 7-9, 2020, Albuquerque, New Mexico; Woodward-Kron, Robyn. "Critical Analysis and the Journal Article Review Assignment." Prospect 18 (August 2003): 20-36.

  • Last Updated: May 30, 2024 9:48 AM
  • URL: https://libguides.usc.edu/writingguide/assignments

What is Empirical Research? Definition, Methods, Examples

Appinio Research · 09.02.2024 · 36min read


Ever wondered how we gather the facts, unveil hidden truths, and make informed decisions in a world filled with questions? Empirical research holds the key.

In this guide, we'll delve deep into the art and science of empirical research, unraveling its methods, mysteries, and manifold applications. From defining the core principles to mastering data analysis and reporting findings, we're here to equip you with the knowledge and tools to navigate the empirical landscape.

What is Empirical Research?

Empirical research is the cornerstone of scientific inquiry, providing a systematic and structured approach to investigating the world around us. It is the process of gathering and analyzing empirical or observable data to test hypotheses, answer research questions, or gain insights into various phenomena. This form of research relies on evidence derived from direct observation or experimentation, allowing researchers to draw conclusions based on real-world data rather than purely theoretical or speculative reasoning.

Characteristics of Empirical Research

Empirical research is characterized by several key features:

  • Observation and Measurement : It involves the systematic observation or measurement of variables, events, or behaviors.
  • Data Collection : Researchers collect data through various methods, such as surveys, experiments, observations, or interviews.
  • Testable Hypotheses : Empirical research often starts with testable hypotheses that are evaluated using collected data.
  • Quantitative or Qualitative Data : Data can be quantitative (numerical) or qualitative (non-numerical), depending on the research design.
  • Statistical Analysis : Quantitative data often undergo statistical analysis to determine patterns, relationships, or significance.
  • Objectivity and Replicability : Empirical research strives for objectivity, minimizing researcher bias. It should be replicable, allowing other researchers to conduct the same study to verify results.
  • Conclusions and Generalizations : Empirical research generates findings based on data and aims to make generalizations about larger populations or phenomena.

Importance of Empirical Research

Empirical research plays a pivotal role in advancing knowledge across various disciplines. Its importance extends to academia, industry, and society as a whole. Here are several reasons why empirical research is essential:

  • Evidence-Based Knowledge : Empirical research provides a solid foundation of evidence-based knowledge. It enables us to test hypotheses, confirm or refute theories, and build a robust understanding of the world.
  • Scientific Progress : In the scientific community, empirical research fuels progress by expanding the boundaries of existing knowledge. It contributes to the development of theories and the formulation of new research questions.
  • Problem Solving : Empirical research is instrumental in addressing real-world problems and challenges. It offers insights and data-driven solutions to complex issues in fields like healthcare, economics, and environmental science.
  • Informed Decision-Making : In policymaking, business, and healthcare, empirical research informs decision-makers by providing data-driven insights. It guides strategies, investments, and policies for optimal outcomes.
  • Quality Assurance : Empirical research is essential for quality assurance and validation in various industries, including pharmaceuticals, manufacturing, and technology. It ensures that products and processes meet established standards.
  • Continuous Improvement : Businesses and organizations use empirical research to evaluate performance, customer satisfaction, and product effectiveness. This data-driven approach fosters continuous improvement and innovation.
  • Human Advancement : Empirical research in fields like medicine and psychology contributes to the betterment of human health and well-being. It leads to medical breakthroughs, improved therapies, and enhanced psychological interventions.
  • Critical Thinking and Problem Solving : Engaging in empirical research fosters critical thinking skills, problem-solving abilities, and a deep appreciation for evidence-based decision-making.

Empirical research empowers us to explore, understand, and improve the world around us. It forms the bedrock of scientific inquiry and drives progress in countless domains, shaping our understanding of both the natural and social sciences.

How to Conduct Empirical Research?

So, you've decided to dive into the world of empirical research. Let's begin by exploring the crucial steps involved in getting started with your research project.

1. Select a Research Topic

Selecting the right research topic is the cornerstone of a successful empirical study. It's essential to choose a topic that not only piques your interest but also aligns with your research goals and objectives. Here's how to go about it:

  • Identify Your Interests : Start by reflecting on your passions and interests. What topics fascinate you the most? Your enthusiasm will be your driving force throughout the research process.
  • Brainstorm Ideas : Engage in brainstorming sessions to generate potential research topics. Consider the questions you've always wanted to answer or the issues that intrigue you.
  • Relevance and Significance : Assess the relevance and significance of your chosen topic. Does it contribute to existing knowledge? Is it a pressing issue in your field of study or the broader community?
  • Feasibility : Evaluate the feasibility of your research topic. Do you have access to the necessary resources, data, and participants (if applicable)?

2. Formulate Research Questions

Once you've narrowed down your research topic, the next step is to formulate clear and precise research questions. These questions will guide your entire research process and shape your study's direction. To create effective research questions:

  • Specificity : Ensure that your research questions are specific and focused. Vague or overly broad questions can lead to inconclusive results.
  • Relevance : Your research questions should directly relate to your chosen topic. They should address gaps in knowledge or contribute to solving a particular problem.
  • Testability : Ensure that your questions are testable through empirical methods. You should be able to gather data and analyze it to answer these questions.
  • Avoid Bias : Craft your questions in a way that avoids leading or biased language. Maintain neutrality to uphold the integrity of your research.

3. Review Existing Literature

Before you embark on your empirical research journey, it's essential to immerse yourself in the existing body of literature related to your chosen topic. This step, often referred to as a literature review, serves several purposes:

  • Contextualization : Understand the historical context and current state of research in your field. What have previous studies found, and what questions remain unanswered?
  • Identifying Gaps : Identify gaps or areas where existing research falls short. These gaps will help you formulate meaningful research questions and hypotheses.
  • Theory Development : If your study is theoretical, consider how existing theories apply to your topic. If it's empirical, understand how previous studies have approached data collection and analysis.
  • Methodological Insights : Learn from the methodologies employed in previous research. What methods were successful, and what challenges did researchers face?

4. Define Variables

Variables are fundamental components of empirical research. They are the factors or characteristics that can change or be manipulated during your study. Properly defining and categorizing variables is crucial for the clarity and validity of your research. Here's what you need to know:

  • Independent Variables : These are the variables that you, as the researcher, manipulate or control. They are the "cause" in cause-and-effect relationships.
  • Dependent Variables : Dependent variables are the outcomes or responses that you measure or observe. They are the "effect" influenced by changes in independent variables.
  • Operational Definitions : To ensure consistency and clarity, provide operational definitions for your variables. Specify how you will measure or manipulate each variable.
  • Control Variables : In some studies, controlling for other variables that may influence your dependent variable is essential. These are known as control variables.

Understanding these foundational aspects of empirical research will set a solid foundation for the rest of your journey. Now that you've grasped the essentials of getting started, let's delve deeper into the intricacies of research design.

Empirical Research Design

Now that you've selected your research topic, formulated research questions, and defined your variables, it's time to delve into the heart of your empirical research journey – research design. This pivotal step determines how you will collect data and what methods you'll employ to answer your research questions. Let's explore the various facets of research design in detail.

Types of Empirical Research

Empirical research can take on several forms, each with its own unique approach and methodologies. Understanding the different types of empirical research will help you choose the most suitable design for your study. Here are some common types:

  • Experimental Research : In this type, researchers manipulate one or more independent variables to observe their impact on dependent variables. It's highly controlled and often conducted in a laboratory setting.
  • Observational Research : Observational research involves the systematic observation of subjects or phenomena without intervention. Researchers are passive observers, documenting behaviors, events, or patterns.
  • Survey Research : Surveys are used to collect data through structured questionnaires or interviews. This method is efficient for gathering information from a large number of participants.
  • Case Study Research : Case studies focus on in-depth exploration of one or a few cases. Researchers gather detailed information through various sources such as interviews, documents, and observations.
  • Qualitative Research : Qualitative research aims to understand behaviors, experiences, and opinions in depth. It often involves open-ended questions, interviews, and thematic analysis.
  • Quantitative Research : Quantitative research collects numerical data and relies on statistical analysis to draw conclusions. It involves structured questionnaires, experiments, and surveys.

Your choice of research type should align with your research questions and objectives. Experimental research, for example, is ideal for testing cause-and-effect relationships, while qualitative research is more suitable for exploring complex phenomena.

Experimental Design

Experimental research is a systematic approach to studying causal relationships. It's characterized by the manipulation of one or more independent variables while controlling for other factors. Here are some key aspects of experimental design:

  • Control and Experimental Groups : Participants are randomly assigned to either a control group or an experimental group. The independent variable is manipulated for the experimental group but not for the control group.
  • Randomization : Randomization is crucial to eliminate bias in group assignment. It ensures that each participant has an equal chance of being in either group.
  • Hypothesis Testing : Experimental research often involves hypothesis testing. Researchers formulate hypotheses about the expected effects of the independent variable and use statistical analysis to test these hypotheses.

Observational Design

Observational research entails careful and systematic observation of subjects or phenomena. It's advantageous when you want to understand natural behaviors or events. Key aspects of observational design include:

  • Participant Observation : Researchers immerse themselves in the environment they are studying. They become part of the group being observed, allowing for a deep understanding of behaviors.
  • Non-Participant Observation : In non-participant observation, researchers remain separate from the subjects. They observe and document behaviors without direct involvement.
  • Data Collection Methods : Observational research can involve various data collection methods, such as field notes, video recordings, photographs, or coding of observed behaviors.

Survey Design

Surveys are a popular choice for collecting data from a large number of participants. Effective survey design is essential to ensure the validity and reliability of your data. Consider the following:

  • Questionnaire Design : Create clear and concise questions that are easy for participants to understand. Avoid leading or biased questions.
  • Sampling Methods : Decide on the appropriate sampling method for your study, whether it's random, stratified, or convenience sampling.
  • Data Collection Tools : Choose the right tools for data collection, whether it's paper surveys, online questionnaires, or face-to-face interviews.

Case Study Design

Case studies are an in-depth exploration of one or a few cases to gain a deep understanding of a particular phenomenon. Key aspects of case study design include:

  • Single Case vs. Multiple Case Studies : Decide whether you'll focus on a single case or multiple cases. Single case studies are intensive and allow for detailed examination, while multiple case studies provide comparative insights.
  • Data Collection Methods : Gather data through interviews, observations, document analysis, or a combination of these methods.

Qualitative vs. Quantitative Research

In empirical research, you'll often encounter the distinction between qualitative and quantitative research. Here's a closer look at these two approaches:

  • Qualitative Research : Qualitative research seeks an in-depth understanding of human behavior, experiences, and perspectives. It involves open-ended questions, interviews, and the analysis of textual or narrative data. Qualitative research is exploratory and often used when the research question is complex and requires a nuanced understanding.
  • Quantitative Research : Quantitative research collects numerical data and employs statistical analysis to draw conclusions. It involves structured questionnaires, experiments, and surveys. Quantitative research is ideal for testing hypotheses and establishing cause-and-effect relationships.

Understanding the various research design options is crucial in determining the most appropriate approach for your study. Your choice should align with your research questions, objectives, and the nature of the phenomenon you're investigating.

Data Collection for Empirical Research

Now that you've established your research design, it's time to roll up your sleeves and collect the data that will fuel your empirical research. Effective data collection is essential for obtaining accurate and reliable results.

Sampling Methods

Sampling methods are critical in empirical research, as they determine the subset of individuals or elements from your target population that you will study. Here are some standard sampling methods:

  • Random Sampling : Random sampling ensures that every member of the population has an equal chance of being selected. It minimizes bias and is often used in quantitative research.
  • Stratified Sampling : Stratified sampling involves dividing the population into subgroups or strata based on specific characteristics (e.g., age, gender, location). Samples are then randomly selected from each stratum, ensuring representation of all subgroups.
  • Convenience Sampling : Convenience sampling involves selecting participants who are readily available or easily accessible. While it's convenient, it may introduce bias and limit the generalizability of results.
  • Snowball Sampling : Snowball sampling is instrumental when studying hard-to-reach or hidden populations. One participant leads you to another, creating a "snowball" effect. This method is common in qualitative research.
  • Purposive Sampling : In purposive sampling, researchers deliberately select participants who meet specific criteria relevant to their research questions. It's often used in qualitative studies to gather in-depth information.

The choice of sampling method depends on the nature of your research, available resources, and the degree of precision required. It's crucial to carefully consider your sampling strategy to ensure that your sample accurately represents your target population.
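As a concrete illustration, the simple random and stratified approaches above can be sketched in a few lines of Python. The population, the age-group strata, and the sample sizes here are all hypothetical, chosen only to show the mechanics:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 1,000 people, each tagged with an age-group stratum.
population = [{"id": i, "age_group": random.choice(["18-29", "30-49", "50+"])}
              for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, k=100)

# Stratified sampling: split the population into strata, then draw
# from each stratum in proportion to its size.
def stratified_sample(pop, key, total_size):
    strata = {}
    for person in pop:
        strata.setdefault(person[key], []).append(person)
    sample = []
    for members in strata.values():
        n = round(total_size * len(members) / len(pop))  # proportional allocation
        sample.extend(random.sample(members, k=min(n, len(members))))
    return sample

strat_sample = stratified_sample(population, "age_group", 100)
```

With proportional allocation the stratified sample mirrors the population's age-group mix exactly (up to rounding), which simple random sampling only approximates on average.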

Data Collection Instruments

Data collection instruments are the tools you use to gather information from your participants or sources. These instruments should be designed to capture the data you need accurately. Here are some popular data collection instruments:

  • Questionnaires : Questionnaires consist of structured questions with predefined response options. When designing questionnaires, consider the clarity of questions, the order of questions, and the response format (e.g., Likert scale , multiple-choice).
  • Interviews : Interviews involve direct communication between the researcher and participants. They can be structured (with predetermined questions) or unstructured (open-ended). Effective interviews require active listening and probing for deeper insights.
  • Observations : Observations entail systematically and objectively recording behaviors, events, or phenomena. Researchers must establish clear criteria for what to observe, how to record observations, and when to observe.
  • Surveys : Surveys are a common data collection instrument for quantitative research. They can be administered through various means, including online surveys, paper surveys, and telephone surveys.
  • Documents and Archives : In some cases, data may be collected from existing documents, records, or archives. Ensure that the sources are reliable, relevant, and properly documented.

To streamline your process and gather insights with precision and efficiency, consider leveraging innovative tools like Appinio . With Appinio's intuitive platform, you can harness the power of real-time consumer data to inform your research decisions effectively. Whether you're conducting surveys, interviews, or observations, Appinio empowers you to define your target audience, collect data from diverse demographics, and analyze results seamlessly.

By incorporating Appinio into your data collection toolkit, you can unlock a world of possibilities and elevate the impact of your empirical research. Ready to revolutionize your approach to data collection?


Data Collection Procedures

Data collection procedures outline the step-by-step process for gathering data. These procedures should be meticulously planned and executed to maintain the integrity of your research.

  • Training : If you have a research team, ensure that they are trained in data collection methods and protocols. Consistency in data collection is crucial.
  • Pilot Testing : Before launching your data collection, conduct a pilot test with a small group to identify any potential problems with your instruments or procedures. Make necessary adjustments based on feedback.
  • Data Recording : Establish a systematic method for recording data. This may include timestamps, codes, or identifiers for each data point.
  • Data Security : Safeguard the confidentiality and security of collected data. Ensure that only authorized individuals have access to the data.
  • Data Storage : Properly organize and store your data in a secure location, whether in physical or digital form. Back up data to prevent loss.

Ethical Considerations

Ethical considerations are paramount in empirical research, as they ensure the well-being and rights of participants are protected.

  • Informed Consent : Obtain informed consent from participants, providing clear information about the research purpose, procedures, risks, and their right to withdraw at any time.
  • Privacy and Confidentiality : Protect the privacy and confidentiality of participants. Ensure that data is anonymized and sensitive information is kept confidential.
  • Beneficence : Ensure that your research benefits participants and society while minimizing harm. Consider the potential risks and benefits of your study.
  • Honesty and Integrity : Conduct research with honesty and integrity. Report findings accurately and transparently, even if they are not what you expected.
  • Respect for Participants : Treat participants with respect, dignity, and sensitivity to cultural differences. Avoid any form of coercion or manipulation.
  • Institutional Review Board (IRB) : If required, seek approval from an IRB or ethics committee before conducting your research, particularly when working with human participants.

Adhering to ethical guidelines is not only essential for the ethical conduct of research but also crucial for the credibility and validity of your study. Ethical research practices build trust between researchers and participants and contribute to the advancement of knowledge with integrity.

With a solid understanding of data collection, including sampling methods, instruments, procedures, and ethical considerations, you are now well-equipped to gather the data needed to answer your research questions.

Empirical Research Data Analysis

Now comes the exciting phase of data analysis, where the raw data you've diligently collected starts to yield insights and answers to your research questions. We will explore the various aspects of data analysis, from preparing your data to drawing meaningful conclusions through statistics and visualization.

Data Preparation

Data preparation is the crucial first step in data analysis. It involves cleaning, organizing, and transforming your raw data into a format that is ready for analysis. Effective data preparation ensures the accuracy and reliability of your results.

  • Data Cleaning : Identify and rectify errors, missing values, and inconsistencies in your dataset. This may involve correcting typos, removing outliers, and imputing missing data.
  • Data Coding : Assign numerical values or codes to categorical variables to make them suitable for statistical analysis. For example, converting "Yes" and "No" to 1 and 0.
  • Data Transformation : Transform variables as needed to meet the assumptions of the statistical tests you plan to use. Common transformations include logarithmic or square root transformations.
  • Data Integration : If your data comes from multiple sources, integrate it into a unified dataset, ensuring that variables match and align.
  • Data Documentation : Maintain clear documentation of all data preparation steps, as well as the rationale behind each decision. This transparency is essential for replicability.

Effective data preparation lays the foundation for accurate and meaningful analysis. It allows you to trust the results that will follow in the subsequent stages.
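To make the cleaning, coding, and imputation steps concrete, here is a minimal sketch in plain Python. The survey fields (`age`, `smoker`, `income`) and the mean-imputation choice are hypothetical illustrations, not a prescription:

```python
# Hypothetical raw survey rows: empty strings mark missing values,
# and "Yes"/"No" answers arrive with inconsistent casing.
raw = [
    {"age": "34", "smoker": "Yes", "income": "52000"},
    {"age": "",   "smoker": "no",  "income": "61000"},
    {"age": "29", "smoker": "YES", "income": ""},
]

def clean(rows):
    cleaned = []
    for row in rows:
        r = {}
        # Data cleaning: convert numeric strings, treat "" as missing (None).
        r["age"] = int(row["age"]) if row["age"] else None
        r["income"] = float(row["income"]) if row["income"] else None
        # Data coding: map "Yes"/"No" (any casing) to 1/0.
        r["smoker"] = 1 if row["smoker"].strip().lower() == "yes" else 0
        cleaned.append(r)
    return cleaned

rows = clean(raw)

# Imputation: fill missing ages with the mean of the observed ages.
observed = [r["age"] for r in rows if r["age"] is not None]
mean_age = sum(observed) / len(observed)
for r in rows:
    if r["age"] is None:
        r["age"] = mean_age
```

Whatever rules you choose (mean imputation, dropping rows, different codes), document them; the transparency point above applies to every one of these decisions.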

Descriptive Statistics

Descriptive statistics help you summarize and make sense of your data by providing a clear overview of its key characteristics. These statistics are essential for understanding the central tendencies, variability, and distribution of your variables. Descriptive statistics include:

  • Measures of Central Tendency : These include the mean (average), median (middle value), and mode (most frequent value). They help you understand the typical or central value of your data.
  • Measures of Dispersion : Measures like the range, variance, and standard deviation provide insights into the spread or variability of your data points.
  • Frequency Distributions : Creating frequency distributions or histograms allows you to visualize the distribution of your data across different values or categories.

Descriptive statistics provide the initial insights needed to understand your data's basic characteristics, which can inform further analysis.
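Python's standard `statistics` module covers the measures listed above directly. The exam scores below are made up purely for illustration:

```python
import statistics

scores = [72, 85, 90, 68, 77, 85, 94, 60, 85, 79]  # hypothetical exam scores

# Measures of central tendency
mean = statistics.mean(scores)      # average
median = statistics.median(scores)  # middle value of the sorted data
mode = statistics.mode(scores)      # most frequent value

# Measures of dispersion
data_range = max(scores) - min(scores)
stdev = statistics.stdev(scores)    # sample standard deviation

print(mean, median, mode, data_range)  # 79.5 82 85 34
```

Even these few numbers already tell a story: the mean (79.5) sits below the mode (85), hinting that a few low scores are pulling the average down.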

Inferential Statistics

Inferential statistics take your analysis to the next level by allowing you to make inferences or predictions about a larger population based on your sample data. These methods help you test hypotheses and draw meaningful conclusions. Key concepts in inferential statistics include:

  • Hypothesis Testing : Hypothesis tests (e.g., t-tests, chi-squared tests) help you determine whether observed differences or associations in your data are statistically significant or occurred by chance.
  • Confidence Intervals : Confidence intervals provide a range within which population parameters (e.g., population mean) are likely to fall based on your sample data.
  • Regression Analysis : Regression models (linear, logistic, etc.) help you explore relationships between variables and make predictions.
  • Analysis of Variance (ANOVA) : ANOVA tests are used to compare means between multiple groups, allowing you to assess whether differences are statistically significant.

Inferential statistics are powerful tools for drawing conclusions from your data and assessing the generalizability of your findings to the broader population.
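As a small sketch of the confidence-interval idea, the snippet below computes a 95% interval for a sample mean using the normal approximation from the standard library (with a sample this small, a t-distribution would strictly be more appropriate). The measurements are hypothetical:

```python
import math
import statistics
from statistics import NormalDist

# Hypothetical sample: page-load times in seconds.
sample = [5.1, 4.9, 5.4, 5.0, 4.8, 5.2, 5.3, 4.7, 5.0, 5.1,
          4.9, 5.2, 5.1, 5.0, 4.8]

mean = statistics.mean(sample)
# Standard error of the mean: sample stdev divided by sqrt(n).
sem = statistics.stdev(sample) / math.sqrt(len(sample))

z = NormalDist().inv_cdf(0.975)  # two-sided 95% critical value, approx. 1.96
ci_low, ci_high = mean - z * sem, mean + z * sem
```

The interval `[ci_low, ci_high]` is the range within which the population mean is likely to fall, in the sense described above; a larger sample shrinks the interval.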

Qualitative Data Analysis

Qualitative data analysis is employed when working with non-numerical data, such as text, interviews, or open-ended survey responses. It focuses on understanding the underlying themes, patterns, and meanings within qualitative data. Qualitative analysis techniques include:

  • Thematic Analysis : Identifying and analyzing recurring themes or patterns within textual data.
  • Content Analysis : Categorizing and coding qualitative data to extract meaningful insights.
  • Grounded Theory : Developing theories or frameworks based on emergent themes from the data.
  • Narrative Analysis : Examining the structure and content of narratives to uncover meaning.

Qualitative data analysis provides a rich and nuanced understanding of complex phenomena and human experiences.
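Computer-assisted coding can support (though never replace) these interpretive techniques. The sketch below counts how often themes from a made-up code book appear in hypothetical open-ended responses, which corresponds roughly to the mechanical tagging step of a content analysis:

```python
from collections import Counter

# Hypothetical open-ended survey responses.
responses = [
    "The onboarding was confusing, but support was helpful.",
    "Great support team, though the pricing feels high.",
    "Pricing is too high, and onboarding took ages.",
]

# A simple code book mapping themes to indicator keywords. Real thematic
# analysis is interpretive; this automates only the keyword-matching step.
code_book = {
    "onboarding": ["onboarding"],
    "support": ["support"],
    "pricing": ["pricing", "price"],
}

theme_counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in code_book.items():
        if any(k in lowered for k in keywords):
            theme_counts[theme] += 1
```

In practice a researcher would read every response, refine the code book iteratively, and treat counts like these only as a starting point for interpretation.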

Data Visualization

Data visualization is the art of representing data graphically to make complex information more understandable and accessible. Effective data visualization can reveal patterns, trends, and outliers in your data. Common types of data visualization include:

  • Bar Charts and Histograms : Used to display the distribution of categorical data or discrete data .
  • Line Charts : Ideal for showing trends and changes in data over time.
  • Scatter Plots : Visualize relationships and correlations between two variables.
  • Pie Charts : Display the composition of a whole in terms of its parts.
  • Heatmaps : Depict patterns and relationships in multidimensional data through color-coding.
  • Box Plots : Provide a summary of the data distribution, including outliers.
  • Interactive Dashboards : Create dynamic visualizations that allow users to explore data interactively.

Data visualization not only enhances your understanding of the data but also serves as a powerful communication tool to convey your findings to others.
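In Python such charts are usually produced with a plotting library like matplotlib. As a dependency-free sketch, the snippet below prints a text "bar chart" of a hypothetical 1-to-5 rating distribution, encoding the same information a graphical bar chart would:

```python
from collections import Counter

ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 5, 4]  # hypothetical survey ratings

counts = Counter(ratings)
for rating in sorted(counts):
    # One '#' per response: a bar chart reduced to text.
    print(f"{rating} | {'#' * counts[rating]}")
```

Even in this crude form, the skew toward 4s and 5s is visible at a glance, which is exactly the kind of pattern a visualization should surface.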

As you embark on the data analysis phase of your empirical research, remember that the specific methods and techniques you choose will depend on your research questions, data type, and objectives. Effective data analysis transforms raw data into valuable insights, bringing you closer to the answers you seek.

How to Report Empirical Research Results?

At this stage, you get to share your empirical research findings with the world. Effective reporting and presentation of your results are crucial for communicating your research's impact and insights.

1. Write the Research Paper

Writing a research paper is the culmination of your empirical research journey. It's where you synthesize your findings, provide context, and contribute to the body of knowledge in your field.

  • Title and Abstract : Craft a clear and concise title that reflects your research's essence. The abstract should provide a brief summary of your research objectives, methods, findings, and implications.
  • Introduction : In the introduction, introduce your research topic, state your research questions or hypotheses, and explain the significance of your study. Provide context by discussing relevant literature.
  • Methods : Describe your research design, data collection methods, and sampling procedures. Be precise and transparent, allowing readers to understand how you conducted your study.
  • Results : Present your findings in a clear and organized manner. Use tables, graphs, and statistical analyses to support your results. Avoid interpreting your findings in this section; focus on the presentation of raw data.
  • Discussion : Interpret your findings and discuss their implications. Relate your results to your research questions and the existing literature. Address any limitations of your study and suggest avenues for future research.
  • Conclusion : Summarize the key points of your research and its significance. Restate your main findings and their implications.
  • References : Cite all sources used in your research following a specific citation style (e.g., APA, MLA, Chicago). Ensure accuracy and consistency in your citations.
  • Appendices : Include any supplementary material, such as questionnaires, data coding sheets, or additional analyses, in the appendices.

Writing a research paper is a skill that improves with practice. Ensure clarity, coherence, and conciseness in your writing to make your research accessible to a broader audience.

2. Create Visuals and Tables

Visuals and tables are powerful tools for presenting complex data in an accessible and understandable manner.

  • Clarity : Ensure that your visuals and tables are clear and easy to interpret. Use descriptive titles and labels.
  • Consistency : Maintain consistency in formatting, such as font size and style, across all visuals and tables.
  • Appropriateness : Choose the most suitable visual representation for your data. Bar charts, line graphs, and scatter plots work well for different types of data.
  • Simplicity : Avoid clutter and unnecessary details. Focus on conveying the main points.
  • Accessibility : Make sure your visuals and tables are accessible to a broad audience, including those with visual impairments.
  • Captions : Include informative captions that explain the significance of each visual or table.

Compelling visuals and tables enhance the reader's understanding of your research and can be the key to conveying complex information efficiently.

3. Interpret Findings

Interpreting your findings is where you bridge the gap between data and meaning. It's your opportunity to provide context, discuss implications, and offer insights. When interpreting your findings:

  • Relate to Research Questions : Discuss how your findings directly address your research questions or hypotheses.
  • Compare with Literature : Analyze how your results align with or deviate from previous research in your field. What insights can you draw from these comparisons?
  • Discuss Limitations : Be transparent about the limitations of your study. Address any constraints, biases, or potential sources of error.
  • Practical Implications : Explore the real-world implications of your findings. How can they be applied or inform decision-making?
  • Future Research Directions : Suggest areas for future research based on the gaps or unanswered questions that emerged from your study.

Interpreting findings goes beyond simply presenting data; it's about weaving a narrative that helps readers grasp the significance of your research in the broader context.

With your research paper written, structured, and enriched with visuals, and your findings expertly interpreted, you are now prepared to communicate your research effectively. Sharing your insights and contributing to the body of knowledge in your field is a significant accomplishment in empirical research.

Examples of Empirical Research

To solidify your understanding of empirical research, let's delve into some real-world examples across different fields. These examples will illustrate how empirical research is applied to gather data, analyze findings, and draw conclusions.

Social Sciences

In the realm of social sciences, consider a sociological study exploring the impact of socioeconomic status on educational attainment. Researchers gather data from a diverse group of individuals, including their family backgrounds, income levels, and academic achievements.

Through statistical analysis, they can identify correlations and trends, revealing whether individuals from lower socioeconomic backgrounds are less likely to attain higher levels of education. This empirical research helps shed light on societal inequalities and informs policymakers on potential interventions to address disparities in educational access.

Environmental Science

Environmental scientists often employ empirical research to assess the effects of environmental changes. For instance, researchers studying the impact of climate change on wildlife might collect data on animal populations, weather patterns, and habitat conditions over an extended period.

By analyzing this empirical data, they can identify correlations between climate fluctuations and changes in wildlife behavior, migration patterns, or population sizes. This empirical research is crucial for understanding the ecological consequences of climate change and informing conservation efforts.

Business and Economics

In the business world, empirical research is essential for making data-driven decisions. Consider a market research study conducted by a business seeking to launch a new product. They collect data through surveys , focus groups , and consumer behavior analysis.

By examining this empirical data, the company can gauge consumer preferences, demand, and potential market size. Empirical research in business helps guide product development, pricing strategies, and marketing campaigns, increasing the likelihood of a successful product launch.

Psychology

Psychological studies frequently rely on empirical research to understand human behavior and cognition. For instance, a psychologist interested in examining the impact of stress on memory might design an experiment. Participants are exposed to stress-inducing situations, and their memory performance is assessed through various tasks.

By analyzing the data collected, the psychologist can determine whether stress has a significant effect on memory recall. This empirical research contributes to our understanding of the complex interplay between psychological factors and cognitive processes.

These examples highlight the versatility and applicability of empirical research across diverse fields. Whether in the social sciences, environmental science, business, or psychology, empirical research serves as a fundamental tool for gaining insights, testing hypotheses, and driving advancements in knowledge and practice.

Conclusion for Empirical Research

Empirical research is a powerful tool for gaining insights, testing hypotheses, and making informed decisions. By following the steps outlined in this guide, you've learned how to select research topics, collect data, analyze findings, and effectively communicate your research to the world. Remember, empirical research is a journey of discovery, and each step you take brings you closer to a deeper understanding of the world around you. Whether you're a scientist, a student, or someone curious about the process, the principles of empirical research empower you to explore, learn, and contribute to the ever-expanding realm of knowledge.

How to Collect Data for Empirical Research?

Introducing Appinio , the real-time market research platform revolutionizing how companies gather consumer insights for their empirical research endeavors. With Appinio, you can conduct your own market research in minutes, gaining valuable data to fuel your data-driven decisions.

Appinio is more than just a market research platform; it's a catalyst for transforming the way you approach empirical research, making it exciting, intuitive, and seamlessly integrated into your decision-making process.

Here's why Appinio is the go-to solution for empirical research:

  • From Questions to Insights in Minutes : With Appinio's streamlined process, you can go from formulating your research questions to obtaining actionable insights in a matter of minutes, saving you time and effort.
  • Intuitive Platform for Everyone : No need for a PhD in research; Appinio's platform is designed to be intuitive and user-friendly, ensuring that anyone can navigate and utilize it effectively.
  • Rapid Response Times : With an average field time of under 23 minutes for 1,000 respondents, Appinio delivers rapid results, allowing you to gather data swiftly and efficiently.
  • Global Reach with Targeted Precision : With access to over 90 countries and the ability to define target groups based on 1200+ characteristics, Appinio empowers you to reach your desired audience with precision and ease.



Empirical Research: Definition, Methods, Types and Examples


Content Index

  • Empirical research: Definition
  • Empirical research: Origin
  • Quantitative research methods
  • Qualitative research methods
  • Steps for conducting empirical research
  • Empirical research methodology cycle
  • Advantages of empirical research
  • Disadvantages of empirical research
  • Why is there a need for empirical research?

Empirical research is defined as any research in which the conclusions of the study are drawn strictly from concrete, and therefore verifiable, empirical evidence.

This empirical evidence can be gathered using quantitative market research and  qualitative market research  methods.

For example: a study is conducted to find out whether listening to happy music in the workplace promotes creativity. An experiment is set up using a music website survey in which one group of participants is exposed to happy music while another group listens to no music at all, and both groups are then observed. The results of such a study provide empirical evidence of whether or not happy music promotes creativity.

LEARN ABOUT: Behavioral Research

You may have heard the quote, “I will not believe it unless I see it.” This idea comes from the ancient empiricists, a fundamental understanding that powered the emergence of medieval science during the Renaissance and laid the foundation of modern science as we know it today. The word itself has its roots in Greek: it is derived from the Greek word empeirikos, which means “experienced.”

In today’s world, the word empirical refers to the collection of data using evidence gathered through observation, experience, or calibrated scientific instruments. All of these origins have one thing in common: a dependence on observation and experiment to collect data and test it in order to reach conclusions.

LEARN ABOUT: Causal Research

Types and methodologies of empirical research

Empirical research can be conducted and analysed using qualitative or quantitative methods.

  • Quantitative research : Quantitative research methods are used to gather information through numerical data. They are used to quantify opinions, behaviors, or other defined variables . These methods are predetermined and follow a more structured format. Some commonly used methods are surveys, longitudinal studies, polls, etc.
  • Qualitative research : Qualitative research methods are used to gather non-numerical data. They are used to find meanings, opinions, or underlying reasons from the subjects. These methods are unstructured or semi-structured. The sample size for such research is usually small, and the methods are conversational in nature, providing more insight or in-depth information about the problem. Some of the most popular methods are focus groups, experiments, interviews, etc.

Data collected from these methods will need to be analyzed. Empirical evidence can be analyzed either quantitatively or qualitatively. Using this analysis, the researcher can answer empirical questions, which have to be clearly defined and answerable with the available findings. The type of research design used will vary depending on the field in which it is applied. Many researchers choose to combine quantitative and qualitative methods to better answer questions that cannot be studied in a laboratory setting.

LEARN ABOUT: Qualitative Research Questions and Questionnaires

Quantitative research methods aid in analyzing the empirical evidence gathered. By using them, a researcher can find out whether their hypothesis is supported.

  • Survey research: Survey research generally involves a large audience in order to collect a large amount of data. It is a quantitative method with a predetermined set of closed questions that are easy to answer. Because of the simplicity of such a method, high response rates are achieved. It is one of the most commonly used methods for all kinds of research today.

Previously, surveys were conducted face to face, perhaps with a recorder. However, with advances in technology and for ease of use, new mediums such as email and social media have emerged.

For example: the depletion of energy resources is a growing concern, and hence there is a need for awareness about renewable energy. According to recent studies, fossil fuels still account for around 80% of energy consumption in the United States. Even though the use of green energy rises every year, certain factors keep the general population from opting for it. To understand why, a survey can be conducted to gather opinions about green energy and the factors that influence the choice to switch to renewable energy. Such a survey can help institutions or governing bodies promote appropriate awareness and incentive schemes that encourage the use of greener energy.


  • Experimental research: In experimental research , an experiment is set up and a hypothesis is tested by creating a situation in which one of the variables is manipulated. This is also used to check cause and effect: the experiment tests what happens to the dependent variable when the independent variable is removed or altered. The process for such a method usually involves proposing a hypothesis, experimenting on it, analyzing the findings, and reporting the findings to determine whether they support the theory.

For example: a product company is trying to find out why it is unable to capture the market. The organization makes changes in each of its processes, such as manufacturing, marketing, sales, and operations. Through the experiment it learns that sales training directly impacts the market coverage of its product: if salespeople are trained well, the product achieves better coverage.

  • Correlational research: Correlational research is used to find the relationship between two sets of variables . Regression analysis is generally used to predict outcomes with such a method. The correlation can be positive, negative, or neutral.

LEARN ABOUT: Level of Analysis

For example: more highly educated individuals tend to get higher-paying jobs. This means that higher education enables an individual to obtain a high-paying job, while less education tends to lead to lower-paying jobs.
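The education-income example can be sketched as a Pearson correlation coefficient computed by hand; the numbers below are invented purely to illustrate a strong positive correlation:

```python
import math

# Hypothetical data: years of education vs. annual income (in thousands).
education = [10, 12, 12, 14, 16, 16, 18, 20]
income = [28, 35, 33, 41, 50, 48, 60, 70]

def pearson_r(xs, ys):
    """Pearson correlation: +1 perfect positive, -1 perfect negative, 0 none."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(education, income)  # close to +1: strong positive correlation
```

A coefficient near +1, as here, indicates a strong positive relationship, though correlation alone says nothing about cause and effect.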

  • Longitudinal study: Longitudinal study is used to understand the traits or behavior of a subject under observation after repeatedly testing the subject over a period of time. Data collected from such a method can be qualitative or quantitative in nature.

For example: a study to find out the benefits of exercise. Participants are asked to exercise every day for a particular period of time, and the results show higher endurance, stamina, and muscle growth. This supports the claim that exercise benefits the body.

  • Cross-sectional: A cross-sectional study is an observational method in which a set of participants is observed at a single point in time. The participants are chosen in a way that makes them similar in all variables except the one being researched. This type does not enable the researcher to establish a cause-and-effect relationship, as the participants are not observed over a continuous time period. It is mostly used by the healthcare sector and the retail industry.

For example: a medical study to find the prevalence of under-nutrition disorders in children in a given population. This involves looking at a wide range of parameters such as age, ethnicity, location, income, and social background. If a significant number of children from poor families show under-nutrition disorders, the researcher can investigate further. Usually a cross-sectional study is followed by a longitudinal study to find the exact reason.

  • Causal-comparative research : This method is based on comparison. It is mainly used to find the cause-and-effect relationship between two or more variables.

For example: a researcher measures the productivity of employees in a company that gives its employees breaks during work and compares it with the productivity of employees in a company that gives no breaks at all.


Some research questions need to be analysed qualitatively, because quantitative methods are not applicable. In many cases in-depth information is needed, or the researcher needs to observe the behavior of a target audience, so the results take the form of descriptive analysis. Qualitative research results are descriptive rather than predictive. They enable the researcher to build or support theories for future quantitative research. In such situations, qualitative research methods are used to derive conclusions that support the theory or hypothesis being studied.


  • Case study: The case study method is used to find more information by carefully analyzing existing cases. It is often used in business research, or to gather empirical evidence for investigation, and it examines a problem within its real-life context through existing cases. The researcher must make sure that the parameters and variables of the existing case match those of the case being investigated. From the findings of the case study, conclusions can be drawn about the topic under study.

For example: a report describing the solution a company provided to its client, the challenges faced during initiation and deployment, the findings of the case, and the solutions offered for the problems. Many companies use such case studies as empirical evidence to promote themselves and win more business.

  • Observational method: The observational method is a process of observing and gathering data from a target. Since it is a qualitative method, it is time-consuming and very personal. The observational method can be considered part of ethnographic research, which is also used to gather empirical evidence. It is usually a qualitative form of research, although in some cases it can be quantitative, depending on what is being studied.

For example: setting up a study to observe a particular animal in the rainforests of the Amazon. Such research usually takes a lot of time, as observation must continue for a set period to study the patterns or behavior of the subject. Another example, widely used nowadays, is observing people shopping in a mall to figure out the buying behavior of consumers.

  • One-on-one interview: This method is purely qualitative and one of the most widely used. The reason is that it enables a researcher to gather precise, meaningful data if the right questions are asked. It is a conversational method in which in-depth data can be gathered depending on where the conversation leads.

For example: a one-on-one interview with the finance minister to gather data on the country's financial policies and their implications for the public.

  • Focus groups: Focus groups are used when a researcher wants to find answers to why, what, and how questions. A small group is generally chosen for this method, and it is not necessary to interact with the group in person; a moderator is generally needed when the group is addressed in person. Focus groups are widely used by product companies to collect data about their brands and products.

For example: a mobile phone manufacturer wants feedback on the dimensions of one of its models that is yet to be launched. Such studies help the company meet customer demand and position the model appropriately in the market.

  • Text analysis: The text analysis method is relatively new compared to the other types. It is used to analyse social life by examining the images and words individuals use. In today's world, with social media playing a major part in everyone's life, this method enables the researcher to follow patterns relevant to the study.

For example: many companies ask customers for detailed feedback on how satisfied they are with the customer support team. Such data enables the researcher to take appropriate decisions to improve the support team.
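A basic form of such text analysis is counting the words customers use most often. A minimal sketch in Python; the feedback strings are made up for illustration:

```python
import re
from collections import Counter

feedback = [
    "Support team was quick and very helpful",
    "Slow response, not helpful at all",
    "Helpful agent, quick resolution of my issue",
]

# Tokenize each comment into lowercase words and count them all
words = Counter(
    word
    for comment in feedback
    for word in re.findall(r"[a-z']+", comment.lower())
)

print(words.most_common(3))
```

Real-world text analysis goes further (sentiment scoring, topic modelling), but simple frequency counts like this are the usual starting point for spotting patterns in customer language.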

Sometimes a combination of methods is needed for questions that cannot be answered using only one type of method, especially when a researcher needs to gain a complete understanding of a complex subject.


Since empirical research is based on observation and capturing experiences, it is important to plan the steps of the experiment and how it will be analysed. This enables the researcher to resolve problems or obstacles that may occur during the experiment.

Step #1: Define the purpose of the research

In this step the researcher has to answer questions like: What exactly do I want to find out? What is the problem statement? Are there any issues in terms of the availability of knowledge, data, time, or resources? Will this research be more beneficial than what it will cost?

Before going ahead, the researcher has to clearly define the purpose of the research and set up a plan to carry out further tasks.

Step #2: Supporting theories and relevant literature

The researcher needs to find out whether there are theories that can be linked to the research problem, and whether any of them can help support the findings. Reviewing all relevant literature helps the researcher discover whether others have researched the topic before, and what problems they faced. The researcher will also have to set up assumptions and find out whether there is any history regarding the research problem.

Step #3: Creation of Hypothesis and measurement

Before beginning the actual research, the researcher needs a working hypothesis: an educated guess about the probable result. The researcher has to set up variables, decide the environment for the research, and work out how the variables can be related.

The researcher will also need to define the units of measurement, the tolerable degree of error, and whether the chosen measurement will be accepted by others.

Step #4: Methodology, research design and data collection

In this step the researcher defines a strategy for conducting the research: setting up experiments to collect the data that will make it possible to test the hypothesis, and deciding whether an experimental or non-experimental method is needed. The type of research design will vary depending on the field in which the research is being conducted. Last but not least, the researcher has to identify the parameters that will affect the validity of the research design. Data collection is done by choosing appropriate samples depending on the research question, using one of the many sampling techniques. Once data collection is complete, the researcher has empirical data that needs to be analysed.
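For the sampling part of this step, a simple random sample can be drawn in one line once a sampling frame exists. A sketch, where the population of 100 respondent IDs is hypothetical:

```python
import random

random.seed(7)  # fixed seed so the sample is reproducible
population = list(range(1, 101))  # hypothetical frame of 100 respondent IDs

sample = random.sample(population, 10)  # simple random sample, no replacement
print(sorted(sample))
```

Other designs named in sampling textbooks (stratified, cluster, systematic) build on the same idea of selecting units from a defined frame.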


Step #5: Data Analysis and result

Data analysis can be done in two ways: qualitatively and quantitatively. The researcher needs to decide which qualitative or quantitative method is required, or whether a combination of both is needed. Depending on the analysis of the data, the researcher will know whether the hypothesis is supported or rejected. Analyzing the data is the most important part of supporting the hypothesis.

Step #6: Conclusion

A report is made with the findings of the research. The researcher can cite the theories and literature that support the research, and make suggestions or recommendations for further research on the topic.

Empirical research methodology cycle

A. D. de Groot, a famous Dutch psychologist and chess expert, conducted some of the most notable experiments using chess in the 1940s. During his work he came up with a cycle that is now widely used to conduct empirical research. It consists of five phases, each as important as the next. The empirical cycle captures the process of forming hypotheses about how certain subjects work or behave, and then testing these hypotheses against empirical data in a systematic and rigorous way. It can be said to characterize the deductive approach to science. The empirical cycle is as follows.

  • Observation: In this phase an idea is sparked for proposing a hypothesis, and empirical data is gathered using observation. For example: a particular species of flower blooms in a different color only during a specific season.
  • Induction: Inductive reasoning is then carried out to form a general conclusion from the data gathered through observation. For example: it is observed that the species of flower blooms in a different color during a specific season. A researcher may ask, "Does the temperature in that season cause the color change in the flower?" He can assume that this is the case, but it is mere conjecture, so an experiment needs to be set up to test this hypothesis. He therefore tags a set of flowers kept at a different temperature and observes whether they still change color.
  • Deduction: This phase helps the researcher deduce a conclusion from the experiment. It has to be based on logic and rationality to arrive at specific, unbiased results. For example: if the tagged flowers kept in a different temperature environment do not change color, it can be concluded that temperature plays a role in changing the color of the bloom.
  • Testing: In this phase the researcher returns to empirical methods to put the hypothesis to the test. The researcher now needs to make sense of the data, and so uses statistical analysis to determine the relationship between temperature and bloom color. If most flowers bloom in a different color when exposed to a certain temperature, and do not when the temperature is different, the researcher has found support for the hypothesis. Note that this is not proof, only support for the hypothesis.
  • Evaluation: This phase is often forgotten but is an important one for continuing to gain knowledge. In it the researcher presents the data collected, the supporting argument, and the conclusion. The researcher also states the limitations of the experiment and the hypothesis, and suggests how others can pick it up and continue more in-depth research in the future.
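The testing phase for the flower example can be sketched as a comparison of color-change rates between the two temperature conditions; the counts below are invented for illustration:

```python
# Hypothetical tagged-flower counts from the temperature experiment
cold = {"changed": 18, "unchanged": 2}   # kept at the seasonal temperature
warm = {"changed": 3, "unchanged": 17}   # kept at a warmer temperature

def change_rate(group):
    """Fraction of flowers in a condition that changed color."""
    return group["changed"] / (group["changed"] + group["unchanged"])

print(f"cold: {change_rate(cold):.0%}, warm: {change_rate(warm):.0%}")
```

A large gap between the two rates supports (but, as noted above, does not prove) the temperature hypothesis; a formal test such as a chi-square test of independence would quantify how unlikely such a gap is by chance.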


There is a reason why empirical research is one of the most widely used methods: it has several advantages. Following are a few of them.

  • It is used to authenticate traditional research through various experiments and observations.
  • This research methodology makes the research being conducted more competent and authentic.
  • It enables a researcher to understand dynamic changes as they happen and adapt the strategy accordingly.
  • The level of control in such research is high, so the researcher can control multiple variables.
  • It plays a vital role in increasing internal validity.

Even though empirical research makes research more competent and authentic, it does have a few disadvantages. Following are a few of them.

  • Such research needs patience, as it can be very time-consuming. The researcher has to collect data from multiple sources, and quite a few parameters are involved, which makes the research time-consuming.
  • Most of the time the researcher will need to conduct the research at different locations or in different environments, which can make it expensive.
  • There are rules governing how experiments can be performed, so permissions are needed. It is often very difficult to get certain permissions to carry out the different methods of this research.
  • Collecting the data can sometimes be a problem, as it has to be gathered from a variety of sources through different methods.


Empirical research is important in today's world because most people believe only in what they can see, hear, or experience. It is used to validate hypotheses, increase human knowledge, and keep advancing various fields.

For example: pharmaceutical companies use empirical research to try out a specific drug on controlled or random groups to study cause and effect. In this way they test the theories they have proposed for the specific drug. Such research is very important, as it can sometimes lead to finding a cure for a disease that has existed for many years. It is useful in science and in many other fields, such as history, the social sciences, and business.


With the advancement of today's world, empirical research has become critical and a norm in many fields for supporting hypotheses and gaining knowledge. The methods mentioned above are very useful for carrying out such research. However, new methods will keep appearing as the nature of investigative questions changes.




Critical Analysis – Types, Examples and Writing Guide


Definition:

Critical analysis is a process of examining a piece of work or an idea in a systematic, objective, and analytical way. It involves breaking down complex ideas, concepts, or arguments into smaller, more manageable parts to understand them better.

Types of Critical Analysis

Types of Critical Analysis are as follows:

Literary Analysis

This type of analysis focuses on analyzing and interpreting works of literature, such as novels, poetry, plays, etc. The analysis involves examining the literary devices used in the work, such as symbolism, imagery, and metaphor, and how they contribute to the overall meaning of the work.

Film Analysis

This type of analysis involves examining and interpreting films, including their themes, cinematography, editing, and sound. Film analysis can also include evaluating the director’s style and how it contributes to the overall message of the film.

Art Analysis

This type of analysis involves examining and interpreting works of art, such as paintings, sculptures, and installations. The analysis involves examining the elements of the artwork, such as color, composition, and technique, and how they contribute to the overall meaning of the work.

Cultural Analysis

This type of analysis involves examining and interpreting cultural artifacts, such as advertisements, popular music, and social media posts. The analysis involves examining the cultural context of the artifact and how it reflects and shapes cultural values, beliefs, and norms.

Historical Analysis

This type of analysis involves examining and interpreting historical documents, such as diaries, letters, and government records. The analysis involves examining the historical context of the document and how it reflects the social, political, and cultural attitudes of the time.

Philosophical Analysis

This type of analysis involves examining and interpreting philosophical texts and ideas, such as the works of philosophers and their arguments. The analysis involves evaluating the logical consistency of the arguments and assessing the validity and soundness of the conclusions.

Scientific Analysis

This type of analysis involves examining and interpreting scientific research studies and their findings. The analysis involves evaluating the methods used in the study, the data collected, and the conclusions drawn, and assessing their reliability and validity.

Critical Discourse Analysis

This type of analysis involves examining and interpreting language use in social and political contexts. The analysis involves evaluating the power dynamics and social relationships conveyed through language use and how they shape discourse and social reality.

Comparative Analysis

This type of analysis involves examining and interpreting multiple texts or works of art and comparing them to each other. The analysis involves evaluating the similarities and differences between the texts and how they contribute to understanding the themes and meanings conveyed.

Critical Analysis Format

Critical Analysis Format is as follows:

I. Introduction

  • Provide a brief overview of the text, object, or event being analyzed
  • Explain the purpose of the analysis and its significance
  • Provide background information on the context and relevant historical or cultural factors

II. Description

  • Provide a detailed description of the text, object, or event being analyzed
  • Identify key themes, ideas, and arguments presented
  • Describe the author or creator’s style, tone, and use of language or visual elements

III. Analysis

  • Analyze the text, object, or event using critical thinking skills
  • Identify the main strengths and weaknesses of the argument or presentation
  • Evaluate the reliability and validity of the evidence presented
  • Assess any assumptions or biases that may be present in the text, object, or event
  • Consider the implications of the argument or presentation for different audiences and contexts

IV. Evaluation

  • Provide an overall evaluation of the text, object, or event based on the analysis
  • Assess the effectiveness of the argument or presentation in achieving its intended purpose
  • Identify any limitations or gaps in the argument or presentation
  • Consider any alternative viewpoints or interpretations that could be presented
  • Summarize the main points of the analysis and evaluation
  • Reiterate the significance of the text, object, or event and its relevance to broader issues or debates
  • Provide any recommendations for further research or future developments in the field.

V. Example

  • Provide an example or two to support your analysis and evaluation
  • Use quotes or specific details from the text, object, or event to support your claims
  • Analyze the example(s) using critical thinking skills and explain how they relate to your overall argument

VI. Conclusion

  • Reiterate your thesis statement and summarize your main points
  • Provide a final evaluation of the text, object, or event based on your analysis
  • Offer recommendations for future research or further developments in the field
  • End with a thought-provoking statement or question that encourages the reader to think more deeply about the topic

How to Write Critical Analysis

Writing a critical analysis involves evaluating and interpreting a text, such as a book, article, or film, and expressing your opinion about its quality and significance. Here are some steps you can follow to write a critical analysis:

  • Read and re-read the text: Before you begin writing, make sure you have a good understanding of the text. Read it several times and take notes on the key points, themes, and arguments.
  • Identify the author’s purpose and audience: Consider why the author wrote the text and who the intended audience is. This can help you evaluate whether the author achieved their goals and whether the text is effective in reaching its audience.
  • Analyze the structure and style: Look at the organization of the text and the author’s writing style. Consider how these elements contribute to the overall meaning of the text.
  • Evaluate the content: Analyze the author’s arguments, evidence, and conclusions. Consider whether they are logical, convincing, and supported by the evidence presented in the text.
  • Consider the context: Think about the historical, cultural, and social context in which the text was written. This can help you understand the author’s perspective and the significance of the text.
  • Develop your thesis statement: Based on your analysis, develop a clear and concise thesis statement that summarizes your overall evaluation of the text.
  • Support your thesis: Use evidence from the text to support your thesis statement. This can include direct quotes, paraphrases, and examples from the text.
  • Write the introduction, body, and conclusion: Organize your analysis into an introduction that provides context and presents your thesis, a body that presents your evidence and analysis, and a conclusion that summarizes your main points and restates your thesis.
  • Revise and edit: After you have written your analysis, revise and edit it to ensure that your writing is clear, concise, and well-organized. Check for spelling and grammar errors, and make sure that your analysis is logically sound and supported by evidence.

When to Write Critical Analysis

You may want to write a critical analysis in the following situations:

  • Academic Assignments: If you are a student, you may be assigned to write a critical analysis as a part of your coursework. This could include analyzing a piece of literature, a historical event, or a scientific paper.
  • Journalism and Media: As a journalist or media person, you may need to write a critical analysis of current events, political speeches, or media coverage.
  • Personal Interest: If you are interested in a particular topic, you may want to write a critical analysis to gain a deeper understanding of it. For example, you may want to analyze the themes and motifs in a novel or film that you enjoyed.
  • Professional Development: Professionals such as writers, scholars, and researchers often write critical analyses to gain insights into their field of study or work.

Critical Analysis Example

An example of critical analysis could be as follows:

Research Topic:

The Impact of Online Learning on Student Performance

Introduction:

The introduction of the research topic is clear and provides an overview of the issue. However, it could benefit from providing more background information on the prevalence of online learning and its potential impact on student performance.

Literature Review:

The literature review is comprehensive and well-structured. It covers a broad range of studies that have examined the relationship between online learning and student performance. However, it could benefit from including more recent studies and providing a more critical analysis of the existing literature.

Research Methods:

The research methods are clearly described and appropriate for the research question. The study uses a quasi-experimental design to compare the performance of students who took an online course with those who took the same course in a traditional classroom setting. However, the study may benefit from using a randomized controlled trial design to reduce potential confounding factors.

Results:

The results are presented in a clear and concise manner. The study finds that students who took the online course performed similarly to those who took the traditional course. However, the study only measures performance on one course and may not be generalizable to other courses or contexts.
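One way for such a study to back up the claim that two groups "performed similarly" is to report an effect size such as Cohen's d. A minimal sketch with invented exam scores:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical final-exam scores for the two course formats
online = [78, 82, 75, 80, 77, 84]
traditional = [79, 81, 76, 78, 80, 83]

def cohens_d(a, b):
    """Cohen's d: mean difference in pooled standard-deviation units."""
    pooled = sqrt((stdev(a) ** 2 + stdev(b) ** 2) / 2)
    return (mean(a) - mean(b)) / pooled

d = cohens_d(online, traditional)
print(f"d = {d:.2f}")  # |d| well below 0.2 is conventionally a negligible effect
```

Reporting the effect size alongside a significance test makes "performed similarly" a quantified claim rather than an impression.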

Discussion:

The discussion section provides a thorough analysis of the study’s findings. The authors acknowledge the limitations of the study and provide suggestions for future research. However, they could benefit from discussing potential mechanisms underlying the relationship between online learning and student performance.

Conclusion:

The conclusion summarizes the main findings of the study and provides some implications for future research and practice. However, it could benefit from providing more specific recommendations for implementing online learning programs in educational settings.

Purpose of Critical Analysis

There are several purposes of critical analysis, including:

  • To identify and evaluate arguments: Critical analysis helps to identify the main arguments in a piece of writing or speech and evaluate their strengths and weaknesses. This enables the reader to form their own opinion and make informed decisions.
  • To assess evidence: Critical analysis involves examining the evidence presented in a text or speech and evaluating its quality and relevance to the argument. This helps to determine the credibility of the claims being made.
  • To recognize biases and assumptions: Critical analysis helps to identify any biases or assumptions that may be present in the argument, and evaluate how these affect the credibility of the argument.
  • To develop critical thinking skills: Critical analysis helps to develop the ability to think critically, evaluate information objectively, and make reasoned judgments based on evidence.
  • To improve communication skills: Critical analysis involves carefully reading and listening to information, evaluating it, and expressing one’s own opinion in a clear and concise manner. This helps to improve communication skills and the ability to express ideas effectively.

Importance of Critical Analysis

Here are some specific reasons why critical analysis is important:

  • Helps to identify biases: Critical analysis helps individuals to recognize their own biases and assumptions, as well as the biases of others. By being aware of biases, individuals can better evaluate the credibility and reliability of information.
  • Enhances problem-solving skills: Critical analysis encourages individuals to question assumptions and consider multiple perspectives, which can lead to creative problem-solving and innovation.
  • Promotes better decision-making: By carefully evaluating evidence and arguments, critical analysis can help individuals make more informed and effective decisions.
  • Facilitates understanding: Critical analysis helps individuals to understand complex issues and ideas by breaking them down into smaller parts and evaluating them separately.
  • Fosters intellectual growth: Engaging in critical analysis challenges individuals to think deeply and critically, which can lead to intellectual growth and development.

Advantages of Critical Analysis

Some advantages of critical analysis include:

  • Improved decision-making: Critical analysis helps individuals make informed decisions by evaluating all available information and considering various perspectives.
  • Enhanced problem-solving skills: Critical analysis requires individuals to identify and analyze the root cause of a problem, which can help develop effective solutions.
  • Increased creativity: Critical analysis encourages individuals to think outside the box and consider alternative solutions to problems, which can lead to more creative and innovative ideas.
  • Improved communication: Critical analysis helps individuals communicate their ideas and opinions more effectively by providing logical and coherent arguments.
  • Reduced bias: Critical analysis requires individuals to evaluate information objectively, which can help reduce personal biases and subjective opinions.
  • Better understanding of complex issues: Critical analysis helps individuals to understand complex issues by breaking them down into smaller parts, examining each part and understanding how they fit together.
  • Greater self-awareness: Critical analysis helps individuals to recognize their own biases, assumptions, and limitations, which can lead to personal growth and development.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Examining the Transformative Influence of Digital Finance on Green Technological Innovation: Empirical Insights from China

  • Published: 21 May 2024


  • Beibei Zhao (ORCID: orcid.org/0009-0002-0322-7684)


In the pursuit of high-quality development and sustainability, China has been at the forefront of scientific and technological advancements. This research paper delves into the dynamic interplay between digital finance and green technological innovation, offering valuable insights for policymakers, enterprises, and the academic community. While flourishing in many aspects, China’s innovation ecosystem faces challenges related to limited independent innovation capacity and slow market transformation. This paper sheds light on the often-overlooked but critical aspect of financial support as the foundation for enterprises’ technological innovation efforts. It explores how digital finance, as an innovative financial service model, plays a pivotal role in overcoming challenges related to funding, financing costs, and unstable financing channels, particularly in the context of green technology innovation. The study employs a threshold model for nonlinear characteristic testing, unraveling the dynamic impact of digital finance on green innovation. Also, it integrates heterogeneity analysis with traditional financial resource mismatch scenarios, showcasing the inclusive characteristics of digital finance in fostering green innovation across various dimensions. Empirical examinations substantiate the positive effect of digital financial development on green innovation and provide empirical support for the integration of digital finance into the innovation landscape. This paper underscores the vital role of digital finance in enhancing the quality of enterprise green innovation, promoting sustainability, and contributing to China’s high-quality development goals. It proposes key recommendations, including deepening integration, strengthening digital technologies, and building an inclusive financial system to seamlessly integrate scientific and technological innovation with digital financial services. 
These recommendations pave the way for substantive and sustainable green innovation, aligning with China’s vision of innovation-driven development and low-carbon transformation.


Data Availability

The data sets used in this survey are readily available and accessible to interested parties.


Funding

1. Special Project on the Research Base of Ecological Civilization Construction and County Economic Development in Hebei Province (No.: 2022XJZX09);

2. Special Project on the Research Base of Ecological Civilization Construction and County Economic Development in Hebei Province (No.: 2024XJZX07).

Author information

Authors and Affiliations

Hengshui University, Hengshui, 053000, Hebei, China

Beibei Zhao


Contributions

The conceptualization, investigation, data collection and analysis, and text writing of this paper were all completed by Beibei Zhao. The author read and approved the final manuscript.

Corresponding author

Correspondence to Beibei Zhao .

Ethics declarations

Ethics Approval

This article does not contain any studies with human participants or animals.

Consent to Participate

Not applicable, as this study did not involve human participants.

Conflicts of Interest

The author declares no conflicts of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Zhao, B. Examining the Transformative Influence of Digital Finance on Green Technological Innovation: Empirical Insights from China. J Knowl Econ (2024). https://doi.org/10.1007/s13132-024-02088-4

Download citation

Received : 01 March 2024

Accepted : 13 May 2024

Published : 21 May 2024

DOI : https://doi.org/10.1007/s13132-024-02088-4


Keywords

  • Digital finance
  • Green innovation
  • Technological advancement
  • Financial support
  • Sustainable growth
