

Direct observation methods: A practical guide for health researchers

Gemmae M. Fix

a VA Center for Healthcare Organization and Implementation Research, Bedford and Boston, MA, USA

b General Internal Medicine, Boston University School of Medicine, Boston, MA, USA

c Department of Psychiatry, Harvard Medical School, Boston, MA, USA

Mollie A. Ruben

d Department of Psychology, University of Maine, Orono, ME, USA

Megan B. McCullough

e Department of Public Health, University of Massachusetts Lowell, Lowell, MA, USA

To provide health research teams with a practical, methodologically rigorous guide on how to conduct direct observation.

Synthesis of authors’ observation-based teaching and research experiences in social sciences and health services research.

This article serves as a guide for making key decisions in studies involving direct observation. Study development begins with determining if observation methods are warranted or feasible. Deciding what and how to observe entails reviewing literature and defining what abstract, theoretically informed concepts look like in practice. Data collection tools help systematically record phenomena of interest. Interdisciplinary teams that include relevant community members increase relevance, rigor and reliability, distribute work, and facilitate scheduling. Piloting systematizes data collection across the team and proactively addresses issues.

Observation can elucidate phenomena germane to healthcare research questions by adding unique insights. Careful selection and sampling are critical to rigor. Phenomena like taboo behaviors or rare events are difficult to capture. A thoughtful protocol can preempt Institutional Review Board concerns.

This novel guide provides a practical adaptation of traditional approaches to observation to meet contemporary healthcare research teams’ needs.


  • Health research study designs benefit from observations of behaviors and contexts
  • Direct observation methods have a long history in the social sciences
  • Social science approaches should be adapted for health researchers’ unique needs
  • Health research observations should be feasible, well-defined and piloted
  • Multidisciplinary teams, data collection tools and detailed protocols enhance rigor

1. Introduction

Health research studies increasingly include direct observation methods [ [1] , [2] , [3] , [4] , [5] ]. Observation provides unique information about human behavior related to healthcare processes, events, norms and social context. Behavior is difficult to study; it is often unconscious or susceptible to self-report biases. Interviews or surveys are limited to what participants share. Observation is particularly useful for understanding patients’, providers’ or other key communities’ experiences because it provides an “emic,” insider perspective and lends itself to topics like patient-centered care research [ 1 , 5 , 6 ]. This insider perspective allows researchers to understand end users’ experiences of a problem. For example, patients may be viewed as “non-compliant,” while observations can reveal daily lived experiences that impede adherence to recommended care [ [7] , [8] , [9] , [10] ]. Observation can examine the organization and structure of healthcare delivery in ways that are different from, and complementary to, methods like surveys, interviews, or database reviews. However, there is limited guidance for health researchers on how to use observation.

Observation has a long history in the social sciences, with participant observation as a defining feature of ethnography [ [11] , [12] , [13] ]. Observation in healthcare research differs from that in the social sciences. Traditional social science research may be conducted by a single individual, while healthcare research is often conducted by multidisciplinary teams. In social science studies, extended time in the field is expected [ 11 ]. In contrast, healthcare research timelines are often compressed, and fieldwork is conducted contemporaneously with other work. Compared to social science research questions, healthcare studies are typically targeted, with narrowly defined parameters.

These disciplinary differences may pose challenges for healthcare researchers interested in using observation. Given observation’s history in the social sciences there is a need to tailor observation to the healthcare context, with attention to the dynamics and needs of the research team. This paper provides contemporary healthcare research teams a practical, methodologically rigorous guide on when and how to conduct observation.

This article synthesizes the authors’ experiences conducting observation in social science and health services research studies, key literature and experiences teaching observation. The authors have diverse training in anthropology (GF, MM), systems engineering (BK) and psychology (MR). To develop this guide, we reflected on our own experiences, identified literature in our respective fields, found common considerations across these, and had consensus-reaching discussions. We compiled this information into a format initially delivered through courses, workshops, and conferences. In keeping with this pedagogical approach, the format below follows the linear process of study development.

Following the trajectory of a typical health research project, from study development through data collection, analysis and dissemination ( Fig. 1 ), we describe how to design and conduct observation in healthcare-related settings. We conclude with data analysis, dissemination of findings, and other key guidance. Importantly, while illustrated as a linear process, many steps inform each other. For example, analysis and dissemination can inform data collection.

Fig. 1

Direct observation across a health research study.

3.1. Study development

3.1.1. Study design and research questions

In developing research using observation, the first step is determining if observation is appropriate. Observation is ideal for studies about naturally occurring behaviors, actions, or events. These include explorations of patient or provider behaviors, interactions, teamwork, clinical processes, or spatial arrangements. The phenomena must be feasible to collect. Sensitive or taboo topics like substance use or sexual practices are better suited to other approaches, like one-on-one interviews or anonymous surveys. Additionally, the phenomena must occur frequently enough to be captured. Trying to observe rare events requires considerable time while yielding little data. Early in the study design process, the scope and resources should be considered. The project budget and the timeline need to account for staffing, designing data collection tools, and pilot testing.

Research questions establish the study goals and inform the methods to accomplish them. In a study examining patients’ experiences of recovery from open heart surgery, the ethnographic study design included medical record data, in-depth interviews, surveys, and observations of patients in their homes, collected over three months following surgery [ 7 ]. By observing patients in their homes GF saw how the household shaped post-surgical diet and exercise. Table 1 provides additional examples of healthcare studies using observation, often as part of a larger, mixed-method design [ 14 , 15 ].

Table 1. Example studies that use observation.

3.1.2. Data collection procedures

The phenomena to observe should be clearly defined. Research team discussions create a unified understanding of the phenomena, clarify what to observe and record, and ensure data collection consistency. This explication specifies what to look for during observation. For example, a team might operationalize the concept of patient-centered care into specific actions, like how the provider greets the patient. Further, additional nuances within broader domains (e.g., patient-centered care) could be identified while observations are ongoing. The team may identify unanticipated ways that providers enact patient-centered care (e.g., raising non-clinical but relevant psychosocial topics, like vacations or hobbies, prior to gathering biomedical information). It is also important to look for negative instances (behaviors that should have happened but did not) and for surprising, unexpected findings. A surprise finding during observation was the impetus for further analysis examining how HIV providers think about their patients. While observing HIV care, a provider made an unexpected, judgmental comment about patients who seek pre-exposure prophylaxis (PrEP) to prevent HIV. This statement was documented in the fieldnotes (see 3.1.3 for a further description of fieldnotes) and later discussed with the team, leading to review of other study data and an eventual paper (see Fix et al., 2018) [ 1 ]. Leaving room, both literally on the template and conceptually, can provide space for new, unexpected insights.

The sampling strategy outlines the frequency and duration of what is observed and recorded. It requires determining the unit of observation and the observation period. Units of observation are sometimes called “slices” of data. Ambady and Rosenthal [ 20 ] coined the term thin slices, using brief exposures of behavior (6s, 15s, and 30s) to predict teacher effectiveness. While thin slices are predominantly used in psychology, healthcare researchers can apply this concept by recording data for set blocks of time in a larger process, such as recording emergency department activity for the first 15 minutes of each hour.

The unit of observation can be a person (e.g., patient, provider), their behavior (e.g., smiling, eye rolling), an event (e.g., shift change) or interaction (e.g., clinical encounter). Using interactions as the unit of observation requires consideration for repeat observations of some individuals. For example, a fixed number of providers may be repeatedly observed with different patients.

Observation frequency will depend on the frequency of the phenomena. Enough data are needed to capture variation while also achieving “saturation,” a concept from qualitative methods that refers to the point in data collection when no new information is obtained [ 21 ]. For quantitative studies, when examining the relationship between a direct observation measure (e.g., patient smiling) and an outcome (e.g., patient satisfaction), effect sizes from past research should dictate the number of interactions needed to achieve power to detect an effect. The duration of observation (the data slice) can range from parameters as broad as a clinic workday to distinct events like a clinical encounter.
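As an illustration, a minimal power calculation of this kind is sketched below in Python using the statsmodels library, treating the observed behavior (patient smiling) as present or absent and comparing satisfaction between the two groups. The effect size, alpha, and power values are assumptions chosen for illustration, not values drawn from the literature cited here.

```python
# Illustrative power calculation for a quantitative observation study:
# how many patient-provider interactions are needed to detect an assumed
# difference in satisfaction between encounters with and without patient
# smiling? All inputs are assumptions; in practice, the effect size should
# come from prior research.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,        # assumed standardized difference (Cohen's d)
    alpha=0.05,             # conventional Type I error rate
    power=0.80,             # conventional target power
    alternative="two-sided",
)
print(f"Interactions needed per group: {round(n_per_group)}")  # ~64
```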

Observation data can be collected on a continuous, rolling basis, or at predefined intervals. Continuous sampling is analogous to a motion picture—the recorded data mirror the flow of information captured in a video [ 22 ]. Continuous observation is ideal for understanding what happens throughout an event. It is labor intensive and time-consuming and may result in a small number of observations, although each observation can yield considerable data. For example, a team may want to know about the patient-centeredness of patient-provider interactions. Continuous sampling of a clinical encounter could start when the patient arrives and continue until they leave, with detailed data collected about both the verbal and nonverbal communication. This could be considered an N-of-one observation but would yield substantial data. This information could be collected over a continuous day of encounters across several providers and patients, resulting in a considerable amount of data for a small group of people.

In contrast, instantaneous sampling can be conceptualized as snapshots, and is analogous to the thin slice methodology. Psychology research sometimes uses random intervals, while in healthcare research it may be preferable to use predetermined criteria or intervals [ 23 ]. Instantaneous sampling is economical and data collection can happen flexibly across a variety of individuals or times of day or weeks. Disadvantages include losing some of the context that is gained through continuous sampling.
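The sketch below illustrates one way to generate an instantaneous-sampling schedule of the kind described above, recording the first 15 minutes of each hour across a clinic day; the date and clinic hours are assumptions for illustration.

```python
# Sketch: build an instantaneous-sampling schedule that records the first
# 15 minutes of every hour across a clinic day. Times are illustrative.
from datetime import datetime, timedelta

def observation_windows(start, end, every=timedelta(hours=1),
                        window=timedelta(minutes=15)):
    """Yield (window_start, window_end) pairs at fixed intervals."""
    t = start
    while t + window <= end:
        yield t, t + window
        t += every

clinic_open = datetime(2022, 6, 1, 8, 0)
clinic_close = datetime(2022, 6, 1, 17, 0)
for begin, finish in observation_windows(clinic_open, clinic_close):
    print(f"Observe {begin:%H:%M}-{finish:%H:%M}")
```

Predetermined windows like these can be assigned across observers and sites in advance, which also simplifies scheduling for the team.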

3.1.3. Data collection tools

Data collection tools enable systematic observations, codifying what to observe and record. These tools vary from open-ended to highly structured, depending on the research question(s) and what is known a priori. We describe below three general tool categories—descriptive fieldnotes, semi-structured templates, and structured templates.

3.1.3.1. Descriptive fieldnotes

Descriptive fieldnotes, common in anthropology, are open-ended notes recorded with minimal a priori fields. Descriptive fieldnotes are ideal for research questions where less is known. An almost blank page is used to record the phenomena of interest. Key information such as date, time, location, people present and who recorded the information are useful for later analysis. These notes are jotted sequentially in real-time to maximize data collection, and are filled out and edited later for clarity and details. The flexible and open format facilitates the capture of unanticipated events or interactions.

Descriptive fieldnotes describe in detail what is observed (e.g., who is present, paraphrased statements), while leaving out interpretation. Analytic notes, which interpret what is being observed, can accompany the descriptive notes (e.g., the doctor is frowning and seems skeptical of what the patient is saying), but these analytic notes should be clearly marked as interpretation. One author (GF) demarcates interpretive portions of her fieldnotes using [closed brackets] to identify this portion of the fieldnote as distinct from the descriptive data. Interpretive notes should explain why the observer thinks this might be the case, using supporting data from the observation. Building on the example above, an accompanying interpretive note might say, “[the doctor raised their eyebrows, and does not seem to believe what the patient is saying, similar to what was observed in another encounter; see site 5 fieldnote].” This information can be valuable during analysis to contextualize what was recorded and used in a later report or paper. Observation experience builds comfort and expertise with the open-ended, unstructured format.

3.1.3.2. Semi-structured templates

A semi-structured template comprises both open-ended and structured fields ( Fig. 2 ). It includes the same key information described above (i.e., date, time, etc.), then provides prompts for a priori concepts underlying the research questions, often derived from a theoretical model. These literature-based, theoretical concepts should be clearly defined and operationalized. For example, drawing from Street et al’s [ 24 ] framework for patient-centered communication, we can use their six functions (fostering the patient-clinician relationship, exchanging information, responding to emotions, managing uncertainty, making decisions, and enabling self-management) to develop categories for a semi-structured coding template. Like descriptive fieldnotes, the template also provides open-ended space for capturing contextual details about the a priori data recorded in the structured section.

Fig 2

Semi-Structured Observation Template.
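As an illustration of how such a template might be represented electronically, the sketch below pairs fixed header fields with a priori prompts named after Street et al’s six functions, plus open-ended space; the field names and structure are assumptions for illustration, not the authors’ template shown in Fig. 2.

```python
# Sketch of a semi-structured observation template: fixed header fields,
# a priori prompts drawn from Street et al's six functions of
# patient-centered communication, and open-ended space for context.
# The structure is illustrative, not the authors' actual template.
SEMI_STRUCTURED_TEMPLATE = {
    "header": {
        "date": "", "start_time": "", "end_time": "",
        "location": "", "observer": "", "people_present": "",
    },
    "a_priori_prompts": {
        "fostering the patient-clinician relationship": "",
        "exchanging information": "",
        "responding to emotions": "",
        "managing uncertainty": "",
        "making decisions": "",
        "enabling self-management": "",
    },
    # Open-ended space, as in descriptive fieldnotes, for unanticipated
    # events and contextual detail about the structured entries above.
    "open_ended_notes": "",
    "interpretive_notes": "",  # kept separate and clearly marked
}
```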

3.1.3.3. Structured templates

A structured template in the form of a checklist or recording sheet captures specific, pre-determined phenomena. Structured templates are most useful when the phenomena are known. These templates are commonly used in psychology and engineering. Structured observations are more deductive and based on theoretical models or literature-based concepts. The template prompts the observer to record whether a phenomenon occurred, its frequency, and sometimes its duration or quality. See Keen [ 5 ] or Roter [ 25 ] for example structured templates for recording patient-centered care or patient-provider communication.

All templates should include key elements like the date, time and observer. Descriptive fieldnotes and semi-structured templates should be briefly filled out during the observation, and then written more thoroughly immediately afterwards. Setting aside time during data collection, such as a few hours at the end of each day, facilitates completion of this step. Recording information immediately, rather than weeks or months later, enhances data quality by minimizing recall bias. If written too much later, the recorder might fill in holes in their memory with inaccurate information. Further, small details, written while memories are fresh, may seem unremarkable but later provide critical insights.

For the semi-structured and structured templates, which contain prepopulated fields, there should be an accompanying “codebook” of definitions describing the parameters for each field. For example, building on the previous example using Street et al’s constructs, the code “responding to emotions” could identify instances where patients appear to be sad or worried and the provider responds to these emotions (also termed empathic opportunities and empathic responses) by eliciting, exploring, and validating the patients’ emotions [ 25 , 26 ]. This process operationally defines each concept and facilitates more reliable data capture. If space allows, the codebook can be included in the template and referenced during data collection. Codebooks should be updated through team discussion and as observations are piloted. Definitions from the codebook can be used in later reports and manuscripts.
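A codebook can be kept in whatever format the team prefers; the sketch below pairs each prepopulated code with a short operational definition and a simple lookup used during data collection. The definitions are illustrative paraphrases rather than the authors’ actual codebook and would be refined through team discussion and piloting.

```python
# Illustrative codebook: each prepopulated template field maps to an
# operational definition agreed on by the team. Definitions here are
# paraphrased examples only and should be updated as piloting proceeds.
CODEBOOK = {
    "responding to emotions": (
        "Patient appears sad or worried (an empathic opportunity) and the "
        "provider elicits, explores, or validates the emotion (an empathic "
        "response)."
    ),
    "exchanging information": (
        "Provider asks about or explains clinical information; patient asks "
        "questions or volunteers information relevant to their care."
    ),
}

def lookup(code: str) -> str:
    """Return the operational definition to apply during data collection."""
    return CODEBOOK.get(code, "Code not yet defined; raise at team debrief.")

print(lookup("responding to emotions"))
```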

3.2. Piloting

Given the real-world context within which observation data is collected, pilot-testing helps ensure that ideas work in practice. Piloting provides an opportunity to ensure the research plan works and reduce wasted resources. For example, piloting could reveal issues with the sampling plan (e.g., the phenomena do not happen frequently enough), staffing capacity (e.g., there are too many people to follow) or the codebook (e.g., few of the items specified in the data collection template are observed). Further, piloting gives the team a chance to systematize data collection and address issues before they interfere with the overall study integrity. This process guides what refinements need to be made to the data collection procedures. Piloting should be done at least once in a setting comparable to the intended setting.

3.3. Collecting data, analysis and dissemination

Healthcare studies are commonly conducted by interdisciplinary teams. The observation team should include at minimum two people, including someone with prior observation experience. Having more than one person collecting data increases capacity, distributes the workload and facilitates scheduling flexibility. Multiple observers complement each other’s perspectives and can provide diverse analytic insights. The observers should be engaged early in the research process. Having regular debriefing meetings during data collection ensures data quality and reliability in data collection. Adding key members of relevant communities to the team, such as patients or providers, can further enhance the relevance and help the research team think about the implications of the work.

Observational data collection often takes place in fast-paced clinical settings. For paper-based data collection, consolidating the materials on a clipboard and/or using colored paper or tabs facilitates access. Entering information directly on an electronic tablet bypasses the need for later, manual data entry.

Data analysis should be considered early in the research process. The analytic plan will be informed by both the principles of the epistemological tradition from which the overall study design is drawn and the research questions. Studies using observation are premised on a range of epistemological traditions. Analytical approaches, standards, and terminology differ between anthropologically informed qualitative observations recorded using descriptive fieldnotes versus structured, quantitative checklists premised on psychological or systems engineering principles. A full description of analysis is thus beyond the scope of this guide. Analytic strategies can be found in discipline-specific texts, such as Musante and DeWalt [ 27 ] for anthropology, Suen and Ary [ 28 ] for psychology, or Lopetegui et al [ 29 ] for systems engineering. Regardless of disciplinary tradition, analytic decisions should be made based on the study design, research question(s), and objective(s).

Dissemination is a key, final step of the research process. Observation data lends itself to a rich description of the phenomena of interest. In health research, this data is often part of a larger mixed methods study. The observation protocol should be described in a manuscript’s methods section; the results should report on what was observed. Similar to reporting of interview data, the observed data should include key descriptors germane to the research question, like actors, site number, or setting. See Fix et al [ 1 ] and McCullough et al [ 4 ] for examples on how to include semi-structured, qualitative observation data in a manuscript and Waisel et al [ 17 ] and Kuhn et al [ 19 ] for examples of reporting structured, quantitative data in a manuscript.

3.4. Institutional review boards

Healthcare Institutional Review Boards may be unfamiliar with observation. Being explicit about data collection can proactively address concerns. The protocol should detail which individuals will be observed, if and how they will be consented and what will and will not be recorded. Using a reference like the Health Insurance Portability and Accountability Act (HIPAA) identifiers (e.g., name, street address) can guide what identifiable information is collected. The protocol should also describe how the team will protect data, especially while in the field (e.g., “immediately after data collection, written informed consents will be taken to an office and locked in a filing cabinet”).
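One way to make that guidance concrete is to screen every planned data field against a deny-list of identifiers before submitting the protocol. The sketch below uses a partial, illustrative subset of the HIPAA identifiers, and the planned field names are hypothetical.

```python
# Sketch: screen planned observation fields against a deny-list of
# identifiers. The deny-list is a partial, illustrative subset of the HIPAA
# identifiers; a real protocol should specify the full set it relies on.
HIPAA_IDENTIFIERS = {
    "name", "street_address", "phone_number", "email",
    "medical_record_number", "date_of_birth", "full_face_photo",
}

planned_fields = {  # hypothetical fields from a data collection template
    "site_number", "provider_role", "encounter_start_time", "name",
}

flagged = planned_fields & HIPAA_IDENTIFIERS
if flagged:
    print(f"Remove or justify identifiable fields before IRB review: {flagged}")
```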

There are unique risks in studies using observation because data is collected in “the field.” Precautions attentive to these settings protect both participants and research team members. A detailed protocol should describe steps to address potential issues, including rare or distressing events, or what to do if a team member witnesses a clinical emergency or a participant discloses trauma. Additionally, team members may need to debrief after distressing experiences.

4. Discussion & conclusion

4.1. Discussion

The ability to improve healthcare is limited if real-world data are not taken into account. Observation methods can elucidate phenomena germane to healthcare’s most vexing problems. Considerable literature documents the discrepancy between what people report and their behavior [ [30] , [31] , [32] ]. Direct observation can provide important insights into human behavior. In their ethnographic evaluation of an HIV intervention, Evans and Lambert [ 31 ] found, “observation of actual intervention practices can reveal insights that may be hard for [participants] to articulate or difficult to pinpoint, and can highlight important points of divergence and convergence from intervention theory or planning documents.” Further, they saw ethnographic methods as a tool to understand “hidden” information in what they call “private contexts of practice.” In Rich et al.’s work [ 32 ], children with asthma were asked about exposure to smoking. Although the children did not report smoking in the home, videos they recorded—part of the study design—documented smokers outside their home. The use of observation can help explain research questions as diverse as patients’ health behaviors [ 7 , 10 , 32 ], healthcare delivery [ 3 , 4 ] or the outcomes of a clinical trial [ 9 , 33 ].

A common critique in healthcare research is that observing behavior will change behavior, a concept known as the Hawthorne Effect. Goodwin’s study [ 34 ], using direct observation of physician-patient interactions, explicitly examined this phenomenon and found a limited effect. We authors have observed numerous instances of unexpected behavior by healthcare employees, such as making disparaging comments about patients, eye rolling, or eating in sterile areas. Thus, those of us who conduct observation often say that if behavior change were as easy as observing people, we could simply place observers in problematic healthcare settings.

The descriptions above on how to use observation are applicable to fields like health services research and implementation and improvement sciences, which have similarly adapted other social science approaches [ [35] , [36] , [37] , [38] , [39] , [40] ]. Notably, unlike the social sciences, many health researchers work in teams, and thus this guide is written for team-based work. Yet, health researchers sometimes also conduct observations without support from a larger team. While this may be done because of resource constraints, it may raise concerns about the validity of the observations. First, the social sciences have a long history of solo researchers collecting and analyzing data, yielding robust, rigorous findings [ 13 , [41] , [42] , [43] ]. Using strategies such as those outlined above (i.e., writing detailed, descriptive fieldnotes immediately; keeping interpretations separate from the data; looking for negative cases) can enhance rigor. Further, constructs like validity are rooted in quantitative, positivist epistemologies and need to be adapted for naturalistic study designs, like those that include direct observation [ 44 ].

4.2. Innovation

Social science-informed research designs, such as those that include observation, are needed to tackle the dynamic, complex, “wicked problems” that impede high-quality healthcare [ 45 ]. Thoughtful, rigorous use of observation tailored to the unique context of healthcare can provide important insights into healthcare delivery problems and ultimately improve healthcare.

Additionally, observation provides several ways to involve key communities, like patients or providers, as participants. Observing patient participants can provide information about healthcare processes or structures, and inform research about patient experiences of care or the extent of patient-centeredness. With the movement towards engaging end users in research, these individuals can contribute more meaningfully [ 46 , 47 ]. As team members, they can define the problem, inform what to observe, how to observe, help interpret data and disseminate findings.

4.3. Conclusion

Observation’s long history in the social sciences provides a robust body of work with strategies that can inform healthcare research. Yet, traditional social science approaches, such as extended, independent fieldwork, may be untenable in healthcare settings. Thus, adapting social science approaches can better meet healthcare researchers’ needs.

This paper provides an innovative, yet practical adaptation of social science approaches to observation that can be feasibly used by health researchers. Team meetings, developing data collection tools and protocols, and piloting each enhance study quality. During development, teams should determine if observation is an appropriate method. If so, the team should then discuss what data to collect and how, as described above. Piloting improves data collection procedures. While many aspects of observation can be tailored to health research, analysis is informed by epistemological traditions. Having clear steps for health researchers to follow can increase the rigor or credibility of observation.

Rigorous utilization of observation can enrich healthcare research by adding unique insights into complex problems. This guide provides a practical adaptation of traditional approaches to observation to meet healthcare researchers’ needs and transform healthcare delivery.

This work was supported by the US Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development. Dr. Fix is a VA HSR&D Career Development awardee at the Bedford VA (CDA 14-156). Drs. Fix, Kim and McCullough are employed at the Center for Healthcare Organization and Implementation Research, where Dr. Ruben was a postdoctoral fellow. The authors received no financial support for the research, authorship, and/or publication of this article.

Declaration of Competing Interest

All authors declared no conflict of interests.

Acknowledgements

This work has been previously presented as workshops at the 2015 Veteran Affairs Health Services Research & Development / Quality Enhancement Research Initiative National Meeting (Philadelphia, PA) and the 2016 Academy Health Annual Research Meeting (Boston, MA). We would like to acknowledge Dr. Shihwe Wang for participating in the 2015 workshop; Dr. Adam Rose for encouragement and helpful comments; and the VA Anthropology Group for advancing the utilization of direct observation in the US Department of Veteran Affairs. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

Duke University Libraries

Qualitative Research: Observation


Participant Observation



What is an observation?

A way to gather data by watching people, events, or noting physical characteristics in their natural setting. Observations can be overt (subjects know they are being observed) or covert (subjects do not know they are being watched).

  • Researcher becomes a participant in the culture or context being observed.
  • Requires the researcher to be accepted as part of the culture being observed in order to succeed.

Direct Observation

  • Researcher strives to be as unobtrusive as possible so as not to bias the observations; the stance is more detached.
  • Technology can be useful (e.g., video, audio recording).

Indirect Observation

  • Results of an interaction, process or behavior are observed (for example, measuring the amount of plate waste left by students in a school cafeteria to determine whether a new food is acceptable to them).

Suggested Readings and Film

  • Born into Brothels (2004). Oscar-winning documentary and an example of participatory observation; portrays the life of children born to prostitutes in Calcutta. New York-based photographer Zana Briski gave cameras to the children of prostitutes and taught them photography.
  • Davies, J. P., & Spencer, D. (2010). Emotions in the field: The psychology and anthropology of fieldwork experience. Stanford, CA: Stanford University Press.
  • DeWalt, K. M., & DeWalt, B. R. (2011). Participant observation: A guide for fieldworkers. Lanham, MD: Rowman & Littlefield.
  • Reinharz, S. (2011). Observing the observer: Understanding our selves in field research. New York: Oxford University Press.
  • Schensul, J. J., & LeCompte, M. D. (2013). Essential ethnographic methods: A mixed methods approach. Lanham, MD: AltaMira Press.
  • Skinner, J. (2012). The interview: An ethnographic approach. New York: Berg.


Research Design Review

A discussion of qualitative & quantitative research design

Facilitating Reflexivity in Observational Research: The Observation Guide & Grid

Observational research is “successful” to the extent that it satisfies the research objectives by capturing relevant events and participants along with the constructs of interest.  Fortunately, there are two tools – the observation guide and the observation grid – that serve to keep the observer on track towards these objectives and generally facilitate the ethnographic data gathering process.

Not unlike the outlines interviewers and moderators use to help steer the course of their in-depth interviews and group discussions, the observation guide serves two important purposes: 1) It reminds the observer of the key points of observation as well as the topics of interest associated with each, and 2) It acts as the impetus for a reflexive exercise in which the observer can reflect on his/her own relationship and contribution to the observed at any moment in time (e.g., how the observer was affected by the observations).  An observation guide is an important tool regardless of the observer’s role.  For each of the five observer roles * – nonparticipant (off-site or on-site) and participant (passive, participant-observer, or complete) observation – the observation guide helps to maintain the observer’s focus while also giving the observer leeway to reflect on the particular context associated with each site.

Observation grid

* Roller & Lavrakas, 2015. Applied Qualitative Research Design: A Total Quality Framework Approach . New York: Guilford Press.



Classroom Observation Protocols & Teaching Inventories

A variety of published tools can assist instructors when assessing their teaching practices. Many such tools, including classroom observation protocols and teaching inventories, have been utilized in science, technology, engineering and math (STEM) courses, but are easily adaptable to other disciplines.

In observation protocols, an observer witnesses classroom teaching or views a videotape of instruction. While doing so, the observer fills out the protocol, typically consisting of questions that (1) ask whether particular teaching and learning behaviors were observed, (2) use a Likert scale to capture the extent to which the behavior was seen in the classroom, and/or (3) allow for open-ended general feedback. Because observation protocols are typically designed to measure particular approaches, instructors should be careful to choose one for its specific assessment purpose. In contrast, teaching inventories can often be completed quickly by the instructor to obtain an overall assessment of practices. Teaching inventories are often used in more low-key, self-assessing reflective teaching approaches.
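As a rough illustration of how interval-based protocol data might be tallied after a single observation, the sketch below counts how often each code was marked; the interval length, code abbreviations, and data are hypothetical, loosely modeled on COPUS-style coding rather than on any specific published instrument.

```python
# Sketch: tally codes recorded at fixed intervals during one class
# observation. Interval length, code names, and data are hypothetical
# (e.g., Lec = lecturing, CQ = clicker question, GW = group work).
from collections import Counter

intervals = [  # one set of marked codes per 2-minute interval
    {"Lec"}, {"Lec"}, {"Lec", "CQ"}, {"GW"}, {"GW"}, {"Lec"},
]

counts = Counter(code for interval in intervals for code in interval)
total = len(intervals)
for code, n in counts.most_common():
    print(f"{code}: marked in {n}/{total} intervals ({100 * n / total:.0f}%)")
```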

Instructors should keep in mind several benefits and challenges with classroom observation protocols and inventories. Training is often required if an observation protocol is used for research or other purposes where reliability between observers is essential. Also, while using a protocol just once provides a snapshot view of the classroom, multiple observations can enhance the reliability of the assessments. Some protocols may also pass judgment on instruction, which can be awkward to share with the instructor being assessed. Teaching inventories, while often quick to complete, involve self-report of teaching practices, and can lack total objectivity. They also tend to focus on evidence of quantity (e.g., how often a particular behavior is observed) over quality. Where possible, coupling teaching inventories with observation protocols may be desirable.

A variety of published observation protocols and teaching inventories have been implemented and researched extensively in higher education.

Classroom Observation Protocols

  • TDOP (Teaching Dimensions Observation Protocol) - Assesses multiple dimensions of teaching and is customizable. http://tdop.wceruw.org/
  • RTOP (Reformed Teaching Observation Protocol) (Piburn et al., 2000) - Measures student- versus teacher-centered practices. http://physicsed.buffalostate.edu/AZTEC/RTOP/RTOP_full/index.htm
  • COPUS (Classroom Observation Protocol for Undergraduate STEM) (Smith et al., 2013)

Teaching Inventories

  • Approaches to Teaching Inventory (Trigwell & Prosser, 2004) - Measures whether the instructors’ teaching practices are focused on information transmission or conceptual change.
  • Teaching Practices Inventory (Wieman & Gilbert, 2014) - Characterizes general teaching practices in math and science.
  • Advancing Inclusion and Anti-Racism in the College Classroom rubric (Blonder et al., 2022) - Describes practices to develop anti-racist teaching approaches.

Poorvu Center staff are available to discuss these and a variety of other teaching inventories that might be of use to instructors.

Recommendations

  • Choose Carefully - Instructors should choose a teaching inventory and/or observation protocol that measures the behaviors for which feedback is desired. Additionally, instructor and observer should meet beforehand to align goals, ensuring that the observer knows to pay particular attention to specific practices. 
  • Debrief -  Instructor and observer should schedule a time to debrief soon after the observation. This often happens over coffee, in a no-judgment, evaluation-free climate.
  • Compare Data Points - Instructors may consider using both a teaching inventory for self-assessment purposes, and have an observer use a teaching protocol in class. When these instruments assess similar items, the outcomes/feedback can be useful to compare.
  • Assess Again - After receiving feedback from the observer and reflecting upon practices, instructors might consider asking the observer to re-assess practices during a subsequent class in which changes have been made.
  • Consider Total Alignment - Instructors can assess the syllabus and the flow of course design in tandem. The “Downloads” section at the bottom of this page includes an assessment for considering, as an example, the degree of inclusivity in the syllabus and course design.

References and Additional Resources

Blonder, B., Bowles, T., De Master, K., Fanshel, R. Z., Girotto, M., Kahn, A., Keenan, T., Mascarenhas, M., Mgbara, W., Pickett, S., Potts, M., & Rodriguez, M. (2022). Advancing Inclusion and Anti-Racism in the College Classroom: A rubric and resource guide for instructors (1.0.0). Zenodo. https://doi.org/10.5281/zenodo.5874656

Osthoff, E., Clune, W., Ferrare, J., Kretchmar, K., & White, P. (2009). Implementing immersion: Design, professional development, classroom enactment and learning effects of an extended science inquiry unit in an urban district. Madison: University of Wisconsin–Madison, Wisconsin Center for Educational Research.

Piburn, M., Sawada, D., Falconer, K., Turley, J., Benford, R., & Bloom, I. (2000). Reformed Teaching Observation Protocol (RTOP). ACEPT IN-003.

Trigwell, K., & Prosser, M. (2004). Development and Use of the Approaches to Teaching Inventory. Educational Psychology Review, 16(4): 409-424.

Smith, M., Jones, F., Gilbert, S., and Wieman, C. (2013). The Classroom Observation Protocol for Undergraduate STEM (COPUS): A New Instrument to Characterize University STEM Classroom Practices. CBE-Life Sciences Education, Vol. 12.4.

Wieman C., Gilbert S. (2014). The Teaching Practices Inventory: A New Tool for Characterizing College and University Teaching in Mathematics and Science. CBE-Life Sciences Education, 13(3):552–569.


BMJ Open, Volume 7, Issue 9

Protocol for a qualitative study exploring perspectives on the INternational CLassification of Diseases (11th revision); Using lived experience to improve mental health Diagnosis in NHS England: INCLUDE study

  • Corinna Hackmann 1,
  • Amanda Green 1,
  • Caitlin Notley 2,
  • Amorette Perkins 1,
  • Geoffrey M Reed 3,
  • Joseph Ridler 1,
  • Jon Wilson 1,2,
  • Tom Shakespeare 2
  • 1 Department of Research and Development, Norfolk and Suffolk NHS Foundation Trust, Hellesdon Hospital, Norwich, UK
  • 2 Department of Clinical Psychology, Norwich Medical School, University of East Anglia, Norwich, UK
  • 3 Department of Psychiatry, Global Mental Health Program, Columbia University Medical Centre, New York, New York, USA
  • Correspondence to Dr Corinna Hackmann; Corinna.hackmann{at}nsft.nhs.uk

Introduction Developed in dialogue with WHO, this research aims to incorporate lived experience and views in the refinement of the International Classification of Diseases Mental and Behavioural Disorders 11th Revision (ICD-11). The validity and clinical utility of psychiatric diagnostic systems has been questioned by both service users and clinicians, as not all aspects reflect their lived experience or are user friendly. This is critical as evidence suggests that diagnosis can impact service user experience, identity, service use and outcomes. Feedback and recommendations from service users and clinicians should help minimise the potential for unintended negative consequences and improve the accuracy, validity and clinical utility of the ICD-11.

Methods and analysis The name INCLUDE reflects the value of expertise by experience as all aspects of the proposed study are co-produced. Feedback on the planned criteria for the ICD-11 will be sought through focus groups with service users and clinicians. The data from these groups will be coded and inductively analysed using a thematic analysis approach. Findings from this will be used to form the basis of co-produced recommendations for the ICD-11. Two service user focus groups will be conducted for each of these diagnoses: Personality Disorder, Bipolar I Disorder, Schizophrenia, Depressive Disorder and Generalised Anxiety Disorder. There will be four focus groups with clinicians (psychiatrists, general practitioners and clinical psychologists).

Ethics and dissemination This study has received ethical approval from the Coventry and Warwickshire HRA Research Ethics Committee (16/WM/0479). The output for the project will be recommendations that reflect the views and experiences of experts by experience (service users and clinicians). The findings will be disseminated via conferences and peer-reviewed publications. As the ICD is an international tool, the aim is for the methodology to be internationally disseminated for replication by other groups.

Trial registration number ClinicalTrials.gov: NCT03131505 .

  • International Classification of Diseases
  • Personality Disorders
  • Anxiety Disorders

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

https://doi.org/10.1136/bmjopen-2017-018399


Strengths and limitations of this study

This study is the first to gather expert by experience views on the proposed criteria to be fed into the revision process of the International Classification of Diseases.

All aspects of the proposed study have been co-produced with experts by experience and agreed with a representative from WHO.

Qualitative focus group data will be thematically analysed to form the basis of co-produced recommendations to be fed back to WHO.

The themes and resulting recommendations will be limited to five diagnostic categories and will only reflect views from the UK.

Introduction

Diagnostic systems have a number of functions both from the perspective of the clinician and service user. 1–3 Diagnosis offers indications for treatment, may guide expectation regarding prognosis and can help people to make sense of their experiences of living with mental health (MH) difficulties. 1 2 In order for a diagnostic system to be useful, it is critical that it reflects the day-to-day experiences of people living with the symptoms. Service users have reported relief derived from diagnostic definitions that resonate with and explain their experiences. 1 4 On the other hand, some feel their diagnosis does not ‘fit’ with or describe their experiences, and thus has limited utility other than being a ‘tick box’ exercise of labelling and categorising. 5–7 To date, it appears that no revision of the major systems for psychiatric diagnosis (International Classification of Diseases (ICD) or Diagnostic and Statistical Manual of Mental Disorders) has sought feedback from service users prior to publication.

Diagnostic systems are designed for clinicians; despite this, service users can easily access the diagnostic criteria. Research shows that the labels, language and descriptions used in these systems can impact people’s self-perception, their interpretations of how other people view them and their understanding of the implications of having a diagnosis, including the prognosis and potential for recovery. 5 8 9 These interpretations can have a direct impact on factors such as self-worth and self-stigmatisation, social and occupational functioning, recovery and service use. 5 6 10 For example, service users have reported that terms like ‘disorder’ and ‘enduring’ suggest permanency, impeding their hope for recovery. 5 Similarly, others have reported that the descriptions and terms used in diagnostic systems (eg, language like ‘deviant’, ‘incompetent’, ‘disregard for social obligations’ and ‘limited capacity’) can be stigmatising and unhelpful, leading to feelings of rejection, anger and possible avoidance of services. 5 6 8 Clarity on the perceptions of individuals receiving a diagnosis, in terms of the language, meaning and implications of what is included in the system, may help to minimise possible negative consequences.

Evidence suggests that clinicians also have concerns regarding the validity and clinical utility of the current diagnostic systems. 3 11–13 For instance, health professionals have reported that some diagnostic definitions feel arbitrary, artificial or unreflective of the typical presentations they observe in practice. 11 12 Other evidence suggests that clinicians find the categories difficult to use, particularly for distinguishing between disorders. 9 12 14 Clinicians have also expressed reservations regarding the terminology and associated stigma, particularly for conditions such as Schizophrenia and Personality Disorder. 13 15 These findings are from studies that have been conducted after the criteria have been released. Prospective input from clinicians on the proposed criteria as part of the process of revision may therefore improve the validity and clinical utility of diagnostic systems.

The value of expertise by experience is increasingly recognised by policymakers, 16–18 service providers and researchers. 19 20 Many have argued that processes of diagnosis could be improved by including perspectives of those with lived experience. 10 21 It has been suggested that within the diagnostic categories, “the traditional language is useful for listing and sorting but not for living and experiencing. ‘Naming’ a thing is not the same as ‘knowing’ a thing” (p90) 22 and therefore categories could be improved by viewing service users as ‘authors of knowledge from whom others have something to learn’ (p291). 21 Likewise, it has been argued that diagnostic systems could be improved by addressing problems identified by practising clinicians. 3

Input regarding the proposed content for the ICD-11 from service users and clinicians should be used to support the process of revision and improvement. Feedback and clarity from service users on (1) whether the content of the system is in line with their experience of symptoms and (2) their interpretations of the content and language should facilitate the development of a system that is more accurate and valid, with minimised unintended negative impact.

Aims and objectives

This research project will use a focus group methodology to ask service users and clinicians who use the ICD diagnostic tool (psychiatrists and general practitioners) their views on the proposed content for the ICD-11. Data collected through collaborative discussion in the groups will be inductively analysed, and resulting themes will be triangulated with an advisory group (involving additional service users and clinicians). The output will be recommendations for improvement to ICD-11 content that have been co-produced with a feedback group (of different service users and clinicians).

Research questions

What are the views and perspectives of service users and clinicians on the content of the ICD-11?

How could the system be improved for the benefit of service users and clinicians?

Methods and analysis

Study design

This is a qualitative study. Data will be collected through focus groups. Focus groups are an appropriate method of data collection for research questions that seek to explore the views and perspectives of service users and clinicians; our analysis will aim to define key themes and points of consensus or divergence gathered through interaction, 23 24 drawing on participants’ own perspectives and choice of language. 25 Participants will be given a copy of the proposed diagnostic criteria relevant to their diagnosis to discuss in the group. This will include both the technical version (as it is proposed for the ICD-11) and a lay translation of the criteria. Thematic analysis 26–28 will be used to identify emergent, recurring and/or salient themes in the focus group data. The themes will form the basis for co-produced recommendations to support the development of the ICD-11. Data collection for this study commenced in February 2017 and analyses are planned to be completed and fed back to WHO by the end of December 2017.
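As an illustration of how coded focus group excerpts could be collated during analysis, the sketch below groups excerpts under candidate codes and summarises their spread across groups. The codes, group labels, and excerpts are hypothetical placeholders; the actual analysis will follow the cited thematic analysis procedures and involve the full co-production team.

```python
# Sketch: collate coded focus group excerpts and summarise how widely each
# candidate code appears across groups. All data shown are hypothetical.
from collections import defaultdict

coded_excerpts = [  # (code, focus group, excerpt)
    ("language feels stigmatising", "SU_group_3",
     "the word 'disorder' sounds permanent"),
    ("criteria match lived experience", "SU_group_1",
     "that description is exactly what my low periods are like"),
    ("language feels stigmatising", "Clin_group_2",
     "I would avoid reading that wording aloud to a patient"),
]

by_code = defaultdict(list)
for code, group, excerpt in coded_excerpts:
    by_code[code].append((group, excerpt))

for code, excerpts in by_code.items():
    groups = {group for group, _ in excerpts}
    print(f"{code}: {len(excerpts)} excerpt(s) across {len(groups)} group(s)")
```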

Co-production

The research team that developed this project includes a service user expert by experience (AG), two academics (TS, CN), two research clinicians (a consultant psychiatrist (JW) and a clinical psychologist (CH)) and two research assistant psychologists (AP, JR). A service user expert by experience research team member will be involved in all aspects of the research, including design, facilitating focus groups, analysis, write-up and dissemination.

In developing the project, team members consulted a local service user governor, service users and the service user involvement leads at the hosting National Health Service organisation. This input helped shape the design (changing and broadening the process of recruitment of service users and supporting the use of focus groups) and the initial selection of the diagnoses that were included.

Co-production with service users, clinicians and researchers will continue throughout the project. Data analysis will be co-produced through involvement of the service user expert by experience on the research team and the advisory and feedback groups.

Diagnoses under investigation

With agreement from WHO, five diagnoses have been selected for exploration: Personality Disorder, Bipolar I Disorder, Schizophrenia, Depressive Disorder and Generalised Anxiety Disorder. These diagnoses include a wide range of symptom phenomena. Personality Disorder, Bipolar I Disorder and Schizophrenia are found to be more stigmatised, rejected and negatively viewed than other diagnoses, meaning they may have a particularly negative impact and be more consistently associated with harm. 29 30 Depressive Disorder and Generalised Anxiety Disorder are highly prevalent, making the largest contribution to the burden of disease in middle-income and high-income countries, including the UK. 31

Lay translation

The lay translations of the criteria have been produced by members of the research team including psychiatrists and other clinicians, and approved by a representative from WHO to ensure they reflect the proposed ICD-11. Documents have been created presenting lay translations alongside the technical version as it is written in the ICD-11, so that participants are easily able to refer to either source. Copies of these are available in English for researchers wishing to replicate this study.

Recruitment

Sampling will be purposive and include a number of pathways to ensure maximum inclusivity. Recruitment of service users will be both via clinicians in a MH trust and self-referral via a number of routes. Promotion of the study will be via clinicians, service user involvement leads in a MH trust and non-governmental organisations (NGOs). Clinicians in the MH trust will be asked to identify potential participants and seek consent to be contacted by the research team. Service user involvement leads will disseminate information about the study to service users and the membership of a MH trust (which includes many previous service users), providing a telephone number and email address to self-refer if interested. NGOs will promote the study using the same materials. The study will be promoted through recruitment posters, service user involvement forums and on social media. Clinicians will be recruited via team leaders, word of mouth and email communications promoting the project.

Once self-referral or consent to contact has been established, a member of the research team will make contact, provide potential participants with a brief overview of the study, and answer any questions. If the individual wishes to be involved in the study, they will be sent a copy of the participant information sheet via post or email. This information sheet outlines the purpose and nature of the study, and the ethical safeguards regarding data protection and privacy. Potential participants will have at least 72 hours to consider whether they would like to be involved in the study. If the individual would like to take part in the study, researchers will arrange to meet them at least 1 week before the focus group to complete the consent process and give them the relevant proposed diagnostic criteria to read and consider.
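
The protocol does not specify any tooling for tracking these minimum intervals; purely as an illustration (the function name and example dates below are hypothetical, not part of the protocol), the two stated timing rules could be checked as follows:

    from datetime import date, timedelta

    def consent_schedule_ok(info_sheet_sent: date, consent_meeting: date,
                            focus_group: date) -> bool:
        """Check the stated minimums: at least 72 hours to consider after the
        information sheet is sent, and the consent meeting held at least
        1 week before the focus group."""
        return (consent_meeting - info_sheet_sent >= timedelta(hours=72)
                and focus_group - consent_meeting >= timedelta(weeks=1))

    # Example: sheet sent 1 Feb, consent meeting 6 Feb, focus group 20 Feb
    print(consent_schedule_ok(date(2017, 2, 1), date(2017, 2, 6), date(2017, 2, 20)))  # True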

Sample size

There will be two service user focus groups for each of the five diagnoses, plus four clinician focus groups. The ICD system is primarily used by medical doctors in the UK, although clinical psychologists have been included in this study because they also apply the system in their work. 32 The diagnostic criteria presented to participants are divided into distinct discussion points, which will be addressed one by one during the focus groups; participants will be asked for feedback through predefined questions and prompts. These cover their views of the proposed features, the language used, the positives and negatives of what is included, and how the classification might be improved for the benefit of service users. The number of groups was agreed on this basis, drawing on research indicating that more standardised interviews decrease variability and therefore require fewer focus groups. 33 In total there will be 14 groups of three to six participants each, giving a total sample of 42–84 participants (30–60 service users and 12–24 clinicians). The advisory group will comprise three to five additional service users and three clinicians, and the feedback group will comprise five service users and three clinicians. The focus group size was chosen to allow participants the opportunity to discuss their views and experiences in detail while increasing recruitment feasibility. 34 This sample size should be sufficient to provide data that meet the aims and cover a range of views; evidence suggests that the majority of themes are discovered in the first two to three focus groups. 35
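
For transparency, the group and participant arithmetic stated above can be reproduced directly. The following minimal sketch (constant names are ours, not the authors') simply recomputes the 14 groups and the 42–84 participant range:

    # 2 service user groups per diagnosis x 5 diagnoses, plus 4 clinician
    # groups, each group holding 3-6 participants (figures from the protocol).
    DIAGNOSES = 5
    SERVICE_USER_GROUPS_PER_DIAGNOSIS = 2
    CLINICIAN_GROUPS = 4
    MIN_PER_GROUP, MAX_PER_GROUP = 3, 6

    service_user_groups = DIAGNOSES * SERVICE_USER_GROUPS_PER_DIAGNOSIS   # 10
    total_groups = service_user_groups + CLINICIAN_GROUPS                 # 14

    service_user_range = (service_user_groups * MIN_PER_GROUP,
                          service_user_groups * MAX_PER_GROUP)            # (30, 60)
    clinician_range = (CLINICIAN_GROUPS * MIN_PER_GROUP,
                       CLINICIAN_GROUPS * MAX_PER_GROUP)                  # (12, 24)
    total_range = (service_user_range[0] + clinician_range[0],
                   service_user_range[1] + clinician_range[1])            # (42, 84)

    print(total_groups, service_user_range, clinician_range, total_range)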

Inclusion and exclusion criteria

Adult service users (18 years and older) may be included in the focus groups if they have formally received at least one of the five diagnoses under investigation and have accessed services within the last 5 years (including those currently in receipt of services). People with multiple diagnoses may take part in only one focus group, but will be given the choice of which. Clinicians will have experience of working in MH, including use of the psychiatric diagnoses under investigation. Individuals may participate in only one of the following: a focus group, the advisory group or the feedback group.

Individuals will be excluded if they are under 18 years of age, lack the capacity to consent, or are unable to speak fluent English (which is required to participate in the focus groups). Individuals will also be excluded if their participation is deemed unsafe to themselves or others by their lead clinician or by clinicians on the research team.
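
As an illustration only, the stated inclusion and exclusion criteria for service users could be encoded as a simple screening check; the function, field names and reference date below are hypothetical assumptions, not part of the protocol:

    from datetime import date

    ELIGIBLE_DIAGNOSES = {
        "Personality Disorder", "Bipolar I Disorder", "Schizophrenia",
        "Depressive Disorder", "Generalised Anxiety Disorder",
    }

    def service_user_eligible(age, diagnoses, last_service_contact, can_consent,
                              fluent_english, deemed_safe, today=date(2017, 2, 1)):
        """Apply the stated inclusion criteria and exclusions for service users."""
        within_five_years = (today - last_service_contact).days <= 5 * 365
        has_relevant_diagnosis = bool(set(diagnoses) & ELIGIBLE_DIAGNOSES)
        return (age >= 18 and has_relevant_diagnosis and within_five_years
                and can_consent and fluent_english and deemed_safe)

    print(service_user_eligible(34, {"Schizophrenia"}, date(2015, 6, 1),
                                True, True, True))  # True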

Data collection

Focus groups are the most applicable method for data collection to meet our research aims, as attitudes, opinions and beliefs are more likely to be revealed in the reflective process facilitated by the social interaction that a focus group entails than by other methods. 23–25 Additionally, focus groups have proved to be a useful way of exploring stigma issues in MH, 36 and service users are often familiar with group settings for discussing MH issues.

The summary of the new diagnostic guidelines and the lay translation will enable participants to reflect on both the content and the language of the proposed criteria. During the groups, topic guides will encourage participants to discuss and share views on the relevant diagnostic category, including their overarching views, thoughts and feelings, as well as specific reflections on areas such as the language used, aspects that may be helpful or unhelpful, and suggestions for improvement.

Each focus group will be led by an experienced and trained member of the research team and have an assistant facilitator. Service user focus groups will last 60–90 min, and clinician focus groups will last 2–2.5 hours to account for the discussion of multiple diagnoses.

The focus groups will be audio-recorded and transcribed verbatim. The lead researcher will first read the transcripts and descriptively open code them, using participants' own language where possible. Approximately 25% of the transcripts will be independently open coded by another member of the research team as a validity check; codes will be compared and discussed until consensus is reached. The five diagnoses will initially be analysed separately to produce themes relevant to each diagnosis. These themes will then be compared to identify common themes relevant to all the diagnostic categories. Analysis of the data will be mainly descriptive. We will take a critical realist epistemological stance, recognising that there are multiple individual realities while taking a pragmatic approach of analysing data at face value, drawing on the perspectives of individuals as they choose to represent themselves through discussion. 37 Thematic analysis will be used to inductively code themes that recur or appear important. 26–28 The concept of salience will guide coding towards what is conceptually significant, not just frequently occurring. A qualitative data management software package (NVivo 11) will be used to facilitate data analysis.
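
The protocol resolves coding differences through discussion rather than a statistical agreement index. Even so, one hypothetical way the 25% double-coded segments could be tabulated before the consensus discussion is sketched below; the segment labels and example codes are invented for illustration:

    from collections import Counter

    def coding_overlap(lead_codes, second_codes):
        """For each double-coded segment, record whether the two coders' open
        codes are identical, partially overlapping, or entirely different."""
        tally = Counter()
        for segment, lead in lead_codes.items():
            other = second_codes.get(segment, set())
            if lead == other:
                tally["identical"] += 1
            elif lead & other:
                tally["partial overlap"] += 1
            else:
                tally["no overlap"] += 1
        return tally

    lead = {"seg1": {"label feels stigmatising"},
            "seg2": {"language too technical"},
            "seg3": {"criteria seen as helpful"}}
    second = {"seg1": {"label feels stigmatising"},
              "seg2": {"language too technical", "jargon"},
              "seg3": {"criteria confusing"}}
    print(coding_overlap(lead, second))
    # -> identical: 1, partial overlap: 1, no overlap: 1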

In addition to descriptive data for thematic coding, focus groups generate data that is conversational. Analysis of this requires an inductive approach that focuses on instances in the data where there is marked agreement (consensus), disagreement or divergence. These instances will be identified as ‘critical moments’. The sample size is small and purposive. Consequently, summary quantified coding matrices will not be produced. Instead there will be a focus on the 'critical moments' to direct the analysis and eventual findings, reporting on the issues that are of central importance to the participants.

Following analysis of each focus group, a second stage analysis will be conducted to compare and contrast findings across groups. The analysis will seek out consensus, disagreement and inconsistency within service user and clinician focus groups, and between diagnoses. This second stage analysis will involve discussions within the research team to refine the themes and to develop higher level themes, that is, grouping the open codes into meaningful conceptual categories. This will allow tentative conclusions to be drawn about aspects of the diagnostic criteria which may be particularly pertinent for some groups and less important for others. It will also enable conclusions to be drawn regarding generic language or overall responses to the diagnostic criteria, in comparison to more nuanced reactions to diagnostically specific categories.
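
One possible way to represent the grouping of open codes into higher-level themes, and to see where a theme recurs across group types and diagnoses, is sketched below. The data structure and the example theme are illustrative assumptions, not the team's actual NVivo output:

    # Hypothetical structure: each higher-level theme groups a set of open codes
    # and records the group type and diagnosis of each occurrence.
    theme_map = {
        "diagnostic language": {
            "open_codes": {"language too technical", "jargon",
                           "label feels stigmatising"},
            "occurrences": [
                {"group": "service user", "diagnosis": "Schizophrenia"},
                {"group": "clinician", "diagnosis": "Schizophrenia"},
                {"group": "service user", "diagnosis": "Personality Disorder"},
            ],
        },
    }

    def theme_spread(theme):
        """Summarise which group types and diagnoses a higher-level theme
        appeared in, to surface consensus or divergence across groups."""
        return {"group_types": sorted({o["group"] for o in theme["occurrences"]}),
                "diagnoses": sorted({o["diagnosis"] for o in theme["occurrences"]})}

    print(theme_spread(theme_map["diagnostic language"]))
    # -> {'group_types': ['clinician', 'service user'],
    #     'diagnoses': ['Personality Disorder', 'Schizophrenia']}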

The output from the analysis will be higher level themes and categories that form the basis of recommendations for the ICD-11. These themes will be triangulated with the advisory group. The resulting themes will be discussed with the feedback group in order to co-produce the recommendations. These recommendations will be contextualised with a description of the themes and identified areas of agreement and disagreement for feedback to WHO.

Data protection

All confidential data will be kept for 5 years on password-protected computers and/or in locked filing cabinets accessible only to members of the research team. During transcription, audio-recordings will be anonymised, with all identifiable information removed before the data are entered into the analysis software. All audio-recordings will be destroyed immediately after transcription.
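
The protocol does not prescribe a specific anonymisation tool. As a hedged illustration of one way identifying details could be replaced with placeholders during transcription (the helper and example strings below are hypothetical), a simple substitution step might look like this:

    import re

    def redact(transcript, identifiers, placeholder="[REDACTED]"):
        """Replace each known identifying string (names, places, services
        mentioned by a participant) with a neutral placeholder."""
        for item in identifiers:
            transcript = re.sub(re.escape(item), placeholder, transcript,
                                flags=re.IGNORECASE)
        return transcript

    print(redact("I saw Dr Smith at the Norwich clinic.", ["Dr Smith", "Norwich"]))
    # -> "I saw [REDACTED] at the [REDACTED] clinic."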

Ethics and dissemination

Ethical considerations

Written informed consent to participate and be audio-recorded will be obtained from all participants. Data management and storage will be subject to the UK Data Protection Act 1998. Ethical approval for the current study was obtained from the Coventry and Warwickshire Research Ethics Committee (Rec Ref: 16/WM/0479).

Declaration of Helsinki

This study complies with the Declaration of Helsinki, adopted by the 18th World Medical Association (WMA) General Assembly, Helsinki, Finland, June 1964 and last revised by the 64th WMA General Assembly, Fortaleza, Brazil, October (2013).

Output and dissemination

This research has been designed to obtain feedback with recommendations for the ICD-11, and to develop a methodology that can be replicated in other countries that use the ICD system. Additionally, the findings, and learning in terms of the process of co-producing and conducting research with experts by experience, will be disseminated via peer-reviewed publications, conferences, media and lay reports.

Service user involvement in MH is a priority. 19 Studies have found that both clinicians and service users have questioned the accuracy, validity and clinical utility of the ICD and other psychiatric diagnostic tools. 3 8 9 11 12 38 Despite this, to date, service user and clinician feedback has not been obtained prior to revision of the ICD manual. In light of this, it is not clear whether the content resonates with the experiences of people giving and receiving the diagnoses, whether it lacks clinical utility, or even whether it causes harm (eg, in terms of the language used).

Limitations

This study is designed to feed service user and clinician perspectives into the forthcoming revision of the ICD. The usefulness of the data and the resulting recommendations depends on input that reflects the views of the service users and clinicians whom the new system will affect. The study will include two focus groups for each disorder to minimise bias 35 and to account for group-think processes that may occur within individual groups. Taking a critical realist epistemological stance is a pragmatic way to work with discursive data created through the interactional context of a focus group. It is acknowledged that there are multiple competing realities and perspectives that may differ across time and context, and the findings will be limited to the time and context of this study. Transferability of the findings is nonetheless maximised by triangulation to ensure the inclusion of multiple stakeholder perspectives, enabled by the advisory and feedback groups of experts by experience who will co-produce the recommendations reported to WHO. Interpretation of the feedback will take into account these limitations on the generalisability of the findings. The project explores only five of the diagnoses included in the ICD-11, and although the ICD is used internationally, it will reflect the experiences and views of service users and clinicians in the UK only. Future research may include additional diagnostic categories and incorporate expertise by experience and relevant clinicians from other countries.

The current study will use feedback from experts by experience to co-produce recommendations for the revised diagnostic system proposed for the ICD-11. This feedback aims to improve the accuracy, validity and clinical utility of the manual, and minimise the potential for unintended negative consequences. This qualitative approach has not been previously employed by any countries that use the ICD system. Our vision is that this process will become a routine feature in future revisions of all diagnostic systems.

Acknowledgments

We would like to thank the library services at Norfolk and Suffolk Foundation Trust for aiding the searching and retrieval of documents. We would like to thank Kevin James (service user governor), and Lesley Drew and Sharon Picken (service user involvement leads), for their input during the development of the project. We would also like to thank Dr Bonnie Teague, who generously offered the benefit of her wisdom and proofreading skills.

References

  • 16. Department of Health. Putting People First: Planning together – peer support and self-directed support. London: Department of Health, 2010.
  • 17. Department of Health. No Health without Mental Health: A cross-government mental health outcomes strategy for people of all ages. London: Department of Health, 2011.
  • 18. Department of Health. Closing the Gap: Priorities for essential change in mental health. London: Social Care, Local Government and Care Partnership Directorate, 2014.
  • 31. World Health Organization. The global burden of disease: 2004 update. Switzerland: World Health Organization, 2008.
  • 32. The British Psychological Society. Diagnosis – policy and guidance. http://www.bps.org.uk/system/files/documents/diagnosis-policyguidance.pdf (accessed Jul 2017).

Contributors CH is the chief investigator for this project and wrote the protocol. TS is supervising the project and helped to develop all aspects of the project. AG is the expert by experience on the research team, and led on developing the co-production, and the public and patient involvement. CN led the development of the methodology. AP had a specific contribution to the literature review. GMR is the WHO consultant for the project. GMR developed the original idea for the project and has had input into the development of the lay criteria. JR provided input to ethical considerations and the lay criteria. JW led on the development of the lay criteria. All authors supported the development and critical review of the protocol.

Competing interests None declared.

Ethics approval Coventry and Warwickshire HRA Research Ethics Committee (16/WM/0479).

Provenance and peer review Not commissioned; externally peer reviewed.


Qualitative protocol guidance and template: consultation

This HRA consultation has closed and is displayed for reference only. CTIMP protocol guidance and templates can be found in the  protocol page of the research planning section of the site.

The HRA continues to work toward improving the quality and consistency of health research in the UK. As part of this, throughout 2015 the HRA developed a suite of protocol guidance and templates for different study types. Moving into the next phase we have produced protocol guidance and a template for qualitative studies.

The HRA is aware that the quality and content of protocols for qualitative research vary widely. Feedback to the HRA highlighted that this considerable variability was causing delays to reviews. In response, the HRA facilitated work to develop guidance and a template to help organisations and individuals improve the consistency and quality of their qualitative protocols.

A protocol which contains all the elements that review bodies consider is less likely to be delayed during the review process because the reviewers are less likely to require clarification from the applicant.

A multidisciplinary group from research-active organisations provided expertise to the project, which has produced this detailed guidance and template, now published for use and comment.

Is it mandatory to use this guidance and template?

No. The use of this collated consensus guidance and template is not mandatory. The guidance and template are published as standards to encourage and enable responsible research. The documents will:

  • Support researchers developing protocols where the sponsor does not already use a template
  • Support sponsors wishing to develop template protocols in line with validated guidance
  • Support sponsors to review their existing protocol template to assess whether it is in line with national guidance.

Can sponsors continue to use their own protocol templates?

Yes. The HRA acknowledges that institutions have specific needs, including specialised additional material, and may have their own templates. The HRA asks that sponsors advise those preparing protocols how their template has regard for the HRA guidance and template. In addition, the HRA recommends that each protocol states clearly how it meets the HRA guidance:

  • The protocol has regard for the HRA guidance and order of content
  • The protocol has regard for the HRA guidance
  • The protocol does not have regard to the HRA guidance and order of content

What are the benefits of using the guidance and template?

By clearly defining the expected components of a protocol, the guidance and template help researchers to be sure that they have covered all the elements required by sponsors, Research Ethics Committees and NHS sites. In the future this will also apply to applications for HRA Approval. Protocols which have regard for the guidance and template are less likely to raise queries that can cause delays.

Who can use the template?

The template can be used by all individuals and sponsoring organisations involved in authoring qualitative research projects.

How do I provide feedback?

The feedback period has now closed. 


