One-to-One Interview – Methods and Guide

One-to-One Interview

Definition:

A one-to-one interview is a research method in which the interviewer meets with one respondent at a time to ask questions.

This type of interview is used to collect qualitative data from respondents about their opinions, beliefs, or experiences. One-to-one interviews are usually conducted in person, but they can also be done over the phone or online.

One-to-One Interview Methods

Here are some common methods used for one-to-one interviews:

Structured Interviews

These interviews involve a predetermined set of questions that are asked in a specific order. The questions are usually closed-ended, and the interviewer may use a rating scale to measure the interviewee’s responses.

Unstructured Interviews

These interviews are more flexible and allow the interviewer to ask open-ended questions and follow up on the interviewee’s responses. Unstructured interviews can be useful for exploring a topic in depth and gaining a deeper understanding of the interviewee’s experiences and perspectives.

Semi-Structured Interviews

These interviews combine elements of structured and unstructured interviews. The interviewer may have a set of predetermined questions but can also ask follow-up questions and explore the interviewee’s responses in more detail.

Behavioral Interviews

These interviews are used to assess a candidate’s past behavior and performance in specific situations. The interviewer will ask the interviewee to provide examples of how they have handled certain situations in the past, and then evaluate their responses based on a set of predetermined criteria.

Informational Interviews

These interviews are used to gather information about a specific topic or industry. The interviewee may be an expert in the field or have experience working in the industry. Informational interviews are useful for gaining insights into a specific field or industry and can help the interviewer make more informed decisions.

One-to-One Interview Types

There are several types of one-to-one interviews, including:

Job Interviews

One-to-one job interviews are used to assess a candidate’s qualifications, skills, and fit for a specific position. The interviewer may ask questions about the candidate’s past experience, education, and job-related skills.

Performance Evaluations

These interviews are conducted to evaluate an employee’s job performance. The interviewer will assess the employee’s strengths and weaknesses and provide feedback on their performance.

Research Interviews

Research interviews are used to gather data for research purposes. The interviewer will ask questions about the interviewee’s experiences, opinions, and attitudes on a specific topic.

Counseling Interviews

These interviews are conducted to provide guidance and support to the interviewee. The interviewer may ask questions about the interviewee’s thoughts, feelings, and behaviors, and provide feedback and advice.

Investigative Interviews

Investigative interviews are used to gather information about a specific event or incident. The interviewer may ask questions about the interviewee’s involvement or knowledge of the event and gather information to make an informed decision.

How to Conduct a One-to-One Interview

Here are some steps you can follow to conduct a successful one-to-one interview:

Define the Purpose of the Interview

Before conducting the interview, it’s important to have a clear understanding of what you want to achieve. Define the purpose of the interview and make sure your questions are aligned with your goals.

Choose the Right Location

Select a quiet and comfortable location for the interview, free from distractions and interruptions.

Introduce Yourself and Explain the Purpose of the Interview

Start the interview by introducing yourself, explaining the purpose of the interview, and reassuring the interviewee that their participation is voluntary.

Ask Open-ended Questions

Ask open-ended questions that encourage the interviewee to share their thoughts, opinions, and experiences. Avoid leading questions that may bias the interviewee’s responses.

Listen Actively

Pay attention to what the interviewee is saying and show interest and respect for their perspective. Allow them to fully answer your questions without interrupting or cutting them off.

Take Notes

Take notes during the interview to capture the key points and insights. This will help you to remember important details and analyze the information later.

Thank the Interviewee

At the end of the interview, thank the interviewee for their time and contribution. Provide them with an opportunity to ask any questions or raise any concerns they may have.

Analyze the Information

Review your notes and analyze the information gathered from the interview. Look for patterns and themes that emerge and use this information to inform your decision-making or further research.
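If your notes are typed up, even a few lines of code can support this first pass at pattern-finding. The sketch below is a minimal illustration in Python; the note snippets and theme keywords are invented for the example, and a simple keyword count like this is only a rough aid to, not a replacement for, careful reading.

```python
from collections import Counter

# Hypothetical typed-up interview notes, one string per interviewee
notes = [
    "Felt the onboarding was rushed and got little support from their manager.",
    "Onboarding was fine, but day-to-day support from the manager is missing.",
    "Main concern is workload; support exists, though training felt rushed.",
]

# Candidate themes and the keywords associated with each (illustrative only)
themes = {
    "onboarding and training": ["onboarding", "training"],
    "manager support": ["support", "manager"],
    "workload": ["workload"],
}

counts = Counter()
for note in notes:
    text = note.lower()
    for theme, keywords in themes.items():
        if any(word in text for word in keywords):
            counts[theme] += 1  # count each theme at most once per interview

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(notes)} interviews")
```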

When to Use One-to-One Interviews

One-to-One interviews can be used in various situations, including:

Research Studies

One-to-One interviews are often used in research studies to gather in-depth information from participants. They allow researchers to explore topics in greater depth and get a better understanding of participants’ experiences and perspectives.

Job Interviews

Employers use One-to-One interviews to evaluate the qualifications, skills, and fit of job candidates. This allows them to get a better understanding of the candidate’s personality, communication style, and work experience.

Performance Evaluations

One-to-One interviews are often used to assess employee performance. Supervisors can discuss performance goals, provide feedback, and identify areas for improvement.

Counseling and Therapy

One-to-One interviews are a common tool used by counselors and therapists to help clients address personal or emotional issues. They provide a safe space for clients to discuss their feelings and develop coping strategies.

Customer Feedback

One-to-One interviews can be used to gather feedback from customers about products or services. Companies can use this feedback to improve their offerings and provide better customer service.

Advantages of One-to-One Interviews

There are several advantages to using One-to-One interviews:

In-depth Information

One-to-One interviews allow for more in-depth information to be gathered compared to other methods such as surveys or focus groups. The interviewer can ask follow-up questions and explore topics in greater detail.

Flexibility

One-to-One interviews are flexible and can be tailored to the specific needs of the interviewer and interviewee. The interviewer can adapt their questions and approach to suit the individual they are speaking to.

Personalization

One-to-One interviews allow for a personalized approach, where the interviewer can build rapport with the interviewee and make them feel more comfortable sharing their thoughts and experiences.

Confidentiality

One-to-One interviews can be conducted in a private setting, which allows the interviewee to speak more freely about sensitive or personal topics. This can help to build trust between the interviewer and interviewee.

Better Understanding

One-to-One interviews can help to build a better understanding of the interviewee’s perspective, experiences, and needs. This can be valuable in research, counseling, or customer feedback contexts.

Immediate Feedback

In One-to-One interviews, the interviewer can provide immediate feedback to the interviewee, which can be helpful in counseling or performance evaluation situations.

Disadvantages of One-to-One Interviews

While One-to-One interviews offer many advantages, there are also some potential disadvantages to consider:

Time-consuming

One-to-One interviews can be time-consuming, both in terms of preparation and conducting the interview. This can be a disadvantage when trying to gather information from a large number of participants.

Resource-intensive

One-to-One interviews can require a significant amount of resources, including time, staff, and equipment. This can be a disadvantage for organizations with limited resources.

Interviewer bias

One-to-One interviews can be susceptible to interviewer bias, where the interviewer’s personal beliefs, experiences, or expectations can influence the interviewee’s responses. This can lead to inaccurate or unreliable information.

Social desirability bias

Interviewees may feel pressure to provide socially desirable responses to please the interviewer, which can lead to inaccurate or incomplete information.

Limited generalizability

One-to-One interviews provide in-depth information about a specific individual but may not be generalizable to the broader population. This can limit the applicability of the findings.

Respondent burden

One-to-One interviews can place a burden on the interviewee, particularly if the interview is lengthy or requires a significant amount of emotional energy. This can lead to participant fatigue or withdrawal.

One-On-One Interviews: Techniques, Questions, Pros & Cons

The job interview is a key part of the hiring process, where employers evaluate candidates’ qualifications, skills, and fit with the company culture. One-on-one interviews, in particular, provide a personalized and focused interaction between the interviewer and the candidate. Conducting interviews is crucial for research methods as well.

In the initial interview, the interviewer should approach the process thoughtfully, using a combination of open-ended and probing questions to gather valuable insights. It’s important to be aware of how previous interviews may have shaped the respondent’s perspective and to avoid leading questions that could unintentionally influence their answers.

A skilled interviewer asks questions precisely and empathetically, creating an environment that encourages open and honest communication. Adequate preparation for the first interview involves a thorough understanding of the research objectives, enabling the interviewer to steer the conversation with purpose and extract meaningful data.

In this blog post, we will look at what a one-on-one interview is, the techniques used to conduct one, the questions to ask, and the pros and cons associated with one-on-one interviews.

What is a One-On-One Interview?

A one-on-one interview is a qualitative research method in which an interviewer engages in a face-to-face conversation with a single participant or interviewee. This format allows for a focused and personalized interaction between the interviewer and the interviewed individual. 

One-on-one interviews are commonly used in various fields, including social research, market research, journalism, and job recruitment.

In a one-on-one interview, the interviewer typically prepares questions or topics to discuss with the interviewee. These questions guide the conversation, but there is often room for flexibility and exploration of unexpected insights. The goal is to gather detailed information, opinions, experiences, or perspectives from the interviewee.

These interviews can be structured, semi-structured, or unstructured:

  • Structured Interview: The interviewer follows a predetermined set of questions in a fixed order. This approach ensures consistency in data collection and allows for easier participant comparisons.
  • Semi-Structured Interview: The interviewer has core questions but can explore additional topics or probe deeper into specific responses. This approach combines the benefits of structure with the opportunity for more in-depth exploration.
  • Unstructured Interview: The conversation is open-ended, with no predetermined questions. The interviewer relies on the natural flow of the conversation, allowing the interviewee to express themselves freely.

One-on-one interviews are valuable for obtaining detailed, rich data, exploring complex issues, and understanding the individual perspectives of participants. They are particularly useful when a deep understanding of a subject is needed and when researchers want to explore nuances, emotions, and personal experiences.

These interviews can be conducted in person, over the phone, or through video conferencing, depending on logistical considerations and the preferences of the participants and researchers. 

The success of a one-on-one interview often depends on the interviewer’s skills in building rapport, asking probing questions, and creating an environment where the interviewee feels comfortable sharing their thoughts and experiences.

Techniques for One-On-One Interviews

One-on-one interviews are a common and valuable method for assessing candidates during hiring. Employers use various techniques to make these interviews effective in evaluating a candidate’s skills, qualifications, and cultural fit. Here are some strategies for conducting successful one-on-one interviews:

1. Structured Interviews

In a structured interview, the interviewer follows a predetermined set of questions for all candidates. This consistency allows for a fair and objective evaluation of each candidate based on the same criteria.

The questions are carefully crafted to assess specific competencies, skills, and experiences relevant to the job.

2. Behavioral Interviews

This technique probes a candidate’s past behavior to predict future performance. Interviewers ask questions that require candidates to share specific examples of how they have handled situations in the past.

The STAR method (Situation, Task, Action, Result) is often used to structure responses and better understand a candidate’s abilities.
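One lightweight way to keep STAR-based notes comparable across candidates is to record every answer in the same four fields. The sketch below is a minimal Python illustration of that idea; the example answer and the completeness check are assumptions for demonstration, not part of any standard STAR rubric.

```python
from dataclasses import dataclass, fields

@dataclass
class StarResponse:
    """Notes on one behavioral-interview answer, split into the four STAR parts."""
    situation: str
    task: str
    action: str
    result: str

    def missing_parts(self) -> list[str]:
        # Any STAR component the candidate did not cover is flagged for follow-up
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

# Invented example: the candidate described everything except the outcome
answer = StarResponse(
    situation="Two product releases were scheduled for the same week.",
    task="I had to decide which release to delay.",
    action="I met both product owners and compared the customer impact of each option.",
    result="",
)

print("Follow up on:", answer.missing_parts())  # ['result']
```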

3. Situational Interviews

Situational interviews present hypothetical scenarios related to the job, and candidates are asked how they would approach or solve these situations.

This technique assesses a candidate’s problem-solving skills, decision-making process, and ability to apply knowledge to real-world situations.

4. Case Interviews

Common in industries such as consulting and finance, case interviews present candidates with real or hypothetical business problems. Candidates are expected to analyze the situation and provide solutions.

This technique evaluates analytical thinking, strategic planning, and the ability to apply knowledge to practical challenges.

5. Competency-Based Interviews

Competency-based interviews focus on specific competencies required for the job. Interviewers assess how well candidates align with these key competencies through targeted questions.

This technique helps evaluate a candidate’s suitability for the role based on the identified core competencies.

6. Role-Playing Exercises

In certain situations, role-playing exercises can be employed to simulate on-the-job scenarios. This allows the interviewer to observe how candidates handle specific tasks or interpersonal interactions.

Role-playing provides insights into a candidate’s practical skills, adaptability, and ability to perform under pressure.

7. Panel Interviews

While not strictly one-on-one, panel interviews involve multiple interviewers questioning a candidate simultaneously. Each panel member may represent different aspects of the job or the organization.

This technique provides diverse perspectives and allows for a more comprehensive evaluation.

8. Informational Interviews

Informational interviews focus on gathering more information about the candidate’s background, experiences, and aspirations. This format creates a more conversational setting for understanding the candidate’s motivations and career goals.

This technique helps assess cultural fit and whether the candidate’s values align with the organization’s.

One-On-One Interview Questions to Ask

The questions you ask during a one-on-one interview can vary based on the specific role and the skills you seek in a candidate. Here’s a set of questions across different categories that you might find helpful:

Background and Experience

  • Can you walk me through your resume and highlight key experiences?
  • How did you become interested in this field/industry?
  • What specific skills or qualifications make you a good fit for this role?

Behavioral Questions

  • Can you provide an example of a challenging situation at work and how you handled it?
  • How do you prioritize and manage your time when faced with multiple tasks or projects?
  • Describe a successful project you worked on and your role in its success.

Problem-Solving and Decision-Making

  • How do you approach problem-solving in the workplace?
  • Can you share a situation where you had to make a tough decision and how you reached that decision?
  • What steps do you take when faced with a difficult task or challenge?

Teamwork and Collaboration

  • How do you contribute to a team environment?
  • Can you describe a time when you had to work with diverse personalities on a team?
  • What strategies do you use to resolve conflicts within a team?

Motivation and Goals

  • What motivates you in your work?
  • Where do you see yourself professionally in the next 5 years?
  • How do you stay updated on industry trends and developments?

One-On-One Interview Advantages and Disadvantages

Advantages

  • Depth of Information: One-on-one interviews allow for in-depth exploration of a topic. The interviewer can delve into details, probe responses, and seek clarification to gain a comprehensive understanding.
  • Personalization: The interviewer can tailor questions to the individual, considering their background, experiences, and perspectives. This personalization can lead to richer and more nuanced insights.
  • Flexibility: The format allows flexibility in timing, location, and pace. It’s easier to adapt the interview based on the respondent’s reactions, ensuring a more natural and comfortable conversation.
  • Building Rapport: Close interaction in a one-on-one setting can help create rapport between the interviewer and the interviewee. This rapport may encourage the interviewee to be more open and honest.
  • Non-verbal Cues: The interviewer can observe and interpret non-verbal cues such as body language, facial expressions, and tone of voice, providing additional layers of information.

Disadvantages

  • Bias and Subjectivity: The presence of a single interviewer can introduce bias. Personal biases, conscious or unconscious, may influence the questions asked and the interpretation of responses.
  • Limited Perspectives: A one-to-one interview may not capture the full range of perspectives on a given topic. Group settings or other methods might be more effective in revealing diverse viewpoints.
  • Resource-Intensive: Conducting individual interviews can be time-consuming and resource-intensive, especially if a large sample size is required. This can limit the feasibility of using this method in certain research or data collection projects. 
  • Interviewee Comfort Level: Some individuals may feel uncomfortable being the sole focus of attention, leading to potential response distortion or the withholding of information.
  • Difficulty in Generalization: Findings from one-on-one interviews may not always be easily generalized to a broader population. The insights gained might be specific to the individual interviewed.
  • Interviewer Influence: The interviewer’s presence and style can influence the interviewee’s responses. Some may conform to societal expectations or give socially desirable answers.

One-on-one interviews remain a popular and effective method for evaluating candidates. Employers can gain valuable insights into candidates’ suitability for a role by employing various techniques and asking relevant questions. 

It is essential to balance the benefits of this interview format against its inherent subjectivity and potential limitations. Combining one-on-one interviews with other assessment methods can provide a more comprehensive and accurate hiring process.

Leveraging the QuestionPro Research Suite can significantly enhance the efficiency and effectiveness of one-on-one interviews. With its versatile survey and feedback capabilities, this platform empowers interviewers to create structured, tailored questionnaires, ensuring a focused exploration of candidate skills and experiences. 

The real-time feedback feature facilitates immediate insights, enabling interviewers to adapt and make informed decisions during the interview process. The platform’s user-friendly interface and analytics tools further streamline the assessment process, making QuestionPro Research Suite a valuable asset for organizations seeking a comprehensive and data-driven approach to one-on-one interviews.

Introduction to Research Methods

6 Qualitative Research and Interviews

So we’ve described doing a survey and collecting quantitative data. But not all questions can best be answered by a survey. A survey is great for understanding what people think (for example), but not why they think what they do. If your research intends to understand the underlying motivations or reasons behind people’s actions, or to build a deeper understanding of the background of a subject, an interview may be the more appropriate data collection method.

Interviews are a method of data collection that consist of two or more people exchanging information through a structured process of questions and answers. Questions are designed by the researcher to thoughtfully collect in-depth information on a topic or set of topics as related to the central research question. Interviews typically occur in-person, although good interviews can also be conducted remotely via the phone or video conferencing. Unlike surveys, interviews give the opportunity to ask follow-up questions and thoughtfully engage with participants on the spot (rather than the anonymous and impartial format of survey research).

Interviews can be used in qualitative or quantitative research, though they’re more typically a qualitative technique. Qualitative work relies on in-depth interviews, which contain open-ended questions and are structured by an interview guide. One can also do a standardized interview with closed-ended questions (i.e. answer options) that are structured by an interview schedule as part of quantitative research. While these are called interviews, they’re far closer to surveys, so we won’t cover them again in this chapter. We’ll cover the terms used for in-depth interviews in the next section.

6.1 Interviews

In-depth interviews allow participants to describe experiences in their own words (a primary strength of the interview format). Strong in-depth interviews will include many open-ended questions that allow participants to respond in their own words, share new ideas, and lead the conversation in different directions. The purpose of open-ended questions and in-depth interviews is to hear as much as possible in the person’s own voice, to collect new information and ideas, and to achieve a level of depth not possible in surveys or most other forms of data collection.

Typically, an interview guide is used to create a soft structure for the conversation and is an important preparation tool for the researcher. You cannot go into an interview unprepared and just “wing it”; the interview guide allows you to map out a framework and an order of topics, and it may include specific questions to use during the interview. Generally, the interview guide is thought of as just that — a guide to use in order to keep the interview focused. It is not set in stone, and a skilled researcher can change the order of questions or topics in an interview based on the organic conversation flow.

Depending on the experience and skill level of the researcher, an interview guide can be as simple as a list of topics to cover. However, for consistency and quality of research, the interviewer may want to take the time to at least practice writing out questions in advance to ensure that phrasing and word choices are as clear, objective, and focused as possible. It’s worth remembering that working out the wording of questions in advance allows researchers to ensure more consistency across interviews. The interview guide below, taken from the wonderful and free textbook Principles of Sociological Inquiry, shows an interview guide that just has topics.

[Figure: sample interview guide listing topics only]

Alternatively, you can use a more detailed guide that lists out possible questions, as shown below. A more detailed guide is probably better for an interviewer that has less experience, or is just beginning to work on a given topic.

[Figure: sample interview guide listing full questions under each topic]

The purpose of an interview guide is to help ask effective questions and to support the process of acquiring the best possible data for your research. Topics and questions should be organized thematically, and in a natural progression that will allow the conversation to flow and deepen throughout the course of the interview. Often, researchers will attempt to memorize or partially memorize the interview guide, in order to be more fully present with the participant during the conversation.
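If it helps your preparation, a guide can also be kept as simple structured data rather than a word-processor document, which makes it easy to reorder topics or print a clean copy before each interview. The sketch below is a minimal Python version of that idea; the topics and prompts are invented for illustration and are not taken from the guides shown above.

```python
# A minimal interview guide kept as plain data: ordered topics, each with optional prompts.
# Topics and prompts below are invented for illustration.
interview_guide = [
    {"topic": "Background", "prompts": ["How did you come to live in this neighborhood?"]},
    {"topic": "Daily routine", "prompts": ["Walk me through a typical weekday.",
                                           "How does that change on weekends?"]},
    {"topic": "Community ties", "prompts": []},  # topic only, to be explored organically
    {"topic": "Looking ahead", "prompts": ["What would you change if you could?"]},
]

# Print a clean copy to bring into the interview
for number, section in enumerate(interview_guide, start=1):
    print(f"{number}. {section['topic']}")
    for prompt in section["prompts"]:
        print(f"   - {prompt}")
```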

6.2 Asking Good Questions

Remember, the purpose of interviews is to go more in-depth with an individual than is possible with a generalized survey. For this reason, it is important to use the guide as a starting point but not to be overly tethered to it during the actual interview process. You may get stuck when respondents give you shorter answers than you expect, or don’t provide the type of depth that you need for your research. Often, you may want to probe for more specifics. Think about using follow-up questions like “How does/did that affect you?” or “How does X make you feel?” and “Tell me about a time where X…”

For example, if I was researching the relationship between pets and mental health, some strong open-ended questions might be:

  • How does your pet typically make you feel when you wake up in the morning?
  • How does your pet generally affect your mood when you arrive home in the evening?
  • Tell me about a time when your pet had a significant impact on your emotional state.

Questions framed in this manner leave plenty of room for the respondent to answer in their own words, as opposed to leading and/or truncated questions, such as:

  • Does being with your pet make you happy?
  • After a bad day, how much does seeing your pet improve your mood?
  • Tell me about how important your pet is to your mental health.

These questions assume outcomes and will not result in high-quality research. Researchers should always avoid asking leading questions that give away an expected answer or suggest particular responses. For instance, if I ask “we need to spend more on public schools, don’t you think?” the respondent is more likely to agree regardless of their own thoughts. Some won’t, but humans generally have a strong natural desire to be agreeable. That’s why leaving your questions neutral and open so that respondents can speak to their experiences and views is critical.

6.3 Analyzing Interview Data

Writing good questions and interviewing respondents are just the first steps of the interview process. After these stages, the researcher still has a lot of work to do to collect usable data from the interview. The researcher must spend time coding and analyzing the interview to retrieve this data. Just doing an interview won’t produce data. Think about how many conversations you have every day, and none of those are leaving you swimming in data.

Hopefully you can record your interviews. Recording your interviews will allow you the opportunity to transcribe them word for word later. If you can’t record the interview you’ll need to take detailed notes so that you can reconstruct what you heard later. Do not trust yourself to “just remember” the conversation. You’re collecting data, precious data that you’re spending time and energy to collect. Treat it as important and valuable. Remember our description of the methodology section from Chapter 2, you need to maintain a chain of custody on your data. If you just remembered the interview, you could be accused of making up the results. Your interview notes and the recording become part of that chain of custody to prove to others that your interviews were real and that your results are accurate.

Assuming you recorded your interview, the first step in the analysis process is transcribing the interview. A transcription is a written record of every word in an interview. Transcriptions can either be completed by the researcher or by a hired worker, though it is good practice for the researcher to transcribe the interview him or herself. Researchers should keep the following points in mind regarding transcriptions:

  • The interview should take place in a quiet location with minimal background noise to produce a clear recording;
  • Transcribing interviews is a time-consuming process and may take two to three times longer than the actual interview;
  • Transcriptions provide a more precise record of the interview than handwritten notes and allow the interviewer to focus during the interview.
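If you do have a recording, automated speech-to-text can give you a first-draft transcript to correct by hand. Below is a minimal sketch assuming the open-source openai-whisper package (and ffmpeg) is installed and the recording is saved as interview01.mp3, a hypothetical filename; treat the output as a draft, since names, jargon, and crosstalk are often mis-transcribed.

```python
# pip install openai-whisper   (ffmpeg must also be available on the system)
import whisper

model = whisper.load_model("base")            # small model; larger ones are slower but more accurate
result = model.transcribe("interview01.mp3")  # hypothetical filename

# The full draft transcript as a single string
print(result["text"])

# Time-stamped segments help when checking the draft against the audio
for segment in result["segments"][:5]:
    print(f"[{segment['start']:.1f}s-{segment['end']:.1f}s] {segment['text']}")
```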

After transcribing the interview, the next step is to analyze the responses. Coding is the main form of analysis used for interviews and involves studying a transcription to identify important themes. These themes are categorized into codes, which are words or phrases that denote an idea.

You’ll typically begin with several codes in mind, generated by the key ideas you were seeking in the questions, but you can also begin by using open coding to understand the results. An open coding process involves reading through the transcript multiple times and paying close attention to each line of the text to discover noteworthy concepts. During the open coding process, the researcher keeps an open mind to find any codes that may be relevant to the research topic.

After the open coding process is complete, focused coding can begin. Focused coding takes a closer look at the notes compiled during the open coding stage to merge common codes and define what the codes mean in the context of the research project.
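The bookkeeping side of open and focused coding can live in a spreadsheet or a short script, while the interpretive work stays with the researcher. The sketch below is a minimal Python illustration; the excerpts, open codes, and focused codes are invented, and the mapping from open to focused codes is exactly the kind of judgment call the script cannot make for you.

```python
from collections import Counter

# Open coding output: (transcript excerpt, open code) pairs -- all invented for illustration
open_coded = [
    ("We played in the street until the lights came on", "street play"),
    ("My grandmother's house was the center of everything", "extended family"),
    ("After the storm we moved three times in two years", "displacement"),
    ("My cousins were basically my siblings", "extended family"),
    ("The block parties every summer", "neighborhood events"),
]

# Focused coding: merge related open codes under broader themes (a researcher judgment call)
focused_map = {
    "street play": "neighborhood life",
    "neighborhood events": "neighborhood life",
    "extended family": "kinship networks",
    "displacement": "instability",
}

theme_counts = Counter(focused_map[code] for _, code in open_coded)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} excerpt(s)")
```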

Imagine a researcher is conducting interviews to learn about various people’s experiences of childhood in New Orleans. The following example shows several codes that this researcher extrapolated from an interview with one of their subjects.

[Figure: example codes extrapolated from an interview about childhood in New Orleans]

6.4 Using Interview Data

The next chapter will address ways to identify people to interview, but most of the remainder of the book will address how to analyze quantitative data. That shouldn’t be taken as a sign that quantitative data is better, or that it’s easier to use interview data. Because in an interview the researcher must interpret the words of others, it is often more challenging to identify your findings and clearly answer your research question. However, quantitative data is more common, and there are more things you can do with it, so we spend a lot of the textbook focusing on it.

I’ll work through one more example of using interview data though. It takes a lot of practice to be a good and skilled interviewer. What I show below is a brief excerpt of an interview I did, and how that data was used in a resulting paper I wrote. These aren’t the only way you can use interview data, but it’s an example of what the intermediary and final product might look like.

The overall project these are drawn from was concerned with minor league baseball stadiums, but the specific part I’m pulling from here was studying the decline and rejuvenation of downtown around those stadiums in several cities. You’ll see that I’m using the words of the respondent fairly directly, because that’s my data. But I’m not just relying on one respondent and trusting them, I did a few dozen interviews in order to understand the commonalities in people’s perspectives to build a narrative around my research question.

[Figure: excerpt from the interview notes]

[Figure: excerpt from the resulting paper]

How many interviews are necessary? It actually doesn’t take many. What you want to observe in your interviews is theoretical saturation, where the codes you use in the transcript begin to appear across conversations and groups. If different people disagree that’s fine, but what you want to understand is the commonalities across people’s perspectives. Most research on the subject says that with 8 interviews you’ll typically start to see a decline in new information gathered. That doesn’t mean you won’t get new words, but you’ll stop hearing completely unique perspectives or gaining novel insights. At that point, where you’ve ‘heard it all before’, you can stop, because you’ve probably identified the answer to the questions you were trying to research.
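A rough way to watch for theoretical saturation as you go is to track how many genuinely new codes each additional interview contributes. The sketch below does that in a few lines of Python; the code sets are invented for illustration, and a count hovering near zero is a prompt to consider stopping, not a hard rule.

```python
# Codes identified in each interview, in the order conducted (invented for illustration)
codes_per_interview = [
    {"cost of living", "commute", "family ties"},
    {"commute", "schools", "cost of living"},
    {"schools", "safety"},
    {"cost of living", "safety", "commute"},
    {"commute", "cost of living"},   # nothing new: a sign saturation may be near
]

seen = set()
for number, codes in enumerate(codes_per_interview, start=1):
    new_codes = codes - seen
    seen |= codes
    print(f"Interview {number}: {len(new_codes)} new code(s) {sorted(new_codes)}")
```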

6.5 Ensuring Anonymity

One significant ethical concern with interviews, one that also applies to surveys, is making sure that respondents maintain anonymity. In either form of data collection you may be asking respondents deeply personal questions that, if exposed, may cause legal, personal, or professional harm. Notice that in the excerpt of the paper above the respondents are only identified by an ID I assigned (Louisville D) and their career, rather than their name. I can only include the excerpt of the interview notes above because there are no details that might lead to them being identified.

You may want to report details about a person to contextualize the data you gathered, but you should always ensure that no one can be identified from your research. For instance, if you were doing research on racism at large companies, you may want to preface people’s comments by their race, as there is a good chance that white and minority employees would feel differently about the issues. However, if you preface someone’s comments by saying they’re a minority manager, that may violate their anonymity. Even if you don’t state what company you did interviews with, that may be enough detail for their co-workers to identify them if there are few minority managers at the company. As such, always think long and hard about whether there is any way that the participation of respondents may be exposed.
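A simple habit that supports anonymity is to assign each participant an ID at transcription time and keep the key linking IDs to names in a separate, secured file. The sketch below shows only the mechanical substitution step in Python, with invented names and IDs; it will not catch indirect identifiers such as job titles or place names, which still require human judgment.

```python
import re

# Key linking real names to assigned IDs -- store this file separately and securely.
# Names and IDs below are invented for illustration.
id_key = {
    "Maria Lopez": "Louisville A",
    "James Carter": "Louisville B",
}

transcript = (
    "Maria Lopez said the stadium changed downtown. "
    "James Carter disagreed, but Maria Lopez insisted foot traffic doubled."
)

anonymized = transcript
for name, assigned_id in id_key.items():
    anonymized = re.sub(re.escape(name), assigned_id, anonymized)

print(anonymized)
# Louisville A said the stadium changed downtown. Louisville B disagreed, ...
```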

6.6 Why Not Both?

We’ve discussed surveys and interviews as different methods over the last two chapters, but they can also complement each other.

For instance, let’s say you’re curious to study people who change opinions on abortion, either going from support to opposition or vice versa. You could use a survey to understand the prevalence of changing opinions, i.e. what percentage of people in your city have changed their views. That would help to establish whether this is a prominent issue, or whether it’s a rare phenomenon. But it would be difficult to understand from the survey what makes people change their views. You could add an open-ended question for anyone that said they changed their opinion, but many people won’t respond and few will provide the level of detail necessary to understand their motivations. Interviews with people that have changed their opinions would give you an opportunity to explore how their experiences and beliefs have changed in combination with their views towards abortion.
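In practice, combining the two methods often means using the survey data to recruit interviewees. The sketch below is a minimal Python/pandas illustration of that step; the column names (opinion_changed, consent_to_contact) and the toy data are assumptions for the example, not a real instrument.

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, invented column names and data
survey = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "opinion_changed": [True, False, True, True],
    "consent_to_contact": [True, True, False, True],
})

# Recruit interviewees from respondents who reported a change of opinion
# and agreed to a follow-up conversation
candidates = survey[survey["opinion_changed"] & survey["consent_to_contact"]]
print(candidates["respondent_id"].tolist())   # [101, 104]
```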

6.7 Summary

In the last two chapters we’ve discussed the two most prominent methods of data collection in the social sciences: surveys and interviews. What we haven’t discussed, though, is how to identify the people you’ll collect data from; that’s called a sampling strategy, and it’s the subject of the next chapter.

How to carry out great interviews in qualitative research

An interview is one of the most versatile methods used in qualitative research. Here’s what you need to know about conducting great qualitative interviews.

What is a qualitative research interview?

Qualitative research interviews are a mainstay among qualitative research techniques, and have been in use for decades either as a primary data collection method or as an adjunct to a wider research process. A qualitative research interview is a one-to-one data collection session between a researcher and a participant. Interviews may be carried out face-to-face, over the phone or via video call using a service like Skype or Zoom.

There are three main types of qualitative research interview – structured, unstructured or semi-structured.

  • Structured interviews: Structured interviews are based around a schedule of predetermined questions and talking points that the researcher has developed. At their most rigid, structured interviews may have a precise wording and question order, meaning that they can be replicated across many different interviewers and participants with relatively consistent results.
  • Unstructured interviews: Unstructured interviews have no predetermined format, although that doesn’t mean they’re ad hoc or unplanned. An unstructured interview may outwardly resemble a normal conversation, but the interviewer will in fact be working carefully to make sure the right topics are addressed during the interaction while putting the participant at ease with a natural manner.
  • Semi-structured interviews: Semi-structured interviews are the most common type of qualitative research interview, combining the informality and rapport of an unstructured interview with the consistency and replicability of a structured interview. The researcher will come prepared with questions and topics, but will not need to stick to precise wording. This blended approach can work well for in-depth interviews.

What are the pros and cons of interviews in qualitative research?

As a qualitative research method interviewing is hard to beat, with applications in social research, market research, and even basic and clinical pharmacy. But like any aspect of the research process, it’s not without its limitations. Before choosing qualitative interviewing as your research method, it’s worth weighing up the pros and cons.

Pros of qualitative interviews:

  • provide in-depth information and context
  • can be used effectively when there are low numbers of participants
  • provide an opportunity to discuss and explain questions
  • useful for complex topics
  • rich in data – in the case of in-person or video interviews, the researcher can observe body language and facial expression as well as the answers to questions

Cons of qualitative interviews:

  • can be time-consuming to carry out
  • costly when compared to some other research methods
  • because of time and cost constraints, they often limit you to a small number of participants
  • difficult to standardize your data across different researchers and participants unless the interviews are very tightly structured
  • As the Open University of Hong Kong notes, qualitative interviews may take an emotional toll on interviewers

Qualitative interview guides

Semi-structured interviews are based on a qualitative interview guide, which acts as a road map for the researcher. While conducting interviews, the researcher can use the interview guide to help them stay focused on their research questions and make sure they cover all the topics they intend to.

An interview guide may include a list of questions written out in full, or it may be a set of bullet points grouped around particular topics. It can prompt the interviewer to dig deeper and ask probing questions during the interview if appropriate.

Consider writing out the project’s research question at the top of your interview guide, ahead of the interview questions. This may help you steer the interview in the right direction if it threatens to head off on a tangent.

Avoid bias in qualitative research interviews

According to Duke University, bias can create significant problems in your qualitative interview.

  • Acquiescence bias is common to many qualitative methods, including focus groups. It occurs when the participant feels obliged to say what they think the researcher wants to hear. This can be especially problematic when there is a perceived power imbalance between participant and interviewer. To counteract this, Duke University’s experts recommend emphasizing the participant’s expertise in the subject being discussed, and the value of their contributions.
  • Interviewer bias is when the interviewer’s own feelings about the topic come to light through hand gestures, facial expressions or turns of phrase. Duke’s recommendation is to stick to scripted phrases where this is an issue, and to make sure researchers become very familiar with the interview guide or script before conducting interviews, so that they can hone their delivery.

What kinds of questions should you ask in a qualitative interview?

The interview questions you ask need to be carefully considered both before and during the data collection process. As well as considering the topics you’ll cover, you will need to think carefully about the way you ask questions.

Open-ended interview questions – which cannot be answered with a ‘yes’, ‘no’ or ‘maybe’ – are recommended by many researchers as a way to pursue in-depth information.

An example of an open-ended question is “What made you want to move to the East Coast?” This will prompt the participant to consider different factors and select at least one. Having thought about it carefully, they may give you more detailed information about their reasoning.

A closed-ended question, such as “Would you recommend your neighborhood to a friend?” can be answered without too much deliberation, and without giving much information about personal thoughts, opinions and feelings.

Follow-up questions can be used to delve deeper into the research topic and to get more detail from open-ended questions. Examples of follow-up questions include:

  • What makes you say that?
  • What do you mean by that?
  • Can you tell me more about X?
  • What did/does that mean to you?

As well as avoiding closed-ended questions, be wary of leading questions. As with other qualitative research techniques such as surveys or focus groups, these can introduce bias in your data. Leading questions presume a certain point of view shared by the interviewer and participant, and may even suggest a foregone conclusion.

An example of a leading question might be: “You moved to New York in 1990, didn’t you?” In answering the question, the participant is much more likely to agree than disagree. This may be down to acquiescence bias or a belief that the interviewer has checked the information and already knows the correct answer.

Other leading questions involve adjectival phrases or other wording that introduces negative or positive connotations about a particular topic. An example of this kind of leading question is: “Many employees dislike wearing masks to work. How do you feel about this?” It presumes a positive opinion and the participant may be swayed by it, or not want to contradict the interviewer.

Harvard University’s guidelines for qualitative interview research add that you shouldn’t be afraid to ask embarrassing questions – “if you don’t ask, they won’t tell.” Bear in mind though that too much probing around sensitive topics may cause the interview participant to withdraw. The Harvard guidelines recommend leaving sensitive questions until the later stages of the interview, when a rapport has been established.

More tips for conducting qualitative interviews

Observing a participant’s body language can give you important data about their thoughts and feelings. It can also help you decide when to broach a topic, and whether to use a follow-up question or return to the subject later in the interview.

Be conscious that the participant may regard you as the expert, not themselves. In order to make sure they express their opinions openly, use active listening skills like verbal encouragement and paraphrasing and clarifying their meaning to show how much you value what they are saying.

Remember that part of the goal is to leave the interview participant feeling good about volunteering their time and their thought process to your research. Aim to make them feel empowered, respected and heard.

Unstructured interviews can demand a lot of a researcher, both cognitively and emotionally. Be sure to leave time in between in-depth interviews when scheduling your data collection to make sure you maintain the quality of your data, as well as your own well-being.

Recording and transcribing interviews

Historically, recording qualitative research interviews and then transcribing the conversation manually would have represented a significant part of the cost and time involved in research projects that collect qualitative data.

Fortunately, researchers now have access to digital recording tools, and even speech-to-text technology that can automatically transcribe interview data using AI and machine learning. This type of tool can also be used to capture qualitative data from other qualitative research methods (focus groups, etc.), making this kind of social research or market research much less time-consuming.

Data analysis

Qualitative interview data is unstructured, rich in content and difficult to analyze without the appropriate tools. Fortunately, machine learning and AI can once again make things faster and easier when you use qualitative methods like the research interview.

Text analysis tools and natural language processing software can ‘read’ your transcripts and voice data and identify patterns and trends across large volumes of text or speech. They can also perform sentiment analysis (https://www.qualtrics.com/experience-management/research/sentiment-analysis/), which assesses overall trends in opinion and provides an unbiased overall summary of how participants are feeling.
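As a small illustration of what such tools do under the hood, the sketch below scores a few invented interview answers with NLTK’s VADER sentiment analyzer, assuming the nltk package is installed and the vader_lexicon resource can be downloaded; commercial platforms wrap far more capable models, but the input and output take the same general shape.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)    # one-time download of the sentiment lexicon
sia = SentimentIntensityAnalyzer()

# Invented interview answers for illustration
answers = [
    "The new scheduling system saves me at least an hour a day.",
    "Support was slow and I had to explain the same issue three times.",
]

for text in answers:
    scores = sia.polarity_scores(text)        # returns neg/neu/pos plus a compound score
    print(f"{scores['compound']:+.2f}  {text}")
```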

Another feature of text analysis tools is their ability to categorize information by topic, sorting it into groupings that help you organize your data according to the topic discussed.

All in all, interviews are a valuable technique for qualitative research in business, yielding rich and detailed unstructured data. Historically, they have only been limited by the human capacity to interpret and communicate results and conclusions, which demands considerable time and skill.

When you combine this data with AI tools that can interpret it quickly and automatically, it becomes easy to analyze and structure, dovetailing perfectly with your other business data. An additional benefit of natural language analysis tools is that they are free of subjective biases, and can replicate the same approach across as much data as you choose. By combining human research skills with machine analysis, qualitative research methods such as interviews are more valuable than ever to your business.

Chapter 11. Interviewing

Introduction

Interviewing people is at the heart of qualitative research. It is not merely a way to collect data but an intrinsically rewarding activity—an interaction between two people that holds the potential for greater understanding and interpersonal development. Unlike many of our daily interactions with others that are fairly shallow and mundane, sitting down with a person for an hour or two and really listening to what they have to say is a profound and deep enterprise, one that can provide not only “data” for you, the interviewer, but also self-understanding and a feeling of being heard for the interviewee. I always approach interviewing with a deep appreciation for the opportunity it gives me to understand how other people experience the world. That said, there is not one kind of interview but many, and some of these are shallower than others. This chapter will provide you with an overview of interview techniques but with a special focus on the in-depth semistructured interview guide approach, which is the approach most widely used in social science research.

An interview can be variously defined as “a conversation with a purpose” (Lune and Berg 2018) and an attempt to understand the world from the point of view of the person being interviewed: “to unfold the meaning of peoples’ experiences, to uncover their lived world prior to scientific explanations” (Kvale 2007). It is a form of active listening in which the interviewer steers the conversation to subjects and topics of interest to their research but also manages to leave enough space for those interviewed to say surprising things. Achieving that balance is a tricky thing, which is why most practitioners believe interviewing is both an art and a science. In my experience as a teacher, there are some students who are “natural” interviewers (often they are introverts), but anyone can learn to conduct interviews, and everyone, even those of us who have been doing this for years, can improve their interviewing skills. This might be a good time to highlight the fact that the interview is a product between interviewer and interviewee and that this product is only as good as the rapport established between the two participants. Active listening is the key to establishing this necessary rapport.

Patton (2002) makes the argument that we use interviews because there are certain things that are not observable. In particular, “we cannot observe feelings, thoughts, and intentions. We cannot observe behaviors that took place at some previous point in time. We cannot observe situations that preclude the presence of an observer. We cannot observe how people have organized the world and the meanings they attach to what goes on in the world. We have to ask people questions about those things” (341).

Types of Interviews

There are several distinct types of interviews. Imagine a continuum (figure 11.1). On one side are unstructured conversations—the kind you have with your friends. No one is in control of those conversations, and what you talk about is often random—whatever pops into your head. There is no secret, underlying purpose to your talking—if anything, the purpose is to talk to and engage with each other, and the words you use and the things you talk about are a little beside the point. An unstructured interview is a little like this informal conversation, except that one of the parties to the conversation (you, the researcher) does have an underlying purpose, and that is to understand the other person. You are not friends speaking for no purpose, but it might feel just as unstructured to the “interviewee” in this scenario. That is one side of the continuum. On the other side are fully structured and standardized survey-type questions asked face-to-face. Here it is very clear who is asking the questions and who is answering them. This doesn’t feel like a conversation at all! A lot of people new to interviewing have this (erroneously!) in mind when they think about interviews as data collection. Somewhere in the middle of these two extreme cases is the “semistructured” interview, in which the researcher uses an “interview guide” to gently move the conversation to certain topics and issues. This is the primary form of interviewing for qualitative social scientists and will be what I refer to as interviewing for the rest of this chapter, unless otherwise specified.

[Figure 11.1: A continuum of interview types, from unstructured conversations and semi-structured interviews to structured interviews and survey questions]

Informal (unstructured conversations). This is the most “open-ended” approach to interviewing. It is particularly useful in conjunction with observational methods (see chapters 13 and 14). There are no predetermined questions. Each interview will be different. Imagine you are researching the Oregon Country Fair, an annual event in Veneta, Oregon, that includes live music, artisan craft booths, face painting, and a lot of people walking through forest paths. It’s unlikely that you will be able to get a person to sit down with you and talk intensely about a set of questions for an hour and a half. But you might be able to sidle up to several people and engage with them about their experiences at the fair. You might have a general interest in what attracts people to these events, so you could start a conversation by asking strangers why they are here or why they come back every year. That’s it. Then you have a conversation that may lead you anywhere. Maybe one person tells a long story about how their parents brought them here when they were a kid. A second person talks about how this is better than Burning Man. A third person shares their favorite traveling band. And yet another enthuses about the public library in the woods. During your conversations, you also talk about a lot of other things—the weather, the utilikilts for sale, the fact that a favorite food booth has disappeared. It’s all good. You may not be able to record these conversations. Instead, you might jot down notes on the spot and then, when you have the time, write down as much as you can remember about the conversations in long fieldnotes. Later, you will have to sit down with these fieldnotes and try to make sense of all the information (see chapters 18 and 19).

Interview guide ( semistructured interview ). This is the primary type employed by social science qualitative researchers. The researcher creates an “interview guide” in advance, which she uses in every interview. In theory, every person interviewed is asked the same questions. In practice, every person interviewed is asked mostly the same topics but not always the same questions, as the whole point of a “guide” is that it guides the direction of the conversation but does not command it. The guide is typically between five and ten questions or question areas, sometimes with suggested follow-ups or prompts . For example, one question might be “What was it like growing up in Eastern Oregon?” with prompts such as “Did you live in a rural area? What kind of high school did you attend?” to help the conversation develop. These interviews generally take place in a quiet place (not a busy walkway during a festival) and are recorded. The recordings are transcribed, and those transcriptions then become the “data” that is analyzed (see chapters 18 and 19). The conventional length of one of these types of interviews is between one hour and two hours, optimally ninety minutes. Less than one hour doesn’t allow for much development of questions and thoughts, and two hours (or more) is a lot of time to ask someone to sit still and answer questions. If you have a lot of ground to cover, and the person is willing, I highly recommend two separate interview sessions, with the second session being slightly shorter than the first (e.g., ninety minutes the first day, sixty minutes the second). There are lots of good reasons for this, but the most compelling one is that this allows you to listen to the first day’s recording and catch anything interesting you might have missed in the moment and so develop follow-up questions that can probe further. This also allows the person being interviewed to have some time to think about the issues raised in the interview and go a little deeper with their answers.

Standardized questionnaire with open responses ( structured interview ). This is the type of interview a lot of people have in mind when they hear “interview”: a researcher comes to your door with a clipboard and proceeds to ask you a series of questions. These questions are all the same whoever answers the door; they are “standardized.” Both the wording and the exact order are important, as people’s responses may vary depending on how and when a question is asked. These are qualitative only in that the questions allow for “open-ended responses”: people can say whatever they want rather than select from a predetermined menu of responses. For example, a survey I collaborated on included this open-ended response question: “How does class affect one’s career success in sociology?” Some of the answers were simply one word long (e.g., “debt”), and others were long statements with stories and personal anecdotes. It is possible to be surprised by the responses. Although it’s a stretch to call this kind of questioning a conversation, it does allow the person answering the question some degree of freedom in how they answer.

Survey questionnaire with closed responses (not an interview!). Standardized survey questions with specific answer options (e.g., closed responses) are not really interviews at all, and they do not generate qualitative data. For example, if we included five options for the question “How does class affect one’s career success in sociology?”—(1) debt, (2) social networks, (3) alienation, (4) family doesn’t understand, (5) type of grad program—we leave no room for surprises at all. Instead, we would most likely look at patterns around these responses, thinking quantitatively rather than qualitatively (e.g., using regression analysis techniques, we might find that working-class sociologists were twice as likely to bring up alienation). It can sometimes be confusing for new students because the very same survey can include both closed-ended and open-ended questions. The key is to think about how these will be analyzed and to what level surprises are possible. If your plan is to turn all responses into a number and make predictions about correlations and relationships, you are no longer conducting qualitative research. This is true even if you are conducting this survey face-to-face with a real live human. Closed-response questions are not conversations of any kind, purposeful or not.

In summary, the semistructured interview guide approach is the predominant form of interviewing for social science qualitative researchers because it allows a high degree of freedom of responses from those interviewed (thus allowing for novel discoveries) while still maintaining some connection to a research question area or topic of interest. The rest of the chapter assumes the employment of this form.

Creating an Interview Guide

Your interview guide is the instrument used to bridge your research question(s) and what the people you are interviewing want to tell you. Unlike a standardized questionnaire, the questions actually asked do not need to be exactly what you have written down in your guide. The guide is meant to create space for those you are interviewing to talk about the phenomenon of interest, but sometimes you are not even sure what that phenomenon is until you start asking questions. A priority in creating an interview guide is to ensure it offers space. One of the worst mistakes is to create questions that are so specific that the person answering them will not stray. Relatedly, questions that sound “academic” will shut down a lot of respondents. A good interview guide invites respondents to talk about what is important to them, not feel like they are performing or being evaluated by you.

Good interview questions should not sound like your “research question” at all. For example, let’s say your research question is “How do patriarchal assumptions influence men’s understanding of climate change and responses to climate change?” It would be worse than unhelpful to ask a respondent, “How do your assumptions about the role of men affect your understanding of climate change?” You need to unpack this into manageable nuggets that pull your respondent into the area of interest without leading him anywhere. You could start by asking him what he thinks about climate change in general. Or, even better, whether he has any concerns about heatwaves or increased tornadoes or polar icecaps melting. Once he starts talking about that, you can ask follow-up questions that bring in issues around gendered roles, perhaps asking if he is married (to a woman) and whether his wife shares his thoughts and, if not, how they negotiate that difference. The fact is, you won’t really know the right questions to ask until he starts talking.

There are several distinct types of questions that can be used in your interview guide, either as main questions or as follow-up probes. If you remember that the point is to leave space for the respondent, you will craft a much more effective interview guide! You will also want to think about the place of time in both the questions themselves (past, present, future orientations) and the sequencing of the questions.

Researcher Note

Suggestion : As you read the next three sections (types of questions, temporality, question sequence), have in mind a particular research question, and try to draft questions and sequence them in a way that opens space for a discussion that helps you answer your research question.

Types of Questions

Experience and behavior questions ask about what a respondent does regularly (their behavior) or has done (their experience). These are relatively easy questions for people to answer because they appear more “factual” and less subjective. This makes them good opening questions. For the study on climate change above, you might ask, “Have you ever experienced an unusual weather event? What happened?” Or “You said you work outside? What is a typical summer workday like for you? How do you protect yourself from the heat?”

Opinion and values questions , in contrast, ask questions that get inside the minds of those you are interviewing. “Do you think climate change is real? Who or what is responsible for it?” are two such questions. Note that you don’t have to literally ask, “What is your opinion of X?” but you can find a way to ask the specific question relevant to the conversation you are having. These questions are a bit trickier to ask because the answers you get may depend in part on how your respondent perceives you and whether they want to please you or not. We’ve talked a fair amount about being reflective. Here is another place where this comes into play. You need to be aware of the effect your presence might have on the answers you are receiving and adjust accordingly. If you are a woman who is perceived as liberal asking a man who identifies as conservative about climate change, there is a lot of subtext that can be going on in the interview. There is no one right way to resolve this, but you must at least be aware of it.

Feeling questions are questions that ask respondents to draw on their emotional responses. It's pretty common for academic researchers to forget that we have bodies and emotions, but people's understandings of the world often operate at this affective level, sometimes unconsciously or barely consciously. It is a good idea to include questions that leave space for respondents to remember, imagine, or relive emotional responses to particular phenomena. "What was it like when you heard your cousin's house burned down in that wildfire?" doesn't explicitly use any emotion words, but it allows your respondent to remember what was probably a pretty emotional day. And if they respond in an emotionally neutral way, that is pretty interesting data too. Note that asking someone "How do you feel about X" is not always going to evoke an emotional response, as they might simply turn around and respond with "I think that…" It is better to craft a question that actually pushes the respondent into the affective category. This might be a specific follow-up to an experience and behavior question—for example, "You just told me about your daily routine during the summer heat. Do you worry it is going to get worse?" or "Have you ever been afraid it will be too hot to get your work accomplished?"

Knowledge questions ask respondents what they actually know about something factual. We have to be careful when we ask these types of questions so that respondents do not feel like we are evaluating them (which would shut them down), but, for example, it is helpful to know when you are having a conversation about climate change that your respondent does in fact know that unusual weather events have increased and that these have been attributed to climate change! Asking these questions can set the stage for deeper questions and can ensure that the conversation makes the same kind of sense to both participants. For example, a conversation about political polarization can be put back on track once you realize that the respondent doesn’t really have a clear understanding that there are two parties in the US. Instead of asking a series of questions about Republicans and Democrats, you might shift your questions to talk more generally about political disagreements (e.g., “people against abortion”). And sometimes what you do want to know is the level of knowledge about a particular program or event (e.g., “Are you aware you can discharge your student loans through the Public Service Loan Forgiveness program?”).

Sensory questions call on all senses of the respondent to capture deeper responses. These are particularly helpful in sparking memory. “Think back to your childhood in Eastern Oregon. Describe the smells, the sounds…” Or you could use these questions to help a person access the full experience of a setting they customarily inhabit: “When you walk through the doors to your office building, what do you see? Hear? Smell?” As with feeling questions , these questions often supplement experience and behavior questions . They are another way of allowing your respondent to report fully and deeply rather than remain on the surface.

Creative questions employ illustrative examples, suggested scenarios, or simulations to get respondents to think more deeply about an issue, topic, or experience. There are many options here. In The Trouble with Passion , Erin Cech ( 2021 ) provides a scenario in which “Joe” is trying to decide whether to stay at his decent but boring computer job or follow his passion by opening a restaurant. She asks respondents, “What should Joe do?” Their answers illuminate the attraction of “passion” in job selection. In my own work, I have used a news story about an upwardly mobile young man who no longer has time to see his mother and sisters to probe respondents’ feelings about the costs of social mobility. Jessi Streib and Betsy Leondar-Wright have used single-page cartoon “scenes” to elicit evaluations of potential racial discrimination, sexual harassment, and classism. Barbara Sutton ( 2010 ) has employed lists of words (“strong,” “mother,” “victim”) on notecards she fans out and asks her female respondents to select and discuss.

Background/Demographic Questions

You most definitely will want to know more about the person you are interviewing in terms of conventional demographic information, such as age, race, gender identity, occupation, and educational attainment. These are not questions that normally open up inquiry. [1] For this reason, my practice has been to include a separate “demographic questionnaire” sheet that I ask each respondent to fill out at the conclusion of the interview. Only include those aspects that are relevant to your study. For example, if you are not exploring religion or religious affiliation, do not include questions about a person’s religion on the demographic sheet. See the example provided at the end of this chapter.

Temporality

Any type of question can have a past, present, or future orientation. For example, if you are asking a behavior question about workplace routine, you might ask the respondent to talk about past work, present work, and ideal (future) work. Similarly, if you want to understand how people cope with natural disasters, you might ask your respondent how they felt then during the wildfire and now in retrospect and whether and to what extent they have concerns for future wildfire disasters. It’s a relatively simple suggestion—don’t forget to ask about past, present, and future—but it can have a big impact on the quality of the responses you receive.

Question Sequence

Having a list of good questions or good question areas is not enough to make a good interview guide. You will want to pay attention to the order in which you ask your questions. Even though any one respondent can derail this order (perhaps by jumping to answer a question you haven’t yet asked), a good advance plan is always helpful. When thinking about sequence, remember that your goal is to get your respondent to open up to you and to say things that might surprise you. To establish rapport, it is best to start with nonthreatening questions. Asking about the present is often the safest place to begin, followed by the past (they have to know you a little bit to get there), and lastly, the future (talking about hopes and fears requires the most rapport). To allow for surprises, it is best to move from very general questions to more particular questions only later in the interview. This ensures that respondents have the freedom to bring up the topics that are relevant to them rather than feel like they are constrained to answer you narrowly. For example, refrain from asking about particular emotions until these have come up previously—don’t lead with them. Often, your more particular questions will emerge only during the course of the interview, tailored to what is emerging in conversation.

Once you have a set of questions, read through them aloud and imagine you are being asked the same questions. Does the set of questions have a natural flow? Would you be willing to answer the very first question to a total stranger? Does your sequence establish facts and experiences before moving on to opinions and values? Did you include prefatory statements, where necessary; transitions; and other announcements? These can be as simple as “Hey, we talked a lot about your experiences as a barista while in college.… Now I am turning to something completely different: how you managed friendships in college.” That is an abrupt transition, but it has been softened by your acknowledgment of that.

Probes and Flexibility

Once you have the interview guide, you will also want to leave room for probes and follow-up questions. As in the sample probe included here, you can write out the obvious probes and follow-up questions in advance. You might not need them, as your respondent might anticipate them and include full responses to the original question. Or you might need to tailor them to how your respondent answered the question. Some common probes and follow-up questions include asking for more details (When did that happen? Who else was there?), asking for elaboration (Could you say more about that?), asking for clarification (Does that mean what I think it means or something else? I understand what you mean, but someone else reading the transcript might not), and asking for contrast or comparison (How did this experience compare with last year’s event?). “Probing is a skill that comes from knowing what to look for in the interview, listening carefully to what is being said and what is not said, and being sensitive to the feedback needs of the person being interviewed” ( Patton 2002:374 ). It takes work! And energy. I and many other interviewers I know report feeling emotionally and even physically drained after conducting an interview. You are tasked with active listening and rearranging your interview guide as needed on the fly. If you only ask the questions written down in your interview guide with no deviations, you are doing it wrong. [2]

The Final Question

Every interview guide should include a very open-ended final question that allows for the respondent to say whatever it is they have been dying to tell you but you've forgotten to ask. About half the time they are tired too and will tell you they have nothing else to say. But incredibly, some of the most honest and complete responses take place here, at the end of a long interview. You have to realize that the person being interviewed is often discovering things about themselves as they talk to you and that this process of discovery can lead to new insights for them. Making space at the end is therefore crucial. Be sure you convey that you actually do want them to tell you more, that the offer of "anything else?" is not read as an empty convention where the polite response is no. Here is where you can pull from that active listening and tailor the final question to the particular person. For example, "I've asked you a lot of questions about what it was like to live through that wildfire. I'm wondering if there is anything I've forgotten to ask, especially because I haven't had that experience myself" is a much more inviting final question than "Great. Anything you want to add?" It's also helpful to convey to the person that you have the time to listen to their full answer, even if you are at the end of the allotted time. After all, there are no more questions to ask, so the respondent knows exactly how much time is left. Do them the courtesy of listening to them!

Conducting the Interview

Once you have your interview guide, you are on your way to conducting your first interview. I always practice my interview guide with a friend or family member. I do this even when the questions don’t make perfect sense for them, as it still helps me realize which questions make no sense, are poorly worded (too academic), or don’t follow sequentially. I also practice the routine I will use for interviewing, which goes something like this:

  • Introduce myself and reintroduce the study
  • Provide consent form and ask them to sign and retain/return copy
  • Ask if they have any questions about the study before we begin
  • Ask if I can begin recording
  • Ask questions (from interview guide)
  • Turn off the recording device
  • Ask if they are willing to fill out my demographic questionnaire
  • Collect questionnaire and, without looking at the answers, place in same folder as signed consent form
  • Thank them and depart

A note on remote interviewing: Interviews have traditionally been conducted face-to-face in a private or quiet public setting. You don’t want a lot of background noise, as this will make transcriptions difficult. During the recent global pandemic, many interviewers, myself included, learned the benefits of interviewing remotely. Although face-to-face is still preferable for many reasons, Zoom interviewing is not a bad alternative, and it does allow more interviews across great distances. Zoom also includes automatic transcription, which significantly cuts down on the time it normally takes to convert our conversations into “data” to be analyzed. These automatic transcriptions are not perfect, however, and you will still need to listen to the recording and clarify and clean up the transcription. Nor do automatic transcriptions include notations of body language or change of tone, which you may want to include. When interviewing remotely, you will want to collect the consent form before you meet: ask them to read, sign, and return it as an email attachment. I think it is better to ask for the demographic questionnaire after the interview, but because some respondents may never return it then, it is probably best to ask for this at the same time as the consent form, in advance of the interview.
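
If your platform lets you export the automatic transcript as a WebVTT (.vtt) caption file, a short script can strip out the cue numbers and timestamps so that you are left with plain text to correct against the recording. The sketch below assumes a standard .vtt export and uses a hypothetical filename; it does not replace listening back and cleaning up the transcript yourself.

def vtt_to_text(path):
    """Flatten a WebVTT caption export into plain text for manual cleanup."""
    kept = []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            # Skip the WEBVTT header, blank lines, numeric cue IDs, and timestamp lines.
            if not line or line == "WEBVTT" or line.isdigit() or "-->" in line:
                continue
            kept.append(line)
    return "\n".join(kept)

# "interview_001.vtt" is a hypothetical filename used only for illustration.
with open("interview_001_raw.txt", "w", encoding="utf-8") as out:
    out.write(vtt_to_text("interview_001.vtt"))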

What should you bring to the interview? I would recommend bringing two copies of the consent form (one for you and one for the respondent), a demographic questionnaire, a manila folder in which to place the signed consent form and filled-out demographic questionnaire, a printed copy of your interview guide (I print with three-inch right margins so I can jot down notes on the page next to relevant questions), a pen, a recording device, and water.

After the interview, you will want to secure the signed consent form in a locked filing cabinet (if in print) or a password-protected folder on your computer. Using Excel or a similar program that allows tables/spreadsheets, create an identifying number for your interview that links to the consent form without using the name of your respondent. For example, let’s say that I conduct interviews with US politicians, and the first person I meet with is George W. Bush. I will assign the transcription the number “INT#001” and add it to the signed consent form. [3] The signed consent form goes into a locked filing cabinet, and I never use the name “George W. Bush” again. I take the information from the demographic sheet, open my Excel spreadsheet, and add the relevant information in separate columns for the row INT#001: White, male, Republican. When I interview Bill Clinton as my second interview, I include a second row: INT#002: White, male, Democrat. And so on. The only link to the actual name of the respondent and this information is the fact that the consent form (unavailable to anyone but me) has stamped on it the interview number.
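
If you prefer a script to a spreadsheet program, the same de-identified log can be kept as a plain CSV file. This is only a minimal sketch of the bookkeeping described above; the file name and column names are hypothetical and should be adapted to whatever your demographic questionnaire actually collects.

import csv
import os

# Hypothetical columns; the respondent's name never appears in this file.
FIELDS = ["interview_id", "race", "gender", "party"]

def add_interview(log_path, interview_id, race, gender, party):
    """Append one de-identified row to the interview log, creating it if needed."""
    write_header = not os.path.exists(log_path) or os.path.getsize(log_path) == 0
    with open(log_path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({"interview_id": interview_id, "race": race,
                         "gender": gender, "party": party})

add_interview("interview_log.csv", "INT#001", "White", "male", "Republican")
add_interview("interview_log.csv", "INT#002", "White", "male", "Democrat")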

Many students get very nervous before their first interview. Actually, many of us are always nervous before the interview! But do not worry—this is normal, and it does pass. Chances are, you will be pleasantly surprised at how comfortable it begins to feel. These "purposeful conversations" are often a delight for both participants. This is not to say that things never go wrong. I often have my students practice several "bad scenarios" (e.g., a respondent that you cannot get to open up; a respondent who is too talkative and dominates the conversation, steering it away from the topics you are interested in; emotions that completely take over; or shocking disclosures you are ill-prepared to handle), but most of the time, things go quite well. Be prepared for the unexpected, but know that the reason interviews are so popular as a technique of data collection is that they are usually richly rewarding for both participants.

One thing that I stress to my methods students and remind myself about is that interviews are still conversations between people. If there’s something you might feel uncomfortable asking someone about in a “normal” conversation, you will likely also feel a bit of discomfort asking it in an interview. Maybe more importantly, your respondent may feel uncomfortable. Social research—especially about inequality—can be uncomfortable. And it’s easy to slip into an abstract, intellectualized, or removed perspective as an interviewer. This is one reason trying out interview questions is important. Another is that sometimes the question sounds good in your head but doesn’t work as well out loud in practice. I learned this the hard way when a respondent asked me how I would answer the question I had just posed, and I realized that not only did I not really know how I would answer it, but I also wasn’t quite as sure I knew what I was asking as I had thought.

—Elizabeth M. Lee, Associate Professor of Sociology at Saint Joseph’s University, author of Class and Campus Life , and co-author of Geographies of Campus Inequality

How Many Interviews?

Your research design has included a targeted number of interviews and a recruitment plan (see chapter 5). Follow your plan, but remember that “ saturation ” is your goal. You interview as many people as you can until you reach a point at which you are no longer surprised by what they tell you. This means not that no one after your first twenty interviews will have surprising, interesting stories to tell you but rather that the picture you are forming about the phenomenon of interest to you from a research perspective has come into focus, and none of the interviews are substantially refocusing that picture. That is when you should stop collecting interviews. Note that to know when you have reached this, you will need to read your transcripts as you go. More about this in chapters 18 and 19.
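
One informal way to watch for saturation as you read your transcripts is to note how many genuinely new codes or themes each successive interview contributes; when that number stays at or near zero, the picture has probably stopped shifting. The sketch below assumes you keep a running set of codes per interview, and the interview numbers and code labels are invented for illustration.

# Count how many new codes each successive interview adds (illustrative data only).
codes_per_interview = {
    "INT#001": {"debt", "alienation", "mentoring"},
    "INT#002": {"debt", "imposter feelings"},
    "INT#003": {"mentoring", "alienation"},
}

seen = set()
for interview_id, codes in codes_per_interview.items():
    new_codes = codes - seen
    seen |= codes
    print(f"{interview_id}: {len(new_codes)} new code(s): {sorted(new_codes)}")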

Your Final Product: The Ideal Interview Transcript

A good interview transcript will demonstrate a subtly controlled conversation by the skillful interviewer. In general, you want to see replies that are about one paragraph long, not short sentences and not running on for several pages. Although it is sometimes necessary to follow respondents down tangents, it is also often necessary to pull them back to the questions that form the basis of your research study. This is not really a free conversation, although it may feel like that to the person you are interviewing.

Final Tips from an Interview Master

Annette Lareau is arguably one of the masters of the trade. In Listening to People , she provides several guidelines for good interviews and then offers a detailed example of an interview gone wrong and how it could be addressed (please see the “Further Readings” at the end of this chapter). Here is an abbreviated version of her set of guidelines: (1) interview respondents who are experts on the subjects of most interest to you (as a corollary, don’t ask people about things they don’t know); (2) listen carefully and talk as little as possible; (3) keep in mind what you want to know and why you want to know it; (4) be a proactive interviewer (subtly guide the conversation); (5) assure respondents that there aren’t any right or wrong answers; (6) use the respondent’s own words to probe further (this both allows you to accurately identify what you heard and pushes the respondent to explain further); (7) reuse effective probes (don’t reinvent the wheel as you go—if repeating the words back works, do it again and again); (8) focus on learning the subjective meanings that events or experiences have for a respondent; (9) don’t be afraid to ask a question that draws on your own knowledge (unlike trial lawyers who are trained never to ask a question for which they don’t already know the answer, sometimes it’s worth it to ask risky questions based on your hypotheses or just plain hunches); (10) keep thinking while you are listening (so difficult…and important); (11) return to a theme raised by a respondent if you want further information; (12) be mindful of power inequalities (and never ever coerce a respondent to continue the interview if they want out); (13) take control with overly talkative respondents; (14) expect overly succinct responses, and develop strategies for probing further; (15) balance digging deep and moving on; (16) develop a plan to deflect questions (e.g., let them know you are happy to answer any questions at the end of the interview, but you don’t want to take time away from them now); and at the end, (17) check to see whether you have asked all your questions. You don’t always have to ask everyone the same set of questions, but if there is a big area you have forgotten to cover, now is the time to recover ( Lareau 2021:93–103 ).

Sample: Demographic Questionnaire

ASA Taskforce on First-Generation and Working-Class Persons in Sociology – Class Effects on Career Success

Supplementary Demographic Questionnaire

Thank you for your participation in this interview project. We would like to collect a few pieces of key demographic information from you to supplement our analyses. Your answers to these questions will be kept confidential and stored by ID number. All of your responses here are entirely voluntary!

What best captures your race/ethnicity? (please check any/all that apply)

  • White (Non Hispanic/Latina/o/x)
  • Black or African American
  • Hispanic, Latino/a/x, or Spanish origin
  • Asian or Asian American
  • American Indian or Alaska Native
  • Middle Eastern or North African
  • Native Hawaiian or Pacific Islander
  • Other : (Please write in: ________________)

What is your current position?

  • Grad Student
  • Full Professor

Please check any and all of the following that apply to you:

  • I identify as a working-class academic
  • I was the first in my family to graduate from college
  • I grew up poor

What best reflects your gender?

  • Transgender female/Transgender woman
  • Transgender male/Transgender man
  • Gender queer/ Gender nonconforming

Anything else you would like us to know about you?

Example: Interview Guide

In this example, follow-up prompts are italicized. Note the sequence of questions. That second question often elicits an entire life history, answering several later questions in advance.

Introduction Script/Question

Thank you for participating in our survey of ASA members who identify as first-generation or working-class.  As you may have heard, ASA has sponsored a taskforce on first-generation and working-class persons in sociology and we are interested in hearing from those who so identify.  Your participation in this interview will help advance our knowledge in this area.

  • The first thing we would like to ask you is why you have volunteered to be part of this study. What does it mean to you to be first-gen or working class? Why were you willing to be interviewed?
  • How did you decide to become a sociologist?
  • Can you tell me a little bit about where you grew up? ( prompts: what did your parent(s) do for a living?  What kind of high school did you attend?)
  • Has this identity been salient to your experience? (how? How much?)
  • How welcoming was your grad program? Your first academic employer?
  • Why did you decide to pursue sociology at the graduate level?
  • Did you experience culture shock in college? In graduate school?
  • Has your FGWC status shaped how you’ve thought about where you went to school? debt? etc?
  • Were you mentored? How did this work (not work)?  How might it?
  • What did you consider when deciding where to go to grad school? Where to apply for your first position?
  • What, to you, is a mark of career success? Have you achieved that success?  What has helped or hindered your pursuit of success?
  • Do you think sociology, as a field, cares about prestige?
  • Let’s talk a little bit about intersectionality. How does being first-gen/working class work alongside other identities that are important to you?
  • What do your friends and family think about your career? Have you had any difficulty relating to family members or past friends since becoming highly educated?
  • Do you have any debt from college/grad school? Are you concerned about this?  Could you explain more about how you paid for college/grad school?  (here, include assistance from family, fellowships, scholarships, etc.)
  • (You’ve mentioned issues or obstacles you had because of your background.) What could have helped?  Or, who or what did? Can you think of fortuitous moments in your career?
  • Do you have any regrets about the path you took?
  • Is there anything else you would like to add? Anything that the Taskforce should take note of, that we did not ask you about here?

Further Readings

Britten, Nicky. 1995. "Qualitative Interviews in Medical Research." BMJ: British Medical Journal 311(6999):251–253. A good basic overview of interviewing, particularly useful for students of public health and medical research generally.

Corbin, Juliet, and Janice M. Morse. 2003. “The Unstructured Interactive Interview: Issues of Reciprocity and Risks When Dealing with Sensitive Topics.” Qualitative Inquiry 9(3):335–354. Weighs the potential benefits and harms of conducting interviews on topics that may cause emotional distress. Argues that the researcher’s skills and code of ethics should ensure that the interviewing process provides more of a benefit to both participant and researcher than a harm to the former.

Gerson, Kathleen, and Sarah Damaske. 2020. The Science and Art of Interviewing . New York: Oxford University Press. A useful guidebook/textbook for both undergraduates and graduate students, written by sociologists.

Kvale, Steinar. 2007. Doing Interviews. London: SAGE. An easy-to-follow guide to conducting and analyzing interviews by a psychologist.

Lamont, Michèle, and Ann Swidler. 2014. “Methodological Pluralism and the Possibilities and Limits of Interviewing.” Qualitative Sociology 37(2):153–171. Written as a response to various debates surrounding the relative value of interview-based studies and ethnographic studies defending the particular strengths of interviewing. This is a must-read article for anyone seriously engaging in qualitative research!

Pugh, Allison J. 2013. “What Good Are Interviews for Thinking about Culture? Demystifying Interpretive Analysis.” American Journal of Cultural Sociology 1(1):42–68. Another defense of interviewing written against those who champion ethnographic methods as superior, particularly in the area of studying culture. A classic.

Rapley, Timothy John. 2001. "The 'Artfulness' of Open-Ended Interviewing: Some Considerations in Analyzing Interviews." Qualitative Research 1(3):303–323. Argues for the importance of "local context" of data production (the relationship built between interviewer and interviewee, for example) in properly analyzing interview data.

Weiss, Robert S. 1995. Learning from Strangers: The Art and Method of Qualitative Interview Studies . New York: Simon and Schuster. A classic and well-regarded textbook on interviewing. Because Weiss has extensive experience conducting surveys, he contrasts the qualitative interview with the survey questionnaire well; particularly useful for those trained in the latter.

  • I say “normally” because how people understand their various identities can itself be an expansive topic of inquiry. Here, I am merely talking about collecting otherwise unexamined demographic data, similar to how we ask people to check boxes on surveys. ↵
  • Again, this applies to “semistructured in-depth interviewing.” When conducting standardized questionnaires, you will want to ask each question exactly as written, without deviations! ↵
  • I always include “INT” in the number because I sometimes have other kinds of data with their own numbering: FG#001 would mean the first focus group, for example. I also always include three-digit spaces, as this allows for up to 999 interviews (or, more realistically, allows for me to interview up to one hundred persons without having to reset my numbering system). ↵

A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews; see also semistructured interview, structured interview, and unstructured interview.

A document listing key questions and question areas for use during an interview.  It is used most often for semi-structured interviews.  A good interview guide may have no more than ten primary questions for two hours of interviewing, but these ten questions will be supplemented by probes and relevant follow-ups throughout the interview.  Most IRBs require the inclusion of the interview guide in applications for review.  See also interview and  semi-structured interview .

A data-collection method that relies on casual, conversational, and informal interviewing.  Despite its apparent conversational nature, the researcher usually has a set of particular questions or question areas in mind but allows the interview to unfold spontaneously.  This is a common data-collection technique among ethnographers.  Compare to the semi-structured or in-depth interview .

A form of interview that follows a standard guide of questions asked, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview.  The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences.  It is sometimes referred to as an “in-depth” interview.  See also interview and  interview guide .

The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context. Observational methods are the key tools employed by ethnographers and in Grounded Theory.

Follow-up questions used in a semi-structured interview  to elicit further elaboration.  Suggested prompts can be included in the interview guide  to be used/deployed depending on how the initial question was answered or if the topic of the prompt does not emerge spontaneously.

A form of interview that follows a strict set of questions, asked in a particular order, for all interview subjects.  The questions are also the kind that elicits short answers, and the data is more “informative” than probing.  This is often used in mixed-methods studies, accompanying a survey instrument.  Because there is no room for nuance or the exploration of meaning in structured interviews, qualitative researchers tend to employ semi-structured interviews instead.  See also interview.

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.

An interview variant in which a person’s life story is elicited in a narrative form.  Turning points and key themes are established by the researcher and used as data points for further analysis.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

The Interview Method In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Interviews involve a conversation with a purpose, but have some distinct features compared to ordinary conversation, such as being scheduled in advance, having an asymmetry in outcome goals between interviewer and interviewee, and often following a question-answer format.

Interviews are different from questionnaires as they involve social interaction. Unlike questionnaire methods, researchers need training in interviewing (which costs money).

How Do Interviews Work?

Researchers can ask different types of questions, generating different types of data. For example, closed questions provide people with a fixed set of responses, whereas open questions allow people to express what they think in their own words.

The researcher will often record interviews, and the data will be written up as a transcript (a written account of interview questions and answers) which can be analyzed later.

It should be noted that interviews may not be the best method for researching sensitive topics (e.g., truancy in schools, discrimination, etc.) as people may feel more comfortable completing a questionnaire in private.

There are different types of interviews, with a key distinction being the extent of structure. Semi-structured is most common in psychology research. Unstructured interviews have a free-flowing style, while structured interviews involve preset questions asked in a particular order.

Structured Interview

A structured interview is a quantitative research method where the interviewer asks a set of prepared closed-ended questions in the form of an interview schedule, which he/she reads out exactly as worded.

Interview schedules have a standardized format, meaning the same questions are asked of each interviewee in the same order (see Fig. 1).

Figure 1. An example of an interview schedule

The interviewer will not deviate from the interview schedule (except to clarify the meaning of the question) or probe beyond the answers received. Replies are recorded on a questionnaire, and the order and wording of questions, and sometimes the range of alternative answers, are preset by the researcher.

A structured interview is also known as a formal interview (like a job interview).

Strengths

  • Structured interviews are easy to replicate as a fixed set of closed questions is used, which are easy to quantify – this means it is easy to test for reliability.
  • Structured interviews are fairly quick to conduct which means that many interviews can take place within a short amount of time. This means a large sample can be obtained, resulting in the findings being representative and having the ability to be generalized to a large population.

Limitations

  • Structured interviews are not flexible. This means new questions cannot be asked impromptu (i.e., during the interview), as an interview schedule must be followed.
  • The answers from structured interviews lack detail as only closed questions are asked, which generates quantitative data. This means a researcher won't know why a person behaves a certain way.

Unstructured Interview

Unstructured interviews do not use any set questions, instead, the interviewer asks open-ended questions based on a specific research topic, and will try to let the interview flow like a natural conversation. The interviewer modifies his or her questions to suit the candidate’s specific experiences.

Unstructured interviews are sometimes referred to as 'discovery interviews' and are more like a 'guided conversation' than a strictly structured interview. They are sometimes called informal interviews.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values. Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective points of view.

Interviewer Self-Disclosure

Interviewer self-disclosure involves the interviewer revealing personal information or opinions during the research interview. This may increase rapport but risks changing dynamics away from a focus on facilitating the interviewee’s account.

In unstructured interviews, the informal conversational style may deliberately include elements of interviewer self-disclosure, mirroring ordinary conversation dynamics.

Interviewer self-disclosure risks changing the dynamics away from facilitation of interviewee accounts. It should not be ruled out entirely but requires skillful handling informed by reflection.

  • An informal interviewing style with some interviewer self-disclosure may increase rapport and participant openness. However, it also increases the chance of the participant converging their opinions with the interviewer's.
  • Complete interviewer neutrality is unlikely. However, excessive informality and self-disclosure risk the interview becoming more of an ordinary conversation and producing consensus accounts.
  • Overly personal disclosures could also be seen as irrelevant and intrusive by participants. They may invite increased intimacy on uncomfortable topics.
  • The safest approach seems to be to avoid interviewer self-disclosures in most cases. Where an informal style is used, disclosures require careful judgment and substantial interviewing experience.
  • If asked for personal opinions during an interview, the interviewer could highlight the defined roles and defer that discussion until after the interview.
Strengths

  • Unstructured interviews are more flexible as questions can be adapted and changed depending on the respondents' answers. The interview can deviate from the interview schedule.
  • Unstructured interviews generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words. This helps the researcher develop a real sense of a person’s understanding of a situation.
  • They also have increased validity because it gives the interviewer the opportunity to probe for a deeper understanding, ask for clarification & allow the interviewee to steer the direction of the interview, etc. Interviewers have the chance to clarify any questions of participants during the interview.
Limitations

  • It can be time-consuming to conduct an unstructured interview and analyze the qualitative data (using methods such as thematic analysis).
  • Employing and training interviewers is expensive and not as cheap as collecting data via questionnaires. For example, certain skills may be needed by the interviewer. These include the ability to establish rapport and knowing when to probe.
  • Interviews inevitably co-construct data through researchers’ agenda-setting and question-framing. Techniques like open questions provide only limited remedies.

Focus Group Interview

A focus group interview is a qualitative approach where a group of respondents are interviewed together, used to gain an in-depth understanding of social issues.

This type of interview is often referred to as a focus group because the job of the interviewer (or moderator) is to bring the group to focus on the issue at hand. Initially, the goal was to reach a consensus among the group, but with the development of techniques for analyzing group qualitative data, there is less emphasis on consensus building.

The method aims to obtain data from a purposely selected group of individuals rather than from a statistically representative sample of a broader population.

The role of the interview moderator is to make sure the group members interact with each other and do not drift off-topic. Ideally, the moderator will be similar to the participants in terms of appearance, have adequate knowledge of the topic being discussed, and exercise mild unobtrusive control over dominant talkers and shy participants.

A researcher must be highly skilled to conduct a focus group interview. For example, the moderator may need certain skills, including the ability to establish rapport and know when to probe.

Strengths

  • Group interviews generate qualitative narrative data through the use of open questions. This allows the respondents to talk in some depth, choosing their own words. This helps the researcher develop a real sense of a person's understanding of a situation. Qualitative data also includes observational data, such as body language and facial expressions.
  • Group responses are helpful when you want to elicit perspectives on a collective experience, encourage diversity of thought, reduce researcher bias, and gather a wider range of contextualized views.
  • They also have increased validity because some participants may feel more comfortable being with others as they are used to talking in groups in real life (i.e., it’s more natural).
  • When participants have common experiences, focus groups allow them to build on each other’s comments to provide richer contextual data representing a wider range of views than individual interviews.
  • Focus groups are a type of group interview method used in market research and consumer psychology that are cost-effective for gathering the views of consumers.
Limitations

  • The researcher must ensure that they keep all the interviewees' details confidential and respect their privacy. This is difficult when using a group interview. For example, the researcher cannot guarantee that the other people in the group will keep information private.
  • Group interviews are less reliable as they use open questions and may deviate from the interview schedule, making them difficult to repeat.
  • It is important to note that there are some potential pitfalls of focus groups, such as conformity, social desirability, and oppositional behavior, that can reduce the usefulness of the data collected.
For example, group interviews may sometimes lack validity as participants may lie to impress the other group members. They may conform to peer pressure and give false answers.

To avoid these pitfalls, the interviewer needs to have a good understanding of how people function in groups as well as how to lead the group in a productive discussion.

Semi-Structured Interview

Semi-structured interviews lie between structured and unstructured interviews. The interviewer prepares a set of the same questions to be answered by all interviewees. Additional questions might be asked during the interview to clarify or expand certain issues.

In semi-structured interviews, the interviewer has more freedom to digress and probe beyond the answers. The interview guide contains a list of questions and topics that need to be covered during the conversation, usually in a particular order.

Semi-structured interviews are most useful to address the ‘what’, ‘how’, and ‘why’ research questions. Both qualitative and quantitative analyses can be performed on data collected during semi-structured interviews.

Strengths

  • Semi-structured interviews allow respondents to answer more on their terms in an informal setting yet provide uniform information, making them ideal for qualitative analysis.
  • The flexible nature of semi-structured interviews allows ideas to be introduced and explored during the interview based on the respondents’ answers.
  • Semi-structured interviews can provide reliable and comparable qualitative data. They allow the interviewer to probe answers, where the interviewee is asked to clarify or expand on the answers provided.
Limitations

  • The data generated remain fundamentally shaped by the interview context itself. Analysis rarely acknowledges this endemic co-construction.
  • They are more time-consuming (to conduct, transcribe, and analyze) than structured interviews.
  • The quality of findings is more dependent on the individual skills of the interviewer than in structured interviews. Skill is required to probe effectively while avoiding biasing responses.

The Interviewer Effect

Face-to-face interviews raise methodological problems. These stem from the fact that interviewers are themselves role players, and their perceived status may influence the replies of the respondents.

Because an interview is a social interaction, the interviewer’s appearance or behavior may influence the respondent’s answers. This is a problem as it can bias the results of the study and make them invalid.

For example, the gender, ethnicity, body language, age, and social status of the interviewer can all create an interviewer effect. If there is a perceived status disparity between the interviewer and the interviewee, the results of interviews have to be interpreted with care. This is pertinent for sensitive topics such as health.

For example, if a researcher was investigating sexism amongst males, would a female interviewer be preferable to a male? It is possible that if a female interviewer was used, male participants might lie (i.e., pretend they are not sexist) to impress the interviewer, thus creating an interviewer effect.

Flooding interviews with the researcher's agenda

The interactional nature of interviews means the researcher fundamentally shapes the discourse, rather than just neutrally collecting it. This shapes what is talked about and how participants can respond.
  • The interviewer’s assumptions, interests, and categories don’t just shape the specific interview questions asked. They also shape the framing, task instructions, recruitment, and ongoing responses/prompts.
  • This flooding of the interview interaction with the researcher’s agenda makes it very difficult to separate out what comes from the participant vs. what is aligned with the interviewer’s concerns.
  • So the participant’s talk ends up being fundamentally shaped by the interviewer rather than being a more natural reflection of the participant’s own orientations or practices.
  • This effect is hard to avoid because interviews inherently involve the researcher setting an agenda. But it does mean the talk extracted may say more about the interview process than the reality it is supposed to reflect.

Interview Design

First, you must choose whether to use a structured or unstructured interview.

Characteristics of Interviewers

Next, you must consider who will be the interviewer, and this will depend on what type of person is being interviewed. There are several variables to consider:

  • Gender and age : This can greatly affect respondents’ answers, particularly on personal issues.
  • Personal characteristics : Some people are easier to get on with than others. Also, the interviewer’s accent and appearance (e.g., clothing) can affect the rapport between the interviewer and interviewee.
  • Language : The interviewer's language should be appropriate to the vocabulary of the group of people being studied. For example, the researcher must change the questions' language to match the respondents' social background, age, educational level, social class, ethnicity, etc.
  • Ethnicity : People may have difficulty interviewing people from different ethnic groups.
  • Interviewer expertise should match research sensitivity – inexperienced students should avoid interviewing highly vulnerable groups.

Interview Location

The location of a research interview can influence the way in which the interviewer and interviewee relate and may exaggerate a power dynamic in one direction or another. It is usual to offer interviewees a choice of location as part of facilitating their comfort and encouraging participation.

However, the safety of the interviewer is an overriding consideration and, as mentioned, a minimal requirement should be that a responsible person knows where the interviewer has gone and when they are due back.

Remote Interviews

The COVID-19 pandemic necessitated remote interviewing for research continuity. However, online interview platforms provide increased flexibility even under normal conditions.

They enable access to participant groups across geographical distances without travel costs or arrangements. Online interviews can be efficiently scheduled to align with researcher and interviewee availability.

There are practical considerations in setting up remote interviews. Interviewees require access to the internet and an online platform, such as Zoom, Microsoft Teams, or Skype, through which to connect.

Certain modifications help build initial rapport in the remote format. Allowing time at the start of the interview for casual conversation while testing audio and video quality helps participants settle in. Minor transmission delays can disrupt the flow of turn-taking, so asking participants to speak slightly more slowly than usual minimizes accidental interruptions.

Keeping remote interviews under an hour helps avoid the fatigue of staring at a screen. Seeking ethical clearance in advance for verbal consent at the start of the interview saves participant time. Adapting to the remote context shows care for interviewees and aids rich discussion.

However, it remains important to critically reflect on how removing in-person dynamics may shape the co-created data. Perhaps some nuances of trust and disclosure differ over video.

Vulnerable Groups

The interviewer must ensure that they take special care when interviewing vulnerable groups, such as children. For example, children have a limited attention span, so lengthy interviews should be avoided.

Developing an Interview Schedule

An interview schedule is a list of pre-planned, structured questions prepared to serve as a guide for interviewers, researchers, and investigators in collecting information or data about a specific topic or issue.
  • List the key themes or topics that must be covered to address your research questions. This will form the basic content.
  • Organize the content logically, such as chronologically following the interviewee’s experiences. Place more sensitive topics later in the interview.
  • Develop the list of content into actual questions and prompts. Carefully word each question – keep them open-ended, non-leading, and focused on examples.
  • Add prompts to remind you to cover areas of interest.
  • Pilot test the interview schedule to check it generates useful data and revise as needed.
  • Be prepared to refine the schedule throughout data collection as you learn which questions work better.
  • Practice skills like asking follow-up questions to get depth and detail. Stay flexible to depart from the schedule when needed.
  • Keep questions brief and clear. Avoid multi-part questions that risk confusing interviewees.
  • Listen actively during interviews to determine which pre-planned questions can be skipped based on information the participant has already provided.

The key is balancing preparation with the flexibility to adapt questions based on each interview interaction. With practice, you’ll gain skills to conduct productive interviews that obtain rich qualitative data.
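
One way to keep a schedule workable through piloting and revision is to hold it as structured data rather than prose. The sketch below is a minimal illustration of that idea, not part of any established tool; the field names, example themes, and questions are invented for illustration.

```python
# Minimal sketch: an interview schedule as structured data, so items can be
# reordered, piloted, and revised easily. All themes and questions are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScheduleItem:
    theme: str                                          # key topic to cover
    question: str                                       # open-ended, non-leading wording
    prompts: List[str] = field(default_factory=list)    # reminders for follow-up
    sensitive: bool = False                             # sensitive topics go later

def build_guide(items: List[ScheduleItem]) -> str:
    """Render a printable guide, placing sensitive topics towards the end."""
    ordered = sorted(items, key=lambda item: item.sensitive)  # False sorts before True
    lines = []
    for number, item in enumerate(ordered, start=1):
        lines.append(f"{number}. [{item.theme}] {item.question}")
        lines.extend(f"   - Prompt: {prompt}" for prompt in item.prompts)
    return "\n".join(lines)

schedule = [
    ScheduleItem("coping", "How did you feel about the diagnosis at the time?",
                 ["Ask for a specific example"], sensitive=True),
    ScheduleItem("background", "Can you tell me about your day-to-day routine?",
                 ["Work", "Family"]),
]
print(build_guide(schedule))
```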

The Power of Silence

Strategic use of silence is a key technique to generate interviewee-led data, but it requires judgment about appropriate timing and duration to maintain mutual understanding.
  • Unlike ordinary conversation, the interviewer aims to facilitate the interviewee’s contribution without interrupting. This often means resisting the urge to speak at the end of the interviewee’s turn-constructional units (TCUs).
  • Leaving a silence after a TCU encourages the interviewee to provide more material without being led by the interviewer. However, this simple technique requires confidence, as silence can feel socially awkward.
  • Allowing longer silences (e.g., 2–4 seconds) later in interviews can work well, but early on even short silences may disrupt rapport if they cause misalignment between speakers.
  • Silence also allows interviewees time to think before answering. Rushing to re-ask or amend questions can limit responses.
  • Minimal backchannels like “mm hm” also avoid interrupting the flow. Interruptions, especially those that finish an interviewee’s turn, are problematic because they make the ownership of the expressed perspective unclear.
  • If an interviewer completes a turn incorrectly, one upside is that it can prompt an extended narrative in which the interviewee corrects the record. Even so, silence would have been the better choice, letting interviewees shape their own accounts.

Recording & Transcription

Design choices around recording and engaging closely with transcripts influence analytic insights, as well as practical feasibility. Weighing up the relevant tradeoffs is key.
  • Audio recording is standard, but video better captures contextual details, which is useful for some topics/analysis approaches. Participants may find video invasive for sensitive research.
  • Digital formats enable the sharing of anonymized clips. Additional microphones reduce audio issues.
  • Doing all the transcription yourself is time-consuming. Outsourcing can save researcher effort but requires confidentiality assurances. Always check outsourced transcripts carefully.
  • Online platform auto-captioning can facilitate rapid analysis, but accuracy limitations mean full transcripts remain ideal. Software can clean up the caption file formatting (see the sketch after this list).
  • Verbatim transcripts best capture nuanced meaning, but the level of detail needed depends on the analysis approach. Referring back to recordings is still advisable during analysis.
  • Transcripts and recordings highlight different elements of the interaction. Transcripts make overt disagreements clearer through the wording itself; recordings better convey tone and affiliativeness.

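As noted in the auto-captioning point above, caption files usually need cleaning before they can serve as a rough transcript. The sketch below is one minimal way to do this, assuming the platform exported captions as a WebVTT (.vtt) file; the file name is hypothetical and real exports may need extra handling.

```python
# Minimal sketch: strip WebVTT caption formatting down to plain transcript text.
# Assumes a .vtt export; cue numbers, timestamps, and styling tags are removed.
import re

def vtt_to_text(path: str) -> str:
    lines = []
    with open(path, encoding="utf-8") as handle:
        for raw in handle:
            line = raw.strip()
            if not line or line == "WEBVTT":
                continue                          # skip header and blank lines
            if line.isdigit():
                continue                          # skip cue numbers
            if "-->" in line:
                continue                          # skip timestamp lines
            line = re.sub(r"<[^>]+>", "", line)   # drop inline styling tags
            lines.append(line)
    return " ".join(lines)

# Hypothetical usage:
# print(vtt_to_text("interview_07_captions.vtt"))
```

The result is only a starting point; as the bullet above notes, a full, checked transcript is still needed for analysis.
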
Transcribing Interviews & Focus Groups

Here are the steps for transcribing interviews:
  • Play back audio/video files to develop an overall understanding of the interview
  • Format the transcription document:
  • Add line numbers
  • Separate interviewer questions and interviewee responses
  • Use formatting like bold, italics, etc. to highlight key passages
  • Provide sentence-level clarity in the interviewee’s responses while preserving their authentic voice and word choices
  • Break longer passages into smaller paragraphs to help with coding
  • If translating the interview to another language, use qualified translators and back-translate where possible
  • Select a notation system to indicate pauses, emphasis, laughter, interruptions, etc., and adapt it as needed for your data
  • Insert screenshots, photos, or documents discussed in the interview at the relevant point in the transcript
  • Read through multiple times, revising formatting and notations
  • Double-check the accuracy of transcription against audio/videos
  • De-identify the transcript by removing identifying participant details (a minimal sketch of this step follows below)

The goal is to produce a formatted written record of the verbal interview exchange that captures the meaning and highlights important passages ready for the coding process. Careful transcription is the vital first step in analysis.
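
The de-identification step in the list above can be partly scripted once identifying details are known. This is a minimal sketch under the assumption that you maintain a list of names and places to replace; the entries shown are invented, and a careful manual read-through is still required.

```python
# Minimal sketch: replace known identifying details with pseudonyms.
# The names below are invented; manual checking is still essential.
import re

REPLACEMENTS = {
    "Sarah Jones": "Participant 03",
    "Leeds General Infirmary": "[hospital name]",
    "Acme Dental Ltd": "[employer]",
}

def deidentify(transcript: str) -> str:
    for original, pseudonym in REPLACEMENTS.items():
        transcript = re.sub(re.escape(original), pseudonym, transcript,
                            flags=re.IGNORECASE)
    return transcript

print(deidentify("I was treated at Leeds General Infirmary, said Sarah Jones."))
```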

Coding Transcripts

The goal of transcription and coding is to systematically transform interview responses into a set of codes and themes that capture key concepts, experiences and beliefs expressed by participants. Taking care with transcription and coding procedures enhances the validity of qualitative analysis.
  • Read through the transcript multiple times to become immersed in the details
  • Identify manifest/obvious codes and latent/underlying meaning codes
  • Highlight insightful participant quotes that capture key concepts (in vivo codes)
  • Create a codebook to organize and define codes, with examples (a simple sketch of one possible structure follows this list)
  • Use an iterative cycle of inductive (data-driven) coding and deductive (theory-driven) coding
  • Refine codebook with clear definitions and examples as you code more transcripts
  • Collaborate with other coders to establish the reliability of codes
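
As a rough illustration of the codebook mentioned above, the sketch below stores codes with definitions, examples, and keywords, and suggests candidate codes for a segment. This is only a first-pass aid: real qualitative coding is interpretive and done by human coders, and every code and keyword here is invented for illustration.

```python
# Minimal sketch of a codebook as a data structure, with a keyword-based
# first pass that suggests candidate codes for human review. All entries
# are invented for illustration.
CODEBOOK = {
    "access_barriers": {
        "definition": "Practical obstacles to attending appointments",
        "example": "I just couldn't get time off work for the clinic.",
        "keywords": ["time off", "transport", "cost", "childcare"],
    },
    "trust_in_clinician": {
        "definition": "Expressions of confidence or doubt in the clinician",
        "example": "She always explains what she's doing, so I relax.",
        "keywords": ["trust", "explains", "listens"],
    },
}

def first_pass_codes(segment: str) -> list:
    """Suggest candidate codes for a transcript segment (to be reviewed by a coder)."""
    text = segment.lower()
    return [code for code, entry in CODEBOOK.items()
            if any(keyword in text for keyword in entry["keywords"])]

print(first_pass_codes("The cost and getting time off work made it impossible."))
```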

Ethical Issues

Informed consent.

The participant information sheet must give potential interviewees a good idea of what taking part in the research will involve.

This will include the general topics covered in the interview, where the interview might take place, how long it is expected to last, how it will be recorded, the ways in which participants’ anonymity will be managed, and incentives offered.

It might be considered good practice to treat true informed consent in interview research as requiring two distinct stages:

  • Consent to undertake and record the interview and
  • Consent to use the material in research after the interview has been conducted and the content known, or even after the interviewee has seen a copy of the transcript and has had a chance to remove sections, if desired.

Power and Vulnerability

  • Early feminist views that sensitivity could equalize power differences are likely naive. The interviewer and interviewee inhabit different knowledge spheres and social categories, indicating structural disparities.
  • Power fluctuates within interviews. Researchers rely on participation, yet interviewees control openness and can undermine data collection. Assumptions should be avoided.
  • Interviews on sensitive topics may feel like quasi-counseling. Interviewers must refrain from dual roles, instead supplying support service details to all participants.
  • Interviewees recruited for trauma experiences may reveal more than anticipated. While generating analytic insights, this risks leaving them feeling exposed.
  • Ultimately, power imbalances resist full reconciliation. But reflexively analyzing how power operates serves to qualify, rather than nullify, situated qualitative accounts.

Some groups, like those with mental health issues, extreme views, or criminal backgrounds, risk being discredited – treated skeptically by researchers.

This creates tension with qualitative approaches, which often have an empathetic ethos that seeks to center subjective perspectives. Analysis should balance openness to the accounts offered with a critical examination of the stakes and motivations behind them.




Interviews and focus groups in qualitative research: an update for the digital age

P. Gill & J. Baillie

British Dental Journal volume 225, pages 668–672 (2018)


Highlights that qualitative research is used increasingly in dentistry. Interviews and focus groups remain the most common qualitative methods of data collection.

Suggests the advent of digital technologies has transformed how qualitative research can now be undertaken.

Suggests interviews and focus groups can offer significant, meaningful insight into participants' experiences, beliefs and perspectives, which can help to inform developments in dental practice.

Qualitative research is used increasingly in dentistry, due to its potential to provide meaningful, in-depth insights into participants' experiences, perspectives, beliefs and behaviours. These insights can subsequently help to inform developments in dental practice and further related research. The most common methods of data collection used in qualitative research are interviews and focus groups. While these are primarily conducted face-to-face, the ongoing evolution of digital technologies, such as video chat and online forums, has further transformed these methods of data collection. This paper therefore discusses interviews and focus groups in detail, outlines how they can be used in practice, how digital technologies can further inform the data collection process, and what these methods can offer dentistry.


Introduction.

Traditionally, research in dentistry has primarily been quantitative in nature. 1 However, in recent years, there has been a growing interest in qualitative research within the profession, due to its potential to further inform developments in practice, policy, education and training. Consequently, in 2008, the British Dental Journal (BDJ) published a four paper qualitative research series, 2 , 3 , 4 , 5 to help increase awareness and understanding of this particular methodological approach.

Since the papers were originally published, two scoping reviews have demonstrated the ongoing proliferation in the use of qualitative research within the field of oral healthcare. 1 , 6 To date, the original four paper series continue to be well cited and two of the main papers remain widely accessed among the BDJ readership. 2 , 3 The potential value of well-conducted qualitative research to evidence-based practice is now also widely recognised by service providers, policy makers, funding bodies and those who commission, support and use healthcare research.

Besides increasing standalone use, qualitative methods are now also routinely incorporated into larger mixed method study designs, such as clinical trials, as they can offer additional, meaningful insights into complex problems that simply could not be provided by quantitative methods alone. Qualitative methods can also be used to further facilitate in-depth understanding of important aspects of clinical trial processes, such as recruitment. For example, Ellis et al . investigated why edentulous older patients, dissatisfied with conventional dentures, decline implant treatment, despite its established efficacy, and frequently refuse to participate in related randomised clinical trials, even when financial constraints are removed. 7 Through the use of focus groups in Canada and the UK, the authors found that fears of pain and potential complications, along with perceived embarrassment, exacerbated by age, are common reasons why older patients typically refuse dental implants. 7

The last decade has also seen further developments in qualitative research, due to the ongoing evolution of digital technologies. These developments have transformed how researchers can access and share information, communicate and collaborate, recruit and engage participants, collect and analyse data and disseminate and translate research findings. 8 Where appropriate, such technologies are therefore capable of extending and enhancing how qualitative research is undertaken. 9 For example, it is now possible to collect qualitative data via instant messaging, email or online/video chat, using appropriate online platforms.

These innovative approaches to research are therefore cost-effective, convenient, reduce geographical constraints and are often useful for accessing 'hard to reach' participants (for example, those who are immobile or socially isolated). 8 , 9 However, digital technologies are still relatively new and constantly evolving and therefore present a variety of pragmatic and methodological challenges. Furthermore, given their very nature, their use in many qualitative studies and/or with certain participant groups may be inappropriate and should therefore always be carefully considered. While it is beyond the scope of this paper to provide a detailed explication regarding the use of digital technologies in qualitative research, insight is provided into how such technologies can be used to facilitate the data collection process in interviews and focus groups.

In light of such developments, it is perhaps therefore timely to update the main paper 3 of the original BDJ series. As with the previous publications, this paper has been purposely written in an accessible style, to enhance readability, particularly for those who are new to qualitative research. While the focus remains on the most common qualitative methods of data collection – interviews and focus groups – appropriate revisions have been made to provide a novel perspective, and should therefore be helpful to those who would like to know more about qualitative research. This paper specifically focuses on undertaking qualitative research with adult participants only.

Overview of qualitative research

Qualitative research is an approach that focuses on people and their experiences, behaviours and opinions. 10 , 11 The qualitative researcher seeks to answer questions of 'how' and 'why', providing detailed insight and understanding, 11 which quantitative methods cannot reach. 12 Within qualitative research, there are distinct methodologies influencing how the researcher approaches the research question, data collection and data analysis. 13 For example, phenomenological studies focus on the lived experience of individuals, explored through their description of the phenomenon. Ethnographic studies explore the culture of a group and typically involve the use of multiple methods to uncover the issues. 14

While methodology is the 'thinking tool', the methods are the 'doing tools'; 13 the ways in which data are collected and analysed. There are multiple qualitative data collection methods, including interviews, focus groups, observations, documentary analysis, participant diaries, photography and videography. Two of the most commonly used qualitative methods are interviews and focus groups, which are explored in this article. The data generated through these methods can be analysed in one of many ways, according to the methodological approach chosen. A common approach is thematic data analysis, involving the identification of themes and subthemes across the data set. Further information on approaches to qualitative data analysis has been discussed elsewhere. 1

Qualitative research is an evolving and adaptable approach, used by different disciplines for different purposes. Traditionally, qualitative data, specifically interviews, focus groups and observations, have been collected face-to-face with participants. In more recent years, digital technologies have contributed to the ongoing evolution of qualitative research. Digital technologies offer researchers different ways of recruiting participants and collecting data, and offer participants opportunities to be involved in research that is not necessarily face-to-face.

Research interviews are a fundamental qualitative research method 15 and are utilised across methodological approaches. Interviews enable the researcher to learn in depth about the perspectives, experiences, beliefs and motivations of the participant. 3 , 16 Examples include, exploring patients' perspectives of fear/anxiety triggers in dental treatment, 17 patients' experiences of oral health and diabetes, 18 and dental students' motivations for their choice of career. 19

Interviews may be structured, semi-structured or unstructured, 3 according to the purpose of the study, with less structured interviews facilitating a more in depth and flexible interviewing approach. 20 Structured interviews are similar to verbal questionnaires and are used if the researcher requires clarification on a topic; however they produce less in-depth data about a participant's experience. 3 Unstructured interviews may be used when little is known about a topic and involves the researcher asking an opening question; 3 the participant then leads the discussion. 20 Semi-structured interviews are commonly used in healthcare research, enabling the researcher to ask predetermined questions, 20 while ensuring the participant discusses issues they feel are important.

Interviews can be undertaken face-to-face or using digital methods when the researcher and participant are in different locations. Audio-recording the interview, with the consent of the participant, is essential for all interviews regardless of the medium as it enables accurate transcription; the process of turning the audio file into a word-for-word transcript. This transcript is the data, which the researcher then analyses according to the chosen approach.

Types of interview

Qualitative studies often utilise one-to-one, face-to-face interviews with research participants. This involves arranging a mutually convenient time and place to meet the participant, signing a consent form and audio-recording the interview. However, digital technologies have expanded the potential for interviews in research, enabling individuals to participate in qualitative research regardless of location.

Telephone interviews can be a useful alternative to face-to-face interviews and are commonly used in qualitative research. They enable participants from different geographical areas to participate and may be less onerous for participants than meeting a researcher in person. 15 A qualitative study explored patients' perspectives of dental implants and utilised telephone interviews due to the quality of the data that could be yielded. 21 The researcher needs to consider how they will audio record the interview, which can be facilitated by purchasing a recorder that connects directly to the telephone. One potential disadvantage of telephone interviews is the inability of the interviewee and researcher to see each other. This can be resolved by using software for audio and video calls online – such as Skype – to conduct interviews with participants in qualitative studies. Advantages of this approach include being able to see the participant if video calls are used, enabling observation of non-verbal communication, and the software can be free to use. However, participants are required to have a device and internet connection, as well as being computer literate, potentially limiting who can participate in the study. One qualitative study explored the role of dental hygienists in reducing oral health disparities in Canada. 22 The researcher conducted interviews using Skype, which enabled dental hygienists from across Canada to be interviewed within the research budget, accommodating the participants' schedules. 22

A less commonly used approach to qualitative interviews is the use of social virtual worlds. A qualitative study accessed a social virtual world – Second Life – to explore the health literacy skills of individuals who use social virtual worlds to access health information. 23 The researcher created an avatar and interview room, and undertook interviews with participants using voice and text methods. 23 This approach to recruitment and data collection enables individuals from diverse geographical locations to participate, while remaining anonymous if they wish. Furthermore, for interviews conducted using text methods, transcription of the interview is not required as the researcher can save the written conversation with the participant, with the participant's consent. However, the researcher and participant need to be familiar with how the social virtual world works to engage in an interview this way.

Conducting an interview

Ensuring informed consent before any interview is a fundamental aspect of the research process. Participants in research must be afforded autonomy and respect; consent should be informed and voluntary. 24 Individuals should have the opportunity to read an information sheet about the study, ask questions, understand how their data will be stored and used, and know that they are free to withdraw at any point without reprisal. The qualitative researcher should take written consent before undertaking the interview. In a face-to-face interview, this is straightforward: the researcher and participant both sign copies of the consent form, keeping one each. However, this approach is less straightforward when the researcher and participant do not meet in person. A recent protocol paper outlined an approach for taking consent for telephone interviews, which involved: audio recording the participant agreeing to each point on the consent form; the researcher signing the consent form and keeping a copy; and posting a copy to the participant. 25 This process could be replicated in other interview studies using digital methods.

There are advantages and disadvantages of using face-to-face and digital methods for research interviews. Ultimately, for both approaches, the quality of the interview is determined by the researcher. 16 Appropriate training and preparation are thus required. Healthcare professionals can use their interpersonal communication skills when undertaking a research interview, particularly questioning, listening and conversing. 3 However, the purpose of an interview is to gain information about the study topic, 26 rather than offering help and advice. 3 The researcher therefore needs to listen attentively to participants, enabling them to describe their experience without interruption. 3 The use of active listening skills also helps to facilitate the interview. 14 Spradley outlined elements and strategies for research interviews, 27 which are a useful guide for qualitative researchers:

Greeting and explaining the project/interview

Asking descriptive (broad), structural (explore response to descriptive) and contrast (difference between) questions

Asymmetry between the researcher and participant talking

Expressing interest and cultural ignorance

Repeating, restating and incorporating the participant's words when asking questions

Creating hypothetical situations

Asking friendly questions

Knowing when to leave.

For semi-structured interviews, a topic guide (also called an interview schedule) is used to guide the content of the interview – an example of a topic guide is outlined in Box 1. The topic guide, usually based on the research questions, existing literature and, for healthcare professionals, their clinical experience, is developed by the research team. The topic guide should include open ended questions that elicit in-depth information, and offer participants the opportunity to talk about issues important to them. This is vital in qualitative research where the researcher is interested in exploring the experiences and perspectives of participants. It can be useful for qualitative researchers to pilot the topic guide with the first participants, 10 to ensure the questions are relevant and understandable, and to amend the questions if required.

Regardless of the medium of interview, the researcher must consider the setting of the interview. For face-to-face interviews, this could be in the participant's home, in an office or another mutually convenient location. A quiet location is preferable to promote confidentiality, enable the researcher and participant to concentrate on the conversation, and to facilitate accurate audio-recording of the interview. For interviews using digital methods the same principles apply: a quiet, private space where the researcher and participant feel comfortable and confident to participate in an interview.

Box 1: Example of a topic guide

Study focus: Parents' experiences of brushing their child's (aged 0–5) teeth

1. Can you tell me about your experience of cleaning your child's teeth?

How old was your child when you started cleaning their teeth?

Why did you start cleaning their teeth at that point?

How often do you brush their teeth?

What do you use to brush their teeth and why?

2. Could you explain how you find cleaning your child's teeth?

Do you find anything difficult?

What makes cleaning their teeth easier for you?

3. How has your experience of cleaning your child's teeth changed over time?

Has it become easier or harder?

Have you changed how often and how you clean their teeth? If so, why?

4. Could you describe how your child finds having their teeth cleaned?

What do they enjoy about having their teeth cleaned?

Is there anything they find upsetting about having their teeth cleaned?

5. Where do you look for information/advice about cleaning your child's teeth?

What did your health visitor tell you about cleaning your child's teeth? (If anything)

What has the dentist told you about caring for your child's teeth? (If visited)

Have any family members given you advice about how to clean your child's teeth? If so, what did they tell you? Did you follow their advice?

6. Is there anything else you would like to discuss about this?
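
As an aside not in the original paper, a topic guide like the one in Box 1 can also be held as structured data, which makes piloting, reordering, and version control of questions straightforward. The sketch below encodes a shortened version of the Box 1 guide purely for illustration.

```python
# Minimal sketch: a topic guide stored as structured data (shortened version
# of Box 1, for illustration only), printed as a numbered guide.
TOPIC_GUIDE = {
    "study_focus": "Parents' experiences of brushing their child's (aged 0-5) teeth",
    "questions": [
        {"main": "Can you tell me about your experience of cleaning your child's teeth?",
         "prompts": ["How old was your child when you started?",
                     "Why did you start at that point?",
                     "How often do you brush their teeth?"]},
        {"main": "Could you explain how you find cleaning your child's teeth?",
         "prompts": ["Do you find anything difficult?",
                     "What makes it easier for you?"]},
    ],
}

for number, item in enumerate(TOPIC_GUIDE["questions"], start=1):
    print(f"{number}. {item['main']}")
    for prompt in item["prompts"]:
        print(f"   - {prompt}")
```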

Focus groups

A focus group is a moderated group discussion on a pre-defined topic, for research purposes. 28 , 29 While not aligned to a particular qualitative methodology (for example, grounded theory or phenomenology) as such, focus groups are used increasingly in healthcare research, as they are useful for exploring collective perspectives, attitudes, behaviours and experiences. Consequently, they can yield rich, in-depth data and illuminate agreement and inconsistencies 28 within and, where appropriate, between groups. Examples include public perceptions of dental implants and subsequent impact on help-seeking and decision making, 30 and general dental practitioners' views on patient safety in dentistry. 31

Focus groups can be used alone or in conjunction with other methods, such as interviews or observations, and can therefore help to confirm, extend or enrich understanding and provide alternative insights. 28 The social interaction between participants often results in lively discussion and can therefore facilitate the collection of rich, meaningful data. However, they are complex to organise and manage, due to the number of participants, and may also be inappropriate for exploring particularly sensitive issues that many participants may feel uncomfortable about discussing in a group environment.

Focus groups are primarily undertaken face-to-face but can now also be undertaken online, using appropriate technologies such as email, bulletin boards, online research communities, chat rooms, discussion forums, social media and video conferencing. 32 Using such technologies, data collection can also be synchronous (for example, online discussions in 'real time') or, unlike traditional face-to-face focus groups, asynchronous (for example, online/email discussions in 'non-real time'). While many of the fundamental principles of focus group research are the same, regardless of how they are conducted, a number of subtle nuances are associated with the online medium. 32 Some of which are discussed further in the following sections.

Focus group considerations

Some key considerations associated with face-to-face focus groups are: how many participants are required; should participants within each group know each other (or not) and how many focus groups are needed within a single study? These issues are much debated and there is no definitive answer. However, the number of focus groups required will largely depend on the topic area, the depth and breadth of data needed, the desired level of participation required 29 and the necessity (or not) for data saturation.

The optimum group size is around six to eight participants (excluding researchers) but can work effectively with between three and 14 participants. 3 If the group is too small, it may limit discussion, but if it is too large, it may become disorganised and difficult to manage. It is, however, prudent to over-recruit for a focus group by approximately two to three participants, to allow for potential non-attenders. For many researchers, particularly novice researchers, group size may also be informed by pragmatic considerations, such as the type of study, resources available and moderator experience. 28 Similar size and mix considerations exist for online focus groups. Typically, synchronous online focus groups will have around three to eight participants but, as the discussion does not happen simultaneously, asynchronous groups may have as many as 10–30 participants. 33

The topic area and potential group interaction should guide group composition considerations. Pre-existing groups, where participants know each other (for example, work colleagues) may be easier to recruit, have shared experiences and may enjoy a familiarity, which facilitates discussion and/or the ability to challenge each other courteously. 3 However, if there is a potential power imbalance within the group or if existing group norms and hierarchies may adversely affect the ability of participants to speak freely, then 'stranger groups' (that is, where participants do not already know each other) may be more appropriate. 34 , 35

Focus group management

Face-to-face focus groups should normally be conducted by two researchers; a moderator and an observer. 28 The moderator facilitates group discussion, while the observer typically monitors group dynamics, behaviours, non-verbal cues, seating arrangements and speaking order, which is essential for transcription and analysis. The same principles of informed consent, as discussed in the interview section, also apply to focus groups, regardless of medium. However, the consent process for online discussions will probably be managed somewhat differently. For example, while an appropriate participant information leaflet (and consent form) would still be required, the process is likely to be managed electronically (for example, via email) and would need to specifically address issues relating to technology (for example, anonymity and use, storage and access to online data). 32

The venue in which a face to face focus group is conducted should be of a suitable size, private, quiet, free from distractions and in a collectively convenient location. It should also be conducted at a time appropriate for participants, 28 as this is likely to promote attendance. As with interviews, the same ethical considerations apply (as discussed earlier). However, online focus groups may present additional ethical challenges associated with issues such as informed consent, appropriate access and secure data storage. Further guidance can be found elsewhere. 8 , 32

Before the focus group commences, the researchers should establish rapport with participants, as this will help to put them at ease and result in a more meaningful discussion. Consequently, researchers should introduce themselves, provide further clarity about the study and how the process will work in practice and outline the 'ground rules'. Ground rules are designed to assist, not hinder, group discussion and typically include: 3 , 28 , 29

Discussions within the group are confidential to the group

Only one person can speak at a time

All participants should have sufficient opportunity to contribute

There should be no unnecessary interruptions while someone is speaking

Everyone can be expected to be listened to and their views respected

Challenging contrary opinions is appropriate, but ridiculing is not.

Moderating a focus group requires considered management and good interpersonal skills to help guide the discussion and, where appropriate, keep it sufficiently focused. Avoid, therefore, participating, leading, expressing personal opinions or correcting participants' knowledge 3 , 28 as this may bias the process. A relaxed, interested demeanour will also help participants to feel comfortable and promote candid discourse. Moderators should also prevent the discussion being dominated by any one person, ensure differences of opinions are discussed fairly and, if required, encourage reticent participants to contribute. 3 Asking open questions, reflecting on significant issues, inviting further debate, probing responses accordingly, and seeking further clarification, as and where appropriate, will help to obtain sufficient depth and insight into the topic area.

Moderating online focus groups requires comparable skills, particularly if the discussion is synchronous, as the discussion may be dominated by those who can type proficiently. 36 It is therefore important that sufficient time and respect is accorded to those who may not be able to type as quickly. Asynchronous discussions are usually less problematic in this respect, as interactions are less instant. However, moderating an asynchronous discussion presents additional challenges, particularly if participants are geographically dispersed, as they may be online at different times. Consequently, the moderator will not always be present and the discussion may therefore need to occur over several days, which can be difficult to manage and facilitate and invariably requires considerable flexibility. 32 It is also worth recognising that establishing rapport with participants via online medium is often more challenging than via face-to-face and may therefore require additional time, skills, effort and consideration.

As with research interviews, focus groups should be guided by an appropriate interview schedule, as discussed earlier in the paper. For example, the schedule will usually be informed by the review of the literature and study aims, and will merely provide a topic guide to help inform subsequent discussions. To provide a verbatim account of the discussion, focus groups must be recorded, using an audio-recorder with a good quality multi-directional microphone. While videotaping is possible, some participants may find it obtrusive, 3 which may adversely affect group dynamics. The use (or not) of a video recorder, should therefore be carefully considered.

At the end of the focus group, a few minutes should be spent rounding up and reflecting on the discussion. 28 Depending on the topic area, it is possible that some participants may have revealed deeply personal issues and may therefore require further help and support, such as a constructive debrief or possibly even referral on to a relevant third party. It is also possible that some participants may feel that the discussion did not adequately reflect their views and, consequently, may no longer wish to be associated with the study. 28 Such occurrences are likely to be uncommon, but should they arise, it is important to further discuss any concerns and, if appropriate, offer them the opportunity to withdraw (including any data relating to them) from the study. Immediately after the discussion, researchers should compile notes regarding thoughts and ideas about the focus group, which can assist with data analysis and, if appropriate, any further data collection.

Qualitative research is increasingly being utilised within dental research to explore the experiences, perspectives, motivations and beliefs of participants. The contributions of qualitative research to evidence-based practice are increasingly being recognised, both as standalone research and as part of larger mixed-method studies, including clinical trials. Interviews and focus groups remain commonly used data collection methods in qualitative research, and with the advent of digital technologies, their utilisation continues to evolve. However, digital methods of qualitative data collection present additional methodological, ethical and practical considerations, but also potentially offer considerable flexibility to participants and researchers. Consequently, regardless of format, qualitative methods have significant potential to inform important areas of dental practice, policy and further related research.

Gussy M, Dickson-Swift V, Adams J . A scoping review of qualitative research in peer-reviewed dental publications. Int J Dent Hygiene 2013; 11 : 174–179.


Burnard P, Gill P, Stewart K, Treasure E, Chadwick B . Analysing and presenting qualitative data. Br Dent J 2008; 204 : 429–432.

Gill P, Stewart K, Treasure E, Chadwick B . Methods of data collection in qualitative research: interviews and focus groups. Br Dent J 2008; 204 : 291–295.

Gill P, Stewart K, Treasure E, Chadwick B . Conducting qualitative interviews with school children in dental research. Br Dent J 2008; 204 : 371–374.

Stewart K, Gill P, Chadwick B, Treasure E . Qualitative research in dentistry. Br Dent J 2008; 204 : 235–239.

Masood M, Thaliath E, Bower E, Newton J . An appraisal of the quality of published qualitative dental research. Community Dent Oral Epidemiol 2011; 39 : 193–203.

Ellis J, Levine A, Bedos C et al. Refusal of implant supported mandibular overdentures by elderly patients. Gerodontology 2011; 28 : 62–68.

Macfarlane S, Bucknall T . Digital Technologies in Research. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . 7th edition. pp. 71–86. Oxford: Wiley Blackwell; 2015.


Lee R, Fielding N, Blank G . Online Research Methods in the Social Sciences: An Editorial Introduction. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 3–16. London: Sage Publications; 2016.

Creswell J . Qualitative inquiry and research design: Choosing among five designs . Thousand Oaks, CA: Sage, 1998.

Guest G, Namey E, Mitchell M . Qualitative research: Defining and designing In Guest G, Namey E, Mitchell M (editors) Collecting Qualitative Data: A Field Manual For Applied Research . pp. 1–40. London: Sage Publications, 2013.


Pope C, Mays N . Qualitative research: Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311 : 42–45.

Giddings L, Grant B . A Trojan Horse for positivism? A critique of mixed methods research. Adv Nurs Sci 2007; 30 : 52–60.

Hammersley M, Atkinson P . Ethnography: Principles in Practice . London: Routledge, 1995.

Oltmann S . Qualitative interviews: A methodological discussion of the interviewer and respondent contexts Forum Qualitative Sozialforschung/Forum: Qualitative Social Research. 2016; 17 : Art. 15.

Patton M . Qualitative Research and Evaluation Methods . Thousand Oaks, CA: Sage, 2002.

Wang M, Vinall-Collier K, Csikar J, Douglas G . A qualitative study of patients' views of techniques to reduce dental anxiety. J Dent 2017; 66 : 45–51.

Lindenmeyer A, Bowyer V, Roscoe J, Dale J, Sutcliffe P . Oral health awareness and care preferences in patients with diabetes: a qualitative study. Fam Pract 2013; 30 : 113–118.

Gallagher J, Clarke W, Wilson N . Understanding the motivation: a qualitative study of dental students' choice of professional career. Eur J Dent Educ 2008; 12 : 89–98.

Tod A . Interviewing. In Gerrish K, Lacey A (editors) The Research Process in Nursing . Oxford: Blackwell Publishing, 2006.

Grey E, Harcourt D, O'Sullivan D, Buchanan H, Kipatrick N . A qualitative study of patients' motivations and expectations for dental implants. Br Dent J 2013; 214 : 10.1038/sj.bdj.2012.1178.

Farmer J, Peressini S, Lawrence H . Exploring the role of the dental hygienist in reducing oral health disparities in Canada: A qualitative study. Int J Dent Hygiene 2017; 10.1111/idh.12276.

McElhinney E, Cheater F, Kidd L . Undertaking qualitative health research in social virtual worlds. J Adv Nurs 2013; 70 : 1267–1275.

Health Research Authority. UK Policy Framework for Health and Social Care Research. Available at https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/ (accessed September 2017).

Baillie J, Gill P, Courtenay P . Knowledge, understanding and experiences of peritonitis among patients, and their families, undertaking peritoneal dialysis: A mixed methods study protocol. J Adv Nurs 2017; 10.1111/jan.13400.

Kvale S . Interviews . Thousand Oaks (CA): Sage, 1996.

Spradley J . The Ethnographic Interview . New York: Holt, Rinehart and Winston, 1979.

Goodman C, Evans C . Focus Groups. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . pp. 401–412. Oxford: Wiley Blackwell, 2015.

Shaha M, Wenzell J, Hill E . Planning and conducting focus group research with nurses. Nurse Res 2011; 18 : 77–87.

Wang G, Gao X, Edward C . Public perception of dental implants: a qualitative study. J Dent 2015; 43 : 798–805.

Bailey E . Contemporary views of dental practitioners' on patient safety. Br Dent J 2015; 219 : 535–540.

Abrams K, Gaiser T . Online Focus Groups. In Field N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 435–450. London: Sage Publications, 2016.

Poynter R . The Handbook of Online and Social Media Research . West Sussex: John Wiley & Sons, 2010.

Kevern J, Webb C . Focus groups as a tool for critical social research in nurse education. Nurse Educ Today 2001; 21 : 323–333.

Kitzinger J, Barbour R . Introduction: The Challenge and Promise of Focus Groups. In Barbour R S K J (editor) Developing Focus Group Research . pp. 1–20. London: Sage Publications, 1999.

Krueger R, Casey M . Focus Groups: A Practical Guide for Applied Research. 4th ed. Thousand Oaks, California: SAGE; 2009.

Author information

P. Gill: Senior Lecturer (Adult Nursing), School of Healthcare Sciences, Cardiff University. J. Baillie: Lecturer (Adult Nursing) and RCBC Wales Postdoctoral Research Fellow, School of Healthcare Sciences, Cardiff University. Correspondence to P. Gill.

Cite this article.

Gill, P., Baillie, J. Interviews and focus groups in qualitative research: an update for the digital age. Br Dent J 225 , 668–672 (2018). https://doi.org/10.1038/sj.bdj.2018.815

Accepted : 02 July 2018

Published : 05 October 2018

Issue Date : 12 October 2018

DOI : https://doi.org/10.1038/sj.bdj.2018.815





Types of Interviews in Research | Guide & Examples

Published on 4 May 2022 by Tegan George. Revised on 10 October 2022.

An interview is a qualitative research method that relies on asking questions in order to collect data . Interviews involve two or more people, one of whom is the interviewer asking the questions.

There are several types of interviews, often differentiated by their level of structure. Structured interviews have predetermined questions asked in a predetermined order. Unstructured interviews are more free-flowing, and semi-structured interviews fall in between.

Interviews are commonly used in market research, social science, and ethnographic research.

Table of contents

  • What is a structured interview?
  • What is a semi-structured interview?
  • What is an unstructured interview?
  • What is a focus group?
  • Examples of interview questions
  • Advantages and disadvantages of interviews
  • Frequently asked questions about types of interviews

Structured interviews have predetermined questions in a set order. They are often closed-ended, featuring dichotomous (yes/no) or multiple-choice questions. While open-ended structured interviews exist, they are much less common. The types of questions asked make structured interviews a predominantly quantitative tool.

Asking set questions in a set order can help you see patterns among responses, and it allows you to easily compare responses between participants while keeping other factors constant. This can mitigate biases and lead to higher reliability and validity. However, structured interviews can be overly formal, as well as limited in scope and flexibility.

Structured interviews are best used when:

  • You feel very comfortable with your topic. This will help you formulate your questions most effectively.
  • You have limited time or resources. Structured interviews are a bit more straightforward to analyse because of their closed-ended nature, and can be a doable undertaking for an individual.
  • Your research question depends on holding environmental conditions between participants constant.


Semi-structured interviews are a blend of structured and unstructured interviews. While the interviewer has a general plan for what they want to ask, the questions do not have to follow a particular phrasing or order.

Semi-structured interviews are often open-ended, allowing for flexibility, but follow a predetermined thematic framework, giving a sense of order. For this reason, they are often considered ‘the best of both worlds’.

However, if the questions differ substantially between participants, it can be challenging to look for patterns, lessening the generalisability and validity of your results.

Semi-structured interviews are best used when:

  • You have prior interview experience. It’s easier than you think to accidentally ask a leading question when coming up with questions on the fly. Overall, spontaneous questions are much more difficult than they may seem.
  • Your research question is exploratory in nature. The answers you receive can help guide your future research.

An unstructured interview is the most flexible type of interview. The questions and the order in which they are asked are not set. Instead, the interview can proceed more spontaneously, based on the participant’s previous answers.

Unstructured interviews are by definition open-ended. This flexibility can help you gather detailed information on your topic, while still allowing you to observe patterns between participants.

However, so much flexibility means that they can be very challenging to conduct properly. You must be very careful not to ask leading questions, as biased responses can lead to lower reliability or even invalidate your research.

Unstructured interviews are best used when:

  • You have a solid background in your research topic and have conducted interviews before
  • Your research question is exploratory in nature, and you are seeking descriptive data that will deepen and contextualise your initial hypotheses
  • Your research necessitates forming a deeper connection with your participants, encouraging them to feel comfortable revealing their true opinions and emotions

A focus group brings together a group of participants to answer questions on a topic of interest in a moderated setting. Focus groups are qualitative in nature and often study the group’s dynamic and body language in addition to their answers. Responses can guide future research on consumer products and services, human behaviour, or controversial topics.

Focus groups can provide more nuanced and unfiltered feedback than individual interviews and are easier to organise than experiments or large surveys. However, their small size leads to low external validity and the temptation as a researcher to ‘cherry-pick’ responses that fit your hypotheses.

A focus group is best used when:

  • Your research focuses on the dynamics of group discussion or real-time responses to your topic
  • Your questions are complex and rooted in feelings, opinions, and perceptions that cannot be answered with a ‘yes’ or ‘no’
  • Your topic is exploratory in nature, and you are seeking information that will help you uncover new questions or future research ideas

Depending on the type of interview you are conducting, your questions will differ in style, phrasing, and intention. Structured interview questions are set and precise, while the other types of interviews allow for more open-endedness and flexibility.

Here are some examples, ranging from closed, structured questions to more open-ended ones:

  • Do you like dogs? Yes/No
  • Do you associate dogs with feeling: happy; somewhat happy; neutral; somewhat unhappy; unhappy?
  • If yes, name one attribute of dogs that you like.
  • If no, name one attribute of dogs that you don’t like.
  • What feelings do dogs bring out in you?
  • When you think more deeply about this, what experiences would you say your feelings are rooted in?

Interviews are a great research tool. They allow you to gather rich information and draw more detailed conclusions than other research methods, taking into consideration nonverbal cues, off-the-cuff reactions, and emotional responses.

However, they can also be time-consuming and deceptively challenging to conduct properly. Smaller sample sizes can cause their validity and reliability to suffer, and there is an inherent risk of interviewer effect arising from accidentally leading questions.

Here are some advantages and disadvantages of each type of interview that can help you decide if you’d like to utilise this research method.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order.
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, but you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyse your data quickly and efficiently
  • Your research question depends on strong parity between participants, with environmental conditions held constant

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualise your initial thoughts and hypotheses
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing carefully worded, high-quality interview questions.

Cite this Scribbr article

George, T. (2022, October 10). Types of Interviews in Research | Guide & Examples. Scribbr. Retrieved 6 May 2024, from https://www.scribbr.co.uk/research-methods/types-of-interviews/

Conduct one-to-one qualitative interviews for research

Affiliation.

  • School of Social Sciences, Cardiff University, Cardiff, UK.
  • PMID: 27113858
  • DOI: 10.1080/14739879.2016.1176874

Research methodology series: Interviewing in qualitative research: The one-to-one interview

Related Papers

Munyaradzi Madziwa

Towards this end, various qualitative and quantitative methodologies are available for data collection, of which interviewing is one. This paper's purpose is to discuss interviewing as a data collection method, focusing in particular on its value, strengths and weaknesses. For the purposes of this discussion, interviews are defined as controlled conversations that the interviewer uses to obtain the required data from the respondent by asking questions verbally (Akbayrak: 2000). The essay does not delve into different interviewing techniques, but tackles interviewing in the collective. Interviews are a key qualitative data collection method for social research. There are many reasons to use interviews as a research instrument for collecting data. They are mainly useful where there is a need to obtain highly personalised data, and where there are opportunities to probe for underlying factors. They also become a viable option where respondents are limited in number and a good return rate is important, and where respondents are not fluent in the native language of a country or have difficulties with written language (Gray: 2004). The main advantage of interviews stems from their capacity to offer a complete description and analysis of a research subject, without limiting the scope of the research or the nature of participants' responses (Collis & Hussey, 2003). Interviews are thus useful for gaining insight into, and context for, a topic. They can elicit information to which only the interviewee is privy, whereas other data collection methods, such as questionnaires, may act as blinkers on the responses obtained. They therefore become critical for discovery-oriented research where the researcher is, in advance, only roughly aware of what they are looking for. In an interview there is leeway for respondents to describe what is important to them, and useful quotes and stories can also be collected from their responses. In response to the need for a complete description and analysis of the subject matter, interviews facilitate, from the outset, accurate screening for the right interviewee. Because the information sought has to be in-depth, accurate and reliable, the interviewer has to find the right individual who holds the desired information: if the assessment concerns certain work processes, then individuals directly involved in, or directly affected by, the work are purposefully sampled. Face-to-face interviews go further in making screening more accurate, as an individual being interviewed is unable to provide false information in response to screening questions about, for example, gender, age or race (Akbayrak: 2000).

Jawad Hussain Awan

Research is a scholarly endeavour that helps to establish a revised view of a particular area by following the prerequisite procedures needed to establish authenticity; without these, the purpose of research cannot be achieved. Careful academic work is required, because research demands that several processes be followed. Clifford Woody explains that research encompasses defining and redefining problems, formulating hypotheses or suggesting solutions; collecting, organising and evaluating data; making deductions and reaching conclusions; and, above all, carefully testing the conclusions to determine whether they fit the hypotheses. Educational research work requires the student to seek guidance in gathering material and organising it systematically. Using the interview method to assemble the required records is a useful approach that may suit a particular problem, alongside using data and questionnaires, conducting careful tests, and preserving, categorising and interpreting facts. After recognising and identifying the problem, the researcher tries to work out an investigational plan to collect the desired facts in an effective manner. In this paper, the interview method has been chosen for the collection of data. Like other research tools, it is very important for research purposes. The method comprises numerous types, a few of which are discussed in this article. The interview method involves the presentation of oral-verbal stimuli and the collection of oral-verbal responses. It offers a variety of interview types, as discussed in the paper, which help the researcher to acquire the exact information required.

Shannon Oltmann

Interviews are a staple method used in qualitative research. Many authors hold face-to-face interviews to be the gold standard, or the assumed best mode in which to conduct interviews. However, a large number of research projects are based on conducting interviews via telephone. While some scholars have addressed the advantages and disadvantages of using telephones to conduct interviews, this work is scattered across multiple disciplines and lacks a cohesive, comprehensive framework. The current article seeks to rectify this gap in the literature, by explicitly developing the constructs of the interviewer context and the respondent context. By examining key components in each of these contexts, the qualitative interviewer can make an informed, reflective decision about the best interview mode to use for a particular project.

De Wet Schutte

Abstract: All empirical research involves some form of data collection. One of the approaches commonly used in the human sciences is survey research. This article focuses on the various forms of interviews and on the questionnaire technique as a data collection instrument often associated with surveys. It places the different interview types on a continuum, ranging from structured to unstructured interviews, and puts them into perspective against two underlying types of data, namely qualitative and quantitative data. The article sensitises the prospective researcher to some pitfalls of using the interview as a data collection technique and includes some practical hints for the prospective researcher when using interviews in practice. It also attempts to bring order to the vocabulary around the concepts of procedure and technique.


13.1 Interview research: What is it and when should it be used?

Learning objectives.

  • Define interviews from the social scientific perspective
  • Identify when it is appropriate to employ interviews as a data-collection strategy

Knowing how to create and conduct a good interview is an essential skill. Interviews are used by market researchers to learn how to sell their products. Journalists use interviews to get information from a host of people, from VIPs to random people on the street. Police use interviews to investigate crimes. It seems everyone who’s anyone knows how to conduct an interview.

In social science,  interviews are a method of data collection that involves two or more people exchanging information through a series of questions and answers. The questions are designed by a researcher to elicit information from interview participants on a specific topic or set of topics. These topics are informed by the author’s research questions. Interviews typically involve an in-person meeting between two people (an interviewer and an interviewee), but interviews need not be limited to two people, nor must they occur in-person.

You may be wondering when you should choose interviews as your data collection method. Interviews are an excellent way to gather detailed information. They also have an advantage over surveys, as they can be adapted as you learn more information. Recall that survey data collection methods do not allow researchers to change the questions that are administered, even if a participant’s response sparks some follow-up question in your mind. All participants must be asked the same questions in the same manner. The questions you decided to put on your survey during the design stage determine what data you get. In an interview, however, you can follow up on new and unexpected topics that emerge during the conversation. Trusting in emergence and learning from your participants are hallmarks of qualitative research. In this way, interviews are a useful method to employ when you want to know the story behind the responses you might receive in a written survey.

Interviews are also useful when your topic is rather complex, requires lengthy explanation, or needs a dialogue between two people to thoroughly investigate. Additionally, interviews may be the best method to utilize if your study involves describing the process by which a phenomenon occurs, like how a person makes a decision. For example, you could use interviews to gather data about how people reach the decision not to have children and how others in their lives have responded to that decision. To understand these processes, you would need to exchange dialogue with respondents. When they begin to share their story with you, new questions that hadn’t occurred to you in prior interviews will arise because each person’s story is unique. Further, closed-ended survey questions would not be as effective in capturing the complex process of choosing not to have children.

In sum, interview research is especially useful when the following are true:

  • You wish to gather very detailed information
  • You anticipate wanting to ask respondents follow-up questions based on their responses
  • You plan to ask questions that require lengthy explanation
  • You are studying a topic that is complex or potentially confusing to respondents
  • You are studying processes, such as how people make decisions

Key Takeaways

  • Understanding how to design and conduct interview research is a useful skill to have.
  • In a social scientific interview, two or more people exchange information through a series of questions and answers.
  • Interview research is often used when detailed information is required and when a researcher wishes to examine processes.

Interviews: a method of data collection that involves two or more people exchanging information through a series of questions and answers

Scientific Inquiry in Social Work Copyright © 2018 by Matthew DeCarlo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Interdisciplinary research approach based on a mixed-methods design to explore patient altruism at the end of life: a study protocol

  • Mathieu Bernard 1 (http://orcid.org/0000-0003-2823-8806),
  • Claudia Gamondi 2,
  • Anca-Cristina Sterie 3,
  • Philip J Larkin 4,
  • Ralf Jox 5,
  • Gian Domenico Borasio 2
  • 1 Palliative and Supportive Care Service, Chair of Palliative Psychology, Lausanne University Hospital and University of Lausanne, Lausanne, Vaud, Switzerland
  • 2 Palliative and Supportive Care Service, Lausanne University Hospital and University of Lausanne, Lausanne, Vaud, Switzerland
  • 3 Palliative and Supportive Care Service and Service of Geriatric Medicine and Geriatric Rehabilitation, Chair of Geriatric Palliative Care and Chair of Palliative Psychology, Lausanne University Hospital and University of Lausanne, Lausanne, Vaud, Switzerland
  • 4 Palliative and Supportive Care Service, Chair of Palliative Care Nursing, Lausanne University Hospital and University of Lausanne, Lausanne, Vaud, Switzerland
  • 5 Palliative and Supportive Care Service and Service of Geriatric Medicine and Geriatric Rehabilitation, Chair of Geriatric Palliative Care, Lausanne University Hospital and University of Lausanne, Lausanne, Vaud, Switzerland
  • Correspondence to Dr Mathieu Bernard; mathieu.bernard@chuv.ch

Introduction: In the end of life context, patients are often seen as somewhat passive recipients of care provided by health professionals and relatives, with little opportunity to be perceived as autonomous and active agents. Since studies show a very high prevalence of altruistic dispositions in palliative care patients, we strive to investigate the concept of patient altruism in a set of six interdisciplinary studies by considering three settings: (1) in the general palliative context—by studying to what extent patient altruism is associated with essential psychological outcomes of palliative care (subproject 1a), how altruism is understood by patients (subproject 1b) and how altruism expressed by patients is experienced by palliative care nurses (subproject 1c); (2) in two concrete decision-making contexts—advance care planning (subproject 2a) and assisted suicide (subproject 2b); and (3) through verbal and non-verbal patient communication in palliative care settings (subproject 3).

Methods and analysis: Subproject 1a: a cross-sectional study using validated and standardised questionnaires. Subprojects 1b and 1c: a constructivist grounded theory method aiming at developing a novel theory from semistructured interviews in both patients and nurses. Subproject 2a: a thematic analysis based on (1) audio-recordings of advance care planning encounters and (2) follow-up semidirective interviews with patients and their relatives. Subproject 2b: a qualitative study based on thematic analysis of interviews with patients actively pursuing assisted suicide and one of their relatives. Subproject 3: a conversation analysis based on audio and video-recorded interactions in two settings: (1) palliative inpatient unit and (2) advance care planning discussions.

Ethics and dissemination: The study project was approved by the Ethics Committees of the Canton of Vaud, Bern and Ticino (no: 2023-00088). In addition to participation in national and international conferences, each project will be the subject of two scientific publications in peer-reviewed journals. Additional publications will be realised according to result triangulation between projects. A symposium open to professionals, patients and the public will be organised in Switzerland at the end of the project.

  • PALLIATIVE CARE
  • Adult palliative care
  • SOCIAL MEDICINE

This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See:  https://creativecommons.org/licenses/by/4.0/ .

https://doi.org/10.1136/bmjopen-2024-085632


STRENGTHS AND LIMITATIONS OF THIS STUDY

We propose a set of multidisciplinary, complementary and interlinked studies to explore altruism in palliative care in three different settings.

A complex concept such as patient altruism can only be addressed if several disciplines and methods are used.

Each subproject is conceived to explore a specific facet of patient altruism at the end of life and relies on a specific methodology of analysis corresponding to their research questions.

A joint phase of data and results triangulation between the subprojects will allow exchanges between teams on results and new interpretations of phenomena.

As a limitation, patients and the public were not involved in the design of the study.

Introduction

Palliative care challenges.

In palliative care, the psychological domain is an essential component of patient care. Previous research in the field has mostly focused on improving the pharmacological and psychological treatments for the most frequent psychopathologies, that is, anxiety, depression and adjustment disorders. 1 2 Less is known concerning the factors that foster psychological well-being, autonomy and self-determination, thus improving quality of life. This approach, which has been termed ‘positive psychology’, 3 parallels the resource-oriented, health-promoting approach known as salutogenesis. In recent years, palliative care patients have been considered as active participants in their own dying, but only concerning specific and well-defined aspects of the end of life, namely, the so-called decisions concerning end of life (ie, advance directives, withholding and withdrawing life-sustaining treatment, and requests for hastened death/assisted suicide where legally possible). In all other aspects concerning the last phase of life, patients are rather regarded as passive recipients of care, with only little opportunity to be perceived as ‘agents’ who act, give and contribute something by and of themselves. One possible way of fostering patient autonomy and well-being at the end of life is to consider the patient in line with the concept of ‘patient empowerment’. While this paradigm is trending in many medical disciplines, 4 it has had little impact in the field of palliative care so far, with the exception of those situations where decisions need to be made. It is precisely for this reason that we place the concept of patient altruism, as a form of patient agency, at the centre of this project.

Nowadays, the concept of altruism is used in various disciplines such as psychology, philosophy, economics, sociology and evolutionary biology. 5 6 Numerous authors tend to apply this concept to any prosocial behaviour or action carried out on a voluntary basis and aiming to benefit the society at large, specific individuals or a group of individuals. 5 7–9 While altruistic behaviour is primarily explained as an individual constitutive characteristic, 10–13 this behaviour has to be understood by considering its sources of motivation (benevolence, 11 empathy, 14 reward, 15 norms 16 and anger 5 17 ). Social norms and interaction rules (social responsibility, group gain, reciprocity, negotiated rules) have also been identified as determinants of altruistic behaviours. 18 To adequately address this multilevel concept (personality, motivations, social norms), an interdisciplinary approach appears necessary, which is also underscored by the different disciplines involved in this project (humanities, social sciences, medical sciences and nursing).

Although palliative care addresses all people suffering from a life-limiting illness, most palliative care patients who benefit from it are close to the end of their life and are of older age. It is well established that older people report higher altruism levels than younger people. 19 This is in agreement with life span developmental theories that have shown a change in motivation orientation during the last phase of existence, which promotes a sense of realisation and meaning in life. 20 While meaning in life tends to increase with age, 21 22 the sources of meaning change: Sparrow and Spaniol showed that, with increasing age, intrinsic values (such as authenticity, intimacy, spirituality but also altruism) are prioritised over extrinsic goals, such as achievement, competence and power. 23 Other authors mention a shift towards meaningful social goals focusing on others, especially close ones, after the recognition that time is becoming more limited as age increases. 24 Vollhardt suggests that altruism may also be a consequence of suffering after adverse life events, such as a life-threatening illness. 25 26 In line with these findings, Fegg et al showed that palliative care patients consistently reported higher self-transcendent and altruistic values as compared with healthy adults. 27 In addition, palliative care patients cite the social dimension as a source of meaning in life more often than the general population. 28

Since the fragility of patients at the end of life often reduces their capacity to undertake concrete altruistic actions by themselves, we need to broaden our understanding of altruism by also taking into account the attitudes and values at the origin of altruistic behaviours. In the theoretical model of basic human values developed by Schwartz and Cieciuch, the values of ‘benevolence’ and ‘universalism’ (‘self-transcendence’) refer to concern for others’ welfare and represent an internal motivation promoting social relations. 29 30 Since there is currently no clear definition or conceptual framework of altruism that has been applied to palliative care patients, we propose defining altruism in this context as: ‘a personal and intentional interest in improving the well-being and welfare of others—at an individual, group or societal level’. By emphasising the notion of ‘interest’, we aim to include the different levels of manifestations of altruism that might be relevant to palliative care, including the (advance) decisions made by patients and the interactions upon which these decisions are based.

We therefore propose a set of multidisciplinary, complementary and interlinked studies to explore altruism in palliative care in three different settings:

The first level refers to patient altruism in the general context of palliative care:

We first aim to study to what extent altruism, considered in terms of prosocial behaviour and self-transcendent value, is associated with important outcomes and indicators of palliative care (subproject 1a).

We then aim to explore (1) how altruism is defined and understood by the palliative care patients themselves (subproject 1b) and (2) how altruism from patients is experienced by palliative care nurses, the professionals who are closest to patients in clinical practice, and how the expression of altruism by patients and families towards nurses influences professional meaning and fulfilment (subproject 1c).

The second level considers how altruism might be expressed and realised through specific decisions between patients and their social environment by considering two specific decision-making contexts of high importance for palliative care: advance care planning (ACP, subproject 2a) and assisted suicide (subproject 2b). Both represent profound existential expressions of autonomy, which are closely related to the patients’ values and attitudes.

The third level concerns how altruism might be expressed in social interactions (both verbal and non-verbal), and how the interaction itself might have an altruistic dimension. We consider moments in which patients are involved in interaction by their own volition, and in which such interaction might bring a potential benefit to the other (subproject 3).

This project is anchored in an interdisciplinary effort bringing together the expertise of several researchers in palliative healthcare. They are therefore representative of the interdisciplinary nature of palliative care, as defined by the WHO. 31 Mathieu Bernard, head of subprojects 1a and 1b, is a psychologist; Phil J Larkin, head of subproject 1c, is a nurse; Ralf Jox, head of subproject 2a, and Claudia Gamondi, head of subproject 2b, are physicians, as is Gian Domenico Borasio, the project coordinator. Finally, Anca-Cristina Sterie, head of subproject 3, is a sociologist specialised in interactions in the medical setting.

The repercussions of this project could be manifold: first, it could inform the design and testing of an altruism-based intervention for palliative care patients, which could represent an important new step in the development of efficient, resource-oriented palliative care. Second, such interventions would have the potential to restore dignity and autonomy for patients in the last phase of life by allowing them, if they so wish, to assume a more active role. Third, the expression of patient altruism towards family members and healthcare professionals could also profoundly affect the latter two: it could diminish their distress and ease their grieving, improve relationships, and may provide a model for them to become more altruistic themselves.

Subproject 1a

A cross-sectional study using validated and standardised questionnaires will be conducted in both the French-speaking and German-speaking parts of Switzerland (two university hospitals and five palliative care centres), and two palliative care centres in the Italian-speaking part of Switzerland.

Inclusion criteria

18 years or older.

Treated by one of the palliative care teams.

<6 months’ life expectancy according to the treating physician.

In a medically stable state, or when their state has improved.

Exclusion criteria

Evidence of psychiatric and cognitive symptoms that might significantly alter the decision-making capacity.

Insufficient knowledge of the local language.

Questionnaires and procedure

For the purpose of the study, we will use:

The Prosocialness Scale for Adults. 32

The benevolence (four items) and universalism (six items) subscales of the 40-item Portrait Values Questionnaire. 30

Quality of life will be measured with the McGill Quality of Life Scale Revised version. 33

Psychological distress will be measured with the Hospital Anxiety and Depression Scale. 34

Patients’ feeling of being a burden to their caregivers will be assessed with the Self-Perceived Burden Scale. 35

Meaning in life will be assessed with the Meaning in Life Questionnaire. 36

The will to live intensity will be measured with a single-item Numerical Rating Scale. 37

These questionnaires were chosen because they measure dimensions that have been identified as psychological determinants of quality of life in palliative care. All new patients admitted in the palliative care centres participating in the study and who fulfil the inclusion and exclusion criteria as assessed by the referring physician will be asked by a research assistant to participate in the study as soon as an appropriate acute symptom control has been reached. Written informed consent will be obtained.

Validated translations or cross-cultural adaptations of the questionnaires will be used in face-to-face interviews. In addition, sociodemographic variables and medical data (principal diagnosis, comorbidities, performance status) will be obtained. Finally, questions using Numerical Rating Scales (0–10) will assess the level of stress and the potential for personal development induced by the questionnaires. Such Numerical Rating Scales will also be used to assess (1) to what extent altruism is an important concept in the end of life context, and (2) to what extent patients feel frustrated at not being able to express altruism towards others.

Data analysis strategy

Rates will be reported for the recruitment information. Descriptive statistics will be calculated for sociodemographic variables and all outcomes. Missing outcome data will be treated as mentioned in the scoring manuals. Distribution of each outcome measure will be analysed using normality tests and other indicators of distribution (box-plots, skewness and kurtosis estimations). Associations between outcome variables will be assessed using Pearson or Spearman correlations according to distributions. Univariate and multiple linear regressions will be performed in order to determine whether altruism can be considered as a significant explanatory variable (in addition to psychological distress, desire to live, feeling of being a burden and meaning in life) for quality of life. We will use an alpha threshold of <0.05 as a criterion for rejecting the null hypothesis. All models will be controlled for sociodemographic characteristics.
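As a rough illustration of this analysis plan, the sketch below shows how the correlation and regression steps might look in Python once the scale scores are assembled. The column names (qol, prosocialness, distress, burden, meaning, will_to_live, age, gender) and the input file are hypothetical placeholders rather than the study's actual variables, and scoring is assumed to follow the instrument manuals.

```python
# Illustrative sketch only: hypothetical column names, not the study's actual dataset.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical file containing one row per patient with the computed scale scores.
df = pd.read_csv("subproject_1a_scores.csv")

# Check distributions before choosing a correlation coefficient.
pair = df[["qol", "prosocialness"]].dropna()
looks_normal = all(
    stats.shapiro(pair[col]).pvalue > 0.05 for col in ["qol", "prosocialness"]
)

# Pearson if both variables look roughly normal, Spearman otherwise.
if looks_normal:
    r, p = stats.pearsonr(pair["qol"], pair["prosocialness"])
else:
    r, p = stats.spearmanr(pair["qol"], pair["prosocialness"])
print(f"altruism-quality of life correlation: r = {r:.2f}, p = {p:.3f}")

# Multiple linear regression: is altruism a significant explanatory variable for
# quality of life alongside the other psychological determinants, controlling for
# sociodemographic characteristics? (alpha = 0.05)
model = smf.ols(
    "qol ~ prosocialness + distress + burden + meaning + will_to_live + age + C(gender)",
    data=df,
).fit()
print(model.summary())
```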

Sample size

Multiple linear regressions represent the crucial elements for assessing the number of participants to be included to ensure sufficient power. According to Howell, 38 we aim to recruit 15 patients per explanatory variable, that is, 120 patients with complete data in this study (50 in the French-speaking and German-speaking parts and 20 in the Italian-speaking part, according to population representativeness). Estimating an exclusion rate of 60% and a refusal rate of 50% based on previous studies in the same context, 39–41 600 patients in total will need to be screened for study eligibility (250 in the French-speaking and German-speaking parts and 100 in the Italian-speaking part).
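The recruitment arithmetic can be reproduced in a few lines; the figures below simply restate the protocol's own numbers (15 participants per explanatory variable, a 60% exclusion rate and a 50% refusal rate), so this is a worked illustration rather than a formal power analysis.

```python
# Worked illustration of the recruitment arithmetic described above.
predictors = 8            # implied by 120 / 15 in the protocol (altruism plus the other explanatory variables and controls)
per_predictor = 15        # rule of thumb cited from Howell: 15 participants per explanatory variable
target_complete = predictors * per_predictor
print(target_complete)    # 120 patients with complete data

exclusion_rate = 0.60     # proportion of screened patients expected to be excluded
refusal_rate = 0.50       # proportion of eligible patients expected to refuse
retention = (1 - exclusion_rate) * (1 - refusal_rate)   # 0.20 of screened patients end up participating

to_screen = target_complete / retention
print(round(to_screen))   # 600 patients to screen in total
```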

Subproject 1b

Procedure and sample size.

A subsample of participants from project 1a will be invited to participate in subproject 1b. Agreement to participate will be sought at the end of project 1a data collection. We estimate a maximum sample of 45 participants (approximately 15 per linguistic region), depending on data saturation (REF) and the need to explore items in greater or lesser depth. Therefore, the final sample may not be determined until the end of the study. We will aim to maximise the sample variation by ensuring diversity across gender, age, setting of care, illness type (eg, cancer vs non-cancer) and also by considering the score on the two questionnaires assessing altruism.

A constructivist grounded theory method will be adopted for projects 1b and 1c. 42 In this qualitative method, the researcher inductively analyses individual and collective actions as well as social and psychological processes, approaching the data with no preconceived ideas or hypotheses, accepting that there may be multiple perspectives and acknowledging the importance of their own place in constructing those perspectives with participants. Data collection and analysis are undertaken in parallel and decisions on sampling are revised as the project develops, a process referred to as ‘theoretical sampling’.

An initial interview guide will be developed based on topics derived from data collected in subproject 1a and from research questions of subproject 1b. Semistructured interviews will be conducted by a researcher trained in qualitative methodology and grounded theory data analysis, supported by qualitative research experts. The semistructured approach enables the interviewer to vary the wording and sequence of the questions and pursue leads provided by the participants, while being respectful of the burden of interviewing a frail and potentially vulnerable population.

The interview guide will be developed by the research team, comprising native speakers in French, German, Italian and English. The team will meet after the first five interviews in each region to discuss emerging themes and agree if new directions in terms of revised questions are needed. Theoretical saturation will be determined first in each region and later agreed through consensus with the wider research team.

Data analysis

All interviews will be digitally recorded and transcribed in the language of origin. To account for language and cultural differences, interpretation and analysis of the data will be carried out by research collaborators in each region, trained in qualitative methodology and supported by the local research partners. Data will be entered into MAXQDA software to assist analysis.

In accordance with constructivist grounded theory methods, a broad pool of codes will be obtained during an open (initial) coding using interview data, notes and memos. In a second phase of axial (focused) coding, all individual codes will be sorted and resorted as concepts and common themes are developed. 42 In the final step (theoretical coding), overarching themes will be grouped into more refined categories and central concepts, again referring to wider data sources (notes, memos, etc). Each coding step will be conducted in the individual language for the region and data shared through regular collaborative meetings to ensure accuracy and quality. An iterative discussion process will be engaged throughout the study, in order to reach a substantial agreement between researchers in terms of major themes and any cultural or regional difference identified. The final themes will be translated into English as the common language for the presentation of data and subjected to a second-level inductive analysis and comparison with the host language for accuracy in terms of conceptual equivalence. The emerging theory will be co-constructed by the research team.
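For readers less familiar with these three coding stages, the toy structure below sketches how open codes might be grouped into focused categories and then under a central concept. The codes are invented for illustration only and do not come from the study data or from MAXQDA.

```python
# Purely illustrative: invented codes showing the shape of the three coding stages,
# not actual study data or MAXQDA output.
open_codes = [
    "wanting to spare the family worry",
    "giving advice to other patients",
    "thanking the nursing team",
    "donating belongings",
]

# Focused (axial) coding: open codes sorted into provisional categories.
focused_coding = {
    "protecting close others": [
        "wanting to spare the family worry",
    ],
    "giving to the wider community": [
        "giving advice to other patients",
        "donating belongings",
    ],
    "acknowledging caregivers": [
        "thanking the nursing team",
    ],
}

# Theoretical coding: categories grouped under a central concept.
theoretical_coding = {
    "altruism as a form of agency at the end of life": list(focused_coding),
}

for concept, categories in theoretical_coding.items():
    print(concept, "->", categories)
```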

Subproject 1c

We first begin with a conceptual analysis of altruism to determine its antecedents, attributes and relationships with palliative care nursing. Conceptual analyses are of particular benefit where a concept requires refinement within a specific discipline or context. 43 This will also inform the development of the interview guide and the outcome can be refined during interviews with the nursing participants.

Following the constructivist grounded theory principles outlined in subproject 1b, 30 semistructured in-depth interviews with nurses working in palliative care nursing practice will be undertaken. Participants will be accessed through the same university hospitals and palliative care centres collaborating for subprojects 1a and 1b (French, German and Italian). Interviews will provide concrete examples of the nurses’ experience of altruism in practice and address the meaning and impact of altruism for the palliative care nurse practitioner.

Inclusion criteria for nurses

Working at specialist or generalist level within an agreed palliative care setting.

Having worked in a full-time or part-time capacity for at least 6 months to be able to reflect and discuss their professional experience and offer concrete examples from practice.

Willing to be interviewed and recorded.

Able to converse in either French, German, Italian or English.

Since the division between specialist and generalist palliative care nursing roles is ill-defined, interviews with a broader nursing sample will help to understand the broader facilitators and barriers to altruism and to seek further expressions and meanings of altruism within a wider professional nursing cohort. Ten interviews per linguistic region are planned.

The process of data analysis described in subproject 1b will also be applied to subproject 1c for symmetry and completeness of data across subprojects 1b and 1c. It will then afford opportunity to consider both datasets for similarities and disparities in the understanding and expression of altruism.

Subproject 2a

In order to address the aims of subproject 2a, a qualitative design will be used with three types of data: (1) audio or video-recorded routine ACP discussions between a facilitator, a patient and their relative(s) if applicable; (2) semistructured follow-up interviews with the persons having participated in these discussions; and (3) semistructured interviews with the ACP facilitators.

Inclusion criteria for patients

Receiving general or specialised palliative care.

Inclusion criteria for relatives

Being nominated and recruited by the patient.

Capable of participating in an interview in French or German.

Inclusion criteria for ACP facilitators

Acting as a facilitator of ACP discussions with the patient (and his or her relatives).

Being trained as an ACP facilitator.

Exclusion criteria for patients and relatives

Evidence of psychiatric or cognitive symptoms significantly altering the decision-making capacity.

We will follow a sampling strategy combining cluster sampling and convenience sampling. The cluster sampling will be oriented towards variation regarding gender, age groups and health status of the patient participants (oncological, neurological and cardiorespiratory illnesses). The convenience part of the sampling relates to the fact that we will recruit in structured and professionally facilitated ACP activities in the university hospitals of both the French and German parts of Switzerland, and in two palliative care centres of the Italian part of Switzerland. It is planned to conduct 9–12 qualitative interviews with ACP facilitators (3–4 per language region), and to register 24 audio-recorded routine ACP discussions (8 in each language area), as well as 24 follow-up interviews with patients and 24 follow-up interviews with relatives having participated in the ACP discussions.

Data collection

The ACP discussions will be audio or video-recorded. Audio-recordings will be transcribed verbatim. The transcripts will then be coded by the investigator and analysed by the researcher.

The semistructured interviews will be conducted face-to-face with the patients, their relatives and the ACP facilitators in the participants’ native language, audio-taped and integrally transcribed, anonymising all personal details. The interview grid will be constructed following both an inductive and a deductive approach. The initial grid will be derived from the research team’s experience of the topic and from the literature. Five pilot interviews will be carried out to test the interview design. Whether the pilot interviews will be included in the dataset will be decided depending on the evaluation of their transcripts.

Video and audio data will serve for further analysis in subproject 3.

For data analysis, please see below at subproject 2b.

Subproject 2b

To address the aims of subproject 2b, a qualitative study will be used based on semistructured interviews with patients and one of their relatives regarding the patient’s expression of their wish to die by assisted suicide. Participants will be recruited in the French, German and Italian-speaking parts of Switzerland.

Inclusion criteria

Expression of a wish to die by assisted suicide.

Being registered to a right to die association in Switzerland.

Capable of participating in an interview in French or Italian or German.

Having been informed by the patient of their intention to obtain assistance in suicide.

Exclusion criteria

Evidence of psychiatric symptoms or cognitive impairment that might significantly alter the decision-making capacity.

Recruitment will take place through different sources: right to die associations and providers operating in the field of specialised palliative care. Snowball sampling will also be used. Ten patients and ten relatives from the French-speaking part of Switzerland, ten patients and ten relatives from the German-speaking part of Switzerland, and five patients and five relatives from the Italian-speaking part of Switzerland will be recruited.

We intend to disseminate information about the study through two main channels:

Specialised palliative care centres of the university hospitals in the French and German parts of Switzerland, and two palliative care centres of the Italian part of Switzerland. Recruitment will be non-systematic: health professionals within these services will inform the study investigator when a patient meets the criteria.

Right to die societies will inform their members who are actively pursuing an assisted suicide decision about the study.

Interviews will be conducted face-to-face with both the patients and their relatives. The interview grid will be constructed following the same procedure as described for subproject 2a.

Data analysis for subprojects 2a and 2b

The approach for data analysis is the same for subprojects 2a and 2b. To account for language and cultural differences, each coding step will be conducted in the regional language in which data were collected and data will then be shared through regular collaborative meetings. To ensure accuracy and quality in data generated, results will be translated into English and merged for a final transversal analysis.

The analysis will follow an inductive paradigm derived from thematic analysis. 44 The interviews will be fragmented into significant text units, to which codes, or designations able to synthetically account for their content, will be assigned. The identified codes will be linked and grouped into larger categories to define more abstract concepts around which to organise the various arguments. 25% of the material will be double-coded independently by another researcher affiliated to the project, to allow for parallel coding. For subproject 2a, each data subset (ACP encounters and follow-up interviews) will be analysed individually as well as jointly. These operations will be made with the support of the specialised software for qualitative data analysis MAXQDA.
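The protocol does not name an agreement statistic for the double-coded 25% of the material, but if one were wanted, Cohen's kappa is a common choice; the sketch below, using invented code labels and scikit-learn, shows how it could be computed on the segments coded by both researchers.

```python
# Hypothetical example: comparing two coders' labels on the double-coded segments.
# The code labels are invented for illustration; the protocol itself does not
# specify an agreement statistic.
from sklearn.metrics import cohen_kappa_score

coder_a = ["protecting others", "gratitude", "legacy", "gratitude", "protecting others"]
coder_b = ["protecting others", "gratitude", "gratitude", "gratitude", "protecting others"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa between coders: {kappa:.2f}")
```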

Our analysis will also account for the fact that the decisions involved in these subsets of data (ACP and assisted suicide) have important ethical implications. We will therefore undertake further conceptual analysis inspired by methods from analytical philosophy and the frameworks of the principles of biomedical ethics 45 and of care ethics. 46 Normative criteria will be identified to ethically evaluate altruistic acts at the end of life, based on the principles of biomedical ethics. 45 Finally, we will present practical recommendations that promote the coexistence of patient autonomy and patient altruism.

Subproject 3

This subproject relies on natural data: audio and/or video-recorded naturally occurring interactions, taking place spontaneously, that is, not generated for the purpose of this study. Data will be collected in two settings in the French part of Switzerland.

1. ACP encounters

We will use the data recorded for subproject 2a.

2. Hospital palliative care units

We will record interactions taking place during 40 patient hospital admissions in three palliative centres in French Switzerland. This concerns activities taking place in the patient’s room, involving the patient, their visitors and palliative care professionals participating in the study. The term ‘activity’ will be used in the broad sense, comprising verbal and non-verbal exchanges and acts of care. Participants will be offered the option to decide, in advance, what type of activities they agree to be recorded in; consent will be reconfirmed prior to each recording. Participants will be offered the option of consenting to audio and video, or just to audio-recording.

Inclusion criteria for patients

Hospitalised in one of the three palliative care centres.

In a medically stable state, or when their state has improved.

Exclusion criteria for patients

Imminence of death according to the referring physician.

Impaired decision-making capacity.

Presence of psychological/psychiatric problems due to which participating (being recorded) might harm the patient.

Inclusion criteria for healthcare professionals

All health and allied health professionals affiliated to the palliative care unit.

Inclusion criteria for visitors and relatives

Participation is open to all visitors of patients participating in the study.

Analysis will be guided by the conversation analysis (CA) approach. CA resides in a finely grained analysis of recorded data, focusing on how participants interact in order to accomplish ordinary as well as interactionally challenging tasks. 47 While CA is an inductive approach, its use is regulated by a well-defined and stepwise process:

The first-stage analysis is done as an ‘initial noticing’, in order to identify details of talk (‘phenomena’) that are interesting from a research point of view but also recurrent throughout the data. 48

Second, the researcher starts an exhaustive search throughout all the instances in which the phenomena occur and gathers them in datasets. Sequences of talk in which the phenomena are identified are transcribed according to a CA convention system designed for linguistic and multimodal transcriptions, which takes into account aspects of speech delivery and representation of activities parallel to talk (eye gaze, laughing, motions). 49–51

Third, the essential part of the analysis involves describing the phenomena in terms of sequential location (where it appears, why, what it generates) and content. The analysis will particularly draw on concepts of ‘affiliation’, 52 ‘benefactors and beneficiaries’ 53 and empathic communication 54 developed in CA and in relation to the field of palliative care. 55–57

Data collected in the first setting (ACP conversations) will be monitored for how patients participate in the discussion of medical decisions, especially how patients may be ‘prosocial’ by orienting themselves towards others when talking, for example, about death-related topics, without being required to do so. This is in keeping with our definition of altruism.

Data collected in the second setting (hospital palliative care) will be monitored for when and how patients might interact (verbally or non-verbally) without being prompted, and about what. The focus will be on localising and analysing sequences of interaction in which palliative patients interact by their own volition. These instances will be investigated focusing on whether the patient’s involvement might be identified as being done for the benefit of the other person (eg, make a compliment, participate in an act of care or an exchange without being asked/required to).

Patient and public involvement

No patients or members of the public were involved in the design of the study for any of the subprojects.

Ethics and dissemination

The study project was approved by the Ethics Committees of the Canton of Vaud, Bern and Ticino (no: 2023-00088). Each project will be the subject of two scientific publications in peer-reviewed journals. Additional publications will be realised according to result triangulation between projects. The results will also be presented at international and national conferences. Finally, a symposium open to professionals, patients and the public will be organised in Switzerland at the end of the project to present all the results.

Ethics statements

Patient consent for publication.

Not applicable.

  • Mitchell AJ ,
  • Bhatti H , et al
  • Tsilika E ,
  • Gennimata V , et al
  • Seligman ME ,
  • Csikszentmihalyi M
  • Pekonen A ,
  • Eloranta S ,
  • Stolt M , et al
  • Sonne JWH ,
  • FitzPatrick WJ
  • Warneken F ,
  • Tomasello M
  • Gardner A ,
  • DeYoung CG ,
  • Quilty LC ,
  • Peterson JB
  • Hubbard J ,
  • Harbaugh WT ,
  • Srivastava S , et al
  • McCrae RR ,
  • Batson CD ,
  • Batson JG ,
  • Slingsby JK , et al
  • Hausmann A ,
  • Christiansen S , et al
  • Mussweiler T ,
  • Ockenfels A
  • Cropanzano R ,
  • Mitchell MS
  • Sparrow EP ,
  • Swirsky LT ,
  • Kudus F , et al
  • Dunkel CS ,
  • Carstensen LL ,
  • Vollhardt J
  • Vollhardt JR
  • Neudert C , et al
  • Bernard M ,
  • Berchtold A ,
  • Strasser F , et al
  • Cieciuch J , et al
  • Schwartz SH ,
  • World Health Organisation
  • Caprara GV ,
  • Zelli A , et al
  • Sawatzky R ,
  • Russell LB , et al
  • Zigmond AS ,
  • Cousineau N ,
  • McDowell I ,
  • Hotz S , et al
  • Naghiyaee M ,
  • Bahmani B ,
  • Bornet MA ,
  • Jaques C , et al
  • Althaus B ,
  • Borasio GD ,
  • Strasser F ,
  • Gamondi C , et al
  • Brenner KO ,
  • Rosenberg LB ,
  • Cramer MA , et al
  • Sharifi N ,
  • Adib-Hajbaghery M ,
  • Beauchamp T ,
  • Childress J
  • Schegloff EA
  • Jefferson G
  • Bourgeault I ,
  • Dingwall R ,
  • Clayman SE ,
  • Heritage J ,
  • Gramling D ,
  • Jenkins L ,
  • Pino M , et al

Contributors MB and GDB designed the study protocol. MB developed the study design and method of subproject 1a. MB and PJL developed the study design and method of subproject 1b. PJL developed the study design and method of subproject 1c. RJ developed the study design and method of subproject 2a. CG developed the study design and method of subproject 2b. A-CS developed the study design and method of subproject 3.

Funding This work is supported by the Swiss National Science Foundation (grant number 10001G_207814/1). The study is funded for a 3-year period from April 2023 to March 2026.

Competing interests None declared.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; peer reviewed for ethical and funding approval prior to submission.


The development and structural validity testing of the Person-centred Practice Inventory–Care (PCPI-C)

Contributed equally to this work with: Brendan George McCormack, Paul F. Slater, Fiona Gilmour, Denise Edgar, Stefan Gschwenter, Sonyia McFadden, Ciara Hughes, Val Wilson, Tanya McCance

  • Brendan George McCormack, 
  • Paul F. Slater, 
  • Fiona Gilmour, 
  • Denise Edgar, 
  • Stefan Gschwenter, 
  • Sonyia McFadden, 
  • Ciara Hughes, 
  • Val Wilson, 
  • Tanya McCance

  • Published: May 10, 2024
  • https://doi.org/10.1371/journal.pone.0303158

Person-centred healthcare focuses on placing the beliefs and values of service users at the centre of decision-making and creating the context for practitioners to do this effectively. Measuring the outcomes arising from person-centred practices is complex and challenging and often adopts multiple perspectives and approaches. Few measurement frameworks are grounded in an explicit person-centred theoretical framework.

In the study reported in this paper, the aim was to develop a valid and reliable instrument to measure the experience of person-centred care by service users (patients): the Person-centred Practice Inventory–Care (PCPI-C).

Based on the ‘person-centred processes’ construct of an established Person-centred Practice Framework (PCPF), a service user instrument was developed to complement existing instruments informed by the same theoretical framework, the PCPF. An exploratory sequential mixed methods design was used to construct and test the instrument, working with international partners and service users in Scotland, Northern Ireland, Australia and Austria. A three-phase approach was adopted for the development and testing of the PCPI-C. Phase 1 (item selection): following an iterative process, a list of 20 items was agreed upon by the research team for use in phase 2 of the project. Phase 2 (instrument development and refinement): development of the PCPI-C was undertaken in two stages. Stage 1 involved three sequential rounds of data collection using focus groups in Scotland, Australia and Northern Ireland; stage 2 involved distributing the instrument to members of a global community of practice for person-centred practice for review and feedback, as well as refinement and translation through one-to-one interviews in Austria. Phase 3 (testing structural validity of the PCPI-C): a sample of 452 participants took part in this phase of the study. Service users participating in existing cancer research in the UK, Malta, Poland and Portugal, as well as care homes research in Austria, completed the draft PCPI-C. Data were collected over a 14-month period (January 2021–March 2022). Descriptive statistics and measures of dispersion were generated for all items to help inform subsequent analysis. Confirmatory factor analysis was conducted using maximum likelihood robust extraction to test the five-factor model of the PCPI-C.

The testing of the PCPI-C resulted in a final 18 item instrument. The results demonstrate that the PCPI-C is a psychometrically sound instrument, supporting a five-factor model that examines the service user’s perspective of what constitutes person-centred care.
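As a rough indication of how a confirmatory factor analysis of a five-factor, 18-item instrument can be specified in code, the sketch below uses the open-source semopy package with invented factor and item names; it is not the authors' analysis, and semopy's default maximum likelihood objective only approximates the robust maximum likelihood extraction reported in the paper.

```python
# Illustrative only: invented factor/item names, not the PCPI-C's actual items or the authors' code.
import pandas as pd
import semopy

# Lavaan-style measurement model for a hypothetical five-factor, 18-item instrument.
model_desc = """
WorkingWithBeliefs  =~ item1 + item2 + item3 + item4
SharedDecisions     =~ item5 + item6 + item7 + item8
Engagement          =~ item9 + item10 + item11
HolisticCare        =~ item12 + item13 + item14 + item15
SympatheticPresence =~ item16 + item17 + item18
"""

data = pd.read_csv("pcpi_c_responses.csv")  # hypothetical file of item responses

model = semopy.Model(model_desc)
model.fit(data)                  # default maximum likelihood estimation
print(model.inspect())           # factor loadings and other parameter estimates
print(semopy.calc_stats(model))  # fit indices such as CFI, TLI and RMSEA
```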

Conclusion and implications

This new instrument is generic in nature and so can be used to evaluate how person-centredness is perceived by service users in different healthcare contexts and at different levels of an organisation. Thus, it brings a service user perspective to an organisation-wide evaluation framework.

Citation: McCormack BG, Slater PF, Gilmour F, Edgar D, Gschwenter S, McFadden S, et al. (2024) The development and structural validity testing of the Person-centred Practice Inventory–Care (PCPI-C). PLoS ONE 19(5): e0303158. https://doi.org/10.1371/journal.pone.0303158

Editor: Nabeel Al-Yateem, University of Sharjah, UNITED ARAB EMIRATES

Received: January 26, 2023; Accepted: April 20, 2024; Published: May 10, 2024

Copyright: © 2024 McCormack et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data cannot be shared publicly for ethical reasons. Data are available from the Ulster University Institutional Data Access / Ethics Committee (contact via email on [email protected]) for researchers who meet the criteria for access to confidential data.

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Person-centred healthcare focuses on placing the beliefs and values of service users at the centre of decision-making and creating the context for practitioners to do this effectively. Person-centred healthcare goes beyond other models of shared decision-making as it requires practitioners to work with service users (patients) as actively engaged partners in care [ 1 ]. It is widely agreed that person-centred practice has a positive influence on the care experiences of all people associated with healthcare, service users and staff alike. International evidence shows that person-centred practice has the capacity to have a positive effect on the health and social care experiences of service users and staff [ 1 – 4 ]. Person-centred practice is a complex health care process and exists in the presence of respectful relationships, attitudes and behaviours [ 5 ]. Fundamentally, person-centred healthcare can be seen as a move away from neo-liberal models towards the humanising of healthcare delivery, with a focus on the development of individualised approaches to care and interventions, rather than seeing people as ‘products’ that need to be moved through the system in an efficient and cost-effective way [ 6 ].

Person-centred healthcare is underpinned by philosophical and theoretical constructs that frame all aspects of healthcare delivery, from the macro-perspective of policy and organisational practices to the micro-perspective of person-to-person interaction and experience of healthcare (whether as professional or service user) and so is promoted as a core attribute of the healthcare workforce [ 1 , 7 ]. However, Dewing and McCormack [ 8 ] highlighted the problems of the diverse application of concepts, theories and models all under the label of person-centredness, leading to a perception of person-centred healthcare being poorly defined, non-specific and overly generalised. Whilst person-centredness has become a well-used term globally, it is often used interchangeably with other terms such as ’woman-centredness’ [ 9 ], ’child-centredness’ [ 10 ], ’family-centredness’ [ 11 ], ’client-centredness’ [ 12 ] and ’patient-centredness’ [ 13 ]. In their review of person-centred care, Harding et al [ 14 ] identified three fundamental ‘stances’ that encompass person-centred care— Person-centred care as an overarching grouping of concepts : includes care based on shared-decision making, care planning, integrated care, patient information and self-management support; Person-centred care emphasising personhood : people being immersed in their own context and a person as a discrete human being; Person-centred care as partnership : care imbued with mutuality, trust, collaboration for care, and a therapeutic relationship.

Harding et al. adopt the narrow focus of ’care’ in their review, whereas others contend that for person-centred care to be operationalised there is a need to understand it from an inclusive whole-systems perspective [ 15 ] and as a philosophy to be applied to all persons. This inclusive approach has enabled the principles of person-centredness to be integrated at different levels of healthcare organisations and thus embedded in health systems [ 16 – 19 ]. This inclusive approach is significant, as person-centred care is impossible to sustain if person-centred cultures do not exist in healthcare organisations [ 20 , 21 ].

McCance and McCormack [ 5 ] developed the Person-centred Practice Framework (PCPF) to highlight the factors that affect the delivery of person-centred practices. McCormack and McCance published the original person-centred nursing framework in 2006. The Framework has evolved over two decades of research and development activity into a transdisciplinary framework and has made a significant contribution to the landscape of person-centredness globally. Not only does it enable the articulation of the dynamic nature of person-centredness, recognising complexity at different levels in healthcare systems, but it also offers a common language and a shared understanding of person-centred practice. The Person-centred Practice Framework is underpinned by the following definition of person-centredness:

[A]n approach to practice established through the formation and fostering of healthful relationships between all care providers, service users and others significant to them in their lives. It is underpinned by values of respect for persons, individual right to self-determination, mutual respect and understanding. It is enabled by cultures of empowerment that foster continuous approaches to practice development [ 16 ].

The Person-centred Practice Framework ( Fig 1 ) comprises five domains: the macro context reflects the strategic and political factors that influence the development of person-centred cultures; prerequisites focus on the attributes of staff; the practice environment focuses on the context in which healthcare is experienced; the person-centred processes focus on ways of engaging that are necessary to create connections between persons; and the outcome is the result of effective person-centred practice. The relationships between the five domains of the Person-centred Practice Framework are represented pictorially; that is, to reach the centre of the framework, strategic and policy frames of reference need to be attended to, then the attributes of staff must be considered as a prerequisite to managing the practice environment and to engaging effectively through the person-centred processes. This ordering ultimately leads to the achievement of the outcome, the central component of the framework. It is also important to recognise that there are relationships and overlap between the constructs within each domain.

Fig 1. The Person-centred Practice Framework. https://doi.org/10.1371/journal.pone.0303158.g001

In 2015, Slater et al. [ 22 ] developed an instrument for staff to use to measure person-centred practice: the Person-centred Practice Inventory-Staff (PCPI-S). The PCPI-S is a 59-item, self-report measure of health professionals’ perceptions of their person-centred practice. The items in the PCPI-S relate to seventeen constructs across three domains of the PCPF (prerequisites, practice environment and person-centred processes). The PCPI-S has been widely used, translated into multiple languages and has undergone extensive psychometric testing [ 23 – 28 ].

No instrument exists to measure service users’ perspectives of person-centred care that is based on an established person-centred theoretical framework or that is designed to be compared with service providers’ perceptions. In an attempt to address this gap in the evidence base, this study set out to develop such a valid and reliable instrument. The PCPI-C focuses on the person-centred processes domain, with the intention of measuring service users’ experiences of person-centred care. The person-centred processes are the components of care that directly affect service users’ experiences. They enable person-centred care outcomes to be achieved and include working with the person’s beliefs and values, sharing decision-making, engaging authentically, being sympathetically present and working holistically. Based on the ‘person-centred processes’ construct of the PCPF and relevant items from the PCPI-S, a version for service users was developed.

This paper describes the processes used to develop and test the instrument, the Person-centred Practice Inventory-Care (PCPI-C). The PCPI-C has the potential to enable healthcare services to understand service users’ experiences of care and how these align with the perceptions of healthcare providers.

Materials and methods

The aim of this research was to develop and test the face validity of a service users’ version of the person-centred practice inventory: the Person-centred Practice Inventory-Care.

The development and testing of the instrument was guided by the instrument development principles of Boateng et al [ 29 ] ( Fig 2 ) and reported in line with the COSMIN guidelines for instrument testing [ 30 , 31 ]. An exploratory sequential mixed methods design was used to construct and test the instrument [ 29 , 30 ] working with international partners and service users. A three-phase approach was adopted to the development and testing of the PCPI-C. As phases 1 and 2 intentionally informed phase 3 (the testing phase), these two phases are included here in our description of methods.

Fig 2. https://doi.org/10.1371/journal.pone.0303158.g002

Ethical approval

Ethics approval was sought and gained for each phase of the study and across each of the participating sites. For phase 2 of the study, a generic research protocol was developed and adapted for use by the Scottish, Australian and Northern Irish teams to apply for local ethical approval. In Scotland, ethics approval was gained from the Queen Margaret University Edinburgh Divisional Research Ethics Committee; in Australia, from the University of Wollongong; and in Northern Ireland, from the Research Governance Filter Committee, Nursing and Health Research, Ulster University. For phase 3 of the study, secondary analysis of an existing data set was undertaken. For the original study from which these data were derived (see phase 3 for details), ethical approval was granted by the UK Office of Research Ethics Committee Northern Ireland (ORECNI Ref: FCNUR-21-019) and the Ulster University Research Ethics Committee. Additional local approvals were obtained for each partner site as required. In addition, a data sharing agreement was generated to facilitate sharing of study data between European Union (EU) sites and the United Kingdom (UK).

Phase 1 –Item selection

An initial item pool for the PCPI-C was identified by <author initials to be added after peer-review> by selecting items from the ‘person-centred processes’ sub-scale of the PCPI-S ( Table 1 ). Sixteen items were extracted, and the wording of the statements was adjusted to reflect a service-user perspective. Additional items were identified (n = 4) to fully represent the construct from a service-user perspective. A final list of 20 items was agreed upon and this 20-item questionnaire was used in Phase 2 of the instrument development.

Table 1. https://doi.org/10.1371/journal.pone.0303158.t001

Phase 2 –Instrument development and refinement

Testing the validity of the PCPI-C was undertaken through three sequential rounds of data collection using focus groups in Scotland, Australia and Northern Ireland. The purpose of these focus groups was to work with service users to share and compare understandings and views of their experiences of healthcare, and to consider these experiences in the context of the initial set of PCPI-C items generated in phase 1 of the study. These countries were selected because the lead researchers had established relationships with healthcare partners who were willing to host the research. The inclusion of multiple countries provided different perspectives from service users who used different health services. In Scotland, a convenience sample of service users (n = 11) attending a palliative care day centre of a local hospice was selected. In Australia, a cancer support group for people living with a cancer diagnosis (n = 9) was selected, and in Northern Ireland, people with lived experience who were attending a community group hosted by a cancer charity (n = 9) were selected. All service users were current users of healthcare, so the challenge of memory recall was avoided. The type of conditions/health problems of participants was not the primary concern; instead, we targeted persons who had recent experiences of the health system. The three centres selected were known to the researchers in those geographical areas and relationships were already established, which helped with gaining access to potential participants. Whilst the research team had potential access to other centres in each country, it was evident by focus group 3 that no significant new issues were being identified by participants, and thus we agreed not to undertake further rounds of refinement.

A focus group guide was developed ( Fig 3 ). Participants were invited to draw on their experiences as users of the service, particularly remembering what they saw, the way they felt and what they imagined was happening [ 32 ]. The participants were invited to complete the PCPI-C independently, and the purpose of the exercise was reiterated, i.e. to think about how each question of the PCPI-C reflected their own experiences and their answers to the questions. Following completion of the questionnaire, participants were asked to comment on each question in the PCPI-C (20 questions), with a specific focus on their understanding of the question, what they thought about when they read it, and any suggestions to improve readability. The focus group concluded with a discussion of the overall usability of the PCPI-C. Each focus group was audiotaped and the audio recordings were transcribed in full. The facilitators of the focus group then listened to the audio recordings alongside the transcripts, identified the common issues that arose from the discussions, and noted these against each of the questions in the draft PCPI-C. Revisions were made to the questions in accordance with the comments and recommendations of the participants. At the end of the analysis phase of each focus group, a table of comments and recommendations mapped to the questions in the instrument was compiled and sent to the whole research team for review and consideration. The comments and recommendations were reviewed by the research team and amendments were made to the draft PCPI-C. The amended draft was then used in the next focus group until a final version was agreed. Focus group 1 was held in Scotland, focus group 2 in Australia and focus group 3 in Northern Ireland. Table 2 presents a summary of the feedback from the final focus group.

Fig 3. Focus group guide. https://doi.org/10.1371/journal.pone.0303158.g003

Table 2. Summary of feedback from the final focus group. https://doi.org/10.1371/journal.pone.0303158.t002

A final stage of development involved distributing the agreed version of the PCPI-C to members of ‘The International Community of Practice for Person-centred Practice’ (PcP-ICoP) for review and feedback. The PcP-ICoP is an international community of higher education, health and care organisations and individuals who are committed to advancing knowledge in the field of person-centredness. No significant changes to the distributed version were suggested by the PcP-ICoP members, but several members requested permission to translate the instrument into their national language. PcP-ICoP members at the University of Vienna, who were leading a large research project with nursing homes in the region of Lower Austria, agreed to undertake a parallel translation project as a priority so they could use the PCPI-C in their research. The instrument was culturally and linguistically adapted to the nursing home setting in an iterative process by the Austrian research team in collaboration with the international research team. Data were collected through face-to-face interviews by trained research staff. Residents of five nursing homes for older persons in Lower Austria were included. All residents without cognitive impairment who were physically able to complete the questionnaire (i.e. not prevented by ill health) were included (n = 235); 71% of these residents (n = 167) completed the questionnaire. Although formal ethical approval is not required in Austria for non-intervention studies, the team sought informed consent from participants. Particular attention was paid throughout the interviews to assuring the ongoing consent of residents through carefully guided conversations.

Phase 3: Testing structural validity of the PCPI-C

The aim of this phase was to test the structural validity of the PCPI-C using confirmatory factor analysis with an international sample of service users. The PCPI-C comprises 20 items measured on a 5-point scale ranging from ‘strongly disagree’ to ‘strongly agree’. The 20 items represent the 5 constructs comprising the final model to be tested, which is outlined in Table 3 .

Table 3. https://doi.org/10.1371/journal.pone.0303158.t003

A sample of 452 participants was selected for this phase of the study. The sample comprised two groups. Group 1 (n = 285) were service users with cancer (breast, urological and other) receiving radiotherapy in four cancer treatment centres in four European countries: the UK, Malta, Poland and Portugal. These service users were participants in the wider SAFE EUROPE ( www.safeeurope.eu ) project exploring the education and professional migration of therapeutic radiographers in the European Union. In the UK, a study information poster with a link to the PCPI-C via a Qualtrics© survey was disseminated via UK cancer charity social media websites. Service user information and consent were embedded in the online survey and presented to the participant following the study link. At the non-UK sites, hard-copy English versions of the surveys were available in clinical departments, where a convenience sampling approach was used, inviting everyone in their final few days of radiotherapy to participate. The ‘DeepL Translator’ software (DeepL GmbH, Cologne, Germany) was used to make the necessary terminology adaptations to both the questionnaire and the participant information sheet across the various countries. Fluent speakers based in the participating sites, who were members of the SAFE EUROPE project team, confirmed the accuracy of this process by checking the translated versions against the original English version. Participants were provided with study information and had at least 24 hours to decide whether they wished to participate. Willing participants were then invited to provide written informed consent by the local study researcher. The study researcher provided the hard-copy survey to the service user but did not engage with or assist them during completion. Service users were informed they could take the survey home for completion if they wished. Completed surveys were returned to a drop box in the department or returned by post (data collected May 2021-March 2022). Group 2 (n = 125) were residents in nursing homes in Lower Austria. No participating residents had a cognitive impairment, and all were physically able to complete the questionnaire. Data were collected through face-to-face interviews by trained research staff (data collected January 2021-March 2021).

Statistical analysis

Descriptive statistics and measures of dispersion were generated for all items to help inform subsequent analysis. The appropriateness of the data for factor analysis was assessed using the Kaiser-Meyer-Olkin Measure of Sampling Adequacy and Bartlett’s Test of Sphericity. Inter-item correlations were generated to examine for collinearity prior to full analysis. Confirmatory factor analysis was conducted using maximum likelihood robust extraction to test the 5-factor model.
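To make this screening step concrete, the sketch below (ours, not from the paper) computes Bartlett's test of sphericity and the Kaiser-Meyer-Olkin measure directly from the inter-item correlation matrix, and flags highly correlated item pairs as a simple collinearity check. The DataFrame layout, function name and the 0.8 flagging threshold are illustrative assumptions rather than details taken from the study.

```python
import numpy as np
import pandas as pd
from scipy import stats

def screen_items(items: pd.DataFrame, collinearity_cutoff: float = 0.8) -> dict:
    """Pre-factor-analysis screening: Bartlett's sphericity, KMO, collinearity.

    `items` is assumed to hold one column per questionnaire item; the 0.8
    cutoff for flagging collinear pairs is a common rule of thumb, not a
    value reported in the paper.
    """
    n, p = items.shape
    R = items.corr().to_numpy()

    # Bartlett's test of sphericity: H0 is that the correlation matrix is identity.
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    p_value = stats.chi2.sf(chi2, df)

    # Kaiser-Meyer-Olkin: squared correlations relative to squared correlations
    # plus squared partial correlations (off-diagonal elements only).
    R_inv = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    partial = -R_inv / scale
    off = ~np.eye(p, dtype=bool)
    kmo = (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (partial[off] ** 2).sum())

    # Item pairs whose correlation suggests possible collinearity.
    flagged = [(items.columns[i], items.columns[j], round(R[i, j], 3))
               for i in range(p) for j in range(i + 1, p)
               if abs(R[i, j]) > collinearity_cutoff]

    return {"kmo": kmo, "bartlett_chi2": chi2, "df": df,
            "p_value": p_value, "collinear_pairs": flagged}
```

A KMO value close to 1 and a significant Bartlett result indicate that the correlation structure is suitable for factor analysis; flagged pairs would prompt a closer look for redundant items.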

Acceptable fit statistics were set at a Root Mean Square Error of Approximation (RMSEA) of 0.05 or below; an upper bound of the 90% RMSEA confidence interval below 0.08; a Comparative Fit Index (CFI) of 0.95 or higher; and a standardised root mean square residual (SRMR) below 0.05 [ 33 – 35 ]. Internal consistency was measured using Cronbach alpha scores for the factors in the accepted factor model.
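As a simple illustration of how these pre-specified cut-offs can be applied, the following sketch encodes the thresholds stated above as a plain function; the argument names are illustrative, and the values would come from whichever SEM package produced the fit statistics.

```python
def fit_is_acceptable(rmsea: float, rmsea_ci_upper: float,
                      cfi: float, srmr: float) -> dict:
    """Check model fit against the thresholds stated in the text:
    RMSEA <= 0.05, upper bound of the 90% RMSEA CI < 0.08,
    CFI >= 0.95 and SRMR < 0.05. Argument names are illustrative.
    """
    checks = {
        "rmsea": rmsea <= 0.05,
        "rmsea_90ci_upper": rmsea_ci_upper < 0.08,
        "cfi": cfi >= 0.95,
        "srmr": srmr < 0.05,
    }
    checks["all_acceptable"] = all(checks.values())
    return checks

# Hypothetical values only: a model with RMSEA = 0.045 (90% CI upper bound 0.06),
# CFI = 0.96 and SRMR = 0.04 would pass all four criteria.
print(fit_is_acceptable(0.045, 0.06, 0.96, 0.04))
```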

The model was re-specified using the modification indices provided in the statistical output until an acceptable and statistically significant model was identified. All re-specifications of the model were guided by the principles of (1) meaningfulness (a clear theoretical rationale); (2) transitivity (if A is correlated with B, and B with C, then A should correlate with C); and (3) generality (if there is a reason for correlating one pair of errors, then all pairs for which that reason applies should also be correlated) [ 36 ].

Modifications were accepted according to the following criteria:

  • Items were first fitted to their first-order factors.
  • Correlated error variances were permitted, as all items measured the same unidimensional construct.
  • Only statistically significant relationships were retained, to produce as parsimonious a model as possible.
  • Factor loadings above 0.40 were required to provide a strong emergent factor structure.

Factor loading scores were interpreted using Comrey and Lee’s [ 37 ] guidelines (>.71 = excellent, >.63 = very good, >.55 = good, >.45 = fair and >.32 = poor), and the acceptable factor loading given the sample size (n = 452) was set at >0.3 [ 33 , 38 ].
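The guideline bands above translate directly into a small lookup, sketched here for illustration; the function name and return format are ours, while the cut-offs are Comrey and Lee's as quoted in the text, together with the 0.3 floor used in this study for the sample size.

```python
def describe_loading(loading: float, sample_floor: float = 0.3) -> str:
    """Label the absolute value of a factor loading using Comrey and Lee's
    bands, and note whether it clears the 0.3 minimum used in this study."""
    value = abs(loading)
    if value > 0.71:
        band = "excellent"
    elif value > 0.63:
        band = "very good"
    elif value > 0.55:
        band = "good"
    elif value > 0.45:
        band = "fair"
    elif value > 0.32:
        band = "poor"
    else:
        band = "below Comrey and Lee's bands"
    status = "acceptable" if value > sample_floor else "not acceptable"
    return f"{band} ({status} for this sample size)"

print(describe_loading(0.68))  # "very good (acceptable for this sample size)"
```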

Results and discussion

Demographic details.

The sample of 452 participants represented an international sample of respondents drawn from five countries: the UK (14.6%, n = 66), Portugal (47.8%, n = 216), Austria (27.7%, n = 125), Malta (6.6%, n = 30) and Poland (3.3%, n = 15). Table 4 outlines the demographic characteristics of the sample. The final sample of 452 participants provides an acceptable respondent-to-item ratio of approximately 22:1 (452 respondents to 20 items) [ 33 ].

Table 4. Demographic characteristics of the sample. https://doi.org/10.1371/journal.pone.0303158.t004

The mean scores indicate that respondents scored the items neutrally. The measures of skewness and kurtosis were acceptable and satisfied the conditions of normality of distribution for further psychometric testing. Examination of the Kaiser-Meyer-Olkin measure (0.947) and Bartlett’s test of sphericity (χ² = 4431.68, df = 190, p < 0.001) indicated the acceptability of performing factor analysis on the items. Cronbach alpha scores for each of the constructs confirm the acceptability and unidimensionality of each construct.
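Cronbach's alpha for each construct can be computed directly from the item scores. The minimal sketch below shows the standard formula; the DataFrame layout and the column names in the usage comment are assumptions for illustration, since the item-to-construct mapping is given in Table 3 rather than here.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale).
    `items` is assumed to hold one column per item belonging to the construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Usage (hypothetical column names): alpha for a single construct would be
# cronbach_alpha(df[["item_1", "item_2", "item_3", "item_4"]]).
```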

Examination of the correlation matrix between items shows a range of 0.144 to 0.740, indicating breadth in the areas of care the questionnaire items address, as well as no issues of collinearity. The original measurement model was examined using maximum likelihood extraction, and the original model had mixed fit statistics. All factor loadings (except for items 11 and 13) were above the threshold of 0.4 ( Table 3 ). Six further modifications were introduced into the original model, based on the highest-scoring modification indices, until the fit statistics were deemed acceptable (see Table 5 for model fit statistics and Fig 4 for item correlated errors). Two of the correlated-error modifications were within factors and four were between factors. The accepted model factor structure is displayed in Fig 4 .

Fig 4. Accepted model factor structure. https://doi.org/10.1371/journal.pone.0303158.g004

Table 5. Model fit statistics. https://doi.org/10.1371/journal.pone.0303158.t005

Measuring person-centred care is a complex and challenging endeavour [ 39 ]. In a review of existing measures of person-centred care, DeSilva [ 39 ] identified that, whilst many tools are available to measure person-centred care, there is no agreement about which tools are most worthwhile. The complexity of measurement is further reinforced by the multiplicity of terms used to imply that a person-centred approach is being adopted, without the meaning of the term being explicitly set out. Further, person-centred care is multifaceted and comprises a multitude of methods that are held together by a common philosophy of care and organisational goals that focus on service users having the best possible (personalised) experience of care. As DeSilva suggested, “it is a priority to understand what ‘person-centred’ means. Until we know what we want to achieve, it is difficult to know the most appropriate way to measure it” (p 3). However, it remains the case that many of the methods adopted are poorly specified and not embedded in clear conceptual or theoretical frameworks [ 40 , 41 ]. A clear advantage of the study reported here is that the PCPI-C is embedded in a theoretical framework of person-centredness (the PCPF) that clearly defines what we mean by person-centred practice. The PCPI-C is explicitly informed by the ‘person-centred processes’ domain of the PCPF, which has an explicit focus on the care processes used by healthcare workers in providing healthcare to service users.

In the development of the PCPI-C, initial items were selected from the Person-centred Practice Inventory-Staff (PCPI-S); these items are directly connected with the person-centred processes domain of the PCPF. The PCPI-S has been translated, validated and adopted internationally [ 23 – 28 ] and so provides a robust, theoretically informed starting point for the development of the PCPI-C. This starting point contributed to the initial acceptability of the instrument to participants in the focus groups. Like DeSilva [ 39 ], McCormack et al [ 42 ] and McCormack [ 41 ] have argued that measuring person-centred care in isolation from the evaluation of the impact of contextual factors on the care experienced is a limited exercise. As McCormack [ 41 ] suggests, “Evaluating person-centred care as a specific intervention or group of interventions, without understanding the impact of these cultural and contextual factors, does little to inform the quality of a service” (p 1). Using the PCPI-C alongside other instruments such as the PCPI-S helps to generate contrasting perspectives from healthcare providers and healthcare service users, informed by clear definitions of terms, that can be integrated into quality improvement and practice development programmes. The development of the PCPI-C was conducted in line with good practice guidelines in instrument development [ 29 ] and underpinned by an internationally recognised person-centred practice theoretical framework, the PCPF [ 5 ]. The PCPI-C provides a psychometrically robust tool to measure service users’ perspectives of person-centred care as part of an integrated and multi-faceted approach to evaluating person-centredness more generally in healthcare organisations.

With the advancement of Patient Reported Outcome Measures (PROMs) [ 43 , 44 ] and Patient Reported Experience Measures (PREMs) [ 45 ], and the World Health Organization (WHO) [ 15 ] emphasis on the development of people-centred and integrated health systems, greater emphasis has been placed on developing measures to determine the person-centredness of care experienced by service users. Several instruments have been developed to measure the effectiveness of person-centred care in specific services, such as mental health [ 45 ], primary care [ 46 , 47 ], aged care [ 48 , 49 ] and community care [ 50 ]. However, only one other instrument adopts a generic approach to evaluating service users’ experiences of person-centred care [ 51 ]. The work of Fridberg et al (the Generic Person-centred Care Questionnaire, GPCCQ) is grounded in the Gothenburg Centre for Person-centred Care (GPCC) concept of person-centredness: patient narrative, partnership and documentation. Whilst there are clear connections between the GPCCQ and the PCPI-C, a strength of the PCPI-C is that it is set in a broader system of evaluation that views person-centredness as a whole-system issue, with all parts of the system needing to be consistent in the concepts used, definitions of terms and approaches to evaluation. Whilst the PCPI-S evaluates how person-centredness is perceived at different levels of the organisation, using the same theoretical framework and the same definition of terms, the PCPI-C brings a service user perspective to an organisation-wide evaluation framework.

A clear strength of this study lies in the methods employed in phase 2. Capturing service user experiences of healthcare has become an important part of the evaluation of effectiveness. Service user experience evaluation methodologies adopt a variety of methods that aim to capture key transferable themes across patient populations, supported by granular detail of individual experience [ 43 ]. This kind of service evaluation depends on systematically capturing a variety of experiences across different service-user groups. In the research reported here, service users from a variety of services, including palliative care and cancer services in three countries, engaged in the focus group discussions and were able to discuss their experiences of care freely and consider them in the context of the questionnaire items. The use of focus groups in three different countries enabled different cultural perspectives to be considered in the way participants engaged with discussions and considered the relevance of items and their wording. The sequential approach allowed three rounds of refinement of the items, so the most relevant wording could be achieved. The range of comments and depth of feedback prevented ‘knee-jerk’ changes being made on the basis of one-off comments; instead, it was possible to compare and contrast the comments and feedback and reach a more considered outcome. The cultural relevance of the instrument was reinforced through its translation into German in Austria, as few changes were made to the original wording in the translation process. This approach combined the capture of individual lived experience with the systematic generation of key themes that can assist with the systematic evaluation of healthcare services. Further, adopting this approach gives users of the PCPI-C a degree of confidence that it represents real service-user experiences.

The factorial validity of the instrument was supported by the findings of the study. The modified model’s fit indices suggest a good model fit for the sample [ 31 , 34 , 35 ]. The Comparative Fit Index (CFI) falls short of the threshold of >0.95; however, it is above 0.93, which is considered an acceptable level of fit [ 52 ]. Examination of the alpha scores confirms the reliability (internal consistency) of each construct [ 53 ]. All factor loadings were statistically significant and above the acceptable criterion of 0.3 recommended for the sample size [ 38 ]. All but two of the loadings (v11, ‘Staff don’t assume they know what is best for me’, and v13, ‘My family are included in decisions about my care only when I want them to be’) were above the levels considered good to excellent [ 37 ]. At the level of construct, previous research by McCance et al [ 54 ] showed that all five constructs of the person-centred processes domain of the Person-centred Practice Framework carried equal significance in shaping how person-centred practice is delivered, and this is borne out by the confirmation of a 5-factor model in this study. However, it is also probable that there is a degree of overlap between items across the constructs, reflected in the two items with lower loadings. Other items in the PCPI-C address perspectives on shared decision-making and family engagement, and it was therefore concluded that, based on the theoretical model and statistical analysis, these two items could be removed without compromising the comprehensiveness of the scale, resulting in a final 18-item version of the PCPI-C (available on request).

Whilst a systematic approach to the development of the PCPI-C was adopted, and we engaged with service users in several care settings in different countries, further research is required into the psychometric properties of the instrument across differing conditions, settings and culturally diverse samples. Whilst the sample provides an acceptable respondent-to-item ratio and contains international respondents, the model structure was not examined across international settings. Likewise, further research is required across service users with differing conditions and clinical settings. Whilst this is a limitation of the study reported here, the psychometric testing of an instrument is a continuous process, and further testing of the PCPI-C is welcomed.

Conclusions

This paper has presented the systematic approach adopted to develop and test a theoretically informed instrument for measuring service users’ perspectives of person-centred care. The instrument is one of the first that is generic and theory-informed, enabling it to be applied as part of a comprehensive and integrated framework of evaluation at different levels of healthcare organisations. Whilst the instrument has good statistical properties, ongoing testing is recommended.

Acknowledgments

The authors of this paper acknowledge the significant contributions of all the service users who participated in this study.

References
  • 2. Institute of Medicine Committee on Quality of Health Care in America (2001) Crossing the Quality Chasm: A New Health System for the 21st Century. Washington: National Academies Press. https://nap.nationalacademies.org/catalog/10027/crossing-the-quality-chasm-a-new-health-system-for-the (Accessed 20/1/2023).
  • 5. McCance T. and McCormack B. (2021) The Person-centred Practice Framework, in McCormack B., McCance T., Martin S., McMillan A. and Bulley C. (eds) Fundamentals of Person-centred Healthcare Practice. Oxford: Wiley, pp. 23–32. https://www.perlego.com/book/2068078/fundamentals-of-personcentred-healthcare-practice-pdf .
  • 7. Nursing and Midwifery Council (2018) Future Nurse: Standards of Proficiency for Registered Nurses. London: Nursing and Midwifery Council. https://www.nmc.org.uk/globalassets/sitedocuments/standards-of-proficiency/nurses/future-nurse-proficiencies.pdf (Accessed 20/1/2023).
  • 14. Harding E., Wait S. and Scrutton J. (2015) The State of Play in Person-centred Care: A Pragmatic Review of how Person-centred Care is Defined, Applied and Measured. London: The Health Policy Partnership.
  • 16. McCormack B. and McCance T. (2017) Person-centred Practice in Nursing and Health Care: Theory and Practice. Chichester, UK: Wiley-Blackwell.
  • 17. Buetow S. (2016) Person-centred Healthcare: Balancing the Welfare of Clinicians and Patients. Oxford: Routledge.
  • 32. Krueger R.A. and Casey M.A. (2000) Focus Groups: A Practical Guide for Applied Research. 3rd ed. Thousand Oaks, CA: Sage Publications.
  • 33. Kline P. (2014) An Easy Guide to Factor Analysis. Oxfordshire: Routledge.
  • 34. Byrne B.M. (2013) Structural Equation Modeling with Mplus: Basic Concepts, Applications, and Programming. Oxfordshire: Routledge.
  • 35. Wang J. and Wang X. (2019) Structural Equation Modeling: Applications Using Mplus. New Jersey: John Wiley & Sons.
  • 37. Comrey A.L. and Lee H.B. (2013) A First Course in Factor Analysis. New York: Psychology Press.
  • 38. Hair J.F., Black W.C., Babin B.J., Anderson R.E. and Tatham R.L. (2018) Multivariate Data Analysis. 8th edn. New Jersey: Pearson Prentice Hall.
  • 39. DeSilva (2014) Helping Measure Person-centred Care: A Review of Evidence about Commonly Used Approaches and Tools Used to Help Measure Person-centred Care. London: The Health Foundation. https://www.health.org.uk/publications/helping-measure-person-centred-care .
  • 42. McCormack B., McCance T. and Maben J. (2013) Outcome Evaluation in the Development of Person-Centred Practice, in McCormack B., Manley K. and Titchen A. (eds) Practice Development in Nursing (Vol 2). Oxford: Wiley-Blackwell Publishing, pp. 190–211.
  • 43. Irish Platform for Patients Organisations Science & Industry (IPPOSI) (2018) Patient-centred Outcome Measures in Research & Healthcare: IPPOSI Outcome Report. Dublin, Ireland: IPPOSI. https://www.ipposi.ie/wp-content/uploads/2018/12/PCOMs-outcome-report-final-v3.pdf (Accessed 20/1/2023).

Editorial: Capturing talk: the institutional practices surrounding the transcription of spoken language

  • 1 Research Hub for Language in Forensic Evidence, The University of Melbourne, Parkville, VIC, Australia
  • 2 Aston Institute for Forensic Linguistics, Aston University, Birmingham, United Kingdom
  • 3 Department of Communication and Media, School of Social Science and Humanities, Loughborough University, Loughborough, United Kingdom
  • 4 Netherlands Institute for the Study of Crime and Law Enforcement (NSCR), Amsterdam, Netherlands

Editorial on the Research Topic Capturing talk: the institutional practices surrounding the transcription of spoken language

Transcripts are a ubiquitous feature of virtually all modern institutions, many of which would be unable to function without them. Nevertheless, transcription remains an under-researched subject—a situation that Capturing talk: the institutional practices surrounding the transcription of spoken language seeks to remedy.

The initial aim of this Research Topic was to expose and examine under-appreciated features of “entextualization” (the process of representing spoken language as written text). One of these features is the fact that a transcript can only ever be a representation of speech, not a copy—and thus can never represent speech exactly. Another feature, well-articulated by Sarangi (1998) , is the unequal power over the process of transcription exercised by, on the one hand, the speakers whose voices are represented, and, on the other, by those controlling the transcription process.

Where Sarangi's interest was mainly in health and social services institutions, the present Research Topic has a leaning toward legal institutions, where, arguably, these power inequalities are even more starkly contrasted—as demonstrated by the territory-defining volume ( Heffer et al., 2013 ).

Four of the papers in this Research Topic deal with police interviews, providing insight into differing practices across jurisdictions and type of interview (e.g., whether with witnesses or suspects). Several papers examine the practice of converting an interview into a “statement,” written up by the officers who conduct the interviews. Beginning with interviews with witnesses in England and Wales (E&W), Milne et al. analyze a sample of such statements against transcripts produced by the researchers from an audio recording. The omissions, additions, distortions, and other errors in the police versions give cause for deep concern.

An extended study analyzing the creation of records of interviews with suspects in the Netherlands is recounted by Komter , which, again, contrasts transcripts prepared by police interviewers, with the author's transcripts prepared from audio recordings. Again, many concerning limitations on the police transcripts are observed and analyzed. However, while her own transcripts are far more detailed, Komter acknowledges that she too is necessarily selective in what she chooses to represent, guided by the evolving research questions she seeks to investigate.

One practice Komter discusses is that of police records presenting an interview as a monolog, in the voice of the interviewee, rather than as the question-and-answer dialogue it actually was. This practice is also investigated by Eerland and van Charldorp , again focusing on the Dutch context. These authors study how readers of the statements were influenced by three different styles of reporting (monolog, dialogue and narrative), with the troubling finding that the style of reporting affected perceptions of the statements' accuracy and comprehensibility.

In many jurisdictions, police interviews with suspects are routinely audio- or video-recorded. However, this does not signal the end of problems with the representation of these high-stakes interactions. The last of our interview papers is Haworth et al. , which summarizes the key findings to date of an ongoing study of the transcription of electronic records of interviews with suspects in E&W. It demonstrates a range of problems with official police transcripts even when these ostensibly capture the dialogue “verbatim,” and proposes that consistency, accuracy, and neutrality are the foundational features that should underpin any police interview transcript.

A second group of papers studies transcription in non-legal institutional settings. Holder et al. delves into two very large and highly structured organizations with serious security needs: NASA and the US Military. Both make extensive use of audio and video recordings capturing employees as they work—with transcripts produced either routinely, or on demand. The authors look into the two organizations' use of these transcripts, again comparing the official transcripts with their own transcripts of selected sections, using conversation analysis (CA) conventions.

Park and Hepburn also examine CA-style transcripts. Taking as an example Rachel Mitchell's interview of US Supreme Court nominee Brett Kavanaugh about his alleged historical sexual misconduct, these authors compare the information retrievable from a richly detailed Jeffersonian transcript with an orthographic transcript that “wipes out” or “skates over” crucial aspects of speech used by speakers and listeners in constructing the message expressed by the speech.

Another institutional use of transcripts covered in Capturing Talk concerns workers on the assembly line of a small factory in Sweden. Carlsson and Harari report an observation-and-interview study of the instruction manuals created by the workers. While they find much to commend in the retention of power by the creators and users of the manuals, the authors observe room for improvement in the “information design” of the texts, recommending that consultation of linguistics experts could offer benefits.

Voutilainen showcases the high quality of transcripts produced as an official record of the complex and challenging multicultural discussions of wide-ranging Research Topics covered by the parliament in Finland. His account demonstrates how much thought, research and work goes into managing all the factors that need to be considered to create transcripts of this standard.

In a return to the legal setting, a further group of papers examines transcripts of forensic audio, i.e., recordings of speech used as evidence in criminal trials. These are often of very poor quality, meaning that the transcript is intended not as a record of what was said, but as assistance to the court in determining what was said. Internationally, it is common for such transcripts to be provided by police investigating the case. While the courts recognize that police transcripts might contain errors, they rely on judges and/or juries being able to check the transcript against the audio. This ignores well-established research findings that the very act of checking a transcript can cause the listener to hear in line with the transcript, even if it is demonstrably false. For this reason, linguists sometimes recommend that, to ensure accuracy, transcripts should be produced by independent experts in transcription.

However, mere independence may not be enough, and Love and Wright point out some important caveats around this recommendation. They had eight trained transcribers produce transcripts of poor-quality forensic-like audio—finding huge divergences in the content of the transcripts (< 3% of conversational turns were transcribed consistently by all eight participants). This demonstrates that transcribing poor-quality forensic audio needs not just expertise in linguistics, but a managed, evidence-based method.

Recently, a common response to any discussion of the difficulty of transcribing poor-quality audio has been: “Why not let AI do it?” Loakes investigates this suggestion, finding that, while modern automatic speech recognition (ASR) systems are extremely efficient at transcribing good-quality audio, their performance on poor-quality forensic-like audio is low. Even the best-performing system, Whisper, scored only around 50% accuracy, with others far lower.

Harrington also observed low scores for ASR transcripts of poor-quality forensic-like audio. Bridging two of the main areas considered in this Research Topic, she also trialed ASR on recordings of police interviews. The resulting transcripts, though not problem-free, score far higher than those of covert recordings, with errors easier to identify. Harrington makes innovative recommendations for how ASR could be used as a “first draft” interview transcript, to be refined via human transcribers.

Two papers consider the transcription and translation of forensic audio featuring languages other than English. Gilbert and Heydon look at translated transcripts of Vietnamese recordings used as evidence in a drug-related trial. They point out significant errors in the translations, but note that, unless the defense goes to the expense of hiring their own translator/interpreter, such errors are unlikely to be detected—and suggest that audio in languages other than English is often admitted with inadequately tested translations.

Lai presents results of a large national survey of the practices and concerns of translators and interpreters who undertake forensic casework across a wide range of languages. Here, too, results indicate a number of important deficiencies in current practice for translating forensic audio featuring languages other than English—and Lai makes valuable recommendations for improvement.

Finally, taking an authoritative overview of the key issues relevant to this Research Topic, Fraser provides a systematic review of interdisciplinary research on transcripts and transcription, and sets out a series of interacting factors that are known to affect a transcript's reliability. Using examples from a range of legal and academic situations, Fraser argues that, to ensure a transcript is suitable for its intended purpose, it is essential that all the factors be appropriately managed.

Taken as a whole, Capturing Talk amplifies two observations made in both Sarangi (1998) and Heffer et al. (2013) , which, though not the exclusive focus of any individual paper, are highlighted throughout the Research Topic. First, the strong role that context inevitably plays in the interpretation of a transcript implies that “recontextualization” (using a transcript in a context other than the one it was created in) is likely to change its interpretation. Second, even the most expert linguistic analysis of transcripts produced by others is not itself a neutral or “objective” activity. However, this does not mean that such analysis must be “subjective” in any limiting sense. Rather it indicates a need for transcripts to be produced and analyzed by independent, context-aware experts able to devote appropriate attention to all relevant factors.

Most importantly, all contributions to Capturing Talk emphasize that transcription is far from the simple transduction of “sounds” into letters that it is often assumed to be by those who have not studied its intricacies. It is a highly complex and fascinating Research Topic worthy of taking its place as a dedicated field of research in its own right, particularly in view of the widespread misconceptions and unhelpful language ideologies that still beset the institutional practices surrounding the transcription of spoken language.

Author contributions

HF: Writing – original draft, Writing – review & editing. KH: Writing – review & editing. FD: Writing – review & editing. DL: Writing – review & editing. ER: Writing – review & editing. MK: Writing – review & editing.

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Heffer, C., Rock, F., and Conley, J. (2013). Legal-Lay Communication: Textual Travels in the Law. Oxford: Oxford University Press.

Sarangi, S. (1998). Rethinking recontextualization in professional discourse studies: an epilogue. Text Talk 18, 301–318. doi: 10.1515/text.1.1998.18.2.301

Keywords: transcription, misconceptions about language and linguistics, language ideologies, forensic linguistics, forensic transcription, police interviews and interrogations, entextualization

Citation: Fraser H, Haworth K, Deamer F, Loakes D, Richardson E and Komter M (2024) Editorial: Capturing talk: the institutional practices surrounding the transcription of spoken language. Front. Commun. 9:1417465. doi: 10.3389/fcomm.2024.1417465

Received: 15 April 2024; Accepted: 22 April 2024; Published: 08 May 2024.

Edited and reviewed by: Mila Vulchanova, NTNU, Norway

Copyright © 2024 Fraser, Haworth, Deamer, Loakes, Richardson and Komter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Helen Fraser, helen.fraser@unimelb.edu.au



Thousands Believe Covid Vaccines Harmed Them. Is Anyone Listening?

All vaccines have at least occasional side effects. But people who say they were injured by Covid vaccines believe their cases have been ignored.

Shaun Barcavage, 54, a nurse practitioner in New York City, said that ever since his first Covid shot, standing up has sent his heart racing. Credit: Hannah Yoon for The New York Times

By Apoorva Mandavilli

Apoorva Mandavilli spent more than a year talking to dozens of experts in vaccine science, policymakers and people who said they had experienced serious side effects after receiving a Covid-19 vaccine.

  • Published May 3, 2024; updated May 4, 2024

Within minutes of getting the Johnson & Johnson Covid-19 vaccine, Michelle Zimmerman felt pain racing from her left arm up to her ear and down to her fingertips. Within days, she was unbearably sensitive to light and struggled to remember simple facts.

She was 37, with a Ph.D. in neuroscience, and until then could ride her bicycle 20 miles, teach a dance class and give a lecture on artificial intelligence, all in the same day. Now, more than three years later, she lives with her parents. Eventually diagnosed with brain damage, she cannot work, drive or even stand for long periods of time.

“When I let myself think about the devastation of what this has done to my life, and how much I’ve lost, sometimes it feels even too hard to comprehend,” said Dr. Zimmerman, who believes her injury is due to a contaminated vaccine batch.

The Covid vaccines, a triumph of science and public health, are estimated to have prevented millions of hospitalizations and deaths. Yet even the best vaccines produce rare but serious side effects. And the Covid vaccines have been given to more than 270 million people in the United States, in nearly 677 million doses.

Dr. Zimmerman’s account is among the more harrowing, but thousands of Americans believe they suffered serious side effects following Covid vaccination. As of April, just over 13,000 vaccine-injury compensation claims have been filed with the federal government — but to little avail. Only 19 percent have been reviewed. Only 47 of those were deemed eligible for compensation, and only 12 have been paid out, at an average of about $3,600.

Some scientists fear that patients with real injuries are being denied help and believe that more needs to be done to clarify the possible risks.

“At least long Covid has been somewhat recognized,” said Akiko Iwasaki, an immunologist and vaccine expert at Yale University. But people who say they have post-vaccination injuries are “just completely ignored and dismissed and gaslighted,” she added.

Michelle Zimmerman sits on the floor of a ballroom where she used to dance, with a pair of dancing shoes next to her. She wears a dark skirt and a red velvet shirt.

In interviews and email exchanges conducted over several months, federal health officials insisted that serious side effects were extremely rare and that their surveillance efforts were more than sufficient to detect patterns of adverse events.

“Hundreds of millions of people in the United States have safely received Covid vaccines under the most intense safety monitoring in U.S. history,” Jeff Nesbit, a spokesman for the Department of Health and Human Services, said in an emailed statement.

But in a recent interview, Dr. Janet Woodcock, a longtime leader of the Food and Drug Administration, who retired in February, said she believed that some recipients had experienced uncommon but “serious” and “life-changing” reactions beyond those described by federal agencies.

“I feel bad for those people,” said Dr. Woodcock, who became the F.D.A.’s acting commissioner in January 2021 as the vaccines were rolling out. “I believe their suffering should be acknowledged, that they have real problems, and they should be taken seriously.”

“I’m disappointed in myself,” she added. “I did a lot of things I feel very good about, but this is one of the few things I feel I just didn’t bring it home.”

Federal officials and independent scientists face a number of challenges in identifying potential vaccine side effects.

The nation’s fragmented health care system complicates detection of very rare side effects, a process that depends on an analysis of huge amounts of data. That’s a difficult task when a patient may be tested for Covid at Walgreens, get vaccinated at CVS, go to a local clinic for minor ailments and seek care at a hospital for serious conditions. Each place may rely on different health record systems.

There is no central repository of vaccine recipients, nor of medical records, and no easy way to pool these data. Reports to the largest federal database of so-called adverse events can be made by anyone, about anything. It’s not even clear what officials should be looking for.

“I mean, you’re not going to find ‘brain fog’ in the medical record or claims data, and so then you’re not going to find” a signal that it may be linked to vaccination, Dr. Woodcock said. If such a side effect is not acknowledged by federal officials, “it’s because it doesn’t have a good research definition,” she added. “It isn’t, like, malevolence on their part.”

The government’s understaffed compensation fund has paid so little because it officially recognizes few side effects for Covid vaccines. And vaccine supporters, including federal officials, worry that even a whisper of possible side effects feeds into misinformation spread by a vitriolic anti-vaccine movement.

‘I’m Not Real’

Patients who believe they experienced serious side effects say they have received little support or acknowledgment.

Shaun Barcavage, 54, a nurse practitioner in New York City who has worked on clinical trials for H.I.V. and Covid, said that ever since his first Covid shot, merely standing up sent his heart racing — a symptom suggestive of postural orthostatic tachycardia syndrome, a neurological disorder that some studies have linked to both Covid and, much less often, vaccination.

He also experienced stinging pain in his eyes, mouth and genitals, which has abated, and tinnitus, which has not.

“I can’t get the government to help me,” Mr. Barcavage said of his fruitless pleas to federal agencies and elected representatives. “I am told I’m not real. I’m told I’m rare. I’m told I’m coincidence.”

Renee France, 49, a physical therapist in Seattle, developed Bell’s palsy — a form of facial paralysis, usually temporary — and a dramatic rash that neatly bisected her face. Bell’s palsy is a known side effect of other vaccines, and it has been linked to Covid vaccination in some studies.

But Dr. France said doctors were dismissive of any connection to the Covid vaccines. The rash, a bout of shingles, debilitated her for three weeks, so Dr. France reported it to federal databases twice.

“I thought for sure someone would reach out, but no one ever did,” she said.

Similar sentiments were echoed in interviews, conducted over more than a year, with 30 people who said they had been harmed by Covid shots. They described a variety of symptoms following vaccination, some neurological, some autoimmune, some cardiovascular.

All said they had been turned away by physicians, told their symptoms were psychosomatic, or labeled anti-vaccine by family and friends — despite the fact that they supported vaccines.

Even leading experts in vaccine science have run up against disbelief and ambivalence.

Dr. Gregory Poland, 68, editor in chief of the journal Vaccine, said that a loud whooshing sound in his ears had accompanied every moment since his first shot, but that his entreaties to colleagues at the Centers for Disease Control and Prevention to explore the phenomenon, tinnitus, had led nowhere.

He received polite responses to his many emails, but “I just don’t get any sense of movement,” he said.

“If they have done studies, those studies should be published,” Dr. Poland added. In despair that he might “never hear silence again,” he has sought solace in meditation and his religious faith.

Dr. Buddy Creech, 50, who led several Covid vaccine trials at Vanderbilt University, said his tinnitus and racing heart lasted about a week after each shot. “It’s very similar to what I experienced during acute Covid, back in March of 2020,” Dr. Creech said.

Research may ultimately find that most reported side effects are unrelated to the vaccine, he acknowledged. Many can be caused by Covid itself.

“Regardless, when our patients experience a side effect that may or may not be related to the vaccine, we owe it to them to investigate that as completely as we can,” Dr. Creech said.

Federal health officials say they do not believe that the Covid vaccines caused the illnesses described by patients like Mr. Barcavage, Dr. Zimmerman and Dr. France. The vaccines may cause transient reactions, such as swelling, fatigue and fever, according to the C.D.C., but the agency has documented only four serious but rare side effects.

Two are associated with the Johnson & Johnson vaccine, which is no longer available in the United States: Guillain-Barré syndrome, a known side effect of other vaccines, including the flu shot; and a blood-clotting disorder.

The C.D.C. also links mRNA vaccines made by Pfizer-BioNTech and Moderna to heart inflammation, or myocarditis, especially in boys and young men. And the agency warns of anaphylaxis, or severe allergic reaction, which can occur after any vaccination.

Listening for Signals

Agency scientists are monitoring large databases containing medical information on millions of Americans for patterns that might suggest a hitherto unknown side effect of vaccination, said Dr. Demetre Daskalakis, director of the C.D.C.’s National Center for Immunization and Respiratory Diseases.

“We toe the line by reporting the signals that we think are real signals and reporting them as soon as we identify them as signals,” he said. The agency’s systems for monitoring vaccine safety are “pretty close” to ideal, he said.

Those national surveillance efforts include the Vaccine Adverse Event Reporting System (VAERS). It is the largest database, but also the least reliable: Reports of side effects can be submitted by anyone and are not vetted, so they may be subject to bias or manipulation.

The system contains roughly one million reports regarding Covid vaccination, the vast majority for mild events, according to the C.D.C.

Federal researchers also comb through databases that combine electronic health records and insurance claims on tens of millions of Americans. The scientists monitor the data for 23 conditions that may occur following Covid vaccination. Officials remain alert to others that may pop up, Dr. Daskalakis said.

But there are gaps, some experts noted. The Covid shots administered at mass vaccination sites were not recorded in insurance claims databases, for example, and medical records in the United States are not centralized.

“It’s harder to see signals when you have so many people, and things are happening in different parts of the country, and they’re not all collected in the same system,” said Rebecca Chandler, a vaccine safety expert at the Coalition for Epidemic Preparedness Innovations.

An expert panel convened by the National Academies concluded in April that for the vast majority of side effects, there was not enough data to accept or reject a link.

Asked at a recent congressional hearing whether the nation’s vaccine-safety surveillance was sufficient, Dr. Peter Marks, director of the F.D.A.’s Center for Biologics Evaluation and Research, said, “I do believe we could do better.”

In some countries with centralized health care systems, officials have actively sought out reports of serious side effects of Covid vaccines and reached conclusions that U.S. health authorities have not.

In Hong Kong, the government analyzed centralized medical records of patients after vaccination and paid people to come forward with problems. The strategy identified “a lot of mild cases that other countries would not otherwise pick up,” said Ian Wong, a researcher at the University of Hong Kong who led the territory’s vaccine safety efforts.

That included the finding that in rare instances — about seven per million doses — the Pfizer-BioNTech vaccine triggered a bout of shingles serious enough to require hospitalization.

The European Medicines Agency has linked the Pfizer and Moderna vaccines to facial paralysis, tingling sensations and numbness. The E.M.A. also counts tinnitus as a side effect of the Johnson & Johnson vaccine, although the American health agencies do not. There are more than 17,000 reports of tinnitus following Covid vaccination in VAERS.

Are the two linked? It’s not clear. As many as one in four adults has some form of tinnitus. Stress, anxiety, grief and aging can lead to the condition, as can infections like Covid itself and the flu.

There is no test or scan for tinnitus, and scientists cannot easily study it because the inner ear is tiny, delicate and encased in bone, said Dr. Konstantina Stankovic, an otolaryngologist at Stanford University.

Still, an analysis of health records from nearly 2.6 million people in the United States found that about 0.04 percent, or about 1,000, were diagnosed with tinnitus within three weeks of their first mRNA shot. In March, researchers in Australia published a study linking tinnitus and vertigo to the vaccines.

The F.D.A. is monitoring reports of tinnitus, but “at this time, the available evidence does not suggest a causal association with the Covid-19 vaccines,” the agency said in a statement.

Despite surveillance efforts, U.S. officials were not the first to identify a significant Covid vaccine side effect: myocarditis in young people receiving mRNA vaccines. It was Israeli authorities who first raised the alarm in April 2021. Officials in the United States said at the time that they had not seen a link.

On May 22, 2021, news broke that the C.D.C. was investigating “relatively few” cases of myocarditis. By June 23, the number of myocarditis reports in VAERS had risen to more than 1,200 — a hint that it is important to tell doctors and patients what to look for.

Later analyses showed that the risk for myocarditis and pericarditis, a related condition, is highest after a second dose of an mRNA Covid vaccine in adolescent males aged 12 to 17 years.

In many people, vaccine-related myocarditis is transient. But some patients continue to experience pain, breathlessness and depression, and some show persistent changes on heart scans. The C.D.C. has said there were no confirmed deaths related to myocarditis, but in fact there have been several accounts of deaths reported post-vaccination.

Pervasive Misinformation

The rise of the anti-vaccine movement has made it difficult for scientists, in and out of government, to candidly address potential side effects, some experts said. Much of the narrative on the purported dangers of Covid vaccines is patently false, or at least exaggerated, cooked up by savvy anti-vaccine campaigns.

Questions about Covid vaccine safety are core to Robert F. Kennedy Jr.’s presidential campaign. Citing debunked theories about altered DNA, Florida’s surgeon general has called for a halt to Covid vaccination in the state.

“The sheer nature of misinformation, the scale of misinformation, is staggering, and anything will be twisted to make it seem like it’s not just a devastating side effect but proof of a massive cover-up,” said Dr. Joshua Sharfstein, a vice dean at Johns Hopkins University.

Among the hundreds of millions of Americans who were immunized for Covid, some number would have had heart attacks or strokes anyway. Some women would have miscarried. How to distinguish those caused by the vaccine from those that are coincidences? The only way to resolve the question is intense research.

But the National Institutes of Health is conducting virtually no studies on Covid vaccine safety, several experts noted. William Murphy, a cancer researcher who worked at the N.I.H. for 12 years, has been prodding federal health officials to initiate these studies since 2021.

The officials each responded with “that very tired mantra: ‘But the virus is worse,’” Dr. Murphy recalled. “Yes, the virus is worse, but that doesn’t obviate doing research to make sure that there may be other options.”

A deeper understanding of possible side effects, and who is at risk for them, could have implications for the design of future vaccines, or may indicate that for some young and healthy people, the benefit of Covid shots may no longer outweigh the risks — as some European countries have determined.

Thorough research might also speed assistance to thousands of Americans who say they were injured.

The federal government has long run the National Vaccine Injury Compensation Program, designed to compensate people who suffer injuries after vaccination. Established more than three decades ago, the program sets no limit on the amounts awarded to people found to have been harmed.

But Covid vaccines are not covered by that fund because Congress has not made them subject to the excise tax that pays for it. Some lawmakers have introduced bills to make the change.

Instead, claims regarding Covid vaccines go to the Countermeasures Injury Compensation Program. Intended for public health emergencies, this program has narrow criteria for paying out and sets a limit of $50,000, with stringent standards of proof.

It requires applicants to prove within a year of the injury that it was “the direct result” of getting the Covid vaccine, based on “compelling, reliable, valid, medical, and scientific evidence.”

The program had only four staff members at the beginning of the pandemic, and now has 35 people evaluating claims. Still, it has reviewed only a fraction of the 13,000 claims filed, and has paid out only a dozen.

Dr. Ilka Warshawsky, a 58-year-old pathologist, said she lost all hearing in her right ear after a Covid booster shot. But hearing loss is not a recognized side effect of Covid vaccination.

The compensation program for Covid vaccines sets a high bar for proof, she said, yet offers little information on how to meet it: “These adverse events can be debilitating and life-altering, and so it’s very upsetting that they’re not acknowledged or addressed.”

Dr. Zimmerman, the neuroscientist, submitted her application in October 2021 and provided dozens of supporting medical documents. She received a claim number only in January 2023.

In adjudicating her claim for workers’ compensation, Washington State officials accepted that Covid vaccination caused her injury, but she has yet to get a decision from the federal program.

One of her therapists recently told her she might never be able to live independently again.

“That felt like a devastating blow,” Dr. Zimmerman said. “But I’m trying not to lose hope there will someday be a treatment and a way to cover it.”

Apoorva Mandavilli is a reporter focused on science and global health. She was a part of the team that won the 2021 Pulitzer Prize for Public Service for coverage of the pandemic.
