Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

Samples

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size you need depends on the size of the population and on how precise you want your results to be. You can use an online sample size calculator to work out how many responses you need.
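
As an illustration, here is a minimal sketch of the calculation such calculators typically perform, assuming Cochran’s formula with a finite-population correction. The confidence level and margin of error shown are example choices, not recommendations:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Estimate how many responses are needed.

    Uses Cochran's formula with a finite-population correction.
    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumption about response variability.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)       # finite-population correction
    return math.ceil(n)

print(sample_size(10_000))  # -> 370 responses for +/-5% at 95% confidence
```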

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Interviews

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include any of the following formats (sketched in code after this list):

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)
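
To make these formats concrete, here is a minimal sketch of how closed-ended questions might be represented as a data structure. The ClosedQuestion class and its fields are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class ClosedQuestion:
    """A hypothetical representation of a closed-ended survey question."""
    text: str
    options: list[str]
    multiple: bool = False  # True when more than one answer may be selected

binary = ClosedQuestion("Do you agree?", ["yes", "no"])
likert = ClosedQuestion(
    "The service met my expectations.",
    ["strongly agree", "agree", "neutral", "disagree", "strongly disagree"],
)
interests = ClosedQuestion(
    "Which leisure interests apply to you?",
    ["sport", "music", "reading", "travel"],
    multiple=True,  # a list of options with multiple answers possible
)
```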

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.
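
As a brief illustration of both steps, the sketch below cleanses a toy set of responses and then codes the open-ended answers into themes using pandas. The column names and theme labels are hypothetical:

```python
import pandas as pd

# Hypothetical raw responses: 'q1' is closed-ended, 'feedback' is open-ended.
raw = pd.DataFrame({
    "q1": ["agree", "disagree", None, "agree"],
    "feedback": ["too expensive", "great support", "slow delivery", None],
})

# Cleanse: remove incomplete responses.
clean = raw.dropna()

# Code: assign each open-ended response a category label (theme).
themes = {"too expensive": "price", "great support": "service", "slow delivery": "logistics"}
clean = clean.assign(theme=clean["feedback"].map(themes))

print(clean["theme"].value_counts())
```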

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have a clear rank order but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
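
For example, a rank-based test such as the Mann-Whitney U is a common choice when treating a single Likert item as ordinal, while a t-test is often applied to combined scale scores treated as interval. A minimal sketch with SciPy, using made-up responses:

```python
from scipy import stats

# Hypothetical Likert responses (1-5) from two groups of respondents.
group_a = [4, 5, 3, 4, 5, 4, 2, 5]
group_b = [2, 3, 3, 1, 2, 4, 2, 3]

# Ordinal treatment: a rank-based (non-parametric) test.
u_stat, p_ordinal = stats.mannwhitneyu(group_a, group_b)

# Interval treatment: an independent-samples t-test.
t_stat, p_interval = stats.ttest_ind(group_a, group_b)

print(f"Mann-Whitney p = {p_ordinal:.3f}, t-test p = {p_interval:.3f}")
```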

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.


Survey Research — Types, Methods and Example Questions



Survey research

The world of research is vast and complex, but with the right tools and understanding, it's an open field of discovery. Welcome to a journey into the heart of survey research.

What is survey research?

Survey research is the lens through which we view the opinions, behaviors, and experiences of a population. Think of it as the research world's detective, cleverly sleuthing out the truths hidden beneath layers of human complexity.

Why is survey research important?

Survey research is a Swiss Army Knife in a researcher's toolbox. It’s adaptable, reliable, and incredibly versatile, but its real power? It gives voice to the silent majority. Whether it's understanding customer preferences or assessing the impact of a social policy, survey research is the bridge between unanswered questions and insightful data.

Let's embark on this exploration, armed with the spirit of openness, a sprinkle of curiosity, and the thirst for making knowledge accessible. As we journey further into the realm of survey research, we'll delve deeper into the diverse types of surveys, innovative data collection methods, and the rewards and challenges that come with them.

Types of survey research

Survey research is like an artist's palette, offering a variety of types to suit your unique research needs. Each type paints a different picture, giving us fascinating insights into the world around us.

  • Cross-Sectional Surveys: Capture a snapshot of a population at a specific moment in time. They're your trusty Polaroid camera, freezing a moment for analysis and understanding.
  • Longitudinal Surveys: Track changes over time, much like a time-lapse video. They help to identify trends and patterns, offering a dynamic perspective of your subject.
  • Descriptive Surveys: Draw a detailed picture of the current state of affairs. They're your magnifying glass, examining the prevalence of a phenomenon or attitudes within a group.
  • Analytical Surveys: Deep dive into the reasons behind certain outcomes. They're the research world's version of Sherlock Holmes, unraveling the complex web of cause and effect.

But, what method should you choose for data collection? The plot thickens, doesn't it? Let's unravel this mystery in our next section.

Survey research and data collection methods

Data collection in survey research is an art form, and there's no one-size-fits-all method. Think of it as your paintbrush: each stroke represents a different way of capturing data.

  • Online Surveys: In the digital age, online surveys have surged in popularity. They're fast, cost-effective, and can reach a global audience. But like a mysterious online acquaintance, respondents may not always be who they say they are.
  • Mail Surveys: Like a postcard from a distant friend, mail surveys have a certain charm. They're great for reaching respondents without internet access. However, they’re slower and have lower response rates. They’re a test of patience and persistence.
  • Telephone Surveys: With the sound of a ringing phone, the human element enters the picture. Great for reaching a diverse audience, they bring a touch of personal connection. But, remember, not all are fans of unsolicited calls.
  • Face-to-Face Surveys: These are the heart-to-heart conversations of the survey world. While they require more resources, they're the gold standard for in-depth, high-quality data.

As we journey further, let’s weigh the pros and cons of survey research.

Advantages and disadvantages of survey research

Every hero has its strengths and weaknesses, and survey research is no exception. Let's unwrap the gift box of survey research to see what lies inside.

Advantages:

  • Versatility: Like a superhero with multiple powers, surveys can be adapted to different topics, audiences, and research needs.
  • Accessibility: With online surveys, geographical boundaries dissolve. We can reach out to the world from our living room.
  • Anonymity: Like a confessional booth, surveys allow respondents to share their views without fear of judgment.

Disadvantages:

  • Response Bias: Ever met someone who says what you want to hear? Survey respondents can be like that too.
  • Limited Depth: Like a puddle after a rainstorm, some surveys only skim the surface of complex issues.
  • Nonresponse: Sometimes, potential respondents play hard to get, skewing the data.

Survey research may have its challenges, but it also presents opportunities to learn and grow. As we forge ahead on our journey, we dive into the design process of survey research.

Limitations of survey research

Every research method has its limitations, like bumps on the road to discovery. But don't worry, with the right approach, these challenges become opportunities for growth.

Misinterpretation: Sometimes, respondents might misunderstand your questions, like a badly translated novel. To overcome this, keep your questions simple and clear.

Social Desirability Bias: People often want to present themselves in the best light. They might answer questions in a way that portrays them positively, even if it's not entirely accurate. Overcome this by ensuring anonymity and emphasizing honesty.

Sample Representation: If your survey sample isn't representative of the population you're studying, it can skew your results. Aiming for a diverse sample can mitigate this.

Now that we're aware of the limitations, let's delve into the world of survey design.


Survey research design

Designing a survey is like crafting a roadmap to discovery. It's an intricate process that involves careful planning, innovative strategies, and a deep understanding of your research goals. Let's get started.

Approach and Strategy

Your approach and strategy are the compasses guiding your survey research. Clear objectives, defined research questions, and an understanding of your target audience lay the foundation for a successful survey.

Panel

The panel is the heartbeat of your survey, the respondents who breathe life into your research. Selecting a representative panel ensures your research is accurate and inclusive.

9 Tips on Building the Perfect Survey Research Questionnaire

  • Keep It Simple: Clear and straightforward questions lead to accurate responses.
  • Make It Relevant: Ensure every question ties back to your research objectives.
  • Order Matters: Start with easy questions to build rapport and save sensitive ones for later.
  • Avoid Double-Barreled Questions: Stick to one idea per question.
  • Offer a Balanced Scale: For rating scales, provide an equal number of positive and negative options.
  • Provide a ‘Don't Know’ Option: This prevents guessing and keeps your data accurate.
  • Pretest Your Survey: A pilot run helps you spot any issues before the final launch.
  • Keep It Short: Respect your respondents' time.
  • Make It Engaging: Keep your respondents interested with a mix of question types.

Survey research examples and questions

Examples serve as a bridge connecting theoretical concepts to real-world scenarios. Let's consider a few practical examples of survey research across various domains.

User Experience (UX)

Imagine being a UX designer at a budding tech start-up. Your app is gaining traction, but to keep your user base growing and engaged, you must ensure that your app's UX is top-notch. In this case, a well-designed survey could be a beacon, guiding you toward understanding user behavior, preferences, and pain points.

Here's an example of how such a survey could look:

  • "On a scale of 1 to 10, how would you rate the ease of navigating our app?"
  • "How often do you encounter difficulties while using our app?"
  • "What features do you use most frequently in our app?"
  • "What improvements would you suggest for our app?"
  • "What features would you like to see in future updates?"

This line of questioning, while straightforward, provides invaluable insights. It enables the UX designer to identify strengths to capitalize on and weaknesses to improve, ultimately leading to a product that resonates with users.

Psychology and Ethics in survey research

The realm of survey research is not just about data and numbers, but it's also about understanding human behavior and treating respondents ethically.

Psychology: In-depth understanding of cognitive biases and social dynamics can profoundly influence survey design. Let's take the 'Recency Effect,' a psychological principle stating that people tend to remember recent events more vividly than those in the past. While framing questions about user experiences, this insight could be invaluable.

For example, a question like "Can you recall an instance in the past week when our customer service exceeded your expectations?" is likely to fetch more accurate responses than asking about an event several months ago.

Ethics: On the other hand, maintaining privacy, confidentiality, and informed consent is more than ethical - it's fundamental to the integrity of the research process.

Imagine conducting a sensitive survey about workplace culture. Assuring respondents that their responses will remain confidential and anonymous can encourage more honest answers. An introductory note stating these assurances, along with a clear outline of the survey's purpose, can help build trust with your respondents.

Survey research software

In the age of digital information, survey research software has become a trusted ally for researchers. It simplifies complex processes like data collection, analysis, and visualization, democratizing research and making it more accessible to a broad audience.

LimeSurvey, our innovative, user-friendly tool, brings this vision to life. It stands at the crossroads of simplicity and power, embodying the essence of accessible survey research.

Whether you're a freelancer exploring new market trends, a psychology student curious about human behavior, or an HR officer aiming to improve company culture, LimeSurvey empowers you to conduct efficient, effective research. Its suite of features and intuitive design matches your research pace, allowing your curiosity to take the front seat.

For instance, consider you're a researcher studying consumer behavior across different demographics. With LimeSurvey, you can easily design demographic-specific questions, distribute your survey across various channels, collect responses in real-time, and visualize your data through intuitive dashboards. This synergy of tools and functionalities makes LimeSurvey a perfect ally in your quest for knowledge.

If you've come this far, we can sense your spark of curiosity. Are you eager to take the reins and conduct your own survey research? Are you ready to embrace the simple yet powerful tool that LimeSurvey offers? If so, we can't wait to see where your journey takes you next!

In the world of survey research, there's always more to explore, more to learn and more to discover. So, keep your curiosity alive, stay open to new ideas, and remember, your exploration is just beginning!

We hope that our exploration has been as enlightening for you as it was exciting for us. Remember, the journey doesn't end here. With the power of knowledge and the right tools in your hands, there's no limit to what you can achieve. So, let your curiosity be your guide and dive into the fascinating world of survey research with LimeSurvey! Try it out for free now!

Happy surveying!



Survey Research: Definition, Examples and Methods


Survey Research is a quantitative research method used for collecting data from a set of respondents. It has been one of the most used methodologies in the industry for several years, due to its multiple benefits and advantages in collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what its customers think about its products or services in order to make better business decisions. Researchers can conduct research in multiple ways, but surveys are proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate the participants to respond. Credible survey research can give these businesses access to a vast information bank. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, and the collection and analysis of their data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, it’s the primary step towards obtaining quick information about mainstream topics; more rigorous and detailed quantitative methods (such as surveys and polls) or qualitative methods (such as focus groups and on-call interviews) can then follow. There are many situations where researchers can conduct research using a blend of both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be grouped according to two critical factors: the medium used to conduct the survey and the time involved in conducting the research. Based on the medium, there are three main survey research methods:

  • Online/Email: Online survey research is one of the most popular survey research methods today. The cost involved in online survey research is extremely minimal, and the responses gathered are highly accurate.
  • Phone: Survey research conducted over the telephone (CATI survey) can be useful in collecting data from a more extensive section of the target population, though phone surveys tend to cost more and take longer than other mediums.
  • Face-to-face: Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting survey research over a continuum of time and spread across years and decades. The data collected using this survey research method from one time period to another is qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study .
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research is also classified according to the sampling method used to form the sample: probability or non-probability sampling. Probability sampling is a sampling method in which the researcher chooses elements based on probability theory, so every individual in the population has a known chance of being selected. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, and stratified random sampling (two of these are sketched in code after the list below). Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form samples.


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
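
As referenced above, here is a minimal sketch of two probability sampling methods, simple random and stratified random sampling, using pandas. The sampling frame and its age_group column are hypothetical:

```python
import pandas as pd

# Hypothetical sampling frame of 1,000 people with a stratification variable.
frame = pd.DataFrame({
    "id": range(1000),
    "age_group": ["18-24", "25-34", "35-54", "55+"] * 250,
})

# Simple random sampling: every member has an equal chance of selection.
simple = frame.sample(n=100, random_state=42)

# Stratified random sampling: sample 10% within each age group so the
# sample mirrors the population's composition.
stratified = frame.groupby("age_group", group_keys=False).sample(frac=0.10, random_state=42)

print(stratified["age_group"].value_counts())  # 25 respondents per stratum
```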

Process of implementing survey research methods:

  • Decide survey questions: Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice or closed-ended questions. If researchers need to obtain details about specific issues, they can include open-ended questions in the questionnaire. Ideally, surveys should include a smart balance of open-ended and closed-ended questions. Use question types like the Likert scale, semantic scale, and Net Promoter Score question to avoid fence-sitting.


  • Finalize a target audience: Send out relevant surveys as per the target audience and filter out irrelevant questions as per the requirement. Survey research is most instrumental when the sample is drawn from a well-defined target population; this way, results reflect the desired market and can be generalized to the entire population.


  • Send out surveys via decided mediums: Distribute the surveys to the target audience and patiently wait for the feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results: Analyze the feedback in real time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP analysis, TURF analysis, conjoint analysis, cross tabulation, and many other survey feedback analysis methods can be used to spot and shed light on respondent behavior (a cross-tabulation sketch follows this list). Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.
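
As a small example of the analysis step, the sketch below cross-tabulates satisfaction against a demographic variable with pandas. The data and column names are made up for illustration:

```python
import pandas as pd

# Hypothetical cleaned survey responses.
df = pd.DataFrame({
    "age_group": ["18-24", "18-24", "25-34", "25-34", "35+", "35+"],
    "satisfied": ["yes", "no", "yes", "yes", "no", "yes"],
})

# Cross tabulation: how satisfaction breaks down within each demographic,
# normalised so each row shows proportions rather than raw counts.
table = pd.crosstab(df["age_group"], df["satisfied"], normalize="index")
print(table)
```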

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats as per the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying this out so that the study can be structured, planned, and executed to perfection.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries: If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest about their feedback, opinions, and comments. Online and mobile surveys have established a reputation for privacy, and because of this, more and more respondents feel free to put forth honest feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service etc., can be put on the table for discussion. A way you can do it is by including open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements: An organization can establish the target audience’s attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. By doing this, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables (illustrated in the sketch after this list):

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to what interval scale orders, this scale has a fixed starting point, i.e., the actual zero value is present.
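
A minimal sketch of how the scale type constrains analysis, assuming the mapping below (which follows the definitions above). The ordered-categorical example shows that ordinal data supports ranking but not arithmetic:

```python
import pandas as pd

# Assumed mapping from measurement scale to meaningful summary statistics.
valid_stats = {
    "nominal":  ["mode", "frequency counts"],
    "ordinal":  ["mode", "median", "percentiles"],
    "interval": ["mode", "median", "mean", "standard deviation"],
    "ratio":    ["mode", "median", "mean", "ratios between values"],
}

# An ordinal variable modelled as an ordered categorical: ranking works,
# but arithmetic (e.g., a mean) is not defined.
ratings = pd.Series(["low", "high", "medium", "high"]).astype(
    pd.CategoricalDtype(["low", "medium", "high"], ordered=True)
)
print(ratings.min(), ratings.max())  # -> low high
```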

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers gain useful, trustworthy data that they can use to improve the organization’s ROI.

Other benefits of survey research are:

  • Minimum investment: Mobile and online surveys require minimal financial investment per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection: You can conduct surveys via various mediums, like online and mobile surveys. You can further classify them into qualitative mediums, like focus groups and interviews, and quantitative mediums, like customer-centric surveys. Thanks to offline response collection options, researchers can also conduct surveys in remote areas with limited internet connectivity, which makes data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure as the respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking to receive explicit responses for its survey research must mention that it will be confidential.

Survey research design

Researchers implement a survey research design in cases where there is a limited cost involved and there is a need to access details easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through tactfully designed survey research can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide on the aim of the research: There can be multiple reasons for a researcher to conduct a survey, but they need to decide on a purpose for the research. This is the primary stage of survey research, as it can mold the entire path of a survey and impact its results.
  • Filter the sample from the target population: ‘Who to target?’ is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of the sample are and how useful their opinions are; the quality of respondents in a sample matters more than the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides on which questions to include in a study, they can send it across to the selected sample . Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals: Before conducting any market research or creating a particular plan, set your SMART goals. What is it that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey: Choose the 15-20 best, most relevant questions. Frame each as a different question type based on the kind of answer you would like to gather. Create a survey using different types of questions, such as multiple-choice, rating scale, and open-ended. Look at more survey examples and the four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey: Once your survey is ready, it is time to share and distribute it to the right audience. You can share handouts, or distribute the survey via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report: Now is the time to share your analysis. At this stage, you should present all the responses gathered from the survey in a fixed format, and the reader/customer must get clarity about the goal you were trying to achieve with the study. Address questions such as: has the product or service been used and preferred? Do respondents prefer one product over another? Any recommendations?

Having a tool that helps you complete all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world collect data in a simple and effective way, in addition to offering a wide range of solutions for taking advantage of that data in the best possible way.

From dashboards and advanced analysis tools to automation and dedicated functions, QuestionPro has everything you need to execute your research projects effectively. Uncover insights that matter the most!


A Comprehensive Guide to Survey Research Methodologies

For decades, researchers and businesses have used survey research to produce statistical data and explore ideas. The survey process is simple: ask questions and analyze the responses to make decisions. Data is what makes the difference between a valid and an invalid statement, as the American statistician W. Edwards Deming said:

“Without data, you’re just another person with an opinion.” - W. Edwards Deming

In this article, we will discuss what survey research is, its brief history, types, common uses, benefits, and the step-by-step process of designing a survey.

What is Survey Research?

A survey is a research method that is used to collect data from a group of respondents in order to gain insights and information regarding a particular subject. It’s an excellent method to gather opinions and understand how and why people feel a certain way about different situations and contexts.

Brief History of Survey Research

Survey research may have its roots in the American and English "social surveys" conducted around the turn of the 20th century. These surveys were mainly conducted by researchers and reformers to document the extent of social issues such as poverty. (1) Despite being a relatively young field compared to many scientific domains, survey research has experienced three stages of development (2):

  • First Era (1930-1960)
  • Second Era (1960-1990)
  • Third Era (1990 onwards)

Over the years, survey research adapted to the changing times and technologies. By exploiting the latest technologies, researchers can gain access to the right population from anywhere in the world, analyze the data like never before, and extract useful information.

Survey Research Methods & Types

Survey research can be classified into seven categories based on objective, concept testing, data source, research method, deployment method, distribution channel, and frequency of deployment.


Surveys based on Objective

Exploratory Survey Research

Exploratory survey research is aimed at diving deeper into research subjects and finding out more about their context. It’s important for marketing or business strategy and the focus is to discover ideas and insights instead of gathering statistical data.

Generally, exploratory survey research is composed of open-ended questions that allow respondents to express their thoughts and perspectives. The final responses present information from various sources that can lead to fresh initiatives.

Predictive Survey Research

Predictive survey research is also called causal survey research. It is preplanned, structured, and quantitative in nature. It's often referred to as conclusive research, as it tries to explain the cause-and-effect relationship between different variables. The objective is to understand which variables are causes and which are effects, and the nature of the relationship between the two.

Descriptive Survey Research

Descriptive survey research is largely observational and is ideal for gathering numeric data. Due to its quantitative nature, it is often compared to exploratory survey research; the difference between the two is that descriptive research is structured and pre-planned.

The idea behind descriptive research is to describe the mindset and opinion of a particular group of people on a given subject. The questions are typically multiple choice, and respondents must choose from predefined categories. With predefined choices you don't get unique insights; rather, you get statistically inferable data.

Survey Research Types based on Concept Testing

Monadic Concept Testing

Monadic testing is a survey research methodology in which the respondents are split into multiple groups and each group is asked questions about a separate concept in isolation. Generally, monadic surveys are hyper-focused on a particular concept and shorter in duration. The important thing in monadic surveys is to avoid getting off-topic or exhausting the respondents with too many questions.

Sequential Monadic Concept Testing

Another approach to monadic testing is sequential monadic testing. In sequential monadic surveys, groups of respondents are still surveyed in isolation. However, instead of surveying three groups on three different concepts, the researchers survey the same group of people on the distinct concepts one after another. In a sequential monadic survey, at least two concepts are included (in random order), and the same questions are asked for each concept to eliminate bias.
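To make the difference concrete, here is a minimal Python sketch (with made-up respondent IDs and concept names) of how a researcher might assign groups for a monadic versus a sequential monadic design:

```python
import random

random.seed(7)  # reproducible assignment

respondents = [f"resp_{i:03d}" for i in range(1, 91)]
concepts = ["Concept A", "Concept B", "Concept C"]

random.shuffle(respondents)
size = len(respondents) // len(concepts)
groups = [respondents[i * size:(i + 1) * size] for i in range(len(concepts))]

# Monadic design: each group evaluates exactly one concept in isolation.
monadic_plan = dict(zip(concepts, groups))

# Sequential monadic design: each group evaluates every concept, in a
# random order per group, to spread out order effects.
sequential_plan = {
    f"group_{g + 1}": random.sample(concepts, k=len(concepts))
    for g in range(len(concepts))
}
print(sequential_plan)
```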

Based on Data Source

Primary Data

Data obtained directly from the source or target population is referred to as primary survey data. When it comes to primary data collection, researchers usually devise a set of questions and invite people with knowledge of the subject to respond. The main sources of primary data are interviews, questionnaires, surveys, and observation methods.

 Compared to secondary data, primary data is gathered from first-hand sources and is more reliable. However, the process of primary data collection is both costly and time-consuming.

Secondary Data

Survey research is generally used to collect first-hand information from respondents. However, surveys can also be designed to collect and process secondary data: data collected from third-party sources, or primary data that was collected in the past.

 This type of data is usually generic, readily available, and cheaper than primary data collection. Some common sources of secondary data are books, data collected from older surveys, online data, and data from government archives. Beware that you might compromise the validity of your findings if you end up with irrelevant or inflated data.

Based on Research Method

Quantitative Research

Quantitative research is a popular research methodology that is used to collect numeric data in a systematic investigation. It’s frequently used in research contexts where statistical data is required, such as sciences or social sciences. Quantitative research methods include polls, systematic observations, and face-to-face interviews.

Qualitative Research

Qualitative research is a research methodology where you collect non-numeric data from research participants. In this context, the participants are not restricted to a predefined set of answers and can provide open-ended information. Some common qualitative research methods include focus groups, one-on-one interviews, observations, and case studies.

Based on Deployment Method

Online Surveys

With technology advancing rapidly, the most popular method of survey research is an online survey. With the internet, you can not only reach a broader audience but also design and customize a survey and deploy it from anywhere. Online surveys have outperformed offline survey methods as they are less expensive and allow researchers to easily collect and analyze data from a large sample.

Paper or Print Surveys

As the name suggests, paper or print surveys use the traditional paper and pencil approach to collect data. Before the invention of computers, paper surveys were the survey method of choice.

Though many would assume that surveys are no longer conducted on paper, it's still a reliable method of collecting information during field research and data collection. However, unlike online surveys, paper surveys are expensive and require extra human resources.

Telephonic Surveys

Telephonic surveys are conducted over the telephone, where a researcher asks a series of questions to the respondent on the other end. Contacting respondents over the telephone requires less effort and fewer human resources, and is less expensive.

What makes telephonic surveys debatable is that people are often reluctant to give information over a phone call. Additionally, the success of such surveys depends largely on whether people are willing to invest their time in a phone call answering questions.

One-on-one Surveys

One-on-one surveys, also known as face-to-face surveys, are interviews in which the researcher and respondent interact directly. Interacting with the respondent in person introduces the human factor into the survey.

Face-to-face interviews are useful when the researcher wants to discuss something personal with the respondent. The response rates in such surveys are always higher as the interview is conducted in person. However, these surveys are quite expensive, and their success depends on the knowledge and experience of the researcher.

Based on Distribution

Email Surveys

The easiest and most common way of conducting online surveys is sending out an email. Sending out surveys via email yields a higher response rate, as your target audience already knows about your brand and is likely to engage.

Buy Survey Responses

Purchasing survey responses also yields a higher response rate, as the respondents have signed up to take surveys. Businesses often purchase survey samples to conduct extensive research. Here, the target audience is often pre-screened to check whether they are qualified to take part in the research.

Embedding Survey on a Website

Embedding surveys on a website is another excellent way to collect information. It allows your website visitors to take part in a survey without ever leaving the website and can be done while a person is entering or exiting the website.

Post the Survey on Social Media

Social media is an excellent medium for reaching a broad range of audiences. You can publish your survey as a link on social media, and people who follow the brand can take part and answer questions.

Based on Frequency of Deployment

Cross-sectional Studies

Cross-sectional studies are administered to a small sample from a large population within a short period of time. They give researchers a peek into what the respondents are thinking at a given moment. The surveys are usually short, precise, and specific to a particular situation.

Longitudinal Surveys

Longitudinal surveys are an extension of cross-sectional studies where researchers make an observation and collect data over extended periods of time. This type of survey can be further divided into three types:

  • Trend surveys allow researchers to understand how the thinking of respondents changes over time.
  • Panel surveys are administered to the same group of people over multiple years. These are usually expensive, and researchers must stick with their panel to gather unbiased opinions.
  • In cohort surveys, researchers identify a specific category of people and regularly survey them. Unlike panel surveys, the same people do not need to take part over the years, but each individual must fall into the researcher's primary interest category.

Retrospective Survey

Retrospective surveys allow researchers to ask questions that gather data about respondents' past events and beliefs. Since retrospective surveys also cover data spanning years, they are similar to longitudinal surveys, except that retrospective surveys are shorter and less expensive.

Why Should You Conduct Research Surveys?

“In God we trust. All others must bring data” - W. Edwards Deming

In the information age, survey research is of utmost importance and essential for understanding the opinion of your target population. Whether you're launching a new product or conducting a social study, surveys can be used to collect specific information from a defined set of respondents. The data collected via surveys can then be used by organizations to make informed decisions.

Furthermore, compared to other research methods, surveys are relatively inexpensive, even if you're giving out incentives. Compared to older methods such as telephonic or paper surveys, online surveys cost less and yield more responses.

What makes surveys useful is that they describe the characteristics of a large population. With a larger sample size, you can expect more accurate results. However, you also need honest and open answers for accurate results. When surveys are anonymous and the responses remain confidential, respondents are more likely to provide candid and accurate answers.
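The link between sample size and accuracy can be made concrete with the standard margin-of-error formula for a sample proportion. The sketch below assumes simple random sampling from a large population and a 95% confidence level; it is an illustration, not a substitute for a proper power analysis.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a sample proportion at 95% confidence,
    assuming simple random sampling from a large population."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1600):
    print(f"n={n}: +/- {100 * margin_of_error(n):.1f}%")
# n=100: +/- 9.8%, n=400: +/- 4.9%, n=1600: +/- 2.5%
# Quadrupling the sample size roughly halves the margin of error.
```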

Common Uses of a Survey

Surveys are widely used in many sectors, but the most common uses of survey research include:

  • Market research: surveying a potential market to understand customer needs, preferences, and market demand.
  • Customer satisfaction: finding out your customers' opinions about your products, services, or company.
  • Social research: investigating the characteristics and experiences of various social groups.
  • Health research: collecting data about patients' symptoms and treatments.
  • Politics: evaluating public opinion regarding policies and political parties.
  • Psychology: exploring personality traits, behaviors, and preferences.

6 Steps to Conduct Survey Research

An organization, person, or company conducts a survey when they need information to make a decision but have insufficient data on hand. The following are six simple steps that will help you design a great survey.

Step 1: Objective of the Survey

The first step in survey research is defining an objective. The objective helps you define your target population and samples. The target population is the specific group of people you want to collect data from and since it’s rarely possible to survey the entire population, we target a specific sample from it. Defining a survey objective also benefits your respondents by helping them understand the reason behind the survey.

Step 2: Number of Questions

The number of questions, or the size of the survey, depends on the survey objective. However, it's important to ensure that there are no redundant queries and that the questions are in a logical order. Rephrased and repeated questions in a survey are almost as frustrating as in real life. For a higher completion rate, keep the questionnaire short so that respondents stay engaged to the very end. The ideal length of an interview is less than 15 minutes. (2)

Step 3: Language and Voice of Questions

While designing a survey, you may feel compelled to use fancy language. However, remember that difficult language is associated with higher survey dropout rates. You need to speak to the respondent in a clear, concise, and neutral manner, and ask simple questions. If your survey respondents are bilingual, then adding an option to translate your questions into another language can also prove beneficial.

Step 4: Type of Questions

In a survey, you can include many types of questions, both closed-ended and open-ended. However, opt for the question types that are easiest for respondents to understand and that offer the most value. For example, compared to open-ended questions, people prefer to answer closed-ended questions such as MCQs (multiple-choice questions) and NPS (Net Promoter Score) questions.
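As a worked example of a closed-ended metric, NPS is computed from 0-10 ratings as the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch with made-up ratings:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 4 promoters and 2 detractors out of 8 respondents -> NPS of 25.0
print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))
```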

Step 5: User Experience

Designing a great survey is about more than just questions. A lot of researchers underestimate the importance of user experience and how it affects their response and completion rates. An inconsistent, difficult-to-navigate survey with technical errors and poor color choices is unappealing to respondents. Make sure that your survey is easy for everyone to navigate and that, if you're using rating scales, they remain consistent throughout the study.

Additionally, don't forget to design a good survey experience for both mobile and desktop users. According to Pew Research Center, nearly half of smartphone users access the internet mainly from their mobile phones, and 14 percent of American adults are smartphone-only internet users. (3)

Step 6: Survey Logic

Last but not least, logic is another critical aspect of survey design. If the survey logic is flawed, respondents may be routed through the wrong questions. Make sure to test the logic to ensure that selecting one answer leads to the next logical question instead of a series of unrelated queries, as in the sketch below.
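One simple way to test branching before fielding a survey is to write the logic down as a routing table and assert that each answer leads where you expect. This is a minimal sketch with hypothetical question IDs, not the configuration format of any particular survey tool:

```python
# Hypothetical routing table: (question, answer) -> next question.
# "*" is a wildcard for answers that don't branch.
skip_logic = {
    ("q1_purchased", "yes"): "q2_satisfaction",
    ("q1_purchased", "no"): "q5_reasons_not_purchased",
    ("q2_satisfaction", "*"): "q3_features_used",
}

def next_question(question, answer):
    # Prefer an exact rule; fall back to the wildcard rule.
    return skip_logic.get((question, answer)) or skip_logic.get((question, "*"))

# Test the logic before fielding the survey.
assert next_question("q1_purchased", "yes") == "q2_satisfaction"
assert next_question("q1_purchased", "no") == "q5_reasons_not_purchased"
assert next_question("q2_satisfaction", "4") == "q3_features_used"
```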

How to Effectively Use Survey Research with Starlight Analytics

Designing and conducting a survey is almost as much science as it is art. To craft great survey research, you need technical skills, attention to the psychological elements at play, and a broad understanding of marketing.

The ultimate goal of the survey is to ask the right questions in the right manner to acquire the right results.

Bringing a new product to the market is a long process and requires a lot of research and analysis. In your journey to gather information or ideas for your business, Starlight Analytics can be an excellent guide. Starlight Analytics' product concept testing helps you measure your product's market demand and refine product features and benefits so you can launch with confidence. The process starts with custom research to design the survey according to your needs, execute the survey, and deliver the key insights on time.

  • 1. Survey research in the United States: Roots and emergence, 1890-1960. https://searchworks.stanford.edu/view/10733873
  • 2. How to create a survey questionnaire that gets great responses. https://luc.id/knowledgehub/how-to-create-a-survey-questionnaire-that-gets-great-responses/
  • 3. Internet/broadband fact sheet. https://www.pewresearch.org/internet/fact-sheet/internet-broadband/


Sample Survey


Neelam Ark & Tavinder Ark

Synonyms: Questionnaires and sample surveys; scientific survey; survey

Survey sampling is the process of selecting individuals (a sample) from a population of interest (the theoretical population) so that the selected individuals not only represent the population but also allow the findings from the sample to be generalized to the population from which the sample came.

Description

How to select individuals in your study.

When conducting a survey, researchers often select a sample from a population of interest (the theoretical population) because it is impossible to survey the whole population. The goal is to ensure (a) that the sample is representative of the population and (b) that the survey findings from the sample can be generalized back to the theoretical population from which the sample was drawn. There are two major types of sampling methods: random/probability sampling and non-probability sampling. Overviews of the statistical theory of sampling can be found in Cochran ( 1977 ),...
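As a concrete illustration of drawing a simple random (probability) sample, here is a minimal Python sketch; the sampling frame and sample size are made up:

```python
import random

# Hypothetical sampling frame: a list identifying every member of the
# theoretical population we are able to contact.
population = [f"student_{i}" for i in range(1, 5001)]

random.seed(42)  # reproducible draw
sample = random.sample(population, k=400)  # simple random sample, n=400
print(len(sample), sample[:3])
```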


Cochran, W. G. (1977). Sampling techniques (3rd ed.). New York: Wiley.

Kalton, G. (1983). Introduction to survey sampling (Quantitative Applications in the Social Sciences). Newbury Park, CA: Sage Publications.

Kish, L. (1995). Survey sampling. New York: Wiley Classics Library.

Lohr, S. L. (1999). Sampling: Design and analysis. Boston, MA: Duxbury Press.


How to write a Survey Paper

So you want to write a survey paper? How do you begin?

What is (not) a survey paper?

Some of the following is adapted from https://www.researchgate.net/post/How_to_write_survey_or_review_papers_and_What_sections_should_be_mentioned_in_such_papers

To answer this question, it's best to first ask what a survey paper is not. A survey paper is not simply a core dump of a bunch of papers in a common area.

Instead, a survey is a research paper whose data and results are taken from other papers. This means that you should have a point to make or some new conclusion to draw.

You'll make this point or draw these conclusions based upon a broad reading of previous works. You need to really know the topic in order to have the audacity to claim that you have conducted a thorough survey of the field. You'll need to be completely aware of the main themes, directions, controversies, and results in the field. You may wish to email and interview authors of related works to get their opinions.

Writing a survey paper is much more difficult than writing a research paper

You do not simply list prior results. You need to assimilate and synthesize the results. Sometimes you’ll need to address conflicts in notation or introduce entirely new notation.

And, of course, you need to have a point. The point you make will determine the organization of the survey paper. The structure of the main sections of the paper will reflect the structure of the field. You might consider the following organization:

  • Simple-to-complex scale. Maybe there was some seminal invention that people add more and more complexity onto; this is very common in AI and ML.
  • Comparative analysis. You compare two or more different approaches to the same problem.
  • Pipeline analysis. Many complex solutions require a pipeline that you'll describe, categorize, and annotate.
  • Disentanglement. Maybe your field has researchers conflating issues that need to be carefully untangled.
  • Historical. Tell the story of something if it's compelling.

You’ll do a good job if you can communicate a perspective and/or articulate the gaps in the knowledge. This is difficult and should probably not be attempted by young scientists or graduate students.

The bottom line is that you need to have a point to make, a conclusion to draw, or some kind of contribution that is not just a list of abstracts.

An iterative process

The following is taken from https://academia.stackexchange.com/questions/43371/how-to-write-a-survey-paper

The point of a survey paper of the type you are discussing (as distinct from a systematic review) is to provide an organized view of the current state of the field. As such, you should not be attempting to cite every paper, but only the ones that are significant (which will still be an awful lot).

Writing a good survey paper is hard, and there really aren't any good shortcuts: you do need to become familiar with the content of a very large number of papers in order to make sure that the view you are presenting is sane.

Step 1: Begin by collecting a large pile of papers to survey.

Start by collecting a handful of papers that you are interested in. See who cites them and what they cite.

Step 2: Think of an organization schema.

Based on your experience and your reading, hypothesize an organization schema for the field. What point are you trying to make with this survey?

Start reading (mostly skimming) and organizing the papers you read using this schema, noting which ones are most important and which do not fit the schema well.

As you find significant numbers of papers that do not fit the schema well, adjust the schema to better fit what you are actually finding and shift the organization of your collection to match.

Step 3: Find new papers

As you continue to read you’ll find papers that cite and are cited by the papers you’ve read. Add these new papers to the “to be read” collection based on the adjusted schema, then return to Step 1.

Convergence.

When the process converges to a stable schema and an empty to-be-read pile, you will have a well-developed view of the current state of the field and be in a good position to write a survey. Note, however, that this may take a number of months.

Hints and Tricks

Use a bibliography manager: Zotero, Mendeley, etc.

Use a consistent BibTeX key structure. I use the lastnameYearFirstwordoftitle convention, as sketched below.
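A tiny helper makes the convention mechanical. This sketch is purely illustrative; the citation_key function is hypothetical rather than part of any bibliography manager:

```python
def citation_key(last_name, year, title):
    """Build a lastnameYearFirstwordoftitle BibTeX key, e.g. cochran1977sampling."""
    first_word = title.split()[0].lower()
    return f"{last_name.lower()}{year}{first_word}"

print(citation_key("Cochran", 1977, "Sampling Techniques"))  # cochran1977sampling
```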

You should follow this process in your PhD studies generally, but it doesn't mean that you have to write a survey paper. A survey paper needs to have something to say: a point to make, or some contribution to the way we think about a thing.


Conducting Survey Research

Surveys represent one of the most common types of quantitative, social science research. In survey research, the researcher selects a sample of respondents from a population and administers a standardized questionnaire to them. The questionnaire, or survey, can be a written document that is completed by the person being surveyed, an online questionnaire, a face-to-face interview, or a telephone interview. Using surveys, it is possible to collect data from large or small populations (sometimes referred to as the universe of a study).

Different types of surveys are actually composed of several research techniques, developed by a variety of disciplines. For instance, interviewing began as a tool primarily for psychologists and anthropologists, while sampling got its start in the field of agricultural economics (Angus and Katona, 1953, p. 15).

Survey research does not belong to any one field and it can be employed by almost any discipline. According to Angus and Katona, "It is this capacity for wide application and broad coverage which gives the survey technique its great usefulness..." (p. 16).

Types of Surveys

Surveys come in a wide range of forms and can be distributed using a variety of media.

Common types include mail surveys, group administered questionnaires, drop-off surveys, oral surveys, and electronic surveys.


Example Survey

General Instructions: We are interested in your writing and computing experiences and attitudes. Please take a few minutes to complete this survey. In general, when you are presented with a scale next to a question, please put an X over the number that best corresponds to your answer. For example, if you strongly agreed with the following question, you might put an X through the number 5; if you agreed moderately, you might put an X through number 4; if you neither agreed nor disagreed, you might put an X through number 3.

Example Question:

As is the case with all of the information we are collecting for our study, we will keep all the information you provide to us completely confidential. Your teacher will not be made aware of any of your responses. Thanks for your help.

Your Name: ___________________________________________________________

Your Instructor's Name: __________________________________________________

Written Surveys

Imagine that you are interested in exploring the attitudes college students have about writing. Since it would be impossible to interview every student on campus, the mail-out survey would enable you to reach a large sample of college students. You might choose to limit your research to your own college or university, or you might extend your survey to several different institutions. If your research question demands it, the mail survey allows you to sample a very broad group of subjects at small cost.

Strengths and Weaknesses of Mail Surveys

Cost: Mail surveys are low in cost compared to other methods of surveying. This type of survey can cost up to 50% less than the self-administered survey, and almost 75% less than a face-to-face survey (Bourque and Fielder 9). Mail surveys are also substantially less expensive than drop-off and group-administered surveys.

Convenience: Since many of these types of surveys are conducted through a mail-in process, the participants are able to work on the surveys at their leisure.

Bias: Because the mail survey does not allow for personal contact between the researcher and the respondent, there is little chance for personal bias based on first impressions to alter the responses to the survey. This is an advantage because if the interviewer is not likeable, the survey results will be unfavorably affected. However, this could be a disadvantage as well.

Sampling: It is possible to reach a greater population and have a larger universe (sample of respondents) with this type of survey because it does not require personal contact between the researcher and the respondents.

Low Response Rate: One of the biggest drawbacks to written survey, especially as it relates to the mail-in, self-administered method, is the low response rate. Compared to a telephone survey or a face-to-face survey, the mail-in written survey has a response rate of just over 20%.

Ability of Respondent to Answer Survey: Another problem with self-administered surveys is three-fold: assumptions about the physical ability, literacy level and language ability of the respondents. Because most surveys pull the participants from a random sampling, it is impossible to control for such variables. Many of those who belong to a survey group have a different primary language than that of the survey. They may also be illiterate or have a low reading level and therefore might not be able to accurately answer the questions. Along those same lines, persons with conditions that cause them to have trouble reading, such as dyslexia, visual impairment or old age, may not have the capabilities necessary to complete the survey.

Imagine that you are interested in finding out how instructors who teach composition in computer classrooms at your university feel about the advantages of teaching in a computer classroom over a traditional classroom. You have a very specific population in mind, and so a mail-out survey would probably not be your best option. You might try an oral survey, but if you are doing this research alone this might be too time consuming. The group administered questionnaire would allow you to get your survey results in one sitting and would ensure a very high response rate (higher than if you stuck a survey into each instructor's mailbox). Your challenge would be to get everyone together. Perhaps your department holds monthly technology support meetings that most of your chosen sample would attend. Your challenge at this point would be to get permission to use part of the meeting time to administer the survey, or to convince the instructors to stay and fill it out after the meeting. Despite the challenges, this type of survey might be the most efficient for your specific purposes.

Strengths and Weaknesses of Group Administered Questionnaires

Rate of Response: This second type of written survey is generally administered to a sample of respondents in a group setting, guaranteeing a high response rate.

Specificity: This type of written survey can be very versatile, allowing for a spectrum of open and closed ended types of questions and can serve a variety of specific purposes, particularly if you are trying to survey a very specific group of people.

Weaknesses of Group Administered Questionnaires

Sampling: This method requires a small sample, and as a result is not the best method for surveys that would benefit from a large sample. This method is only useful in cases that call for very specific information from specific groups.

Scheduling: Since this method requires a group of respondents to answer the survey together, this method requires a slot of time that is convenient for all respondents.

Imagine that you would like to find out about how the dorm dwellers at your university feel about the lack of availability of vegetarian cuisine in their dorm dining halls. You have prepared a questionnaire that requires quite a few long answers, and since you suspect that the students in the dorms may not have the motivation to take the time to respond, you might want a chance to tell them about your research, the benefits that might come from their responses, and to answer their questions about your survey. To ensure the highest response rate, you would probably pick a time of the day when you are sure that the majority of the dorm residents are home, and then work your way from door to door. If you don't have time to interview the number of students you need in your sample, but you don't trust the response rate of mail surveys, the drop-off survey might be the best option for you.

Strengths and Weaknesses of Drop-off Surveys

Convenience: Like the mail survey, the drop-off survey allows the respondents to answer the survey at their own convenience.

Response Rates: The response rates for the drop-off survey are better than the mail survey because it allows the interviewer to make personal contact with the respondent, to explain the importance of the survey, and to answer any questions or concerns the respondent might have.

Time: Because of the personal contact this method requires, this method takes considerably more time than the mail survey.

Sampling: Because of the time it takes to make personal contact with the respondents, the universe of this kind of survey will be considerably smaller than the mail survey pool of respondents.

Response: The response rate for this type of survey, although considerably better than the mail survey, is still not as high as the response rate you will achieve with an oral survey.

Oral surveys are considered more personal forms of survey than the written or electronic methods. Oral surveys are generally used to get thorough opinions and impressions from the respondents.

Oral surveys can be administered in several different ways. For instance, in a group interview, as opposed to a group administered written survey, each respondent is not given an instrument (an individual questionnaire). Instead, the respondents work in groups to answer the questions together while one person takes notes for the whole group. Another more familiar form of oral survey is the phone survey. Phone surveys can be used to get short one word answers (yes/no), as well as longer answers.

Strengths and Weaknesses of Oral Surveys

Personal Contact: Oral surveys conducted either on the telephone or in person give the interviewer the ability to answer questions from the participant. If the participant, for example, does not understand a question or needs further explanation on a particular issue, it is possible to converse with the participant. According to Glastonbury and MacKean, "interviewing offers the flexibility to react to the respondent's situation, probe for more detail, seek more reflective replies and ask questions which are complex or personally intrusive" (p. 228).

Response Rate: Although obtaining a certain number of respondents who are willing to take the time to do an interview is difficult, the researcher has more control over the response rate in oral survey research than with other types of survey research. As opposed to mail surveys where the researcher must wait to see how many respondents actually answer and send back the survey, a researcher using oral surveys can, if the time and money are available, interview respondents until the required sample has been achieved.

Cost: The most obvious disadvantage of face-to-face and telephone surveys is the cost. It takes time to collect enough data for a complete survey, and time translates into payroll costs and sometimes payment for the participants.

Bias: Using face-to-face interviews for your survey may also introduce bias, from either the interviewer or the interviewee.

Types of Questions Possible: Certain types of questions are not convenient for this type of survey, particularly for phone surveys where the respondent does not have a chance to look at the questionnaire. For instance, if you want to offer the respondent a choice of 5 different answers, it will be very difficult for respondents to remember all of the choices, as well as the question, without a visual reminder. This problem requires the researcher to take special care in constructing questions to be read aloud.

Attitude: Anyone who has ever been interrupted during dinner by a phone interviewer is aware of the negative feelings many people have about answering a phone survey. Upon receiving these calls, many potential respondents will simply hang up.

With the growth of the Internet (and in particular the World Wide Web) and the expanded use of electronic mail for business communication, the electronic survey is becoming a more widely used survey method. Electronic surveys can take many forms. They can be distributed as electronic mail messages sent to potential respondents. They can be posted as World Wide Web forms on the Internet. And they can be distributed via publicly available computers in high-traffic areas such as libraries and shopping malls. In many cases, electronic surveys are placed on laptops and respondents fill out a survey on a laptop computer rather than on paper.

Strengths and Weaknesses of Electronic Surveys

Cost-savings: It is less expensive to send questionnaires online than to pay for postage or for interviewers.

Ease of Editing/Analysis: It is easier to make changes to the questionnaire, and to copy and sort the data.

Faster Transmission Time: Questionnaires can be delivered to recipients in seconds, rather than in days as with traditional mail.

Easy Use of Preletters: You may send invitations and receive responses in a very short time and thus receive participation level estimates.

Higher Response Rate: Research shows that response rates on private networks are higher with electronic surveys than with paper surveys or interviews.

More Candid Responses: Research shows that respondents may answer more honestly with electronic surveys than with paper surveys or interviews.

Potentially Quicker Response Time with Wider Magnitude of Coverage: Due to the speed of online networks, participants can answer in minutes or hours, and coverage can be global.

Sample Demographic Limitations: Population and sample limited to those with access to computer and online network.

Lower Levels of Confidentiality: Due to the open nature of most online networks, it is difficult to guarantee anonymity and confidentiality.

Layout and Presentation issues: Constructing the format of a computer questionnaire can be more difficult the first few times, due to a researcher's lack of experience.

Additional Orientation/Instructions: More instruction and orientation to the computer online systems may be necessary for respondents to complete the questionnaire.

Potential Technical Problems with Hardware and Software: As most of us (perhaps all of us) know all too well, computers have a much greater likelihood of "glitches" than oral or written forms of communication.

Response Rate: Even though research shows that e-mail response rates are higher, Oppermann (1995) warns that most of these studies found response rates higher only during the first few days; thereafter, the rates were not significantly higher.

Designing Surveys

Initial planning of the survey design and survey questions is extremely important in conducting survey research. Once surveying has begun, it is difficult or impossible to adjust the basic research questions under consideration or the tool used to address them since the instrument must remain stable in order to standardize the data set. This section provides information needed to construct an instrument that will satisfy basic validity and reliability issues. It also offers information about the important decisions you need to make concerning the types of questions you are going to use, as well as the content, wording, order and format of your survey questionnaire.

Overall Design Issues

Four key issues should be considered when designing a survey or questionnaire: respondent attitude, the nature of the items (or questions) on the survey, the cost of conducting the survey, and the suitability of the survey to your research questions.

Respondent attitude: When developing your survey instrument, it is important to try to put yourself into your target population's shoes. Think about how you might react when approached by a pollster while out shopping or when receiving a phone call from a pollster while you are sitting down to dinner. Think about how easy it is to throw away a response survey that you've received in the mail. When developing your instrument, it is important to choose the method you think will work for your research, but also one in which you have confidence. Ask yourself what kind of survey you, as a respondent, would be most apt to answer.

Nature of questions: It is important to consider the relationship between the medium that you use and the questions that you ask. For instance, certain types of questions are difficult to answer over the telephone. Think of the problems you would have in attempting to record Likert scale responses, as in closed-ended questions, over the telephone--especially if a scale of more than five points is used. Responses to open-ended questions would also be difficult to record and report in telephone interviews.

Cost: Along with decisions about the nature of the questions you ask, expense issues also enter into your decision making when planning a survey. The population under consideration, the geographic distribution of this sample population, and the type of questionnaire used all affect costs.

Ability of instrument to meet needs of research question: Finally, there needs to be a logical link between your survey instrument and your research questions. If it is important to get a large number of responses from a broad sample of the population, you obviously will not choose to do a drop-off written survey or an in-person oral survey. Because of the size of the needed sample, you will need to choose a survey instrument that meets this need, such as a phone or mail survey. If you are interested in getting thorough information that might need a large amount of interaction between the interviewer and respondent, you will probably pick an in-person oral survey with a smaller sample of respondents. Your questions, then, will need to reflect both your research goals and your choice of medium.

Creating Questionnaire Questions

Developing well-crafted questionnaires is more difficult than it might seem. Researchers should carefully consider the type, content, wording, and order of the questions that they include. In this section, we discuss the steps involved in questionnaire development and the advantages and disadvantages of various techniques.

Open-ended vs. Closed-ended Questions

All researchers must make two basic decisions when designing a survey: 1) whether they are going to employ an oral, written, or electronic method, and 2) whether they are going to choose questions that are open-ended or closed-ended.

Closed-Ended Questions: Closed-ended questions limit respondents' answers to the survey. The participants are allowed to choose from either a pre-existing set of dichotomous answers, such as yes/no, true/false, or multiple choice with an option for "other" to be filled in, or ranking scale response options. The most common of the ranking scale questions is called the Likert scale question. This kind of question asks the respondents to look at a statement (such as "The most important education issue facing our nation in the year 2000 is that all third graders should be able to read") and then "rank" this statement according to the degree to which they agree ("I strongly agree, I somewhat agree, I have no opinion, I somewhat disagree, I strongly disagree").
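Because each Likert response maps naturally to a number, closed-ended items like this are easy to code and summarize. A minimal sketch, with a hypothetical numeric coding and made-up answers:

```python
# Hypothetical numeric coding of a five-point Likert item.
LIKERT = {
    "strongly disagree": 1,
    "somewhat disagree": 2,
    "no opinion": 3,
    "somewhat agree": 4,
    "strongly agree": 5,
}

answers = ["strongly agree", "somewhat agree", "no opinion", "strongly agree"]
scores = [LIKERT[a] for a in answers]
print(sum(scores) / len(scores))  # mean response: 4.25
```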

Open-Ended Questions: Open-ended questions do not give respondents answers to choose from, but rather are phrased so that the respondents are encouraged to explain their answers and reactions to the question with a sentence, a paragraph, or even a page or more, depending on the survey. If you wish to find information on the same topic as asked above (the future of elementary education), but would like to find out what respondents would come up with on their own, you might choose an open-ended question like "What do you think is the most important educational issue facing our nation in the year 2000?" rather than the Likert scale question. Or, if you would like to focus on reading as the topic, but would still not like to limit the participants' responses, you might pose the question this way: "Do you think that the most important issue facing education is literacy? Explain your answer below."

Note: Keep in mind that you do not have to use close-ended or open-ended questions exclusively. Many researchers use a combination of closed and open questions; often researchers use close-ended questions in the beginning of their survey, then allow for more expansive answers once the respondent has some background on the issue and is "warmed-up."

Rating scales: ask respondents to rate something like an idea, concept, individual, program, product, etc. based on a closed ended scale format, usually on a five-point scale. For example, a Likert scale presents respondents with a series of statements rather than questions, and the respondents are asked to which degree they disagree or agree.

Ranking scales: ask respondents to rank a set of ideas or things, etc. For example, a researcher can provide respondents with a list of ice cream flavors, and then ask them to rank these flavors in order of which they like best, with the rank of "one" representing their favorite. These are more difficult to use than rating scales. They will take more time, and they cannot easily be used for phone surveys since they often require visual aids. However, since ranking scales are more difficult, they may actually increase appropriate effort from respondents.

Magnitude estimation scales: ask respondents to provide numeric estimation of answers. For example, respondents might be asked: "Since your least favorite ice cream flavor is vanilla, we'll give it a score of 10. If you like another ice cream 20 times more than vanilla, you'll give it a score of 200, and so on. So, compared to vanilla at a score of ten, how much do you like rocky road?" These scales are obviously very difficult for respondents. However, these scales have been found to help increase variance explanations over ordinal scaling.

Split or unfolding questions: begin by asking respondents a general question, and then follow up with clarifying questions.

Funneling questions: guide respondents through complex issues or concepts by using a series of questions that progressively narrow to a specific question. For example, researchers can start asking general, open-ended questions, and then move to asking specific, closed-ended, forced-choice questions.

Inverted funneling questions: ask respondents a series of questions that move from specific issues to more general issues. For example, researchers can ask respondents specific, closed-ended questions first and then ask more general, open-ended questions. This technique works well when respondents are not expected to be knowledgeable about a content area or when they are not expected to have an articulate opinion regarding an issue.

Factorial questions: use stories or vignettes to study judgment and decision-making processes. For example, a researcher could ask respondents: "You're in a dangerous, rapidly burning building. Do you exit the building immediately or go upstairs to wake up the other inhabitants?" Converse and Presser (1986) warn that little is known about how this survey question technique compares with other techniques.

The wording of survey questions is a tricky endeavor. It is difficult to develop shared meanings or definitions between researchers and the respondents, and among respondents.

In The Practice of Social Research , Keith Crew, a professor of Sociology at the University of Kentucky, cites a famous example of a survey gone awry because of wording problems. An interview survey that included Likert-type questions ranging from "very much" to "very little" was given in a small rural town. Although it would seem that these items would accurately record most respondents' opinions, in the colloquial language of the region the word "very" apparently has an idiomatic usage which is closer to what we mean by "fairly" or even "poorly." You can just imagine what this difference in definition did to the survey results (p. 271).

This, however, is an extreme case. Even small changes in wording can shift the answers of many respondents. The best thing researchers can do to avoid problems with wording is to pretest their questions. However, researchers can also follow some suggestions to help them write more effective survey questions.

To write effective questions, researchers need to keep in mind these four important techniques: directness, simplicity, specificity, and discreteness.

  • Questions should be written in a straightforward, direct language that is not caught up in complex rhetoric or syntax, or in a discipline's slang or lingo. Questions should be specifically tailored for a group of respondents.
  • Questions should be kept short and simple. Respondents should not be expected to learn new, complex information in order to answer questions.
  • Specific questions are for the most part better than general ones. Research shows that the more general a question is the wider the range of interpretation among respondents. To keep specific questions brief, researchers can sometimes use longer introductions that make the context, background, and purpose of the survey clear so that this information is not necessary to include in the actual questions.
  • Avoid questions that are overly personal or direct, especially when dealing with sensitive issues.

When considering the content of your questionnaire, obviously the most important consideration is whether the content of the questions will elicit the kinds of answers necessary to address your initial research question. You can gauge the appropriateness of your questions by pretesting your survey, but you should also consider the following questions as you are creating your initial questionnaire:

  • Does your choice of open or close-ended questions lead to the types of answers you would like to get from your respondents?
  • Is every question in your survey integral to your intent? Superfluous questions that have already been addressed or are not relevant to your study will waste the time of both the respondents and the researcher.
  • Does one topic warrant more than one question?
  • Do you give enough prior information/context for each set of questions? Sometimes lead-in questions are useful to help the respondent become familiar and comfortable with the topic.
  • Are the questions both general enough (standardized and relevant to your entire sample) and specific enough (avoiding vague generalizations and ambiguity)?
  • Is each question as succinct as it can be without leaving out essential information?
  • Finally, and most importantly, try to put yourself in your respondents' shoes. Write a survey that you would be willing to answer yourself, and be polite, courteous, and sensitive. Thank the responder for participating both at the beginning and the end of the survey.

Order of Questions

Although there are no general rules for ordering survey questions, there are still a few suggestions researchers can follow when setting up a questionnaire.

  • Pretesting can help determine if the ordering of questions is effective.
  • Which topics should start the survey off, and which should wait until the end of the survey?
  • What kind of preparation do my respondents need for each question?
  • Do the questions move logically from one to the next, and do the topics lead up to each other?

The following general guidelines for ordering survey questions can address these questions:

  • Use warm-up questions. Easier questions will ease the respondent into the survey and will set the tone and the topic of the survey.
  • Sensitive questions should not appear at the beginning of the survey. Try to put the responder at ease before addressing uncomfortable issues. You may also prepare the reader for these sensitive questions with some sort of written preface.
  • Consider transition questions that make logical links.
  • Try not to mix topics. Topics can easily be placed into "sets" of questions.
  • Try not to put the most important questions last. Respondents may become bored or tired before they get to the end of the survey.
  • Be careful with contingency questions ("If you answered yes to the previous question . . . etc.").
  • If you are using a combination of open and close-ended questions, try not to start your survey with open-ended questions. Respondents will be more likely to answer the survey if they are allowed the ease of closed-questions first.

Borrowing Questions

Before developing a survey questionnaire, Converse and Presser (1986) recommend that researchers consult published compilations of survey questions, like those published by the National Opinion Research Center and the Gallup Poll. This will not only give you some ideas on how to develop your questionnaire, but you can even borrow questions from surveys that reflect your own research. Since these questions and questionnaires have already been tested and used effectively, you will save both time and effort. However, you will need to take care to only use questions that are relevant to your study, and you will usually have to develop some questions on your own.

Advantages of Closed-Ended Questions

  • Closed-ended questions are more easily analyzed. Every answer can be given a number or value so that a statistical interpretation can be assessed. Closed-ended questions are also better suited for computer analysis. If open-ended questions are analyzed quantitatively, the qualitative information is reduced to coding and answers tend to lose some of their initial meaning. Because of the simplicity of closed-ended questions, this kind of loss is not a problem.
  • Closed-ended questions can be more specific, thus more likely to communicate similar meanings. Because open-ended questions allow respondents to use their own words, it is difficult to compare the meanings of the responses.
  • In large-scale surveys, closed-ended questions take less time from the interviewer, the participant, and the researcher, and so are a less expensive survey method. The response rate is higher with surveys that use closed-ended questions than with those that use open-ended questions.

Advantages of Open-Ended Questions

  • Open-ended questions allow respondents to include more information, including feelings, attitudes and understanding of the subject. This allows researchers to better access the respondents' true feelings on an issue. Closed-ended questions, because of the simplicity and limit of the answers, may not offer the respondents choices that actually reflect their real feelings. Closed-ended questions also do not allow the respondent to explain that they do not understand the question or do not have an opinion on the issue.
  • Open-ended questions cut down on two types of response error; respondents are not likely to forget the answers they have to choose from if they are given the chance to respond freely, and open-ended questions simply do not allow respondents to disregard reading the questions and just "fill in" the survey with all the same answers (such as filling in the "no" box on every question).
  • Because they allow for obtaining extra information from the respondent, such as demographic information (current employment, age, gender, etc.), surveys that use open-ended questions can be used more readily for secondary analysis by other researchers than can surveys that do not provide contextual information about the survey population.

Potential Problems with Survey Questions

While designing questions for a survey, researchers should be aware of a few problems and how to avoid them:

"Everyone has an opinion": It is incorrect to assume that each respondent has an opinion regarding every question. Therefore, you might offer a "no opinion" option to avoid this assumption. Filters can also be created. For example, researchers can ask respondents if they have any thoughts on an issue, to which they have the option to say "no."

Agree and disagree statements: According to Converse and Presser (1986), these statements suffer from "acquiescence," or the tendency of respondents to agree despite question content (p. 35). Researchers can avoid this problem by recasting these statements as forced-choice questions.

Response order bias: this occurs when a respondent loses track of all options and picks one that comes easily to mind rather than the most accurate. Typically, the respondent chooses the last or first response option. This problem might occur if researchers use long lists and/or rating scales.

Response set: this problem can occur when using a close-ended question format with response options like yes/no or agree/disagree. Sometimes respondents do not consider each question and just answer no or disagree to all questions.

Telescoping: occurs when respondents report that an event took place more recently than it actually did. To avoid this problem, Frey and Oishi (1995) say researchers can use "aided recall": a reference point or landmark, or a list of events or behaviors (p. 101).

Forward telescoping: occurs when respondents include events that actually happened before the established time frame. This results in overreporting. According to Converse and Presser (1986), researchers can use "bounded recall" to avoid this problem (p. 21). Bounded recall is when researchers interview respondents several months or so after the initial interview to inquire about events that have happened since then. This technique, however, requires more resources. Converse and Presser note that researchers can also simply narrow the reference points used, which has been shown to reduce this problem as well.

Fatigue effect: happens when respondents grow bored or tired during the interview. To avoid this problem, Frey and Oishi (1995) say researchers can use transitions, vary questions and response options, and put easy-to-answer questions at the end of the questionnaire.

Types of Questions to Avoid

  • Double-barreled questions: force respondents to make two decisions in one. For example, a question like "Do you think women and children should be given the first available flu shots?" does not allow the respondent to choose whether women or children should be given the first shots.
  • Double negative questions: for example, "Please tell me whether you agree or disagree with this statement: Graduate teaching assistants should not be required to help students outside of class." Respondents may confuse the meaning of the disagree option.
  • Hypothetical questions: typically too difficult for respondents, since they require extra scrutiny. For example, "If there were a cure for cancer, would you still support euthanasia?"
  • Ambiguous questions: respondents might not understand the question.
  • Biased questions: for example, "Don't you think that suffering terminal cancer patients should be allowed to be released from their pain?" Researchers should never make one response option look more suitable than another.
  • Questions with long lists: these questions may tire respondents, or respondents may lose track of the question.

Pretesting the Questionnaire

Ultimately, designing the perfect survey questionnaire is impossible. However, researchers can still create effective surveys. To determine the effectiveness of your survey questionnaire, it is necessary to pretest it before actually using it. Pretesting can help you determine the strengths and weaknesses of your survey concerning question format, wording and order.

There are two types of survey pretests: participating and undeclared.

  • Participating pretests dictate that you tell respondents that the pretest is a practice run; rather than asking the respondents to simply fill out the questionnaire, participating pretests usually involve an interview setting where respondents are asked to explain their reactions to question form, wording, and order. This kind of pretest will help you determine whether the questionnaire is understandable.
  • When conducting an undeclared pretest , you do not tell respondents that it is a pretest. The survey is given just as you intend to conduct it for real. This type of pretest allows you to check your choice of analysis and the standardization of your survey. According to Converse and Presser (1986), if researchers have the resources to do more than one pretest, it might be best to use a participating pretest first, then an undeclared one.

General Applications of Pretesting:

Whether you use a participating or an undeclared pretest, your pretesting should ideally also test specifically for question variation, meaning, task difficulty, and respondent interest and attention. Your pretests should also include any questions you borrowed from other similar surveys, even if those questions have already been pretested, because meaning can be affected by the particular context of your survey. Researchers can also pretest the following: flow, order, skip patterns, timing, and overall respondent well-being.

Pretesting for reliability and validity:

Researchers might also want to pretest the reliability and validity of the survey questions. To be reliable, a survey question must be answered by respondents the same way each time. According to Weisberg et al. (1989), researchers can assess reliability by comparing the answers respondents give in one pretest with the answers they give in another pretest. A survey question's validity, in turn, is determined by how well it measures the concept(s) it is intended to measure. Convergent validity can be assessed by comparing a respondent's answer to the answer to another question measuring the same concept; divergent validity, by comparing it to the response to a question intended to measure the opposite.

For instance, you might include questions in your pretest that explicitly test for validity: if a respondent answers "yes" to the question, "Do you think that the next president should be a Republican?" then you might ask "What party do you think you might vote for in the next presidential election?" to check for convergent validity, then "Do you think that you will vote Democrat in the next election?" to check the answer for divergent validity.
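
To make the test-retest idea concrete, reliability can be checked by correlating the answers respondents give in two pretests. The following is a minimal sketch in Python; the coded answers are hypothetical, and the correlation is only one of several possible agreement measures:

    # Minimal sketch: test-retest reliability for one survey question.
    # The two lists hold the same respondents' answers (coded 1-5)
    # from two separate pretests; the data are hypothetical.
    from scipy.stats import pearsonr

    pretest_1 = [4, 5, 2, 3, 4, 1, 5, 3]
    pretest_2 = [4, 4, 2, 3, 5, 1, 5, 2]

    r, p_value = pearsonr(pretest_1, pretest_2)
    print(f"test-retest correlation: r = {r:.2f} (p = {p_value:.3f})")
    # A high correlation suggests respondents answer consistently;
    # a low one flags wording or recall problems.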

Conducting Surveys

Once you have constructed a questionnaire, you'll need to make a plan that outlines how and to whom you will administer it. There are a number of options available in order to find a relevant sample group amongst your survey population. In addition, there are various considerations involved with administering the survey itself.

Administering a Survey

This section attempts to answer the question: "How do I go about getting my questionnaire answered?"

For all types of surveys, some basic practicalities need to be considered before the surveying begins. For instance, you need to determine the most convenient time to carry out the data collection (particularly important for interview surveying and group-administered surveys) and how long the data collection is likely to take. Finally, you need to make practical arrangements for administering the survey. Pretesting your survey will help you determine the time it takes to administer, process, and analyze your survey, and will also help you work out some of the bugs.

Administering Written Surveys

Written surveys can be handled in several different ways. A research worker can deliver the questionnaires to the homes of the sample respondents, explain the study, and then pick the questionnaires up on a later date (or, alternately, ask the respondent to mail the survey back when completed). Another option is mailing questionnaires directly to homes and having researchers pick up and check the questionnaires for completeness in person. This method has proven to have higher response rates than straightforward mail surveys, although it tends to take more time and money to administer.

It is important to put yourself into the role of respondent when deciding how to administer your survey. Most of us have received and thrown away a mail survey, and so it may be useful to think back to the reasons you had for not filling it out and returning it. Here are some ideas for boosting your response rate:

  • Include in each questionnaire a letter of introduction and explanation, and a self-addressed, stamped envelope for returning the questionnaire.
  • Oftentimes, when it fits the study's budget, the envelope might also include a small monetary "reward" (usually one to five dollars) as an incentive to fill out the survey.
  • Another method for saving the respondent time is to create a self-mailing questionnaire that requires no envelope but folds easily so that the return address appears on the outside. The easier you make the process of completing and returning the survey, the better your survey results will be.
  • Follow-up mailings are an important part of administering mail surveys. Nonrespondents can be sent letters of additional encouragement to participate. Even better, a new copy of the survey can be sent to nonrespondents. Methodological literature suggests that three follow-up mailings are adequate, and two to three weeks should be allowed between each mailing.

Administering Oral Surveys

Face-To-Face Surveys

Oftentimes conducting oral surveys requires a staff of interviewers; to control this variable as much as possible, the presentation and preparation of the interviewer is an important consideration.

  • In any face-to-face interview, the appearance of the interviewer is important. Since the success of any survey relies on the interest of the participants to respond to the survey, the interviewer should take care to dress and act in such a way that would not offend the general sample population.
  • Of equal importance is the preparedness of the interviewer. The interviewer should be well acquainted with the questions, and have ample practice administering the survey with mock interviews. If several interviewers will be used, they should be trained as a group to ensure standardization and control. Interviewers also need to carry a letter of identification/authentication to present at in-person surveys.

When actually administering the survey, you need to make decisions about how much of the participants' responses to record, how much the interviewer will need to "probe" for responses, and how much the interviewer will need to account for context (the respondent's age, race, gender, reaction to the study, etc.). If you are administering a closed-ended question survey, these may not be considerations. On the other hand, when recording more open-ended responses, the researcher needs to decide beforehand on each of these factors:

  • Whether the interview should be recorded word for word or summarized as general impressions and opinions depends on the purpose of the study. For the sake of precision, however, the former approach is preferred: more information is always better than less when it comes to analyzing the results.
  • Sometimes respondents will respond to a question with an inappropriate answer; this can happen with both open-ended and closed-ended question surveys. Even if you give the participant structured choices like "I agree" or "I disagree," they might respond "I think that is true," which might require the interviewer to probe for an appropriate answer. In an open-question survey, this probing becomes more challenging. The interviewer should come prepared with a set of potential questions to use if the respondent does not elaborate enough or strays from the subject. The nature of these probes, however, needs to be constructed by the researcher rather than ad-libbed by the interviewers, and should be carefully controlled so that the probes do not lead the respondent to change answers.

Phone Surveys

Phone surveys involve all of the preparation of face-to-face surveys, but encounter additional problems because of their reputation. It is much easier to hang up on a phone surveyor than it is to slam the door in someone's face, so the sheer number of calls needed to complete a survey can be staggering. Computer innovation has tempered this problem somewhat by allowing for quick random-digit dialing and by letting interviewers type answers directly into programs that automatically set up the data for analysis. Systems like CATI (computer-assisted telephone interviewing) have made phone surveys a more cost- and time-effective method, and therefore a popular one, although respondents are becoming more and more reluctant to answer phone surveys because of the increase in telemarketing.

Before conducting a survey, you must choose a relevant survey population. And, unless a survey population is very small, it is usually impossible to survey the entire relevant population. Therefore, researchers usually survey a sample of a population drawn from an actual list of the relevant population, which is called a sampling frame. With a carefully selected sample, researchers can make estimations or generalizations regarding an entire population's opinions, attitudes, or beliefs on a particular topic.

Sampling Procedures and Methods

There are two different types of sampling procedures--probability and nonprobability. Probability sampling methods ensure that each member of the population has a known, nonzero chance of being selected, whereas nonprobability methods target specific individuals. Nonprobability sampling methods include the following:

  • Purposive samples: to purposely select individuals to survey.
  • Volunteer subjects: to ask for volunteers to survey.
  • Haphazard sampling: to survey individuals who can be easily reached.
  • Quota sampling: to select individuals based on a set quota. For example, if a census indicates that more than half of the population is female, then the sample will be adjusted accordingly.

Clearly, there can be an inherent bias in nonprobability methods. Therefore, according to Weisberg, Krosnick, and Bowen (1989), it is not surprising that most survey researchers prefer probability sampling methods. Some commonly used probability sampling methods for surveys, several of which are illustrated in the code sketch after this list, are:

  • Simple random sample: a sample is drawn randomly from a list of individuals in a population.
  • Systematic selection procedure sample: a variant of a simple random sample in which a random starting point is chosen and every kth individual on the list is selected from there.
  • Stratified sample: dividing up the population into smaller groups, and randomly sampling from each group.
  • Cluster sample: dividing up a population into smaller groups, and then sampling from only one (or a few) of the groups. Cluster sampling, according to Lee, Forthofer, and Lorimor (1989), "is considered a more practical approach to surveys because it samples by groups or clusters of elements rather than by individual elements" (p. 12). It also reduces interview costs. However, Weisberg et al. (1989) noted that accuracy declines when using this sampling method.
  • Multistage sampling: first, sampling a set of geographic areas. Then, sampling a subset of areas within those areas, and so on.
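
To make these procedures concrete, here is a minimal sketch in Python of the first three methods; the population and the strata are hypothetical:

    # Illustrative sketches of three probability sampling methods.
    # The population and strata here are hypothetical.
    import random

    population = [f"person_{i}" for i in range(1000)]

    # Simple random sample: draw n individuals at random from the list.
    simple = random.sample(population, 50)

    # Systematic selection: pick a random start, then every kth person.
    k = len(population) // 50
    start = random.randrange(k)
    systematic = population[start::k]

    # Stratified sample: divide the population into groups (strata),
    # then sample randomly within each group.
    strata = {"urban": population[:600], "rural": population[600:]}
    stratified = [person
                  for group in strata.values()
                  for person in random.sample(group, 25)]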

Sampling and Nonsampling Errors

Directly related to sample size are the concepts of sampling and nonsampling errors. According to Fox and Tracy (1986), surveys are subject to both sampling errors and nonsampling errors.

A sampling error arises from the fact that samples inevitably differ from their populations. Therefore, survey sample results should be seen only as estimations. Weisberg et al. (1989) said sampling errors cannot be calculated for nonprobability samples, but they can be determined for probability samples. To determine sampling error, first look at the sample size, and then at the sampling fraction--the percentage of the population that is being surveyed. The more people surveyed, the smaller the error. This error can also be reduced, according to Fox and Tracy (1986), by increasing the representativeness of the sample.

Then, there are two different kinds of nonsampling error--random and nonrandom errors. Fox and Tracy (1986) said random errors decrease the reliability of measurements. These errors can be reduced through repeated measurements. Nonrandom errors result from a bias in survey data, which is connected to response and nonresponse bias.

Confidence Level and Interval

Any statement of sampling error must contain two essential components: the confidence level and the confidence interval. These two components are used together to express the accuracy of the sample's statistics in terms of the level of confidence that the statistics fall within a specified interval from the true population parameter. For example, a researcher may be "95 percent confident" that the sample statistic (that 50 percent favor candidate X) is within plus or minus 5 percentage points of the population parameter. In other words, the researcher is 95 percent confident that between 45 and 55 percent of the total population favor candidate X.
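
The margin of error behind a statement like this can be computed with the standard formula for a sample proportion. The following is a minimal sketch in Python, assuming simple random sampling and a large population; the sample size is hypothetical:

    # Minimal sketch: 95% confidence interval for a sample proportion,
    # assuming simple random sampling and a large population.
    import math

    p = 0.50     # sample statistic: 50 percent favor candidate X
    n = 384      # sample size (hypothetical)
    z = 1.96     # z-score for a 95 percent confidence level

    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"{p:.0%} plus or minus {margin:.1%}")
    print(f"interval: {p - margin:.1%} to {p + margin:.1%}")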

Lauer and Asher (1988) provide a table that gives the confidence interval limits for percentages based upon sample size (p. 58):

[Table omitted: "Sample Size and Confidence Interval Limits," giving 95% confidence intervals based on a population incidence of 50% and a large population relative to sample size; from Lauer and Asher (1988, p. 58).]

Confidence Limits and Sample Size

When selecting a sample size, consider that a higher number of individuals surveyed from a target group yields a tighter range of confidence limits, while a lower number yields a looser range. The confidence limits may need to be corrected if, according to Lauer and Asher (1988), "the sample size starts to approach the population size" or if "the variable under scrutiny is known to have a much [original emphasis] smaller or larger occurrence than 50% in the whole population" (p. 59). For smaller populations, Singleton (1988) said the standard error or confidence interval should be multiplied by a correction factor equal to sqrt(1 - f), where "f" is the sampling fraction, or proportion of the population included in the sample.
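
A minimal sketch in Python of Singleton's correction applied to a margin of error; the population and sample sizes are hypothetical:

    # Minimal sketch: finite population correction (Singleton, 1988).
    # When the sample is a large fraction of the population, multiply
    # the standard error (or margin of error) by sqrt(1 - f).
    import math

    N = 500                        # population size (hypothetical)
    n = 200                        # sample size (hypothetical)
    p, z = 0.50, 1.96

    uncorrected = z * math.sqrt(p * (1 - p) / n)
    f = n / N                      # sampling fraction
    corrected = uncorrected * math.sqrt(1 - f)
    print(f"uncorrected: {uncorrected:.1%}, corrected: {corrected:.1%}")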

Lauer and Asher (1988) give a table of correction factors for confidence limits where sample size is an important part of population size (p. 60) and also a table of correction factors for where the percentage incidence of the parameter in the population is not 50% (p. 61).

Tables for Calculating Confidence Limits vs. Sample Size

[Tables omitted: "Correction Factors for Confidence Limits When Sample Size (n) Is an Important Part of Population Size (N >= 100)" (for n over 70% of N, take all of N); from Lauer and Asher (1988, p. 60). "Correction Factors for Rare and Common Percentage Variables"; from Lauer and Asher (1988, p. 61).]

Analyzing Survey Results

After creating and conducting your survey, you must now process and analyze the results. These steps require strict attention to detail and, in some cases, knowledge of statistics and computer software packages. How you conduct these steps will depend on the scope of your study, your own capabilities, and the audience to whom you wish to direct the work.

Processing the Results

It is clearly important to keep careful records of survey data in order to do effective work. Most researchers recommend using a computer to help sort and organize the data. Additionally, Glastonbury and MacKean point out that once the data has been filtered through the computer, it is possible to do a virtually unlimited amount of analysis (p. 243).

Jolliffe (1986) believes that editing should be the first step in processing this data. He writes, "The obvious reason for this is to ensure that the data analyzed are correct and complete. At the same time, editing can reduce the bias, increase the precision and achieve consistency between the tables [regarding those produced by social science computer software]" (p. 100). Of course, editing may not always be necessary: if, for example, you are doing a qualitative analysis of open-ended questions, or the survey is part of a larger project and gets distributed to other agencies for analysis. However, editing could be as simple as checking the information input into the computer.

All of this information should be used to test for statistical significance. See our guide on Statistics for more on this topic.

Information may be recorded in any number of ways. Charts and graphs are clear, visual ways to record findings in many cases. For instance, in a mail-out survey where response rate is an issue, you might use a response rate graph to make the process easier. The day the surveys are mailed out should be recorded first. Then, every day thereafter, the number of returned questionnaires should be logged on the graph. Be sure to record both the number returned each day and the cumulative number, or percentage. Also, as each completed questionnaire is returned, it should be opened, scanned, and assigned an identification number.
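
A response rate log of this kind is straightforward to keep in code. The sketch below, in Python, is one hypothetical way to track daily and cumulative returns:

    # Hypothetical sketch: tracking daily and cumulative mail returns.
    surveys_mailed = 400
    returns_per_day = [0, 12, 35, 41, 28, 19, 9]   # day 0 = mail-out day

    total = 0
    for day, count in enumerate(returns_per_day):
        total += count
        rate = total / surveys_mailed
        print(f"day {day}: {count} returned, "
              f"{total} cumulative ({rate:.0%} response rate)")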

Analyzing the Results

Before actually beginning the survey, the researcher should know how they want to analyze the data. If you are collecting quantifiable data, a code book is needed for interpreting your data and should be established prior to collecting the survey data. This is important because many different formulas are needed to properly analyze survey data and test for statistical significance. Since computer programs have made the process of analyzing data vastly easier than it once was, it is sensible to choose this route. Be sure to pick your program before you design your survey; some programs require the data to be laid out in different ways.
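
A code book can be as simple as a consistent mapping from response wording to numeric codes, applied to every returned questionnaire. A minimal, hypothetical sketch in Python:

    # Minimal sketch of a code book for a closed-ended question.
    # The response wording and numeric codes are hypothetical.
    code_book = {
        "strongly disagree": 1,
        "disagree": 2,
        "neutral": 3,
        "agree": 4,
        "strongly agree": 5,
        "no opinion": 9,   # kept distinct from the 1-5 scale
    }

    raw_responses = ["agree", "no opinion", "disagree", "agree"]
    coded = [code_book[answer] for answer in raw_responses]
    print(coded)   # [4, 9, 2, 4]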

After the survey is conducted and the data collected, the results must be assembled in some usable format that allows comparison within the survey group, between groups, or both. The results can be analyzed in a number of ways. A t-test may be used to determine whether the scores of two groups differ on a single variable--whether writing ability differs among students in two classrooms, for instance. A matched t-test could also be applied to determine whether scores of the same participants in a study differ under different conditions or over time. An ANOVA could be applied if the study compares multiple groups on one or more variables. Correlation measurements could also be constructed to compare the results of two interacting variables within the data set.
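
All four of these tests are available in standard statistical software. The following minimal sketch shows them in Python using the scipy library; the scores are hypothetical:

    # Hypothetical sketch of the tests mentioned above, via scipy.stats.
    from scipy import stats

    class_a = [72, 85, 78, 90, 66, 81]
    class_b = [68, 74, 80, 71, 77, 69]

    # Independent-samples t-test: do two groups differ on one variable?
    t, p = stats.ttest_ind(class_a, class_b)

    # Matched (paired) t-test: same participants, two conditions.
    before, after = [70, 75, 80, 65], [74, 77, 85, 70]
    t_paired, p_paired = stats.ttest_rel(before, after)

    # One-way ANOVA: compare more than two groups on one variable.
    class_c = [88, 79, 92, 84, 75, 83]
    f_stat, p_anova = stats.f_oneway(class_a, class_b, class_c)

    # Correlation between two interacting variables.
    r, p_corr = stats.pearsonr(class_a, class_b)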

Secondary Analysis

Secondary analysis of survey data is an accepted methodology which applies previously collected survey data to new research questions. This methodology is particularly useful to researchers who do not have the time or money to conduct an extensive survey, but may be looking at questions for which some large survey has already collected relevant data. A number of books and chapters have been written about this methodology, some of which are listed in the annotated bibliography under "Secondary Analysis."

Advantages and Disadvantages of Using Secondary Analysis

Advantages:

  • Considerably cheaper and faster than doing original studies
  • You can benefit from the work of some of the top scholars in your field, which for the most part ensures quality data.
  • If you have limited funds and time, other surveys may have the advantage of samples drawn from larger populations.
  • How much you use previously collected data is flexible; you might only extract a few figures from a table, you might use the data in a subsidiary role in your research, or even in a central role.
  • A network of data archives in which survey data files are collected and distributed is readily available, making research for secondary analysis easily accessible.

Disadvantages:

  • Since many surveys deal with national populations, if you are interested in studying a well-defined minority subgroup you will have a difficult time finding relevant data.
  • Secondary analysis can be used in irresponsible ways. If variables aren't exactly those you want, data can be manipulated and transformed in a way that might lessen the validity of the original research.
  • Much research, particularly of large samples, can involve large data files and difficult statistical packages.

Data-entry Packages Available for Survey Data Analysis

SNAP: Offers simple survey analysis and can help with the survey from start to finish, including the design of questions and questionnaires.

SPSS: Statistical Package for the Social Sciences; can cope with most kinds of data.

SAS: A flexible general purpose statistical analysis system.

MINITAB: A very easy-to-use and fairly limited general purpose package for "beginners."

STATGRAPHS: General interactive statistical package with good graphics but not very flexible.

Reporting Survey Results

The final stage of the survey is to report your results. There is not an established format for reporting a survey's results. The report may follow a pattern similar to formal experimental write-ups, or the analysis may show up in pitches to advertising agencies--as with Arbitron data--or the analysis may be presented in departmental meetings to aid curriculum arguments. A formal report might contain contextual information, a literature review, a presentation of the research question under investigation, information on survey participants, a section explaining how the survey was conducted, the survey instrument itself, a presentation of the quantified results, and a discussion of the results.

You can choose to represent your data graphically for easier interpretation by others outside your research project. You can use, for example, bar graphs, histograms, frequency polygons, pie charts, and contingency tables.

Commentary on Survey Research

In this section, we present several commentaries on survey research.

Strengths and Weaknesses of Surveys

Strengths:

  • Surveys are relatively inexpensive (especially self-administered surveys).
  • Surveys are useful in describing the characteristics of a large population. No other method of observation can provide this general capability.
  • They can be administered from remote locations using mail, email or telephone.
  • Consequently, very large samples are feasible, making the results statistically significant even when analyzing multiple variables.
  • Many questions can be asked about a given topic giving considerable flexibility to the analysis.
  • There is flexibility at the creation phase in deciding how the questions will be administered: as face-to-face interviews, by telephone, as group-administered written or oral surveys, or by electronic means.
  • Standardized questions make measurement more precise by enforcing uniform definitions upon the participants.
  • Standardization ensures that similar data can be collected from groups then interpreted comparatively (between-group study).
  • Usually, high reliability is easy to obtain--by presenting all subjects with a standardized stimulus, observer subjectivity is largely eliminated.

Weaknesses:

  • A methodology relying on standardization forces the researcher to develop questions general enough to be minimally appropriate for all respondents, possibly missing what is most appropriate to many respondents.
  • Surveys are inflexible in that they require the initial study design (the tool and administration of the tool) to remain unchanged throughout the data collection.
  • The researcher must ensure that a large number of the selected sample will reply.
  • It may be hard for participants to recall information or to tell the truth about a controversial question.
  • As opposed to direct observation, survey research (excluding some interview approaches) can seldom deal with "context."

Reliability and Validity

Surveys tend to be weak on validity and strong on reliability. The artificiality of the survey format puts a strain on validity. Since people's real feelings are hard to grasp in terms of such dichotomies as "agree/disagree," "support/oppose," and "like/dislike," these are only approximate indicators of what we have in mind when we create the questions. Reliability, on the other hand, is a clearer matter. Survey research presents all subjects with a standardized stimulus, and so goes a long way toward eliminating unreliability in the researcher's observations. Careful wording, format, and content can significantly reduce the subject's own unreliability.

Ethical Considerations of Using Electronic Surveys

Because electronic mail is rapidly becoming such a large part of our communications system, this survey method deserves special attention. In particular, there are four basic ethical issues researchers should consider if they choose to use email surveys.

Sample Representativeness: Researchers who conduct surveys have an ethical obligation to use population samples that are inclusive of race, gender, educational level, income level, etc., so if you choose to use e-mail to administer your survey, you face some serious problems. Individuals who have access to personal computers, modems, and the Internet are not necessarily representative of a population. Therefore, it is suggested that researchers not use an e-mail survey when a more inclusive research method is available. However, if you do choose to do an e-mail survey because of its other advantages, you might consider including as part of your survey write-up a reminder of the limitations of sample representativeness when using this method.

Data Analysis: Even though e-mail surveys tend to have greater response rates, researchers still do not necessarily know exactly who has responded. For example, some e-mail accounts are screened by an unintended viewer before they reach the intended viewer. This issue challenges the external validity of the study. According to Goree and Marszalek (1995), because of this challenge, "researchers should avoid using inferential analysis for electronic surveys" (p. 78).

Confidentiality versus Anonymity: An electronic response is never truly anonymous, since researchers know the respondents' e-mail addresses. According to Goree and Marszalek (1995), researchers are ethically required to guard the confidentiality of their respondents and to assure respondents that they will do so.

Responsible Quotation: It is considered acceptable for researchers to correct typographical or grammatical errors before quoting respondents since respondents do not have the ability to edit their responses. According to Goree and Marszalek (1995), researchers are also faced with the problem of "casual language" use common to electronic communication (p. 78). Casual language responses may be difficult to report within the formal language used in journal articles.

Response Rate Issues

Nonresponse and response rates are becoming increasingly important issues in survey research. According to Weisberg, Krosnick and Bowen (1989), in the 1950s it was not unusual for survey researchers to obtain response rates of 90 percent. Now, however, people are not as trusting of interviewers, and response rates are much lower--typically 70 percent or less. Today, even when survey researchers obtain high response rates, they still have to deal with many potential respondent problems.

Nonresponse Issues

Nonresponse Errors

Nonresponse is usually considered a source of bias in a survey, aptly called nonresponse bias. Nonresponse bias is a problem for almost every survey, as it arises from the fact that there are usually differences between the ideal sample pool of respondents and the sample that actually responds. According to Fox and Tracy (1986), "when these differences are related to criterion measures, the results may be misleading or even erroneous" (p. 9). For example, a response rate of only 40 or 50 percent creates problems of bias, since the results may reflect an inordinate percentage of a particular demographic portion of the sample. Variance estimates and confidence intervals also become greater as the sample size is reduced, and it becomes more difficult to construct confidence limits.

Nonresponse bias usually cannot be avoided and so inevitably negatively affects most survey research by creating errors in a statistical measurement. Researchers must therefore account for nonresponse either during the planning of their survey or during the analysis of their survey results. If you create a larger sample during the planning stage, confidence limits may be based on the actual number of responses themselves.
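
One common planning-stage adjustment is to inflate the initial mailing so that the expected number of completed returns still meets the target sample size. A hypothetical back-of-the-envelope sketch in Python:

    # Hypothetical sketch: inflating a mailing to offset nonresponse.
    import math

    target_completes = 400          # responses needed for the analysis
    expected_response_rate = 0.45   # estimated from a pretest or pilot

    to_mail = math.ceil(target_completes / expected_response_rate)
    print(f"mail {to_mail} questionnaires to expect about "
          f"{target_completes} completed returns")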

Household-Level Determinants of Nonresponse

According to Couper and Groves (1996), reductions in nonresponse and its errors should be based on a theory of survey participation. This theory of survey participation argues that a person's decision to participate in a survey generally occurs during the first moments of interaction with an interviewer or the text. According to Couper and Groves, four types of influences affect a potential respondent's decision of whether or not to cooperate in a survey. First, potential respondents are influenced by two factors that the researcher cannot control: by their social environments and by their immediate households. Second, potential respondents are influenced by two factors the researcher can control: the survey design and the interviewer.

To minimize nonresponse, Couper and Groves suggest that researchers manipulate the two factors they can control--the survey design and the interviewer.

Response Issues

Not only do survey researchers have to be concerned about nonresponse rate errors, but they also have to be concerned about the following potential response rate errors:

  • Response bias occurs when respondents deliberately falsify their responses. This error greatly jeopardizes the validity of a survey's measurements.
  • Response order bias occurs when a respondent loses track of all options and picks one that comes easily to mind rather than the most accurate.
  • Response set bias occurs when respondents do not consider each question and just answer all the questions with the same response. For example, they answer "disagree" or "no" to all questions.

These response errors can seriously distort a survey's results. Unfortunately, according to Fox and Tracy (1986), response bias is difficult to eliminate; even if the same respondent is questioned repeatedly, he or she may continue to falsify responses. Response order bias and response set errors, however, can be reduced through careful development of the survey questionnaire.

Satisficing

Related to the issue of response errors, especially response order bias and response bias, is the issue of satisficing. According to Krosnick, Narayan, and Smith (1996) satisficing is the notion that certain survey response patterns occur as respondents "shortcut the cognitive processes necessary for generating optimal answers" (p. 29). This theoretical perspective arises from the belief that most respondents are not highly motivated to answer a survey's questions, as reflected in the declining response rates in recent years. Since many people are reluctant to be interviewed, it is presumptuous to assume that respondents will devote a lot of effort to answering a survey.

The theoretical notion of satisficing can be further understood by considering what respondents must do to provide optimal answers. According to Krosnick et al. (1996), "respondents must carefully interpret the meaning of each question, search their memories extensively for all relevant information, integrate that information carefully into summary judgments, and respond in ways that convey those judgments' meanings as clearly and precisely as possible" (p. 31). Therefore, satisficing occurs when one or more of these cognitive steps is compromised.

Satisficing takes two forms: weak and strong. Weak satisficing occurs when respondents go through all of the cognitive steps necessary to provide optimal answers, but are not as thorough in their cognitive processing. For example, respondents can answer a question with the first response that seems acceptable instead of generating an optimal answer. Strong satisficing, on the other hand, occurs when respondents omit the steps of judgment and retrieval altogether.

Even though they believe that not enough is known yet to offer suggestions on how to increase optimal respondent answers, Krosnick et al. (1996) argue that satisficing can be reduced by maximizing "respondent motivation" and by "minimizing task difficulty" in the survey questionnaire (p. 43).

Annotated Bibliography

General Survey Information:

Allan, Graham, & Skinner, Chris (eds.) (1991). Handbook for Research Students in the Social Sciences. The Falmer Press: London.

This book is an excellent resource for anyone studying in the social sciences. It is not only well-written, but it is clear and concise with pertinent research information.

Alreck, P. L., & Settle, R. B. (1995). The survey research handbook: Guidelines and strategies for conducting a survey (2nd). Burr Ridge, IL: Irwin.

Provides thorough, effective survey research guidelines and strategies for sponsors, information seekers, and researchers. In a very accessible, but comprehensive, format, this handbook includes checklists and guidelists within the text, bringing together all the different techniques and principles, skills and activities to do a "really effective survey."

Babbie, E.R. (1973). Survey research methods . Belmont, CA: Wadsworth.

A comprehensive overview of survey methods. Solid basic textbook on the subject.

Babbie, E.R. (1995). The practice of social research (7th). Belmont, CA: Wadsworth.

The reference of choice for many social science courses. An excellent overview of question construction, sampling, and survey methodology. Includes a fairly detailed critique of an example questionnaire. Also includes a good overview of statistics related to sampling.

Belson, W.A. (1986). Validity in survey research. Brookfield, VT: Gower.

Emphasis on construction of survey instrument to account for validity.

Bourque, Linda B. & Fiedler, Eve P. (1995). How to Conduct Self-Administered and Mail Surveys. Sage Publications: Thousand Oaks.

Contains current information on both self-administered and mail surveys. It is a great resource if you want to design your own survey; there are step-by-step methods for conducting these two types of surveys.

Bradburn, N.M., & Sudman, S. (1979). Improving interview method and questionnaire design . San Francisco: Jossey-Bass Publishers.

A good overview of polling. Includes setting up questionnaires and survey techniques.

Bradburn, N. M., & Sudman, S. (1988). Polls and Surveys: Understanding What They Tell Us. San Francisco: Jossey-Bass Publishers.

These veteran survey researchers answer questions about survey research that are commonly asked by the general public.

Campbell, Angus A., & Katona, George. (1953). The Sample Survey: A Technique for Social Science Research. In Newcomb, Theodore M. (Ed.), Research Methods in the Behavioral Sciences. The Dryden Press: New York. pp. 14-55.

Includes information on all aspects of social science research. Some chapters in this book are outdated.

Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire . Newbury Park, CA: Sage.

A very helpful little publication that addresses the key issues in question construction.

Dillman, D.A. (1978). Mail and telephone surveys: The total design method . New York: John Wiley & Sons.

An overview of conducting telephone surveys.

Frey, James H., & Oishi, Sabine Mertens. (1995). How To Conduct Interviews By Telephone and In Person. Sage Publications: Thousand Oaks.

This book has a step-by-step breakdown of how to conduct and design telephone and in person interview surveys.

Fowler, Floyd J., Jr. (1993). Survey Research Methods (2nd.). Newbury Park, CA: Sage.

An overview of survey research methods.

Fowler, F. J. Jr., & Mangione, T. W. (1990). Standardized survey interviewing: Minimizing interviewer-related error . Newbury Park, CA: Sage.

Another aspect of validity/reliability--interviewer error.

Fox, J. & Tracy, P. (1986). Randomized Response: A Method for Sensitive Surveys . Beverly Hills, CA: Sage.

Authors provide a good discussion of response issues and methods of random response, especially for surveys with sensitive questions.

Frey, J. H. (1989). Survey research by telephone (2nd). Newbury Park, CA: Sage.

General overview to telephone polling.

Glock, Charles (ed.) (1967). Survey Research in the Social Sciences. New York: Russell Sage Foundation.

Although fairly outdated, this collection of essays is useful in illustrating the somewhat different ways in which different disciplines regard and use survey research.

Hoinville, G. & Jowell, R. (1978). Survey research practice . London: Heinemann.

Practical overview of the methods and procedures of survey research, particularly discussing problems which may arise.

Hyman, H. H. (1972). Secondary Analysis of Sample Surveys. New York: John Wiley & Sons.

This source is particularly useful for anyone attempting to do secondary analysis. It offers a comprehensive overview of this research method, and couches it within the broader context of social scientific research.

Hyman, H. H. (1955). Survey design and analysis: Principles, cases, and procedures . Glencoe, IL: Free Press.

According to Babbie, an oldie but goodie--a classic.

Jones, R. (1985). Research methods in the social and behavioral sciences . Sunderland, MA: Sinauer.

General introduction to methodology. Helpful section on survey research, especially the discussion on sampling.

Kalton, G. (1983). Compensating for missing survey data . Ann Arbor, MI: Survey Research Center, Institute for Social Research, the University of Michigan.

Addresses a problem often encountered in survey methodology.

Kish, L. (1965). Survey sampling . New York: John Wiley & Sons.

Classic text on sampling theories and procedures.

Lake, C.C., & Harper, P. C. (1987). Public opinion polling: A handbook for public interest and citizen advocacy groups . Washington, D.C.: Island Press.

Clearly written, easy-to-follow guide for planning, conducting, and analyzing public surveys. Presents material in a step-by-step fashion, including checklists, potential pitfalls, and real-world examples and samples.

Lauer, J.M., & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford UP.

Excellent overview of a number of research methodologies applicable to composition studies. Includes a chapter on "Sampling and Surveys" and appendices on basic statistical methods and considerations.

Monette, D. R., Sullivan, T. J, & DeJong, C. R. (1990). Applied Social Research: Tool for the Human Services (2nd). Fort Worth, TX: Holt.

A good basic general research textbook which also includes sections on minority issues when doing research and the analysis of "available" or secondary data.

Rea, L. M., & Parker, R. A. (1992). Designing and conducting survey research: A comprehensive guide . San Francisco: Jossey-Bass.

Written for the social and behavioral sciences, public administration, and management.

Rossi, P.H., Wright, J.D., & Anderson, A.B. (eds.) (1983). Handbook of survey research . New York: Academic Press.

Handbook of quantitative studies in social relations.

Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York: Wiley.

A how-to book written for the social sciences.

Sayer, Andrew. (1992). Methods In Social Science: A Realist Approach. Routledge: London and New York.

Gives a different perspective on social science research.

Schuldt, Barbara A., & Totten, Jeff W. (1994, Winter). Electronic Mail vs. Mail Survey Response Rates. Marketing Research, 6, 36-39.

An article with specific information for electronic and mail surveys. Mainly a technical resource.

Schuman, H. & Presser, S. (1981). Questions and answers in attitude surveys . New York: Academic Press.

Detailed analysis of research question wording and question order effects on respondents.

Schwarz, N., & Sudman, S. (1996). Answering Questions: Methodology for Determining Cognitive and Communication Processes in Survey Research. San Francisco: Jossey-Bass.

Authors provide a summary of the latest research methods used for analyzing interpretive cognitive and communication processes in answering survey questions.

Sudman, S., Bradburn, N., & Schwarz, N. (1996). Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.

Explores the survey as a "social conversation" to investigate what answers mean in relation to how people understand the world and communicate.

Simon, J. (1969). Basic research methods in social science: The art of empirical investigation. New York: Random .

An excellent discussion of survey analysis. The definitions and descriptions begin from a fairly understandable (simple) starting point, then the discussion unfolds to cover some fairly complex interpretive strategies.

Singleton, R. Jr., et al. (1988). Approaches to social research. New York: Oxford UP.

Has a very accessible chapter on sampling as well as a chapter on survey research.

Smith, Robert B. (Ed.) (1982). A Handbook of Social Science Methods, Volume 3. Praeger: New York.

There is a series of handbooks, each one with specific topics in social science research. A good technical resource, yet slightly dated.

Lee, E. S., Forthofer, R.N., & Lorimor, R.J. (1989). Analyzing complex survey data. Newbury Park, CA: Sage Publications.

Details on the statistical analysis of survey data.

Singer, E., & Presser, S., eds. (1989). Survey research methods: A reader . Chicago: U of Chicago P.

The essays in this volume originally appeared in various issues of Public Opinion Quarterly.

Survey Research Center (1983). Interviewer's manual . Ann Arbor, MI: University of Michigan Press.

Very practical, step-by-step guide to conducting a survey and interview with lots of examples to illustrate the process.

Pearson, R.W., & Boruch, R.F. (Eds.) (1986). Survey Research Design: Towards a Better Understanding of Their Costs and Benefits. Springer-Verlag: Berlin.

Explains, in a technical fashion, the financial aspects of research design. Somewhat of a cost-analysis book.

Weisberg, H.F., Krosnick, J.A., & Bowen, B.D. (1989). An introduction to survey research and data analysis. Glenview, IL: Scott Foresman.

A good discussion of basic analysis and statistics, particularly what statistical applications are appropriate for particular kinds of data.

Anderson, B., Puur, A., Silver, B., Soova, H., & Voormann, R. (1994). Use of a lottery as an incentive for survey participation: a pilot survey in Estonia. International Journal of Public Opinion Research, 6, 64-71.

Looks at return results in a study that offers incentives, and recommends incentive use to increase response rates.

Bare, J. (1994). Truth about daily fluctuations in 1992 pre-election polls. Newspaper Research Journal, 15, 73-81.

Comparison of variations between daily poll results of the major polls used during the 1992 American Presidential race.

Chi, S. (1993). Computer knowledge, interests, attitudes, and uses among faculty in two teachers' universities in China. DAI-A, 54/12, 4412-4623.

Survey indicating a strong link between subject area and computer usage.

Cowans, J. (1994). Wielding the people: Opinion polls and the problem of legitimacy in France since 1944. DAI-A, 54/12, 4556-5027.

Study looks at how the advent of opinion polling has affected the legitimacy of French governments since World War II.

Crewe, I. (1993). A nation of liars? Opinion polls and the 1992 election. Journal of the Market Research Society, 35 , 341-359.

Poses possible reasons the British polls were so wrong in predicting the outcomes of the 1992 national elections.

Daly, J., & Miller, M. (1975). The empirical development of an instrument to measure writing apprehension. Research in the teaching of English, 9(3), 242-249.

Discussion of basics in question development and data analysis. Also includes some sample questions.

Daniell, S. (1993). Graduate teaching assistants' attitudes toward and responses to academic dishonesty. DAI-A, 54/06, 2065-2257.

Study explores the ethical and academic responses to cheating, using a large survey tool.

Mittal, B. (1994). Public assessment of TV advertising: Faint praise and harsh criticism. Journal of Advertising Research, 34, 35-53.

Results of a survey of Southern U.S. television viewers' perceptions of television advertisements.

Palmquist, M., & Young, R.E. (1992). Is writing a gift? The impact on students who believe it is. Reading empirical research studies: The rhetoric of research . Hayes et al. eds. Hillsdale NJ: Erlbaum.

This chapter presents results of a study of student beliefs about writing. Includes sample questions and data analysis.

Serow, R. C., & Bitting, P. F. (1995). National service as educational reform: A survey of student attitudes. Journal of research and development in education, 28(2), 87-90.

This study assessed college students' attitude toward a national service program.

Stouffer, Samuel. (1955). Communism, Conformity, and Civil Liberties. New York: John Wiley & Sons.

This is a famous old survey worth examining. This survey examined the impact of McCarthyism on the attitudes of both the general public and community leaders, asking whether the repression of the early 1950s affected support for civil liberties.

Wanta, W. & Hu, Y. (1993). The agenda-setting effects of international news coverage: An examination of differing news frames. International Journal of Public Opinion Research, 5, 250-264.

Discusses results of Gallup polls on important problems in relation to the news coverage of international news.

Worcester, R. (1992). The performance of the political opinion polls in the 1992 British general election. Marketing and Research Today, 20, 256-263.

A critique of the use of polls in an attempt to predict voter actions.

Yamada, S., & Synodinos, N. (1994). Public opinion surveys in Japan. International Journal of Public Opinion Research, 6, 118-138.

Explores trends in opinion poll usage, response rates, and refusals in Japanese polls from 1975 to 1990.

Criticism/Critique/Evaluation:

Bangura, A. K. (1992). The limitations of survey research methods in assessing the problem of minority student retention in higher education . San Francisco: Mellen Research UP.

Case study done at a Maryland university addressing an aspect of validity involving intercultural factors.

Bateson, N. (1984). Data construction in social surveys. London: Allen & Unwin.

Tackles the theory of the method (but not the methods of the method) of data construction. Deals with the validity of the data by validating the process of data construction.

Braverman, M. (1996). Sources of Survey Error: Implications for Evaluation Studies. New Directions for Evaluation: Advances in Survey Research, 70, 17-28.

Looks at how evaluations using surveys can benefit from using survey design methods that reduce various survey errors.

Brehm, J. (1994). Stubbing our toes for a foot in the door? Prior contact, incentives and survey response. International Journal of Public Opinion Research, 6, 45-63.

Considers whether incentives or the original contact letter lead to increased response rates.

Bulmer, M. (1977). Social-survey research. In M. Bulmer (ed.), Sociological research methods: An introduction . London: Macmillan.

The section includes discussions of pros and cons of survey research findings, inferences and interpreting relationships found in social-survey analysis.

Couper, M., & Groves, R. (1996). Household-Level Determinants of Survey Nonresponse. New Directions for Evaluation: Advances in Survey Research, 70, 63-80.

Authors discuss their theory of survey participation. They believe that decisions to participate are based on two occurrences: interactions with the interviewer, and the sociodemographic characteristics of respondents.

Couto, R. (1987). Participatory research: Methodology and critique. Clinical Sociology Review, 5 , 83-90.

Criticism of survey research. Addresses knowledge/power/change issues through the critique.

Dillman, D., Sangster, R., Tarnai, J., & Rockwood, T. (1996). Understanding Differences in People's Answers to Telephone and Mail Surveys. New Directions for Evaluation: Advances in Survey Research, 70, 45-62.

Explores the issue of differences in respondents' answers in telephone and mail surveys, which can affect a survey's results.

Esaiasson, P. & Granberg, D. (1993). Hidden negativism: Evaluation of Swedish parties and their leaders under different survey methods. International Journal of Public Opinion Research, 5, 265-277.

Compares varying results of mailed questionnaires vs. telephone and personal interviews. Findings indicate methodology affected results.

Guastello, S. & Rieke, M. (1991). A review and critique of honesty test research. Behavioral Sciences and the Law, 9, 501-523.

Looks at the use of honesty, or integrity, testing to predict theft by employees, questioning further use of the tests due to extremely low validity. Social and legal implications are also considered.

Hamilton, R. (1991). Work and leisure: On the reporting of poll results. Public Opinion Quarterly, 55, 347-356.

Looks at methodology changes that affected reports of results in the Harris poll on American Leisure.

Juster, F., & Stanford, F. (1991). Comment on work and leisure: On reporting of poll results. Public Opinion Quarterly, 55, 357-359.

Rebuttal of the Hamilton essay, cited above. The rebuttal is based upon statistical interpretation methods used in the cited survey.

Krosnick, J., Narayan, S., & Smith, W. (1996). Satisficing in surveys: Initial evidence. New Directions for Evaluation: Advances in Survey Research, 70, 29-44.

Authors discuss "satisficing," a cognitive approach to survey response, which they believe helps researchers understand how survey respondents arrive at their answers.

Lindsey, J. K. (1973). Inferences from sociological survey data: A unified approach. San Francisco: Jossey-Bass.

Examines the statistical analysis of survey data.

Morgan, F. (1990). Judicial standards for survey research: An update and guidelines. Journal of Marketing, 54, 59-70.

Looks at legal use of survey information as defined and limited in recent cases. Excellent definitions.

Pottick, K. (1990). Testing the underclass concept by surveying attitudes and behavior. Journal of Sociology and Social Welfare, 17, 117-125.

Review of definitional tests constructed to define "underclass."

Rohme, N. (1992). The state of the art of public opinion polling worldwide. Marketing and Research Today, 20, 264-271.

A quick review of the use of polling in several countries, concluding that the use of polling is on the rise worldwide.

Sabatelli, R. (1988). Measurement issues in marital research: A review and critique of contemporary survey instruments. Journal of Marriage and the Family, 55, 891-915.

Examines issues of methodology.

Schriesheim, C. A., & Denisi, A. S. (1980). Item presentation as an influence on questionnaire validity: A field experiment. Educational and Psychological Measurement, 40(1), 175-82.

Two types of questionnaire formats measuring leadership variables were examined: one with items measuring the same dimensions grouped together and the second with items measuring the same dimensions distributed randomly. The random condition showed superior validity.

Smith, T. (1990). A critique of the Kinsey Institute/Roper Organization national sex knowledge survey. Public Opinion Quarterly, 55, 449-457.

Questions validity of the survey based upon question selection and response interpretations. A rejoinder follows, defending the poll.

Smith, T. W. (1990). The first straw? A study of the origins of election polls. Public Opinion Quarterly, 54, 21-36.

This article offers a look at the early history of American political polling, with special attention to media reactions to the polls. It is an interesting source for anyone interested in the ethical issues surrounding polling and surveys.

Sniderman, P. (1986). Reflections on American racism. Journal of Social Issues, 42, 173-187.

Rebuttal of critique of racism research. Addresses issues of bias and motive attribution.

Stanfield, J. H., II, & Dennis, R. M. (Eds.). (1993). Race and Ethnicity in Research Methods. Newbury Park, CA: Sage.

The contributions in this volume examine the array of methods used in quantitative, qualitative, and comparative and historical research to show how research sensitive to ethnic issues can best be conducted.

Stapel, J. (1993). Public opinion polling: Some perspectives in response to 'critical perspectives.' International Journal of Public Opinion Research, 5, 193-194.

Discussion of the moral power of polling results.

Wentland, E. J., & Smith, K. W. (1993). Survey responses: An evaluation of their validity. San Diego: Academic Press.

Reviews and analyzes data from studies that have, through the use of external criteria, assessed the validity of individuals' responses to questions concerning personal characteristics and behavior in a wide variety of areas.

Williams, R. M., Jr. (1989). The American Soldier: An assessment, several wars later. Public Opinion Quarterly, 53, 155-174.

One of the classic studies in the history of survey research is reviewed by one of its authors.

Secondary Analysis:

Jolliffe, F. R. (1986). Survey Design and Analysis. Chichester: Ellis Horwood.

Information about survey design as well as secondary analysis of surveys.

Kiecolt, K. J., & Nathan, L. E. (1985). Secondary analysis of survey data . Beverly Hills, CA: Sage.

Discussion of how to use previously collected survey data to answer a new research question.

Monette, D. R., Sullivan, T. J., & DeJong, C. R. (1990). Analysis of available data. In Applied Social Research: Tool for the Human Services (2nd ed., pp. 202-230). Fort Worth, TX: Holt.

Gives some existing sources for statistical data as well as discussing ways in which to use it.

Rubin, A. (1988). Secondary analyses. In R. M. Grinnell, Jr. (Ed.), Social work research and evaluation. (3rd ed., pp. 323-341). Itasca, IL: Peacock.

Chapter discusses inductive and deductive processes in relation to research designs using secondary data. It also discusses methodological issues and presents a case example.

Dale, A., Arber, S., & Procter, M. (1988). Doing Secondary Analysis. London: Unwin Hyman.

A book-length treatment of how to conduct secondary analysis.

Electronic Surveys:

Carr, H. H. (1991). Is using computer-based questionnaires better than using paper? Journal of Systems Management, September, 19, 37.

Cited in Thach (1995), below.

Dunnington, R. A. (1993). New methods and technologies in the organizational survey process. American Behavioral Scientist, 36(4), 512-30.

Asserts that three decades of advances in communications and computer technology have transformed, if not revolutionized, organizational survey use and potential.

Goree, C., & Marszalek, J. (1995). Electronic surveys: Ethical issues for researchers. The College Student Affairs Journal, 15(1), 75-79.

Explores how the use of electronic surveys challenges existing ethical standards of survey research, and argues that researchers need to be aware of these new ethical issues.

Hsu, J. (1995). The development of electronic surveys: A computer language-based method. The Electronic Library, 13(3), 195-201.

Discusses the need for a markup language method to properly support the creation of survey questionnaires.

Kiesler, S., & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402-13.

Opperman, M. (1995). E-mail surveys--potentials and pitfalls. Marketing Research, 7(3), 29-33.

A discussion of the advantages and disadvantages of using E-Mail surveys.

Sproull, L. S. (1986). Using electronic mail for data collection in organizational research. Academy of Management Journal, 29, 159-69.

Synodinos, N. E., & Brennan, J. M. (1988). Computer interactive interviewing in survey research. Psychology & Marketing, 5(2), 117-137.

Thach, L. (1995). Using electronic mail to conduct survey research. Educational Technology, 35, 27-31.

A review of the literature on survey research via electronic mail, concentrating on the key issues in design, implementation, and response using this medium.

Walsh, J. P., Kiesler, S., Sproull, L. S., & Hesse, B. W. (1992). Self-selected and randomly selected respondents in a computer network survey. Public Opinion Quarterly, 56, 241-244.

Further Investigation:

Berg, D. N., & Smith, K. K. (Eds.). (1988). The Self in Social Inquiry: Researching Methods. Newbury Park, CA: Sage.

Addresses ethical issues concerning the role of the researcher in social science research.

Barribeau, Paul, Bonnie Butler, Jeff Corney, Megan Doney, Jennifer Gault, Jane Gordon, Randy Fetzer, Allyson Klein, Cathy Ackerson Rogers, Irene F. Stein, Carroll Steiner, Heather Urschel, Theresa Waggoner, & Mike Palmquist. (2005). Survey Research. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=68

Survey Research: An Effective Design for Conducting Nursing Research

Vicki A. Keough, PhD, RN-BC, ACNP (Dean and Professor, Loyola University Chicago, Marcella Niehoff School of Nursing, Maywood, Illinois) and Paula Tanabe, PhD, MPH, RN (Research Assistant Professor, Department of Emergency Medicine and Institute for Healthcare Studies, Northwestern University, Feinberg School of Medicine, Chicago, Illinois)

DOI: https://doi.org/10.1016/S2155-8256(15)30315-X

A Survey of U.S. Adults' Opinions about Conduct of a Nationwide Precision Medicine Initiative® Cohort Study of Genes and Environment

Contributed equally to this work with: David J. Kaufman, Rebecca Baker, Lauren C. Milner, Kathy L. Hudson

* E-mail: [email protected]

Affiliation National Human Genome Research Institute, Division of Genomics and Society, National Institutes of Health, Rockville, MD, United States of America

Affiliation National Institutes of Health, Office of the Director, Bethesda, MD, United States of America

  • David J. Kaufman, 
  • Rebecca Baker, 
  • Lauren C. Milner, 
  • Stephanie Devaney, 
  • Kathy L. Hudson


Published: August 17, 2016
https://doi.org/10.1371/journal.pone.0160461


Abstract

A survey of a population-based sample of U.S. adults was conducted to measure their attitudes about, and to inform the design of, the Precision Medicine Initiative's planned national cohort study.

An online survey was conducted by GfK between May and June of 2015. The influence of different consent models on willingness to share data was examined by randomizing participants to one of eight consent scenarios.

Of 4,777 people invited to take the survey, 2,706 responded and 2,601 (54% response rate) provided valid responses. Most respondents (79%) supported the proposed study, and 54% said they would definitely or probably participate if asked. Support for and willingness to participate in the study varied little among demographic groups; younger respondents, LGBT respondents, and those with more years of education were significantly more likely to take part if asked. The most important study incentive that the survey asked about was learning about one’s own health information. Willingness to share data and samples under broad, study-by-study, menu and dynamic consent models was similar when a statement about transparency was included in the consent scenarios. Respondents were generally interested in taking part in several governance functions of the cohort study.

Conclusions

A large majority of the U.S. adults who responded to the survey supported a large national cohort study. Levels of support for the study and willingness to participate were both consistent across most demographic groups. The opportunity to learn health information about one’s self from the study appears to be a strong motivation to participate.

Citation: Kaufman DJ, Baker R, Milner LC, Devaney S, Hudson KL (2016) A Survey of U.S. Adults' Opinions about Conduct of a Nationwide Precision Medicine Initiative® Cohort Study of Genes and Environment. PLoS ONE 11(8): e0160461. https://doi.org/10.1371/journal.pone.0160461

Editor: Alejandro Raul Hernandez Montoya, Universidad Veracruzana, MEXICO

Received: January 18, 2016; Accepted: July 19, 2016; Published: August 17, 2016

This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.

Data Availability: With respect to our ability to share the data, the ethos of the Precision Medicine Initiative is to share data openly. However, in this instance, the survey data used in the paper were collected under a contractual agreement with the survey research company GfK. GfK carries out the survey on a sample drawn from a large population-based sample that GfK recruits and maintains. GfK has an ethical and contractual obligation to protect the privacy of its panel members and their households. To this end GfK passes these obligations on to its clients. We are bound ethically and legally not to share respondent identifiers, or data that could be linked to the larger dataset we possess that would allow for the identification of respondents or households. Requests for collaborations to examine aggregate analyses of these data are welcome and can be sent to [email protected] .

Funding: The Foundation for the National Institutes of Health directly paid for GfK to field the survey. The authors themselves received no specific funding for this work. FNIH did not participate in data collection, analysis, decisions to publish, or preparation of the manuscript; it was involved in discussions about the logistics of study design but did not influence survey content.

Competing interests: During the study and at the time of publication, DK worked at the National Institutes of Health (NIH) which is sponsoring and developing the PMI cohort study. He was not involved in those efforts. During the study and at the time of publication, KH, RB, and LM worked at the NIH which is sponsoring and developing the PMI cohort study, and were directly involved in development of the PMI cohort study. SD worked at the NIH at the time the survey was developed where she worked directly on development of the PMI study. At the time of publication, she worked at the White House where she was involved in some aspects of developing the PMI study. This does not alter the authors' adherence to all PLOS ONE policies on data sharing and materials.

Introduction

Precision medicine is an emerging approach to disease prevention, diagnosis, and treatment that takes into account differences between individuals. While the approach is not new, to date it has been applied only to certain conditions. The Precision Medicine Initiative® (PMI) plans to build a comprehensive scientific knowledge base to implement precision medicine on a larger scale by launching a national cohort study of a million or more Americans [1]. The national cohort study will aim to foster open, responsible data sharing, maintain participant privacy, and build on strong partnerships between researchers and participants [2].

Prospective cohort studies using biospecimens are a common approach taken to examine the effects and interactions of genes, environment, and lifestyle [3-7]. Although they are labor-, time-, and capital-intensive [8], these studies can provide the statistical power needed to detect small biological effects on disease [9-10]. Both public [3, 5, 11, 12] and private [4, 6, 7] cohort studies and biobanks have been created, and genetic analyses have been incorporated into existing cohort studies as genotyping and computational tools become more accessible [11, 13, 14].

The Precision Medicine Initiative® aims to expand on these efforts to engage a wider group who would volunteer a standardized set of health information that can be shared broadly with qualified researchers. Cohort volunteers would share their information and biological specimens for genomic and other analyses. Genomic information would be combined with clinical data from electronic health records, lifestyle data, and data measured through mobile health devices, for use by a broad range of researchers. Participants would have access to information about cohort-fueled research findings, as well as some individual research results.

The National Institutes of Health (NIH), along with other federal agencies, has begun to design and execute this large prospective study as part of the White House's Precision Medicine Initiative® [1, 2, 10]. During the initial planning process for this PMI Cohort Program, NIH engaged a wide variety of expertise through four public workshops on issues of design and vision for the cohort [15-18], and two Requests for Information [19, 20].

At a July 2015 workshop on participant engagement and health equity, a broad range of experts discussed the role of participant engagement in the design and conduct of an inclusive PMI cohort [17]. The discussions, which focused on building and sustaining public trust, actively engaging participants, and enlisting participants to set research priorities and directly collect study data, informed the strategic design of the PMI cohort [21].

The workshop concluded that continued engagement of a broad range of stakeholders will be needed to plan, carry out, and sustain the PMI cohort program. As part of this larger public engagement effort, a survey of U.S. adults was conducted to measure support for such a study, to measure acceptability of various design features, and to identify and prioritize public concerns.

Materials and Methods

Survey Methods

A 44-question online survey, determined by the NIH Office of Human Subjects Research to be exempt from human subjects review (Exemption #12889), collected U.S. adults’ opinions about a national cohort study. Formal consent was not obtained both because the study was judged as exempt, and because completion of the survey is taken as a form of consent to participate.

The survey was not intended to collect psychometric data and thus did not rely on validated psychometric scales. However, to examine changes over time in public support for and hypothetical willingness to take part in a large U.S. cohort study, exact wording from two previous surveys was used for some questions [22-24]. Some questions came from a related survey of biobank participants under development at the time by the NIH-funded Electronic Medical Records and Genomics (eMERGE) consortium [25]. Response choices consisted of pre-defined options. Most of these question choices were developed based on findings of focus groups conducted as part of prior studies [22-24].

The survey addressed support for and willingness to take part in the cohort study, specific aspects of participation, study oversight including participant involvement in governance, and the return of information to participants. Respondents were first shown a description of the cohort study (S1 Appendix). At the end of the description, respondents were told that participants in the cohort study "might get access to the information collected about their health".

Respondents were then asked several questions about their support for the concept and willingness to take part if they were asked. (S2 Appendix contains exact wording of all of the questions analyzed here.) Respondents were also shown one of eight different scenarios, selected at random, describing study consent and data sharing, and asked whether they would "consent to share your samples and information with researchers in this manner". The eight scenarios varied with respect to two factors: the structure of consent (broad, study by study, menu, or dynamic consent) and the presence or absence of a statement that cohort study participants would "have access to a website where you would be able to see what studies are going on, which studies are using your information, and what each study has learned." The exact wording of all eight versions of consent is found in S3 Appendix.
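The paper does not describe the assignment mechanism beyond noting that scenarios were selected at random. As a minimal sketch of how such a 4 x 2 factorial randomization might be implemented (all names here are illustrative, not from the study):

```python
import random

# The two experimental factors described above: four consent structures,
# crossed with presence/absence of the transparency (website) statement.
CONSENT_MODELS = ["broad", "study_by_study", "menu", "dynamic"]

def assign_consent_scenario(rng: random.Random) -> dict:
    """Randomly assign a respondent to one of the 8 (4 x 2) scenarios."""
    return {
        "consent_model": rng.choice(CONSENT_MODELS),
        "website_statement": rng.choice([True, False]),
    }

rng = random.Random(42)  # fixed seed so assignments are reproducible
scenarios = [assign_consent_scenario(rng) for _ in range(2601)]
print(scenarios[0])
```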

A pilot survey (n = 30) fielded between May 5 and May 7, 2015 evaluated the instrument length and logic. Median completion time for the pilot was 23 minutes; the instrument was shortened to 20 minutes. The final instrument was translated into Spanish for use by respondents who preferred it. The translation was back-translated and certified.

Sample selection and online survey administration was managed by the online survey firm GfK. The survey sample was drawn from GfK’s KnowledgePanel, which is itself a probability-based pool of approximately 55,000 people designed to be representative of the U.S. population.

Individuals can become GfK panelists only after being randomly selected by GfK; no one can volunteer to be a member. GfK selects people using probability-based sampling of addresses from the U.S. Postal Service’s Delivery Sequence File, which includes 97% of residential U.S. households. Excluded from eligibility are those in residential care settings, institutionalized and incarcerated people, and homeless individuals. Individuals residing at randomly sampled addresses are invited to join KnowledgePanel through a series of mailings in English and Spanish; non-responders are phoned when a telephone number can be matched to the sampled address.

For those who agree to be part of the GfK panel, but do not have Internet access, GfK provides at no cost a laptop and Internet connection. GfK also optimized the survey for administration via smartphone or tablet. When GfK enrolls participants into its panel, each panel participant answers demographic questions which are banked and periodically updated by GfK. GfK can then provide data on common demographics for each of its participants, allowing surveys to reduce burden by not asking these questions. Data in this paper on participants’ self-reported race and ethnic group, age, education, gender, sexual orientation or gender identity, household income, and residence in a metropolitan statistical area were all measured by GfK prior to this survey.

GfK attempts to address sources of survey error, including sampling error, non-coverage, and non-response due to panel recruitment methods and panel attrition, by using demographic post-stratification weights based on demographics of the U.S. Current Population Survey (CPS) as the benchmark. Once the data are collected, post-stratification weights are constructed so that the study data can be adjusted for the study's sample design and for survey nonresponse [26].
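GfK's exact weighting procedure is not published here, and in practice raking over several demographic margins is typical. Still, the core post-stratification idea can be shown in a minimal sketch; the education categories and benchmark shares below are placeholders, not actual CPS figures:

```python
import pandas as pd

# Hypothetical respondents, one demographic cell each.
sample = pd.DataFrame({"educ": ["<HS"] + ["HS"] * 4 + ["College"] * 5})

# Placeholder population benchmarks (stand-ins for CPS shares).
population_share = {"<HS": 0.12, "HS": 0.50, "College": 0.38}

# Post-stratification weight = population share / sample share, per cell.
sample_share = sample["educ"].value_counts(normalize=True)
sample["weight"] = sample["educ"].map(
    lambda cell: population_share[cell] / sample_share[cell]
)

# Cells underrepresented relative to the benchmark (here "<HS" and "HS")
# get weights above 1; overrepresented cells get weights below 1.
print(sample.drop_duplicates())
```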

This series of methods has resulted in GfK survey samples that compare favorably to other gold standard methods designed to generate population-based samples [27]. During the field period for this survey, GfK first drew a random sample of 3,271 U.S. adults from its Web-enabled panel of approximately 55,000 U.S. residents. This included Hispanics and Black non-Hispanics. In order to meet oversampling goals of 500 in these two groups, three additional random samples were drawn: one of 665 Black non-Hispanic adults and two additional samples of 541 and 320 Hispanic adults. GfK contacted each of these 4,777 individuals via email to invite them to take part in this survey. Non-respondents received up to four email reminders from GfK.

The survey was fielded online between May 28, 2015 and June 9, 2015. Participants received the equivalent of $2 for their time. After survey data were collected, information previously collected by GfK on panel members’ demographics was added to the dataset.

Analysis Methods

Data were cleaned and analyzed using SPSS software [28]. Respondents who skipped more than one-third of the questions, or who completed the survey in less than one-quarter of the median survey time, were excluded from analyses.
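The authors used SPSS; purely as an illustration of the two exclusion rules just described, the same screen could be written as follows (column names and toy values are hypothetical):

```python
import pandas as pd

# Hypothetical raw responses: one row per respondent.
raw = pd.DataFrame({
    "n_skipped": [0, 2, 20, 5],     # questions left unanswered (of 44)
    "minutes":   [21.0, 4.0, 18.0, 25.0],
})

N_QUESTIONS = 44
median_time = raw["minutes"].median()

# Exclude anyone who skipped more than one-third of the questions,
# or who finished in under one-quarter of the median completion time.
too_many_skips = raw["n_skipped"] > N_QUESTIONS / 3
too_fast = raw["minutes"] < median_time / 4
valid = raw[~(too_many_skips | too_fast)]
print(f"kept {len(valid)} of {len(raw)} respondents")
```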

Support for the study and willingness to participate were measured using 4-point Likert scales; two binary variables were created for analysis from these scales. Demographic variables were analyzed using the categories shown in Table 1. Two sets of multiple logistic regressions were conducted (Table 2). Support for the study and willingness to participate were the dependent variables. In these models, race and ethnic group were treated as a single categorical variable using dummy variables, treating white non-Hispanics as the reference group. Education, household income, and age were each modeled as ordinal variables using the categories shown in Tables 1 and 2. Respondents who identified as lesbian, gay, bisexual, or transgender (LGBT) were analyzed together as a single group.

Table 1. Demographic characteristics of the surveyed population (n = 2,601). https://doi.org/10.1371/journal.pone.0160461.t001

Table 2. Specification of the two multiple logistic regressions. Each regression included independent covariates for gender, self-identified race and ethnic group, survey language (among Hispanics only), age, household income, educational attainment, residence within or outside a metropolitan statistical area, and identification as lesbian, gay, bisexual, or transgender. Race and ethnicity was treated as a categorical variable, using dummy variables for Black non-Hispanics, Hispanics, and other non-white non-Hispanics. Education, household income, and age were each treated as 4-level variables. To examine whether there were differences among Hispanics who took the survey in Spanish and English, separate regressions were conducted using Hispanic respondents' data only, adjusting for all of these variables except race and ethnic group. https://doi.org/10.1371/journal.pone.0160461.t002

The multiple logistic regressions examined demographic factors associated with support and participation. Analyses that included the entire sample were weighted to 2014 U.S. Census demographic benchmarks. To examine whether Hispanics who took the survey in English differed from those who took it in Spanish with respect to support and willingness to participate, separate regressions were carried out using only Hispanic respondents’ data, adjusting for the covariates in Table 1 except for race and ethnic group. Analyses within or between different races and ethnic groups used an alternate set of weights calculated for these oversampled groups.
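The paper reports the models but not the analysis code (the authors used SPSS). As a rough sketch of the modeling approach described above (a binary outcome, dummy-coded race/ethnicity with white non-Hispanics as the reference group, and ordinal covariates), one might write the following; the data and variable names are simulated and hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated analysis file (the real model also included gender, income,
# survey language, MSA residence, and LGBT status as covariates).
df = pd.DataFrame({
    "race_eth": rng.choice(["white", "black", "hispanic", "other"], size=n),
    "educ": rng.integers(1, 5, size=n),       # ordinal, 4 levels
    "age_group": rng.integers(1, 5, size=n),  # ordinal, 4 levels
})
# Simulate a support outcome that rises with education, as the survey found.
p = 1 / (1 + np.exp(-(0.5 + 0.3 * df["educ"] - 0.1 * df["age_group"])))
df["support"] = rng.binomial(1, p)

# C(...) expands race/ethnicity into dummy variables, with white
# non-Hispanics as the reference group; educ and age_group enter as ordinal.
model = smf.logit(
    "support ~ C(race_eth, Treatment(reference='white')) + educ + age_group",
    data=df,
).fit(disp=False)
print(model.summary())
```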

Results

In total, 4,777 people were contacted by GfK via email and invited to take the survey, and 2,706 provided at least some responses, for an overall response rate of 56.6%. Response rates were 62.2% (2,036 of 3,271) in the general population sample, 51.9% (345 of 665) in the Black non-Hispanic oversample, and 38.6% (325 of 841) in the Hispanic oversample. It should be noted that the 320 Hispanic cases in the second oversample were invited to respond on June 5, 2015, and the survey was closed on June 9, 2015. Members of this oversample therefore received fewer email reminders and had a shorter field period, which could account for some, but not all, of the lower completion rate in the Hispanic oversample.

Responses from 105 people (3.9%) were excluded from analysis because they skipped more than one-third of the questions or completed the survey in less than 6 minutes, leaving a valid response rate of 54% (2,601 of 4,777). The excluded people did not differ demographically from those retained in the analysis. The margin of error on opinion estimates based on the sample of 2,601 is +/- 1.9%.
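The +/- 1.9% figure is consistent with the standard 95% margin-of-error formula for a proportion, evaluated at the maximum-variance value p = 0.5; a quick check:

```python
from math import sqrt

n = 2601
p = 0.5     # most conservative assumption for a proportion
z = 1.96    # 95% confidence

moe = z * sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {moe:.1%}")  # +/- 1.9%
```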

Demographic characteristics of the surveyed population are found in Table 1 . After weighting the sample, people with less than twelve years of education were still somewhat underrepresented compared to census data. This should be considered where differences in opinions exist across education groups.

General Support for the Cohort Study

Immediately after viewing the description of the cohort study, participants were asked ‘Based on the description you just read, do you think this study should be done?’ Seventy-nine percent said the study definitely (22%) or probably (57%) should be done, while 21% said probably not (16%) or definitely not (5%).

Similar levels of support were observed across most demographic groups (Table 2). A multiple logistic regression treating support for the cohort study as a binary dependent variable showed that, adjusting for the other factors in Table 1, no significant differences in support were observed between genders, age groups, races or ethnic groups, or between Hispanics who took the survey in Spanish and English. Fewer years of education (p<0.0001), lower household income (p = 0.04), and residence outside of metropolitan statistical areas (a proxy for rural residence, p = 0.03) were independently associated with lower levels of support for the study. However, in all but one of the demographic categories examined (0-11 years of education), 70% or more said they supported the study.

Stated Willingness to Participate in the Cohort Study if Asked

The question "Would you participate in the cohort study if you were asked?" was also posed at the survey's start. Prior to this question, the only possible personal benefit of participating that was mentioned was that cohort participants "might get access to the information collected about their health." A majority of participants (54%) said they definitely (14%) or probably (40%) would participate if asked, while the rest said they probably (30%) or definitely (16%) would not take part. Willingness to participate did not vary considerably between demographic groups (Table 2). Majorities (>50%) in most groups said they would participate if asked, and in each group, at least 1 in 9 people said they would definitely take part. A second multiple logistic regression treated willingness to participate as a binary dependent variable. Adjusting for the other factors in Table 1, increasing years of education (p<0.0001) and younger age (p<0.0001) were independently associated with increased likelihood of willingness to participate. Compared to white non-Hispanics, Hispanic respondents were more likely to say they would participate (59% vs 53%, adjusted p = 0.009). As a group, respondents who identified as lesbian, gay, bisexual, or transgender were significantly more likely to say they would participate if asked (p = 0.01).

One in four respondents said that they supported the idea of the study, but also said they would not participate if they were asked. People who supported the study but would not participate if asked were more than twice as likely as those who would participate to agree that the study “would take too much of my time” (77% vs 30%), and were less likely than those who would take part to agree with the statement “I trust the study to protect my privacy” (51% vs. 81% respectively).

The survey was not designed to educate people about precision medicine or biomedical research. However, it was hypothesized that thinking about some of the attitudinal questions in the survey could influence respondents' opinions about taking part in the study. To test this hypothesis, near the end of the survey, respondents were asked again, "Now that you have had a chance to think about the study, would you participate in the study if you were asked?" Overall, responses were fairly similar to the earlier question: 56% said they would definitely (15%) or probably (41%) take part if asked; 25% said they probably would not participate and 19% said they definitely would not. Seven in ten (70%) did not change their answer from the beginning to the end of the survey. However, 15% had grown more positive about participating by the end of the survey, and 15% had grown more negative. Some demographic differences were observed in these shifts. For example, fewer people who took the survey in Spanish said they would take part at the end of the survey (55%, down from 61%), while more people with some college (60%, up from 55%) or a bachelor's degree (64%, up from 60%) were willing to take part at the end of the survey.

Respondents were also asked about specific things they would be willing to do as study participants. Among all respondents, one in seven (14%) said they would participate for their lifetime, and an additional 11% said they would take part for at least ten years. Among people who said they would definitely or probably participate if asked, 42% said they would take part for at least ten years. However, only one in four of the Black non-Hispanics and Hispanics who were willing to take part said they would do so for at least ten years.

All respondents, including those who said they would not participate, were asked to "[i]magine you were considering participating in the study", and then asked about their willingness to provide various types of data and samples. Nearly three-quarters of respondents (73%) said that if they were participating they would be willing to provide the study with a blood sample. Higher fractions said they would provide urine, hair, and saliva samples (75%), data from an activity tracker such as a Fitbit (75%), genetic information (76%), a family medical history (77%), soil and water samples from home (83%), and data on lifestyle, diet, and exercise (84%). Among those with a social media account (n = 1,641), only 43% responded that they would share their social media information with the study.

In each demographic group listed in Table 1 , at least 9% of people (one in eleven) said they would definitely participate, would take part for at least ten years, and would provide the study with a blood sample.

In the sample, 87% owned either a smartphone (62%) or a basic cell phone (25%). Three-quarters of these phone users responded that if they "were texted or prompted on your cell phone to answer a question from the study, or measure your pulse", they would be willing to respond at least once a week. A majority (59%) said they would respond at least once a day, and 28% would be willing to respond at least twice a day.

Incentives for Participation

Respondents were asked about the importance of six different incentives in their decision about whether or not to participate. The most important incentive was “learning information about my health”, listed as either somewhat or very important by 90% of people, including at least 85% of people in each group in Table 1 . Receiving payment for their time (80%) and getting health care (77%) were important to more people than receiving free internet connections (56%), activity trackers (55%) or smartphones and data plans (52%). However, the technology incentives were of more interest to younger respondents, those with lower household incomes, and those with fewer years of education.

Respondents said they would be interested in a wide variety of information that the study might return to them (Fig 1). Three in four would be interested in "lab results" (examples given were cholesterol and blood sugar) as well as genetic results. Slightly fewer (68%) said they would like a copy of their medical record. Six in ten (60%) said they would be interested in receiving information about other research studies related to their health.

Fig 1. Respondents' interest in different types of information the study could return to participants. https://doi.org/10.1371/journal.pone.0160461.g001

Consent and Sharing of Data and Samples

As described above, respondents were randomly selected to view one of eight consent scenarios and asked "would you consent to share your samples and information with researchers in this manner". There were four models of consent: broad, study-by-study, menu, and dynamic consent. The exact wording of the four consent scenarios is found in S3 Appendix. Two versions of these four scenarios were presented: four where the consent description stood alone, and four where the consent option was followed by this sentence: "You would (also) have access to a website where you would be able to see what studies are going on, which studies are using your information, and what each study has learned."

When the consent models were displayed alone, similar fractions of respondents said they would share samples and data under the study-by-study (72%), menu (75%), and dynamic consent (73%) models (Fig 2), while 64% would share with the study under the broad consent model. However, when the consent scenarios were accompanied by the statement about a website that displays how samples and data are being used, there was essentially no difference in support for the four consent models.

Fig 2. Willingness to share samples and information under different consent models. https://doi.org/10.1371/journal.pone.0160461.g002

When asked about allowing different categories of researchers to use their samples and information, people were most likely to say they would share with researchers at the NIH (79%) and U.S. academic researchers (71%). There was more reluctance to share with "pharmaceutical or drug company researchers" (52%) or "other government researchers" (44%). The category "other government researchers" may be overly broad and non-specific; for example, had the survey named researchers at specific health-related agencies, responses may have differed. Consistent with two prior surveys, people were least willing to share with university researchers in other countries (39%) [23, 24].

In a separate question, 43% of people agreed that if their personal information was removed first, they would be willing to have their “information and research results available on the Internet to anyone”.

Involvement of Participants in Design and Conduct of the Study

To create a cohort study that addresses health related questions that are relevant to the lives of participants, study designers are embracing new models of participants as partners in research. Several questions addressed respondents’ interest in this area. A large majority (76%) agreed with the statement “research participants and researchers should be equal partners in the study”.

Fig 3 shows that between 34% and 62% of respondents said that participants should be involved in various phases of the study. Most important to respondents was participant involvement in three governance tasks: deciding what kinds of research are appropriate, deciding what to do with study results, and deciding what research questions to answer. Between 35% and 45% said they would like to be involved themselves in those three aspects of the study.

Fig 3. Aspects of the study that participants should be involved in generally, and aspects the respondents themselves would want to be involved in. https://doi.org/10.1371/journal.pone.0160461.g003

One in four people said that including research participants in planning and running the study would increase their willingness to participate, including 18% of those people who said earlier in the survey that they would not take part if asked. Another 17% said it would make them less willing to take part if participants were included, and 58% said it would not affect their decision.

Discussion

The Precision Medicine Initiative® cohort program has been engaging and partnering with participant representatives prior to the launch of the study, and plans to actively continue this work with cohort study participants. This survey reflects an early effort to understand the views and preferences of potential participants toward the PMI cohort program. Findings from this survey were incorporated into the final report that the Precision Medicine Initiative Working Group made to the National Institutes of Health, and are reflected in recommendations about how the cohort study might be designed [21]. As such, the survey represents one of several early efforts to engage the public in order to inform whether and how the PMI cohort study might move forward.

Across most demographic groups this survey found consistent levels of support for and willingness to participate in the PMI cohort study as it was described. The overall support (79%) and willingness to take part (54%) observed are comparable to measures in previous public surveys conducted using nationwide GfK samples in 2008 and 2012, which found overall support for a large nationwide biobank at 84% and 83% respectively, and willingness to participate if asked at 60% and 56% respectively [23, 25]. Support in this study could be lower than that measured in earlier surveys in part due to the explicitly stated association with the NIH (and thus the federal government) in this survey, as well as high-profile privacy breaches associated with the federal government and health providers in the six months prior to the survey [29, 30]. Differences in levels of support and willingness to participate might also result from this survey's mention of smartphones and activity trackers to collect data, especially among older respondents, who had more concern about the privacy of electronic media (data not shown). This study's estimates of overall support and willingness to participate are also biased slightly upward, since people with fewer years of education, who were underrepresented in the sample compared to U.S. demographics, were less likely to support the study and participate. However, extrapolating the support and willingness observed in each education category to U.S. census frequencies of those categories suggests the magnitude of inflation from this source is less than 1% in both figures.
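That extrapolation is mechanical: average each education category's observed rate using census shares in place of sample shares. The rates and shares below are placeholders chosen only to illustrate the calculation, not the paper's figures:

```python
# Placeholder support rates and shares by education category (illustrative).
support_rate = {"0-11 yrs": 0.68, "HS": 0.76, "Some college": 0.80, "BA+": 0.84}
sample_share = {"0-11 yrs": 0.06, "HS": 0.28, "Some college": 0.30, "BA+": 0.36}
census_share = {"0-11 yrs": 0.11, "HS": 0.29, "Some college": 0.29, "BA+": 0.31}

sample_est = sum(support_rate[k] * sample_share[k] for k in support_rate)
census_est = sum(support_rate[k] * census_share[k] for k in support_rate)

print(f"sample-weighted: {sample_est:.1%}")   # 79.6%
print(f"census-weighted: {census_est:.1%}")   # 78.8%
print(f"upward bias:     {sample_est - census_est:+.1%}")  # under 1 point
```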

The findings suggest that certain groups, including older Americans and those with lower socioeconomic status, may require additional engagement if they are to take part. However, the survey findings do not support the idea that people from communities that have historically been understudied in research are not interested in participating in this cohort. On the contrary, in each demographic group in Table 1, at least one in eleven people (9%) said they would definitely participate if asked, would donate blood, and would take part for at least ten years. The willingness to take part observed here is only the foundation for efforts needed to engage, recruit, and retain people in traditionally underrepresented groups. Researchers likely must work as part of communities that have been underrepresented if those communities are going to feel and be a part of the study [17]. To this end, scientists may consider adopting language and policies that bond researchers and potential participants together to design and govern the study [17, 31].

Thirty percent of survey respondents shifted their opinions about their willingness to take part in the study from the beginning to the end of the survey. This suggests that considering some of the potential risks and benefits of participation may inform and influence people’s decision to take part. Engagement before and during study recruitment may help people make better informed decisions about participation.

The observation that receipt of health information was the most important incentive was consistent with results of a 2008 nationwide survey [23]. Maximizing information shared with research participants will be a key challenge of the PMI. Survey respondents expressed interest in a wide range of information, including but not limited to genetic information. Laboratory measurements such as blood sugar were seen as equally interesting. The return of information may also benefit research, encouraging participants to stay engaged and enrolled, and to take part in other research studies based on their results.

There was considerable enthusiasm among respondents about participant involvement in different phases of the study. Between 19% and 45% said they themselves would take part in various study-related functions. The tasks of greatest interest to the most people were governance-related. Developing "participants as equal partners" may not drastically improve enrollment. However, it may establish the kind of study identity and enthusiasm that others have cited as one key to the success of this effort [16-18, 21].

Respondents trusted NIH researchers with the data and samples to be collected. If the NIH serves as a leader in the PMI cohort, it must be prepared to understand and meet that expectation. For example, if PMI cohort data are shared with foreign academics, study leaders may need to address negative attitudes about such sharing, perhaps by engaging the public to understand their reservations and explaining how the sharing benefits U.S. medicine and research.

Limitations

It is very important to note that the results of this survey were not meant to, and do not, accurately predict what portion of American adults would take part in the PMI cohort study if they were asked. First, although half of respondents said they would take part in PMI if asked, only 54% of the sample contacted for this survey agreed to participate. Second, respondents in this survey are members of the GfK panel; they may be more favorably inclined toward research participation than the general population. This limitation is inherent in most studies of attitudes about taking part in biomedical research, since people must be willing to take part in a survey study like this one to collect such data. On the other hand, the bias may not be particularly strong, since sharing opinions on a survey is likely to be a smaller, lower-risk commitment than sharing one's biospecimens and medical data. Third, people's stated willingness to take part in a hypothetical study will not correlate perfectly with actual behavior. The PMI study should not be expected to enjoy a 54% success rate in its recruitment based on these data. Given the limitations of the survey, the data likely provide valid estimates of support for the study, as well as the relative willingness of different groups to participate, the relative importance of different incentives, and the relative acceptability of different consent models.

Supporting Information

S1 Appendix. Text used to describe the PMI cohort study in the survey.

https://doi.org/10.1371/journal.pone.0160461.s001

S2 Appendix. Exact wording of survey questions used in this manuscript.

https://doi.org/10.1371/journal.pone.0160461.s002

S3 Appendix. Wording used to describe eight consent scenarios in the survey.

https://doi.org/10.1371/journal.pone.0160461.s003

Acknowledgments

The authors wish to thank the Foundation for the National Institutes of Health, which funded this survey. The authors would also like to thank Vence Bonham, Laura Rodriguez, Alex Lee and the members of the Consent, Education, Regulation and Consultation Working Group of the eMERGE research consortium for their contributions to the survey and manuscript development.

Author Contributions

  • Conceived and designed the experiments: DK RB LM SD KH.
  • Performed the experiments: DK RB LM SD KH.
  • Analyzed the data: DK RB.
  • Wrote the paper: DK RB LM SD KH.
References
  • 2. The White House. Precision Medicine Initiative: Proposed Privacy and Trust Principles [Internet]. Washington: The White House; 2015 [cited 2015 October 8]. Available: https://www.whitehouse.gov/sites/default/files/docs/pmi_privacy_and_trust_principles_july_2015.pdf .
  • 3. Va.gov [Internet]. Washington: Million Veteran Program (MVP) [Accessed 8 October 2015]. Available: http://www.research.va.gov/mvp/ .
  • 4. Genomeweb.com [Internet]. New York: Regeneron Launches 100K-Patient Genomics Study with Geisinger, Forms New Genetics Center [updated 2014 January 14; Accessed 8 October 2015]. Available: https://www.genomeweb.com/sequencing/regeneron-launches-100k-patient-genomics-study-geisinger-forms-new-genetics-cent .
  • 5. Ukbiobank.ac.uk [Internet]. London: Biobank UK Homepage [updated 2015 October 7; Accessed 8 October 2015]. Available: http://www.ukbiobank.ac.uk .
  • 7. Dor.kaiser.org [Internet]. Oakland: Kaiser Permanente Division of Research The research program on genes, environment, and health [updated 2015 January; Accessed 8 October 2015]. Available: http://www.dor.kaiser.org/external/DORExternal/rpgeh/index.aspx .
  • 9. National Human Genome Research Institute. Design considerations for a potential United States population-based cohort to determine the relationships among genes, environment, and health: `Recommendations of an expert panel’ [Internet]. Bethesda: National Human Genome Research Institute; 2005 [Accessed 8 October 2015] Available: https://www.genome.gov/Pages/About/OD/ReportsPublications/PotentialUSCohort.pdf .
  • 15. National Institutes of Health. ACD Precision Medicine Initiative Working Group Public Workshop: Unique Scientific Opportunities for the Precision Medicine Initiative National Research Cohort [Internet]. Bethesda: National Institutes of Health; 2015 April 28 [updated 2015 June 9; Accessed 8 October 2015]. Available: http://www.nih.gov/precisionmedicine/workshop-20150428.htm .
  • 16. National Institutes of Health. ACD Precision Medicine Initiative Working Group Public Workshop: Digital Health Data in a Million-Person Precision Medicine Initiative Cohort [Internet]. Bethesda: National Institutes of Health; 2015 May 28 [updated 2015 June 30; Accessed 8 October 2015]. Available: http://www.nih.gov/precisionmedicine/workshop-20150528.htm .
  • 17. National Institutes of Health. ACD Precision Medicine Initiative Working Group Public Workshop: Participant Engagement and Health Equity Workshop [Internet]. Bethesda: National Institutes of Health; 2015 July 1 [updated 2015 August 18; Accessed 8 October 2015]. Available: http://www.nih.gov/precisionmedicine/workshop-20150701.htm .
  • 18. National Institutes of Health. Mobile and Personal Technologies in Precision Medicine Workshop—Precision Medicine Initiative Cohort [Internet]. Bethesda: National Institutes of Health; 2015 July 27 [updated 2015 August 5; Accessed 8 October 2015] Available: http://www.nih.gov/precisionmedicine/workshop-20150727.htm .
  • 19. National Institutes of Health. Summary of Responses from the Request for Information on Building the Precision Medicine Initiative National Research Participant Group [Internet]. Bethesda: National Institutes of Health; 2015 [Accessed 8 October 2015]. Available: https://www.nih.gov/sites/default/files/research-training/initiatives/pmi/pmi-workshop-20150528-rfi-summary.pdf .
  • 20. National Institutes of Health. National Institutes of Health. Request for Information: NIH Precision Medicine Cohort—Strategies to Address Community Engagement and Health Disparities [Internet]. Bethesda: National Institutes of Health; 2015 June 2 [updated 2015 August 18; Accessed 8 October 2015]. Available: http://www.nih.gov/precisionmedicine/rfi-announcement-06022015.htm .
  • 21. Precision Medicine Initiative (PMI) Working Group. The Precision Medicine Initiative Cohort Program—Building a Research Foundation for 21st Century Medicine [Internet]. Bethesda: National Institutes of Health; 2015 September 17 [Accessed 8 October 2015]. Available: http://acd.od.nih.gov/reports/DRAFT-PMI-WG-Report-9-11-2015-508.pdf .
  • 25. Vanderbilt.edu [Internet]. Nashville: Welcome to eMerge: Collaborate [Accessed 8 October 2015]. Available: https://emerge.mc.vanderbilt.edu/ .
  • 26. GfK. GfK KnowledgePanel [Internet]. Nuremberg: GfK; 2015 [Accessed 8 October 2015]. Available: http://www.gfk.com/Documents/GfK-KnowledgePanel.pdf .
  • 27. Baker LC, Bundorf MK, Singer S, Wagner TH. Validity of the survey of health and internet and Knowledge Network's panel and sampling. Available: http://cdc.gov/PCD/issues/2004/oct/pdf/04_0004_01.pdf . Accessed 14 January 2016.
  • 28. SPSS for Windows, rel. 21.0. 2012. Chicago, IL: SPSS Inc.

J Med Libr Assoc, v.100(1); 2012 Jan

Survey research: we can do better

Survey research is a commonly employed methodology in library and information science and the most frequently used research technique in papers published in the Journal of the Medical Library Association (JMLA) [1]. Unfortunately, very few of the survey reports that the JMLA receives provide sufficiently sound evidence to qualify as full-length JMLA research papers. A great deal of effort often goes into such studies, and our profession really needs the kind of evidence that the authors of these studies hoped to provide. Fortunately, the problems in these studies are not all that difficult to resolve. However, the problems do have to be addressed at the outset, before the survey is sent to potential respondents. Once the survey has been administered, it is too late.

To determine if a report qualifies for publication as a research study, the JMLA uses the definition of research given by the US Department of Health and Human Services, “a systematic investigation…designed to develop or contribute to generalizable knowledge” [2]. Problems arise when submitted surveys do not meet these criteria: either the reader cannot generalize from the findings to the population at large, or the survey does not add to the knowledge base of health sciences librarianship. If the results seem interesting, the JMLA may publish the paper as a brief communication in the hope that others will follow up with more in-depth investigations. However, many of these problematic surveys could have provided critically needed information, if only they had been done slightly differently. There are three common problems with the surveys that the JMLA receives, and each has a relatively straightforward solution.

Three common problems

Problem #1: The survey has not been designed to answer a question of interest to a substantial group of potential readers of the JMLA. A survey intended for publication should be designed to shed light on research questions relevant to health sciences librarianship or the delivery of biomedical information. Questions regarding user behavior, the effectiveness of interventions, barriers to using information, the utility of metadata, and so on are all potentially answerable with survey methodology. For example, a survey could be designed to reveal what influences users' decisions to use a library, whether physicians retain information retrieval techniques that are taught in medical school, what prevents clinicians from consulting published research, whether users appreciate good metadata, and so on. These are all important questions on issues of general interest, and surveys to help answer them are suitable for publication.

Problems arise because surveys can be used to provide information on local issues as well. For example, a librarian may wish to determine whether library users will tolerate increases in interlibrary loan fees, whether searchers are having trouble with a proxy server, or if local administrators approve of library services. A survey can be the best method to uncover this kind of information. However, such surveys are usually not publishable, even as a brief communication, as the questions included relate almost exclusively to local problems.

Solution #1: Before embarking on a survey intended for publication, review the current literature on the topic of interest. Design the survey to specifically address an issue of general importance that is not already answered in the literature. Survey questions should be written to provide information that can be used by others. A few questions specific to your institution or user group can also be included if necessary.

Problem #2: The results cannot be generalized beyond the group of people who answered the survey. Unfortunately, a major problem in all survey research is that respondents are almost always self-selected. Not everyone who receives a survey is likely to answer it, no matter how many times they are reminded or what incentives are offered. If those who choose to respond are different in some important way from those who do not, the results may not reflect the opinions or behaviors of the entire population under study. For example, to identify barriers to nurses' use of information, a survey should be answered by a representative sample of the nursing population. If only recent graduates of a nursing program, only pediatric nurses, or only nurses who are very annoyed with lack of access to computers in their hospital answer, the results may well be biased and so cannot be generalized to all nurses. Such a survey could be published as a brief communication, if the results were provocative and might stimulate research by others, but it would not be publishable as a research paper.

Solution #2: To address sample bias, take these three steps:

  • Send the survey to a representative sample of the population. Use reminders and incentives to obtain a high response rate (over 60%), thus minimizing the chances that only those with a particular perspective are answering the survey. And…
  • Include questions designed to identify sample bias. Questions will vary according to the topic of the survey, but typically such questions identify the demographics (age, sex, educational level, position, etc.) of the respondents or the characteristics of the organization (size, budget, location, etc.). Then…
  • Compare the characteristics of those answering the survey to those of known distributions of the population to identify possible bias. Samples of librarians, for example, can be compared to Medical Library Association (MLA) member surveys to determine if they reflect the general characteristics of MLA members. Samples of clinicians can be compared to statistics on the nation's medical professionals, and samples of academic libraries can be compared to the characteristics reported in the Association of Academic Health Sciences Libraries (AAHSL) annual survey. If the sample appears to be biased, acknowledge that as a limitation of the study's results. A brief illustrative sketch of such a comparison follows this list.
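To make the comparison step concrete, here is a minimal goodness-of-fit sketch in Python. Everything in it (the job-setting categories, the respondent counts, and the population proportions) is hypothetical, standing in for the known distributions, such as MLA member surveys, discussed above.

```python
# Hypothetical sketch: compare survey respondents to known population
# proportions with a chi-square goodness-of-fit test (scipy).
from scipy.stats import chisquare

# Observed respondent counts by job setting (invented for illustration)
observed = [120, 60, 20]              # academic, hospital, other
# Known population proportions, e.g., from an association member survey
population_props = [0.55, 0.35, 0.10]

n = sum(observed)
expected = [p * n for p in population_props]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests respondents differ from the population on this
# characteristic, i.e., possible sample bias to report as a limitation.
```

If the test flags a discrepancy, the survey is not invalidated; the discrepancy is simply a limitation to acknowledge alongside the results, as described above.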

Problem #3: The answers to the survey questions do not provide the information needed to address the issue at hand. Many times survey questions in studies submitted to the JMLA are ambiguous. Since it is impossible to determine what the answers represent, the paper must be rejected. A related and more subtle problem occurs when the survey did not ask about all the relevant issues. For example, a librarian might decide to survey clinicians to identify barriers to their use of mobile devices. She designs a survey that includes questions related to physical barriers, such as screen size, and questions on availability issues, such as accessibility of a particular database. The paper reports that the major barriers to use of mobile devices are physical problems with the devices. However, reviewers may note that there are many other possible barriers to using mobile technology in a clinical setting. Infrastructure issues, such as wireless connectivity in the hospital, and organizational issues, such as policies with respect to using cell phones in front of patients, can be critical factors. As a result, the conclusion of the survey is misleading, and the paper cannot be published.

Solution #3: Interview a few representative members of the intended survey population to identify all the critical aspects of the study topic before designing the survey. Then, pretest the survey on others and discuss the survey with pretest participants to identify ambiguous answers or unintelligible questions.

Benchmarking surveys

Benchmarking surveys provide data on the characteristics of a particular population of individuals, businesses, or organizations. Their intention is not to add to the knowledge base of a discipline, but instead to provide numerical information that others can use for that purpose. The US Census is an example of a benchmarking survey; the MLA membership survey is another benchmarking tool, as are the AAHSL annual survey and many of the surveys undertaken by the Pew Research Center. The data in these surveys are used by others both for practical purposes and for research. Social scientists use census data to develop economic models; academic medical libraries use AAHSL data to justify their budgets; and policy makers use the Pew data to understand social trends in the United States.

To be useful, a benchmarking study must be structured so that the data can be used either by researchers to compare different groups or by organizations, such as hospitals or libraries, to identify a peer group for comparative purposes. Using data to reliably compare groups selected according to multiple variables requires a very large scale study, an unbiased sample, and a thoroughly pretested survey instrument. Because benchmarking surveys need to be large and use a professionally constructed sample and survey instrument, most such surveys are done by organizations rather than individuals. Few, if any, benchmarking surveys submitted to the JMLA have a large enough sample to permit detailed analysis or identification of peer groups. They remain suggestive rather than conclusive and are normally only published as brief communications.

Three problems and three solutions

The solutions are not all that difficult to implement, but, as noted at the beginning of this editorial, they must be put in place before the survey is administered. To “develop or contribute to generalizable knowledge,” a survey needs to be created to answer a question that is important to others, gather information that will allow the researcher to identify sample bias, and use a well-designed, unambiguous set of questions. The research question comes first; if the answer is already in the literature, then no further research is required. Developing a sampling methodology comes next, including identifying possible sources of bias and creating questions that will allow them to be identified. Last are interviews to refine the questions and pretesting to identify problematic language. We can do better surveys, and if we do, we will have the evidence we need to improve the delivery of biomedical information.


How to Write a Survey Paper: Brief Overview


Every student wishes there was a shortcut to learning about a subject. Writing a survey paper can be an effective tool for synthesizing and consolidating information on a particular topic to gain mastery over it.

There are several techniques and best practices for writing a successful survey paper. Our team is ready to guide you through the writing process and teach you how to write a paper that will benefit your academic and professional career.

What is a Survey Paper

A survey paper is a type of academic writing that aims to give readers a comprehensive understanding of the current state of research on a particular topic. By synthesizing and analyzing existing research, a survey paper provides a useful shortcut: it highlights meaningful achievements and recent advances in the field and shows the gaps where further research might be needed.

The survey paper format includes an introduction that defines the scope of the research domain, followed by a thorough literature review section that summarizes and critiques existing research while showcasing areas for further research. A good survey paper must also provide an overview of commonly used methodologies, approaches, key terms, and recent trends in the field and a clear summary that synthesizes the main findings presented.

Our essay writing service team not only provides the best survey paper example but can also write a custom academic paper based on your specific requirements and needs.

How to Write a Survey Paper: Important Steps

If you have your head in your hands, wondering how to write a survey paper, you must be new here. Luckily, our team of experts has you covered! Below you will find the steps that will guide you to the best approach to writing a successful survey paper. No more worries about how to research a topic. Let's dive in!


Obviously, the first step is to choose a topic that is both interesting to you and relevant to a large audience. If you are struggling with topic selection, favor topics with enough existing literature to support a comprehensive survey paper.

Once you have selected your topic, define the scope of your survey paper and the specific research questions that will guide your literature review. This will help you establish boundaries and ensure that your paper is focused and well-structured.

Next, start collecting existing research on your topic through various academic databases and literature reviews. Make sure you are up to date with recent discoveries and advances. Before selecting any work for the survey, make sure the database is credible. Determine what sources are considered trustworthy and reputable within the specific domain.

Continue survey paper writing by selecting the most relevant and significant research pieces to include in your literature overview. Make sure to methodically analyze each source and critically evaluate its relevance, rigor, validity, and contribution to the field.

At this point, you have already undertaken half of the job, maybe even more, since collecting and analyzing the literature is often the most challenging part of writing a survey paper. Now it's time to organize and structure your paper. Follow the well-established outline, give a thorough review, and compose compelling body paragraphs. Don't forget to include a detailed methodology and highlight key findings and revolutionary ideas.

Finish off your writing with a powerful conclusion that not only summarizes the key arguments but also indicates future research directions.


Survey Paper Outline

The following is a general outline of a survey paper.

  • Introduction - with background information on the topic and research questions
  • Literature Overview - including relevant research studies and their analysis
  • Methodologies and Approaches - detailing the methods used to collect and analyze data in the literature overview
  • Findings and Trends - summarizing the key findings and trends from the literature review
  • Challenges and Gaps - highlighting the limitations of studies reviewed
  • Future Research Direction - exploring future research opportunities and recommendations
  • Conclusion - a summary of the research conducted and its significance, along with suggestions for further work in this area.
  • References - a list of all the sources cited in the paper, including academic articles and reports.

You can always customize this outline to fit your paper's specific requirements, but none of the components can be eliminated.

Further, we can explore survey paper example formats to get a better understanding of what a well-written survey paper looks like. Our custom essay writer can assist in crafting a plagiarism-free essay tailored to meet your unique needs.

Survey Paper Format

Having a basic understanding of an outline for a survey paper is just the beginning. To excel in survey paper writing, it's important to become proficient in academic essay formatting techniques. Have the following as a rule of thumb: make sure each section relates to the others and that the flow of your paper is logical and readable.

Title - You need to come up with a clear and concise title that reflects the main objective of your research question.

Survey paper example title: 'The Analysis of Recommender Systems in E-commerce.'

Abstract - Here, you should state the purpose of your research and summarize key findings in a brief paragraph. The abstract is a shortcut to the paper, so make sure it's informative.

Introduction - This section is a crucial element of an academic essay and should be intriguing and provide background information on the topic, feeding the readers' curiosity.

Literature with benefits and limitations - This section dives into the existing literature on the research question, including relevant studies and their analyses. When reviewing the literature, it is important to highlight both benefits and limitations of existing studies to identify gaps for future research.

Result analysis - In this section, you should present and analyze the results of your survey paper. Make sure to include statistical data, graphs, and charts to support your conclusions.

Conclusion - Just like in any other thesis writing, here you need to sum up the key findings of your survey paper: how it helped advance the research topic, what limitations need to be addressed, and the important implications for future research.

Future Research Direction - You can either give this a separate section or include it in a conclusion, but you can never overlook the importance of a future research direction. Distinctly point out areas of limitations and suggest possible avenues for future research.

References - Finally, be sure to include a list of all the sources/references you've used in your research. Without a list of references, your work will lose all its credibility and can no longer be beneficial to other researchers.

Writing a Good Survey Paper: Helpful Tips

After mastering the basics of how to write a good survey paper, there are a few tips to keep in mind. They will help you advance your writing and ensure your survey paper stands out among others.


Select Only Relevant Literature

When conducting research, one can easily get carried away and start hoarding all available literature, which may not necessarily be relevant to your research question. Make sure to stay within the scope of your topic. Clearly articulate your research question, and then select only literature that directly addresses it. A few initial readings might not reveal relevance, so systematically review and filter the literature until only work directly related to the research question remains.

Use Various Sources and Be Up-to-Date

Our team suggests using only up-to-date material published within the last five years. Additional sources may be used if they contribute significantly to the research question, but it is important to prioritize current literature.

Use more than 10 research papers. Though narrowing your pool of references to only relevant literature is important, it's also crucial that you have a sufficient number of sources.

Rely on Reputable Sources

Writing a survey paper is a challenge. Don't forget that it is quality over quantity. Be sure to choose reputable sources that have been peer-reviewed and are recognized within your field of research. Having a large number of various research papers does not mean that your survey paper is of high quality.

Construct a Concise Research Question

Having a short and to-the-point research question not only helps the audience understand the direction of your paper but also helps you stay focused on a clear goal. With a clear research question, you will have an easier time selecting the relevant literature, avoiding unnecessary information, and maintaining the structure of your paper.

Use an Appropriate Format

The scholarly world appreciates when researchers follow a standard format when presenting their survey papers. Therefore, it is important to use a suitable and consistent format that adheres to the guidelines provided by your academic institution or field.

Our paper survey template offers a clear structure that can aid in organizing your thoughts and sources, as well as ensuring that you cover all the necessary components of a survey paper.

Don't forget to use appropriate headings, fonts, spacing, margins, and referencing style. If there is a strict word limit, be sure to adhere to it and use concise wording.

Use Logical Sequence

A survey paper is different from a regular research paper. Every element of the essay needs to relate to the research question and tie into the overall objective of the paper.

Writing research papers takes a lot of effort and attention to detail. You will have to revise, edit, and proofread your work several times. If you are struggling with any aspect of the writing process, just say, 'Write my research paper for me,' and our team of tireless writers will be happy to assist you.

Starting Point: Survey Paper Example Topics

Learning how to write a survey paper is important, but it is only one aspect of the process.

Now you need a powerful research question. To help get you started, we have compiled a list of survey paper example topics that may inspire you.

  • Survey of Evolution and Challenges of Electronic Search Engines
  • A Comprehensive Survey Paper on Machine Learning Algorithms
  • Survey of Leaf Image Analysis for Plant Species Recognition
  • Advances in Natural Language Processing for Sentiment Analysis
  • Emerging Trends in Cybersecurity Threat Detection
  • A Comprehensive Survey of Techniques in Big Data Analytics in Healthcare
  • A Survey of Advances in Digital Art and Virtual Reality
  • A Systematic Review of the Impact of Social Media Marketing Strategies on Consumer Behavior
  • A Survey of AI Systems in Artistic Expression
  • Exploring New Research Methods and Ethical Considerations in Anthropology
  • Exploring Data-driven Approaches for Performance Analysis and Decision Making in Sports
  • A Survey of Benefits of Optimizing Performance through Diet and Supplementation
  • A Critical Review of Existing Research on The Impact of Climate Change on Biodiversity Conservation Strategies
  • Investigating the Future of Blockchain Technology for Secure Data Sharing
  • A Critical Review of the Literature on Mental Health and Innovation in the Workplace

Final Thoughts

Next time you are asked to write a survey paper, remember it is not just following an iterative process of gathering and summarizing existing research; it requires a deep understanding of the subject matter as well as critical analysis skills. Creative thinking and innovative approaches also play a key role in producing high-quality survey papers.

Our expert writers can help you navigate the complex process of writing a survey paper, from topic selection to data analysis and interpretation.


Are you looking for advice on how to create an engaging and informative survey paper? This frequently asked questions (FAQ) section offers valuable responses to common inquiries that researchers frequently come across when writing a survey paper. Let's delve into it!

What is a Survey Paper in a Ph.D.?

What is the Difference Between a Survey Paper and a Literature Review Paper?


  • Research article
  • Open access
  • Published: 12 September 2013

A survey study of the association between mobile phone use and daytime sleepiness in California high school students

  • Nila Nathan 1 &
  • Jamie Zeitzer 2, 3

BMC Public Health volume 13, Article number: 840 (2013)


Background

Mobile phone use is near ubiquitous in teenagers. Paralleling the rise in mobile phone use is an equally rapid decline in the amount of time teenagers are spending asleep at night. Prior research indicates that there might be a relationship between daytime sleepiness and nocturnal mobile phone use in teenagers in a variety of countries. As such, the aim of this study was to see if there was an association between mobile phone use, especially at night, and sleepiness in a group of U.S. teenagers.

Methods

A questionnaire containing an Epworth Sleepiness Scale (ESS) modified for use in teens and questions about qualitative and quantitative use of the mobile phone was completed by students attending Mountain View High School in Mountain View, California (n = 211).

Results

Multivariate regression analysis indicated that ESS score was significantly associated with being female, feeling a need to be accessible by mobile phone all of the time, and a past attempt to reduce mobile phone use. The number of daily texts or phone calls was not directly associated with ESS. Those individuals who felt they needed to be accessible and those who had attempted to reduce mobile phone use were also ones who stayed up later to use the mobile phone and were awakened more often at night by the mobile phone.

Conclusions

The relationship between daytime sleepiness and mobile phone use was not directly related to the volume of texting but may be related to the temporal pattern of mobile phone use.


Background

Mobile phone use has drastically increased in recent years, fueled by new technology such as ‘smart phones’. In 2012, it was estimated that 78% of all Americans aged 12–17 years had a mobile phone and 37% had a smart phone [1]. Despite the growing number of adolescent mobile phone users, there has been limited examination of the behavioral effects of mobile phone usage on adolescents and their sleep and subsequent daytime sleepiness.

Mobile phone use in teens likely compounds the biological causes of sleep loss. With the onset of puberty, there are changes in innate circadian rhythms that lead to a delay in the habitual timing of sleep onset [2]. As school start times are not correspondingly later, this leads to a reduction in the time available for sleep and is consequently thought to contribute to the endemic sleepiness of teenagers. The use of mobile phones may compound this sleepiness by extending the waking hours further into the night. Munezawa and colleagues [3] analyzed 94,777 responses to questionnaires sent out to junior and senior high school students in Japan and found that the use of mobile phones for calling or sending text messages after they went to bed was associated with sleep disturbances such as short sleep duration, subjective poor sleep quality, excessive daytime sleepiness, and insomnia symptoms. Soderqvist et al., in their study of Swedish adolescents aged 15–19 years, found that regular users of mobile phones reported health symptoms such as tiredness, stress, headache, anxiety, concentration difficulties, and sleep disturbances more often than less frequent users [4]. Van den Bulck studied 1,656 school children in Belgium and found that prevalent mobile phone use in adolescents was related to increased levels of daytime tiredness [5]. Punamaki et al. studied Finnish teens and found that intensive mobile phone use led to more health complaints and musculoskeletal symptoms in girls, both directly and through deteriorated sleep, as well as increased daytime tiredness [6]. In one prospective study of young Swedish adults, aged 20–24, those who were high-volume mobile phone users and male, but not female, were at greater risk for developing sleep disturbances a year later [7]. The association of mobile phone utilization and either sleep or sleepiness in teens in the United States has only been described by a telephone poll. In the 2011 National Sleep Foundation poll, 20% of those under the age of 30 reported that they were awakened by a phone call, text, or e-mail message at least a few nights a week [8]. This type of nocturnal awakening was self-reported more frequently by those who also reported that they drove while drowsy.

As there has been limited examination of how mobile phone usage affects the behavior of young children and adolescents, none of which have addressed the effects of such usage on daytime sleepiness in U.S. teens, it seemed worthwhile to attempt a cross-sectional study of sleep and mobile phone utilization in a U.S. high school. As such, it was the purpose of this study to examine the association of mobile phone utilization and sleepiness patterns in a sample of U.S. teens. We hypothesized that an increased number of calls would be associated with increased sleepiness.

Methods

We designed a survey that contained questions concerning sleepiness and mobile phone use (see Additional file 1). Sleepiness was assessed using a version of the Epworth Sleepiness Scale (ESS) [9] modified for use in adolescents [10]. The modified ESS consists of eight questions that assessed the likelihood of dozing in the following circumstances: sitting and reading, watching TV, sitting inactive in a public place, riding as a passenger in a car for an hour without a break, lying down to rest in the afternoon when circumstances permit, sitting and talking to someone, sitting quietly after lunch, and sitting in a car while stopped for a few minutes in traffic. Responses were limited to a Likert-like scale using the following: no chance of dozing (0), slight chance of dozing (1), moderate chance of dozing (2), or high chance of dozing (3). This yielded total ESS scores ranging from 0 to 24, with scores over 10 being associated with clinically significant sleepiness [9]. We also included a set of modified questions, originally designed by Thomée et al., that assess the subjective impact of mobile phone use [7]. These included the number of mobile calls made or received each day, the number of texts made or received each day, being awakened by the mobile phone at night (never/occasionally/monthly/weekly/daily), staying up late to use the mobile phone (never/occasionally/monthly/weekly/daily), expectations of accessibility by mobile phone (never/occasionally/daily/all day/around-the-clock), stressfulness of accessibility (not at all/a little bit/rather/very), using the mobile phone too much (yes/no), and having tried and failed to reduce mobile phone use (yes/no).
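For illustration only (this is not the authors' code), the scoring just described reduces to summing eight 0–3 ratings into a 0–24 total. The sketch below uses a hypothetical respondent and the ESS ≥ 10 cutoff applied in the Results.

```python
# Minimal sketch of scoring the modified Epworth Sleepiness Scale (ESS):
# eight items, each rated 0-3, summed to a 0-24 total.

def ess_total(item_scores):
    """Sum eight 0-3 Likert ratings into an ESS total (0-24)."""
    assert len(item_scores) == 8, "the modified ESS has eight items"
    assert all(0 <= s <= 3 for s in item_scores), "items are rated 0-3"
    return sum(item_scores)

responses = [1, 2, 0, 3, 2, 0, 1, 2]   # one hypothetical respondent
total = ess_total(responses)
print(total, "excessively sleepy" if total >= 10 else "below cutoff")
```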

An email invitation to complete an electronic form of the survey ( http://www.surveymonkey.com ) was sent to the entire student body of Mountain View High School, located in Mountain View, California, USA, on April 5, 2012. Out of the approximately 2,000 students attending the school, a total of 211 responded by the collection date of April 23, 2012. Data analyses were performed as described below (OriginPro 8, OriginLab, Northampton, MA). Summary data are provided as mean ± SD for age and ESS and as median (range) for the number of texts and/or phone calls made or received per day, as these were non-normally distributed (p's < 0.001, Kolmogorov–Smirnov test). To examine the relationship between sleepiness and predictor variables, stepwise multivariate regression analyses were performed. Collinearity in the data was examined by calculating the variance inflation factor (VIF). Post hoc t-tests, ANOVA, Mann–Whitney U tests, and Spearman correlations were used, as appropriate, to examine specific components of the model and their relationship to sleepiness. χ² tests were used to examine categorical variables. The study was done within the regulations codified by the Declaration of Helsinki and approved by the administration of Mountain View High School.
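As a rough illustration of this analysis pipeline (an ordinary least squares model of ESS on several predictors, plus VIFs to check collinearity), here is a sketch using statsmodels. The DataFrame, column names, and values are hypothetical stand-ins, not the study's data or the authors' code.

```python
# Sketch: regress ESS on predictors and compute variance inflation factors.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# One row per respondent; all values invented for illustration
df = pd.DataFrame({
    "ess":        [6, 9, 4, 12, 7, 10, 5, 8],
    "female":     [1, 1, 0, 1, 0, 1, 0, 1],
    "texts_day":  [20, 60, 5, 150, 30, 80, 10, 40],
    "accessible": [2, 4, 1, 5, 2, 4, 1, 3],  # 1 = never ... 5 = around the clock
})

X = sm.add_constant(df[["female", "texts_day", "accessible"]])
model = sm.OLS(df["ess"], X).fit()
print(model.summary())

# VIF per predictor (skipping the constant); values well below ~5 suggest
# the predictors do not capture the same source of variance
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))
```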

Results

Sixty-eight males and 143 females responded to the survey. Most (96.7%) respondents owned a mobile phone. The remainder of the analyses presented herein is based on the 202 respondents (64 male, 138 female) who indicated that they owned a mobile phone (Tables 1 and 2). The youngest participant in the survey was 14 years old and the oldest was 19 years old (16 ± 1.2 years), representative of the age range of this school. The median number of mobile phone calls made or received per day was 2 and ranged from 0 to 60. The median number of text messages sent or received per day was 22.5 and ranged from 0 to 700. While about half of the respondents (53%) had never been awakened by the mobile phone at night, 35% were occasionally awakened, 5.9% were awakened a few times a month, 5.0% were awakened a few times a week, and 1.0% were awakened almost every night. About one-quarter (27%) of respondents had never stayed awake later than a target bedtime in order to use the mobile phone; however, 36% occasionally stayed awake, 19% stayed awake a few times a month, 8.5% stayed awake a few times a week, and 10% stayed awake almost every night in order to use the mobile phone. With regard to feeling an expectation of accessibility, 7.5% reported that they needed to be accessible around the clock, 26% reported that they needed to be accessible all day, 52% reported they needed to be accessible daily, 13% reported that they only needed to be accessible now and then, and 1.0% reported they never needed to be accessible. Nearly half (49%) of the survey participants viewed accessibility via mobile phones to be not at all stressful, 45% found it to be a little bit stressful, 4.5% found it rather stressful, and 1.0% found it very stressful. More than one-third (36%) reported that they or someone close to them thought that they used the mobile phone too much. Few (17%) had tried but were unable to reduce their mobile phone use.

Subjective sleepiness on the ESS ranged from 0 to 18 (6.8 ± 3.5, with higher numbers indicating greater sleepiness), with 25% of participants having ESS scores in the excessively sleepy range (ESS ≥ 10). We examined predictors of subjective sleepiness (ESS score) using stepwise multivariate regression analysis with the following independent variables: age, sex, frequency of nocturnal awakening by the phone, frequency of staying up too late to use the phone, self-perceived accessibility by phone, stressfulness of this accessibility, attempted and failed reduction of phone use, excessive phone use determined by others, number of texts per day, and number of phone calls per day. Only subjects with complete data sets were used in our modeling (n = 191 of 202). Our final model (Table 3) indicated that sex, frequency of accessibility, and a failed attempt to reduce mobile phone use were all predictive of daytime sleepiness (F(6,194) = 4.35, p < 0.001, r² = 0.12). These model variables lacked collinearity (VIFs < 3.9), indicating that they were not likely to represent the same source of variance. Despite the lack of significance in the multivariate model, given previously published data [4–6], we independently tested whether there was a relationship between the number of estimated texts and sleepiness, but found no such correlation (r = 0.13, p = 0.07; Spearman correlation). In examining the final model, it appears that those who felt that they needed to be accessible “around the clock” (ESS = 9.2 ± 2.9) were sleepier than all others (ESS = 6.7 ± 3.4) (p < 0.01, post hoc t-test). The relationship between sleepiness and reporting having tried, but failed, to reduce mobile phone use was such that those who had tried to reduce phone use were sleepier (ESS = 8.3 ± 3.6) than those who had not (ESS = 6.5 ± 3.4) (p < 0.01, post hoc t-test). While more females had tried to reduce their mobile phone use, sex did not modify the relationship between the attempt to reduce mobile phone use and sleepiness (p = 0.32, two-way ANOVA), thus retaining attempt and failure to reduce mobile phone use as an independent modifier of ESS scores.

In an attempt to better understand the relationship between ESS and accessibility, we parsed the population into those who felt that they needed to be accessible around the clock (7.4%) and those who did not (92.6%). The most accessible group, as compared to the less accessible group, had a numerically though not statistically significantly higher texting rate (50 vs. 20 per day; p = 0.07, Mann–Whitney U test), but were awakened more at night by the phone (27% vs. 4%, weekly or daily; p < 0.05, χ² test), and stayed awake later than desired more often (40% vs. 17%, weekly or daily; p < 0.05, χ² test). We did a similar analysis, parsing the population into those who had attempted but failed to reduce their use of their mobile phone (17%) and those who had not (83%). Those who had attempted to reduce their mobile phone use had a higher texting rate (60 vs. 20 per day; p < 0.01, Mann–Whitney U test) and stayed awake later than desired more often (53% vs. 11%, weekly or daily; p < 0.01, χ² test), but were not awakened more at night by the phone (12% vs. 5%, weekly or daily; p = 0.26, χ² test).
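These group comparisons pair a Mann–Whitney U test (for the skewed texting counts) with a χ² test (for the categorical awakening measure). The sketch below is a generic illustration with invented numbers, not the study's data.

```python
# Sketch of the group comparisons: Mann-Whitney U on a skewed count
# variable and chi-square on a 2x2 categorical table (all data invented).
from scipy.stats import mannwhitneyu, chi2_contingency

texts_accessible      = [50, 120, 30, 80, 200]   # "around the clock" group
texts_less_accessible = [20, 10, 25, 40, 5, 15]  # everyone else

u_stat, p_u = mannwhitneyu(texts_accessible, texts_less_accessible,
                           alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_u:.3f}")

# Rows: accessibility group; columns: awakened weekly/daily vs. not
table = [[4, 11],
         [7, 180]]
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi:.3f}")
```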

Discussion

Given previous research on the topic, our a priori hypothesis was that teenagers who use their phone more often at night are likely to be more prone to daytime sleepiness. We did not, however, observe this simple relationship in this sample of U.S. teens. We did find that being female, a perceived need to be accessible by mobile phone, and having tried but failed to reduce mobile phone usage were all predictive of daytime sleepiness, with the latter two likely being moderated by increased use of the phone at night. Previous work has shown that being female was associated with higher ESS scores [11]. It may be that adolescent females score higher on the ESS without being objectively sleepier, though this remains to be tested. Our analyses revealed that staying up late to use the mobile phone and being awakened by the mobile phone may be involved in the relationship between increased ESS scores and both the perceived need to be accessible by mobile phone and a past attempt to decrease mobile phone use. These analyses reveal some of the complexity of assessing daytime sleepiness, which is undoubtedly multifactorial. If the sheer number of text messages being sent per day is directly associated with daytime sleepiness, it is likely with a small effect size. Our work, of course, is not without its limitations. Data were collected from a sample of convenience at a single, public high school in California. Only 10% of students responded to the survey, and this may have introduced some response bias into the data. The data collected were cross-sectional; a longitudinal collection would have enabled a more precise analysis of moderators and mediators as well as a more accurate interpretation of causal relationships. Also, we did not objectively record the number of texts, so there may be a certain degree of bias or uncertainty associated with self-report of the number of texts and calls. Several variables that might influence sleepiness both directly and indirectly through mobile phone use (e.g., socioeconomic status, comorbid sleep disorders, medication use) were not assessed. Future studies on the impact of mobile phone use on sleep and sleepiness should take into account the multifactorial and temporal nature of these behaviors.

Conclusions

The endemic sleepiness found in adolescents is multifactorial, with both intrinsic and extrinsic factors. Mobile phone use has been assumed to be one source of increased daytime sleepiness in adolescents. Our analyses revealed that use, or perceived need of use, of the mobile phone during normal sleeping hours may contribute to daytime sleepiness. As the overall number of text messages did not significantly contribute to daytime sleepiness, it is possible that a temporal rearrangement of phone use (e.g., limiting phone use during prescribed sleeping hours) might help in alleviating some degree of daytime sleepiness.

Abbreviations

ESS: Epworth Sleepiness Scale; SD: standard deviation; ANOVA: analysis of variance

References

1. Madden M, Lenhart A, Duggan M, Cortesi S, Gasser U: Teens and Technology. 2013, http://www.pewinternet.org/Reports/2013/Teens-and-Tech/Summary-of-Findings.aspx

2. Crowley SJ, Acebo C, Carskadon MA: Sleep, circadian rhythms, and delayed phase in adolescence. Sleep Med. 2007, 8: 602-612. 10.1016/j.sleep.2006.12.002.

3. Munezawa T, Kaneita Y, Osaki Y, Kanda H, Minowa M, Suzuki K, Higuchi S, Mori J, Yamamoto R, Ohida T: The association between use of mobile phones after lights out and sleep disturbances among Japanese adolescents: a nationwide cross-sectional survey. Sleep. 2011, 34: 1013-1020.

4. Soderqvist F, Carlberg M, Hardell L: Use of wireless telephones and self-reported health symptoms: a population-based study among Swedish adolescents aged 15–19 years. Environ Health. 2008, 7: 18. 10.1186/1476-069X-7-18.

5. Van den Bulck J: Adolescent use of mobile phones for calling and for sending text messages after lights out: results from a prospective cohort study with a one-year follow-up. Sleep. 2007, 30: 1220-1223.

6. Punamaki RL, Wallenius M, Nygård CH, Saarni L, Rimpelä A: Use of information and communication technology (ICT) and perceived health in adolescence: the role of sleeping habits and waking-time tiredness. J Adolescence. 2007, 30: 95-103.

7. Thomée S, Harenstam A, Hagberg M: Mobile phone use and stress, sleep disturbances and symptoms of depression among young adults – a prospective cohort study. BMC Publ Health. 2011, 11: 66. 10.1186/1471-2458-11-66.

8. National Sleep Foundation: 2011 Sleep in America poll: communications technology use and sleep. 2011, http://www.sleepfoundation.org/article/sleep-america-polls/2011-communications-technology-use-and-sleep

9. Johns MW: A new method of measuring daytime sleepiness: the Epworth sleepiness scale. Sleep. 1991, 14: 540-545.

10. Melendres MC, Lutz JM, Rubin ED, Marcus CL: Daytime sleepiness and hyperactivity in children with suspected sleep-disordered breathing. Pediatrics. 2004, 114: 768-775. 10.1542/peds.2004-0730.

11. Gibson ES, Powles ACP, Thabane L, O’Brien S, Molnar DS, Trajanovic N, Ogilvie R, Shapiro C, Yan M, Chilcott-Tanser L: “Sleepiness” is serious in adolescence: two surveys of 3235 Canadian students. BMC Publ Health. 2006, 6: 116. 10.1186/1471-2458-6-116.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/13/840/prepub


Acknowledgements

The authors wish to thank the students of Mountain View High School (Mountain View, California) for participating in this study.

Author information

Authors and affiliations

Mountain View High School, 3535 Truman Avenue, Mountain View, CA 94040, USA

Nila Nathan

Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305, USA

Jamie Zeitzer

Mental Illness Research, Education, and Clinical Center, VA Palo Alto Health Care System, 3801 Miranda Avenue (151Y), Palo Alto, CA 94304, USA

Corresponding author

Correspondence to Jamie Zeitzer .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JMZ and NN designed the study, analyzed the data, and drafted the manuscript. Both authors have read and approved the final manuscript.

Electronic supplementary material

Additional file 1: Questionnaire. (DOC 34 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Nathan, N., Zeitzer, J. A survey study of the association between mobile phone use and daytime sleepiness in California high school students. BMC Public Health 13 , 840 (2013). https://doi.org/10.1186/1471-2458-13-840


Received : 10 November 2012

Accepted : 10 September 2013

Published : 12 September 2013

DOI : https://doi.org/10.1186/1471-2458-13-840


Keywords: sleep deprivation; mobile phone


AI Index Report

Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine.


The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.

The AI Index is recognized globally as one of the most credible and authoritative sources for data and insights on artificial intelligence. Previous editions have been cited in major newspapers, including The New York Times, Bloomberg, and The Guardian; have amassed hundreds of academic citations; and have been referenced by high-level policymakers in the United States, the United Kingdom, and the European Union, among other places. This year’s edition surpasses all previous ones in size, scale, and scope, reflecting the growing significance that AI is coming to hold in all of our lives.

Steering Committee Co-Directors

Jack Clark
Ray Perrault

Steering Committee Members

Erik Brynjolfsson
John Etchemendy
Katrina Ligett
Terah Lyons
James Manyika
Juan Carlos Niebles
Vanessa Parli
Yoav Shoham
Russell Wald

Staff Members

Loredana Fattorini
Nestor Maslej

Letter from the Co-Directors

A decade ago, the best AI systems in the world were unable to classify objects in images at a human level. AI struggled with language comprehension and could not solve math problems. Today, AI systems routinely exceed human performance on standard benchmarks.

Progress accelerated in 2023. New state-of-the-art systems like GPT-4, Gemini, and Claude 3 are impressively multimodal: They can generate fluent text in dozens of languages, process audio, and even explain memes. As AI has improved, it has increasingly forced its way into our lives. Companies are racing to build AI-based products, and AI is increasingly being used by the general public. But current AI technology still has significant problems. It cannot reliably deal with facts, perform complex reasoning, or explain its conclusions.

AI faces two interrelated futures. First, technology continues to improve and is increasingly used, having major consequences for productivity and employment. It can be put to both good and bad uses. In the second future, the adoption of AI is constrained by the limitations of the technology. Regardless of which future unfolds, governments are increasingly concerned. They are stepping in to encourage the upside, such as funding university R&D and incentivizing private investment. Governments are also aiming to manage the potential downsides, such as impacts on employment, privacy concerns, misinformation, and intellectual property rights.

As AI rapidly evolves, the AI Index aims to help the AI community, policymakers, business leaders, journalists, and the general public navigate this complex landscape. It provides ongoing, objective snapshots tracking several key areas: technical progress in AI capabilities, the community and investments driving AI development and deployment, public opinion on current and potential future impacts, and policy measures taken to stimulate AI innovation while managing its risks and challenges. By comprehensively monitoring the AI ecosystem, the Index serves as an important resource for understanding this transformative technological force.

On the technical front, this year’s AI Index reports that the number of new large language models released worldwide in 2023 doubled over the previous year. Two-thirds were open-source, but the highest-performing models came from industry players with closed systems. Gemini Ultra became the first LLM to reach human-level performance on the Massive Multitask Language Understanding (MMLU) benchmark; performance on the benchmark has improved by 15 percentage points since last year. Additionally, GPT-4 achieved an impressive 0.97 mean win rate score on the comprehensive Holistic Evaluation of Language Models (HELM) benchmark, which includes MMLU among other evaluations.

Although global private investment in AI decreased for the second consecutive year, investment in generative AI skyrocketed. More Fortune 500 earnings calls mentioned AI than ever before, and new studies show that AI tangibly boosts worker productivity. On the policymaking front, global mentions of AI in legislative proceedings have never been higher. U.S. regulators passed more AI-related regulations in 2023 than ever before. Still, many expressed concerns about AI’s ability to generate deepfakes and impact elections. The public became more aware of AI, and studies suggest that they responded with nervousness.

Ray Perrault, Co-director, AI Index



What does friendship look like in America?  

Friends enjoy a birthday picnic in East Meadow, New York. (Steve Pfost/Newsday RM via Getty Images)

Americans place a lot of importance on friendship. In fact, 61% of U.S. adults say having close friends is extremely or very important for people to live a fulfilling life, according to a recent Pew Research Center survey. This is far higher than the shares who say the same about being married (23%), having children (26%) or having a lot of money (24%).

Pew Research Center conducted this analysis to understand Americans’ views of and experiences with friendship. It is based on a survey of 5,057 U.S. adults conducted from July 17-23, 2023. Everyone who took part in this survey is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology.
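Weighting of this kind is commonly implemented by raking (iterative proportional fitting), which nudges respondent weights until the weighted sample margins match population targets. The sketch below is a generic illustration with two hypothetical margins; Pew's actual procedure involves many more dimensions and is not reproduced here.

```python
# Generic raking sketch: iteratively adjust weights so weighted sample
# margins match hypothetical population targets (not Pew's actual code).
import pandas as pd

sample = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "educ":   ["BA", "HS", "HS", "BA", "BA", "HS", "HS", "BA"],
})
sample["weight"] = 1.0

targets = {                       # hypothetical population proportions
    "gender": {"F": 0.52, "M": 0.48},
    "educ":   {"HS": 0.60, "BA": 0.40},
}

for _ in range(25):               # iterate until the margins converge
    for var, dist in targets.items():
        for level, share in dist.items():
            mask = sample[var] == level
            current = sample.loc[mask, "weight"].sum() / sample["weight"].sum()
            sample.loc[mask, "weight"] *= share / current

# Weighted margins now approximate the targets
print(sample.groupby("gender")["weight"].sum() / sample["weight"].sum())
print(sample.groupby("educ")["weight"].sum() / sample["weight"].sum())
```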

Here are the questions used for the analysis, along with responses, and its methodology.

We decided to ask a few more questions to better understand how Americans are experiencing friendship today. Here’s what we found:  

Number of close friends

A bar chart showing that 8% of Americans say they have no close friends; 38% report 5 or more.

A narrow majority of adults (53%) say they have between one and four close friends, while a significant share (38%) say they have five or more. Some 8% say they have no close friends.

There’s an age divide in the number of close friends people have. About half of adults 65 and older (49%) say they have five or more close friends, compared with 40% of those 50 to 64, 34% of those 30 to 49 and 32% of those younger than 30. In turn, adults under 50 are more likely than their older counterparts to say they have between one and four close friends.

There are only modest differences in the number of close friendships men and women have. Half of men and 55% of women say they have between one and four close friends. And 40% of men and 36% of women say they have five or more close friends.

Gender of friends

Most adults (66%) say all or most of their close friends are the same gender as them. Women are more likely to say this than men (71% vs. 61%).

Among adults ages 50 and older, 74% of women – compared with 59% of men – say all or most of their close friends are the same gender as them. Among adults younger than 50, the difference is much smaller: 67% of women in this age group say this, as do 63% of men.

Race and ethnicity of friends

A bar chart that shows a majority of U.S. adults say most of their close friends share their race or ethnicity.

A majority of adults (63%) say all or most of their close friends are the same race or ethnicity as them – though this varies across racial and ethnic groups.

White adults (70%) are more likely than Black (62%), Hispanic (47%) and Asian adults (52%) to say this.

This also differs by age. Adults 65 and older are the most likely (70%) to say all or most of their close friends share their race or ethnicity, compared with 53% of adults under 30 – the lowest share among any age group.

Satisfaction with friendships

The majority of Americans with at least one close friend (72%) say they are either completely or very satisfied with the quality of their friendships. Those 50 and older are more likely than their younger counterparts to be highly satisfied with their friendships (77% vs. 67%).

The survey also finds that having more friends is linked to being more satisfied with those friendships. Some 81% of those with five or more close friends say they are completely or very satisfied with their friendships. By comparison, 65% of those with one to four close friends say the same.

The survey didn’t ask adults who reported having no close friends about their level of satisfaction with their friendships.

What do friends talk about?

Of the conversation topics asked about, the most common are work and family life. Among those with at least one close friend, 58% say work comes up in conversation extremely often or often, while 57% say family comes up this often. About half say the same about current events (48%).

A dot plot showing that work and family are some of the most popular conversation topics among close friends in the U.S.

There are differences by gender and age in the subjects that Americans discuss with their close friends:

Differences by gender

Women are much more likely than men to say they talk to their close friends about their family extremely often or often (67% vs. 47%).

Women also report talking about their physical health (41% vs. 31%) and mental health (31% vs. 15%) more often than men do with close friends. The gender gap on mental health is particularly wide among adults younger than 50: 43% of women in this age group, compared with 20% of men, say they often discuss this topic with close friends.

By smaller but still significant margins, women are also more likely than men to talk often about their work (61% vs. 54%) and pop culture (37% vs. 32%) with their close friends.

Men, in turn, are more likely than women to say they talk with their close friends about sports (37% vs. 13%) and current events (53% vs. 44%).

Differences by age

Those ages 65 and older (45%) are more likely than younger Americans to say they often talk with their close friends about their physical health.

There are two topics where young adults – those under 30 – stand out from other age groups.

About half of these young adults (52%) say they often talk with their friends about pop culture. This compares with about a third or fewer among older age groups. And young adults are more likely to say they often talk about their mental health with close friends: 37% say this, compared with 29% of those 30 to 49 and 14% of those 50 and older.



Isabel Goddard is a research associate focusing on social trends at Pew Research Center.



COMMENTS

  1. Survey Research

    Survey research uses a list of questions to collect data about a group of people. You can conduct surveys online, by mail, or in person. ... Instead, you will usually survey a sample from the population. ... Sending out a paper survey by mail is a common method of gathering demographic information (for example, in a government census of the ...

  2. High-Impact Articles

    High-Impact Articles. Journal of Survey Statistics and Methodology, sponsored by the American Association for Public Opinion Research and the American Statistical Association, began publishing in 2013.Its objective is to publish cutting edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data.

  3. Doing Survey Research

    Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout. Distribute the survey.

  4. PDF Survey Research

    This chapter describes a research methodology that we believe has much to offer social psychologists in- terested in a multimethod approach: survey research. Survey research is a specific type of field study that in- volves the collection of data from a sample of ele- ments (e.g., adult women) drawn from a well-defined

  5. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" ( Check & Schutt, 2012, p. 160 ). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative ...

  6. Journal of Survey Statistics and Methodology

    Why Submit to JSSAM?. The Journal of Survey Statistics and Methodology is an international, high-impact journal sponsored by the American Association for Public Opinion Research (AAPOR) and the American Statistical Association.Published since 2013, the journal has quickly become a trusted source for a wide range of high quality research in the field.

  7. Survey Research

    Types of survey research. Survey research is like an artist's palette, offering a variety of types to suit your unique research needs. Each type paints a different picture, giving us fascinating insights into the world around us. Cross-Sectional Surveys: Capture a snapshot of a population at a specific moment in time.

  8. Survey Research: Definition, Examples and Methods

    Survey Research Definition. Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization's eager to understand what their customers think ...

  9. A Comprehensive Guide to Survey Research Methodologies

    A survey is a research method that is used to collect data from a group of respondents in order to gain insights and information regarding a particular subject. It's an excellent method to gather opinions and understand how and why people feel a certain way about different situations and contexts. ‍.

  10. Sample Survey

    The goal is to ensure the sample is (a) representative of the population and (b) that the survey findings from the sample can be generalized back to the theoretical population from which the sample was drawn. There are two major types of sampling methods: random/probability sampling and non-probability sampling.

  11. PDF How to Run Surveys: A guide to creating your own identifying variation

    The Appendix contains useful supplementary materials, including reviews of many papers relevant to each section. 2 Sample 2.1 Types of samples The first question is what kind of sample you need for your research question. A nationally representative sample can be valuable in many settings, while a more targeted sample, e.g., one obtained by ...

  12. How to write a Survey Paper

    Instead, a survey is a research paper whose data and results are taken from other papers. This means that you should have a point to make or some new conclusion to draw. You'll make this point or draw these conclusions based upon a broad reading of previous works. You need to really know the topic in order to have the audacity to claim that a ...

  13. Guide: Conducting Survey Research

    Conducting Survey Research. Surveys represent one of the most common types of quantitative, social science research. In survey research, the researcher selects a sample of respondents from a population and administers a standardized questionnaire to them. The questionnaire, or survey, can be a written document that is completed by the person ...

  14. Reporting Survey Based Studies

    Abstract. The coronavirus disease 2019 (COVID-19) pandemic has led to a massive rise in survey-based research. The paucity of perspicuous guidelines for conducting surveys may pose a challenge to the conduct of ethical, valid and meticulous research. The aim of this paper is to guide authors aiming to publish in scholarly journals regarding the ...

  15. Questionnaire Design

    Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  16. (PDF) Questionnaires and Surveys

    The paper presents the results of a research study that was held in a girls' primary church school in Malta, with students from Grades 1, 3 and 6 participating in either an analogy group ...

  17. How to Write a Literature Review

    A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic. There are five key steps to writing a literature review:

  18. PDF Survey Research

    cost. Survey data can be collected from many people at relatively low cost and, depending on the survey design, relatively quickly. Survey methods lend themselves to probability sampling from large populations. Thus, survey . research is very appealing when . sample generalizability. is a central research goal. In fact, survey research

  19. Survey Research: An Effective Design for Conducting Nursing Research

    An important advantage of survey research is its flexibility. Surveys can be used to conduct large national studies or to query small groups. Surveys can be made up of a few unstructured questions or can involve a large-scale, multisite longitudinal study with multiple highly validated questionnaires. Regardless of the study's degree of sophistication and rigor, nurses must understand how to ...

  20. A Survey of U.S Adults' Opinions about Conduct of a Nationwide ...

    Objectives A survey of a population-based sample of U.S adults was conducted to measure their attitudes about, and inform the design of the Precision Medicine Initiative's planned national cohort study. Methods An online survey was conducted by GfK between May and June of 2015. The influence of different consent models on willingness to share data was examined by randomizing participants to ...

  21. Survey research: we can do better

    Survey research is a commonly employed methodology in library and information science and the most frequently used research technique in papers published in the Journal of the Medical Library Association (JMLA) [].Unfortunately, very few of the survey reports that the JMLA receives provide sufficiently sound evidence to qualify as full-length JMLA research papers.

  22. How to Write a Survey Paper: Best Guide and Practices

    A survey paper is different from a regular research paper. Every element of the essay needs to relate to the research question and tie into the overall objective of the paper. Writing research papers takes a lot of effort and attention to detail. You will have to revise, edit and proofread your work several times.

  23. A survey study of the association between mobile phone use and daytime

    Sixty-eight males and 143 females responded to the survey. Most (96.7%) respondents owned a mobile phone. The remainder of the analyses presented herein is on the 202 respondents (64 male, 138 female) who indicated that they owned a mobile phone (Tables 1 and 2).The youngest participant in the survey was 14 years old and the oldest was 19 years old (16 ± 1.2 years), representative of the age ...

  24. U.S. Surveys

    Pew Research Center has deep roots in U.S. public opinion research. Launched initially as a project focused primarily on U.S. policy and politics in the early 1990s, the Center has grown over time to study a wide range of topics vital to explaining America to itself and to the world.Our hallmarks: a rigorous approach to methodological quality, complete transparency as to our methods, and a ...

  25. AI Index Report

    Mission. The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.

  26. Gender pay gap remained stable over past 20 years in US

    The gender gap in pay has remained relatively stable in the United States over the past 20 years or so. In 2022, women earned an average of 82% of what men earned, according to a new Pew Research Center analysis of median hourly earnings of both full- and part-time workers. These results are similar to where the pay gap stood in 2002, when women earned 80% as much as men.

  27. Use of ChatGPT for schoolwork among US teens

    Pew Research Center conducted this analysis to understand American teens' use and understanding of ChatGPT in the school setting. The Center conducted an online survey of 1,453 U.S. teens from Sept. 26 to Oct. 23, 2023, via Ipsos. Ipsos recruited the teens via their parents, who were part of its KnowledgePanel. The KnowledgePanel is a ...

  28. How many close friends do Americans have?

    Americans place a lot of importance on friendship. In fact, 61% of U.S. adults say having close friends is extremely or very important for people to live a fulfilling life, according to a recent Pew Research Center survey. This is far higher than the shares who say the same about being married (23%), having children (26%) or having a lot of ...