
Research Article | Open Access | Peer-reviewed

Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices

Carol Bennett, Sara Khangura, Jamie C. Brehaut, Ian D. Graham, David Moher, Beth K. Potter, Jeremy M. Grimshaw

Affiliations: Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada; Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada; Canadian Institutes of Health Research, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada

E-mail: [email protected]

PLoS Medicine | Published: August 2, 2011 | https://doi.org/10.1371/journal.pmed.1001069

Background

Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.

Methods and Findings

We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).

Conclusions

There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.

Please see later in the article for the Editors' Summary

Citation: Bennett C, Khangura S, Brehaut JC, Graham ID, Moher D, Potter BK, et al. (2011) Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices. PLoS Med 8(8): e1001069. https://doi.org/10.1371/journal.pmed.1001069

Academic Editor: Rachel Jewkes, Medical Research Council, South Africa

Received: December 23, 2010; Accepted: June 17, 2011; Published: August 2, 2011

Copyright: © 2011 Bennett et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: Funding, in the form of salary support, was provided by the Canadian Institutes of Health Research [MGC – 42668]. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Editors' Summary

Surveys, or questionnaires, are an essential component of many types of research, including health research. They usually gather information by asking a sample of people questions on a specific topic and then generalizing the results to a larger population. Surveys are especially important when addressing topics that are difficult to assess using other approaches, and they usually rely on self-report, for example of behaviors (such as eating habits), satisfaction, beliefs, knowledge, attitudes, and opinions. However, the methods used in conducting survey research can significantly affect the reliability, validity, and generalizability of study results, and without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics and therefore to have confidence in the findings.

Why Was This Study Done?

Uncertainty of this kind in other forms of research has given rise to reporting guidelines: evidence-based, validated tools that aim to improve the reporting quality of health research. The STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement covers cross-sectional studies, which often involve surveys. But not all surveys are epidemiological, and STROBE does not include Methods and Results reporting characteristics that are unique to surveys. Therefore, the researchers conducted this study to help determine whether there is a need for a reporting guideline for health survey research.

What Did the Researchers Do and Find?

The researchers identified any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research, by: reviewing current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature; conducting a systematic review of evidence on the quality of reporting of surveys; identifying key quality criteria for the conduct of survey research; and finally, reviewing how these criteria are currently reported by conducting a review of recently published reports of self-administered surveys.

The researchers found that 154 of the 165 journals searched (93.3%) did not provide any guidance on survey reporting, even though the majority (81.8%) had published survey research. Only three of the 11 journals that provided some guidance gave more than one directive or statement. Five papers and one Internet site provided guidance on the reporting of survey research, but none used validated measures or explicit methods for development. The researchers identified eight papers that addressed the quality of reporting of some aspect of survey research: the reporting of response rates; the reporting of non-response analyses in survey research; and the degree to which authors make their survey instrument available to readers. In their review of 117 published survey studies, the researchers found that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). Furthermore, three-quarters (88 [75%]) of the papers did not include any information on consent procedures for research participants, and one-third (40 [34%]) did not report whether the study had received research ethics board review.

What Do These Findings Mean?

Overall, these results show that guidance is limited and consensus lacking about the optimal reporting of survey research, and they highlight the need for a well-developed reporting guideline specifically for survey research—possibly an extension of the guideline for observational studies in epidemiology (STROBE)—that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.

Additional Information

Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001069.

  • More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Network's web site
  • More information about STROBE is available on the STROBE Statement web site

Introduction

Surveys are a research method by which information is typically gathered by asking a subset of people questions on a specific topic and generalising the results to a larger population [1] , [2] . They are an essential component of many types of research including public opinion, politics, health, and others. Surveys are especially important when addressing topics that are difficult to assess using other approaches (e.g., in studies assessing constructs that require individual self-report about beliefs, knowledge, attitudes, opinions, or satisfaction). However, there is substantial literature to show that the methods used in conducting survey research can significantly affect the reliability, validity, and generalisability of study results [3] , [4] . Without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics.

Reporting guidelines are evidence-based, validated tools that employ expert consensus to specify minimum criteria for authors to report their research such that readers can critically appraise and interpret study findings [5] – [7] . More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Network's website ( www.equator-network.org ). There is increasing evidence that reporting guidelines are achieving their aim of improving the quality of reporting of health research [8] – [11] .

Given the growth in the number and range of reporting guidelines, the need for guidance on how to develop a guideline has been addressed [7] . A well-structured development process for reporting guidelines includes a review of the literature to determine whether a reporting guideline already exists (i.e., a needs assessment) [7] . The needs assessment should also include a search for evidence on the quality of reporting of published research in the domain of interest [7] .

The series of studies reported here was conducted to help determine whether there is a need for survey research reporting guidelines. We sought to identify any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research. The objectives of our study were:

  • to identify current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature;
  • to conduct a systematic review of evidence on the quality of reporting of surveys; and
  • to identify key quality criteria for the conduct of survey research and to review how they are being reported through a review of recently published reports of self-administered surveys.

Part 1: Identification of Current Guidance for Survey Research

Identifying guidance in “Instructions to Authors” sections in peer-reviewed journals.

Using a strategy originally developed by Altman [12] to assess endorsement of CONSORT by top medical journals, we identified the top five journals from each of 33 medical specialties, and the top 15 journals from the general and internal medicine category, using Web of Science citation impact factors (list of journals available on request). The final sample consisted of 165 unique journals (15 appeared in more than one specialty).

We reviewed each journal's “Instructions to Authors” web pages as well as related PDF documents between January 12 and February 9, 2009. We used the “find” features of the Firefox web browser and Adobe Reader software to identify the following search terms: survey, questionnaire, response, response rate, respond, and non-responder. Web pages were hand searched for statements relevant to survey research. We also conducted an electronic search (MEDLINE 1950 – February Week 1, 2009; terms: survey, questionnaire) to identify whether the journals have published survey research.

Any relevant text was summarized by journal into categories: “No guidance” (survey related term found; however, no reporting guidance provided); “One directive” (survey related term(s) found that included one brief statement, directive or reference(s) relevant to reporting survey research); and “Guidance” (survey related term(s) including more than one statement, instruction and/or directive relevant to reporting survey research). Coding was carried out by one coder (SK) and verified by a second coder (CB).

Identifying published survey reporting guidelines.

MEDLINE (1950 – April Week 1, 2011) and PsycINFO (1806 – April Week 1, 2011) electronic databases were searched via Ovid to identify relevant citations. The MEDLINE electronic search strategy ( Text S1 ), developed by an information specialist, was modified as required for the PsycINFO database. For all papers meeting eligibility criteria, we hand-searched the reference lists and used the “Related Articles” feature in PubMed. Additionally, we reviewed relevant textbooks and web sites. Two reviewers (SK, CB) independently screened titles and abstracts of all unique citations to identify English language papers and resources that provided explicit guidance on the reporting of survey research. Full-text reports of all records passing the title/abstract screen were retrieved and independently reviewed by two members of the research team; there were no disagreements regarding study inclusion and all eligible records passing this stage of screening were included in this review. One researcher (CB) undertook a thematic analysis of identified guidance (e.g., sample selection, response rate, background, etc.), which was subsequently reviewed by all members of the research team. Data were summarized as frequencies.

Part 2: Systematic Review of Published Studies on the Quality of Survey Reporting

The results of the above search strategy ( Text S1 ) were also screened by the two reviewers to identify publications providing evidence on the quality of reporting of survey research in the health science literature. We identified the aspects of reporting survey research that were addressed in these evaluative studies and summarized their results descriptively.

Part 3: Assessment of Quality of Survey Reporting

The results from Part 1 and Part 2 identified items critical to reporting survey research and were used to inform the development of a data abstraction tool. Thirty-two items were deemed most critical to the reporting of survey research on that basis. These were compiled and categorized into a draft data abstraction tool that was reviewed and modified by all the authors, who have expertise in research methodology and survey research. The resulting draft data abstraction instrument was piloted by two researchers (CB, SK) on a convenience sample of survey articles identified by the authors. Items were added and removed and the wording was refined and edited through discussion and consensus among the coauthors. The revised final data abstraction tool ( Table S1 ) comprised 33 items.

Aiming for a minimum sample size of 100 studies, we searched the top 15 journals (by impact factor) from each of four broad areas of health research: health science, public health, general/internal medicine, and medical informatics. These categories, identified through Web of Science, were known to publish survey research and covered a broad range of the biomedical literature. An Ovid MEDLINE search of these 57 journals (three were included in more than one topic area) included Medical Subject Heading (MeSH) terms (“Questionnaires,” “Data Collection,” and “Health Surveys”) and keyword terms (“survey” and “questionnaire”). The search was limited to studies published between January 2008 and February 2009.

We defined a survey as a research method by which information is gathered by asking people questions on a specific topic and the data collection procedure is standardized and well defined. The information is gathered from a subset of the population of interest with the intent of generating summary statistics that are generalisable to the larger population [1] , [2] .

Two reviewers (CB, SK) independently screened all citations (title and abstract) to determine whether the study used a survey instrument consistent with our definition. The same reviewers screened all full-text articles of citations meeting our inclusion criteria, and those whose eligibility remained unclear. We included all primary reports of self-administered surveys, excluding secondary analyses, longitudinal studies, or surveys that were administered openly through the web (i.e., studies that lacked a clearly defined sampling frame). Duplicate data extraction was completed by the two reviewers. Inconsistencies were resolved by discussion and consensus.

Part 1: Identification of Current Guidance for Survey Research – “Instructions to Authors”

Of the 165 journals searched, 154 (93.3%) did not provide any guidance on survey reporting. Of these 154, 126 (81.8%) have published survey research, while 28 have not. Of the 11 journals providing some guidance, eight provided a brief phrase, statement of guidance, or reference; and three provided more substantive guidance, including more than one directive or statement. Examples are provided in Table 1 . Although no reporting guidelines for survey research were identified, several journals referenced the EQUATOR Network's web site. The EQUATOR Network includes two papers relevant to reporting survey research [13] , [14] .

Table 1: https://doi.org/10.1371/journal.pmed.1001069.t001

The EQUATOR Network also links to the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement (www.strobe-statement.org). Although the STROBE Statement includes cross-sectional studies, a class of studies that subsumes surveys, not all surveys are epidemiological. Additionally, STROBE does not include Methods and Results reporting characteristics that are unique to surveys (Table S1).

Part 1: Identification of Current Guidance for Survey Research – Published Survey Reporting Guidelines

Our search identified 2,353 unique records (Figure 1), which were title-screened. One hundred sixty-four records were included in the abstract screen, from which 130 were excluded. The remaining 34 records were retrieved for full-text screening to determine eligibility. There was substantial agreement between reviewers across all screening phases (kappa = 0.73; 95% CI 0.69–0.77).

Figure 1: https://doi.org/10.1371/journal.pmed.1001069.g001
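To make the agreement statistic concrete, here is a minimal sketch of how Cohen's kappa can be computed for two screeners' include/exclude decisions. The decision lists are hypothetical (not the study's data), and the sketch omits the standard-error calculation behind the reported confidence interval.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters judging the same set of items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: proportion of items on which the raters concur.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginals.
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical screening decisions for six records:
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude", "include"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude", "include"]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # kappa = 0.67
```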

We identified five papers [13] – [17] and one internet site [18] that provided guidance on the reporting of survey research. None of these sources reported using valid measures or explicit methods for development. In all cases, in addition to more descriptive details, the guidance was presented in the form of a numbered or bulleted checklist. One checklist was excluded from our descriptive analysis as it was very specific to the reporting of internet surveys [16] . Two checklists were combined for analysis because one [14] was a slightly modified version of the other [17] .

Amongst the four checklists, 38 distinct reporting items were identified and grouped into eight broad themes: background, methods, sample selection, research tool, results, response rates, interpretation and discussion, and ethics and disclosure (Table 2). Only two items appeared in all four checklists: providing a description of the questionnaire instrument and describing the representativeness of the sample to the population of interest. Nine items appeared in three checklists, 17 items in two checklists, and 10 items in only one checklist.

Table 2: https://doi.org/10.1371/journal.pmed.1001069.t002

Part 2: Systematic Review of Published Studies on the Quality of Survey Reporting

Screening results are presented in Figure 1. Eight papers were identified that addressed the quality of reporting of some aspect of survey research. Five studies [19]–[23] addressed the reporting of response rates; three evaluated the reporting of non-response analyses in survey research [20], [21], [24]; and two assessed the degree to which authors make their survey instrument available to readers (Table 3) [25], [26].

Table 3: https://doi.org/10.1371/journal.pmed.1001069.t003

Part 3: Assessment of Quality of Survey Reporting from the Biomedical Literature

Our search identified 1,719 citations; 1,343 were excluded during title/abstract screening because those studies did not use a survey instrument as their primary research tool. Three hundred seventy-six citations were retrieved for full-text review. Of those, 259 did not meet our eligibility criteria; reasons for their exclusion are reported in Figure 2. The remaining 117 articles, reporting results from self-administered surveys, were retained for data abstraction.

Figure 2: https://doi.org/10.1371/journal.pmed.1001069.g002

The 117 articles were published in 34 different journals: 12 journals from health science, 7 from medical informatics, 10 from general/internal medicine, and 8 from public health (Table S2). The median number of pages per study was 8 (range 3–26). Of the 33 items that were assessed using our data abstraction form, the median number of items reported was 18 (range 11–25).

Reporting Characteristics: Title, Abstract, and Introduction

The majority (113 [97%]) of articles used the term “survey” or “questionnaire” in the title or abstract; four articles did not use a term to indicate that the study was a survey. While all of the articles presented a background to their research, 17 (15%) did not identify a specific purpose, aim, goal, or objective of the study (Table 4).

Table 4: https://doi.org/10.1371/journal.pmed.1001069.t004

Reporting Characteristics: Methods

Approximately one-third (40 [34%]) of survey research reports did not provide access to the questionnaire items used in the study in either the article, appendices, or an online supplement. Of those studies that reported the use of existing survey questionnaires, the majority (40/52 [77%]) did not report the psychometric properties of the tool (although all but two did reference their sources). The majority of studies that developed a novel questionnaire (91/111 [82%]) failed to clearly describe the development process and/or did not describe the methods used to pre-test the tool; the majority (89/111 [80%]) also failed to report the reliability or validity of a newly developed survey instrument. For those papers which used survey instruments that required scoring (n = 95), 63 (66%) did not provide a description of the scoring procedures.

With respect to a description of sample selection methods, 104 (89%) studies did not describe the sample's representativeness of the population of interest. The majority (110 [94%]) of studies also did not present a sample size calculation or other justification of the sample size.
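For context, a common way to justify a survey sample size is the standard formula for estimating a proportion within a chosen margin of error, with a finite population correction when the sampling frame is small. A minimal sketch with invented inputs:

```python
import math

def sample_size_proportion(p=0.5, margin=0.05, z=1.96, population=None):
    """Sample size needed to estimate a proportion to within +/- margin
    at the confidence level implied by z (1.96 for ~95%)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        # Finite population correction for a small sampling frame.
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# e.g., surveying a hypothetical frame of 2,000 clinicians:
print(sample_size_proportion(p=0.5, margin=0.05, population=2000))  # 323
```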

There were 23 (20%) papers for which we could not determine the mode of survey administration (i.e., in-person, mail, internet, or a combination of these). Forty-one (35%) articles did not provide information on either the type (i.e. phone, e-mail, postal mail) or the number of contact attempts. For 102 (87%) papers, there was no description of who was identified as the organization/group soliciting potential research subjects for their participation in the survey.

Twelve (10%) papers failed to provide a description of the methods used to analyse the data (i.e., a description of the variables that were analysed, how they were manipulated, and the statistical methods used). However, for a further 55 (47%) studies, the data analysis would be a challenge to replicate based on the description provided in the research report. Very few studies provided methods for analysis of non-response error, calculating response rates, or handling missing item data (15 [13%], 5 [4%], and 13 [11%] respectively). The majority (112 [96%]) of the articles did not provide a definition or cut-off limit for partial completion of questionnaires.

Reporting Characteristics: Results

While the majority (89 [76%]) of papers provided a defined response rate, 28 studies (24%) failed to define the reported response rate (i.e., no information was provided on the definition of the rate or how it was calculated), provided only partial information (e.g., response rates were reported for only part of the data, or some information was reported but not a response rate), or provided no quantitative information regarding a response rate. The majority (104 [87%]) of studies did not report the sample disposition (i.e., describing the number of complete and partial returned questionnaires according to the number of potential participants known to be eligible, of unknown eligibility, or known to be ineligible). More than two-thirds (80 [68%]) of the reports provided no information on how non-respondents differed from respondents.
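To illustrate what a defined response rate and a reported sample disposition make possible, the sketch below derives two response rates from a hypothetical disposition, loosely in the spirit of AAPOR-style definitions; the counts are invented, and this is not the calculation used by any particular study in the review.

```python
# Hypothetical sample disposition for a mailed survey.
disposition = {
    "complete": 410,               # fully completed questionnaires
    "partial": 35,                 # partially completed questionnaires
    "refusal_or_noncontact": 240,  # eligible but no usable response
    "unknown_eligibility": 80,     # eligibility could not be determined
    "ineligible": 60,              # known ineligible; excluded from denominator
}

# All potentially eligible cases form the denominator.
denominator = (disposition["complete"] + disposition["partial"]
               + disposition["refusal_or_noncontact"]
               + disposition["unknown_eligibility"])

rr_completes = disposition["complete"] / denominator
rr_with_partials = (disposition["complete"] + disposition["partial"]) / denominator

print(f"Response rate (completes only): {rr_completes:.1%}")      # 53.6%
print(f"Response rate (incl. partials): {rr_with_partials:.1%}")  # 58.2%
```

Reporting the full disposition alongside whichever rate is quoted lets readers recompute the rate under their preferred definition.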

Reporting Characteristics: Discussion and Ethical Quality Indicators

While all of the articles summarized their results with regard to the objectives, and the majority (110 [94%]) described the limitations of their study, most (90 [77%]) did not outline the strengths of their study and 70 (60%) did not include any discussion of the generalisability of their results.

When considering the ethical quality indicators, reporting was varied. While three-quarters (86 [74%]) of the papers reported their source of funding, approximately the same proportion (88 [75%]) did not include any information on consent procedures for research participants. One-third (40 [34%]) of papers did not report whether the study had received research ethics board review.

Discussion

Our comprehensive review, undertaken to identify relevant guidance for survey research and evidence on the quality of reporting of surveys, substantiates the need for a reporting guideline for survey research. Overall, our results show that few medical journals provide guidance to authors regarding survey research. Furthermore, no validated guidelines for reporting surveys currently exist. Previous reviews of survey reporting quality and our own review of 117 published studies revealed that many criteria are poorly reported.

Surveys are common in health care research; we identified more than 117 primary reports of self-administered surveys in 34 high-impact factor journals over a one-year period. Despite this, the majority of these journals provided no guidance to authors for reporting survey research. This may stem, at least partly, from the fact that validated guidelines for survey research do not exist and that recommended quality criteria vary considerably. The recommended reporting criteria that we identified in the published literature are not mutually exclusive, and there is perhaps more overlap if one takes into account implicit and explicit considerations. Regardless of these limitations, the lack of clear guidance has contributed to inconsistency in the literature; both this work and that of others [19] – [26] shows that key survey quality characteristics are often under-reported.

Self-administered sample surveys are a type of observational study and for that reason they can fall within the scope of STROBE. However, there are methodological features relevant to sample surveys that need to be highlighted in greater detail. For example, surveys that use a probability sampling design do so in order to be able to generalise to a specific target population (many other types of observational research may have a more “infinite” target population); this emphasizes the importance of coverage error and non-response error – topics that have received attention in the survey literature. Thus, in our data abstraction tool, we placed emphasis on specific methodological details excluded from STROBE – such as non-response analysis, details of strategies used to increase response rates (e.g., multiple contacts, mode of contact of potential participants), and details of measurement methods (e.g., making the instrument available so that readers can consider questionnaire formatting, question framing, choice of response categories, etc.).

Consistent with previous work [25] , [26] , fully one-third of our sample failed to provide access to any survey questions used in the study. This poses challenges both for critical analysis of the studies and for future use of the tools, including replication in new settings. These challenges will be particularly apparent as the articles age and study authors become more difficult to contact [25] .

Assessing descriptions of the study population and sampling frame posed particular challenges in this study. It was often unclear whom the authors considered to be the population of interest. To standardise our assessment of this item, we used a clearly delineated definition of “survey population” and “sampling frame” [3], [27]. A survey reporting guideline could address this issue by clearly defining and distinguishing the terms “population” and “sampling frame.”

Our results regarding reporting of response rates and non-response analysis were similar to previously published studies [19] – [24] . In our sample, 24% of papers assessed did not provide a defined response rate and 68% did not provide results from non-response analysis. The wide variation in how response rates are reported in the literature is perhaps a historical reflection of the limited consensus or explicit journal policy for response rate reporting [22] , [28] , [29] . However, despite lack of explicit policies regarding acceptable standards for response rates or the reporting of response rates, journal editors are known to have implicit policies for acceptable response rates when considering the publication of surveys [17] , [22] , [29] , [30] . Given the concern regarding declining response rates to surveys [31] , there is a need to ensure that aspects of the survey's design and conduct are well reported so that reviewers can adequately assess the degree of bias that may be present and allay concerns over the representativeness of the survey population.

With regard to the ethical quality indicators, sources of study funding were often reported (74%) in this sample of articles. However, the reporting of research ethics board approval and subject consent procedures were reported far less often. In particular, the reporting of informed consent procedures was often absent in studies where physicians, residents, other clinicians or health administrators were the subjects. This finding may suggest that researchers do not perceive doctors and other health-care professionals and administrators to be research subjects in the same way they perceive patients and members of the public to be. It could also reflect a lack of current guidelines that specifically address the ethical use of health services professionals and staff as research subjects.

Our research is not without limitations. With respect to the review of journals' “Instructions to Authors,” our assessment was cross-sectional, whereas web pages are dynamic. Since our searches in early 2009, several journals have updated their web pages, and at least one has added a brief reference to the reporting of survey research.

A second limitation is that our sample included only the contents of “Instructions to Authors” web pages for higher-impact factor journals. It is possible that journals with lower impact factors contain guidance for reporting survey research. We chose this approach, which replicates previous similar work [12] , to provide a defensible sample of journals.

Third, the problem of identifying non-randomised studies in electronic searches is well known and often related to the inconsistent use of terminology in the original papers. It is possible that our search strategy failed to identify relevant articles. However, given our review of actual surveys, instructions to authors, and reviews of reporting quality, it is unlikely that an existing guideline for survey research is in widespread use.

Fourth, although we restricted our systematic review search strategy to two health science databases, our hand search did identify one checklist that was not specific to the health science literature [18] . The variation in recommended reporting criteria amongst the checklists may, in part, be due to variation in the different domains (i.e., health science research versus public opinion research).

Additionally, we critically appraised neither the quality of evidence for items included in the checklists nor the quality of the studies that addressed the reporting of survey research. For our review of current reporting practices for surveys, we were unable to identify validated tools for evaluating these studies. While we used a comprehensive and iterative approach to develop our data abstraction tool, we may not have captured information on characteristics deemed important by other researchers. Lastly, our sample was limited to self-administered surveys, and the results may not be generalisable to interviewer-administered surveys.

Recently, Moher and colleagues outlined the importance of a structured approach to the development of reporting guidelines [7] . Given the positive impact that reporting guidelines have had on the quality of reporting of health research [8] – [11] , and the potential for a positive upstream effect on the design and conduct of research [32] , there is a fundamental need for well-developed reporting guidelines. This paper provides results from the initial steps in a structured approach to the development of a survey reporting guideline and forms the foundation for our further work in this area.

In conclusion, there is limited guidance and no consensus regarding the optimal reporting of survey research. While some key criteria are consistently reported by authors publishing their survey research in peer-reviewed journals, the majority are under-reported. As in other areas of research, poor reporting compromises both transparency and reproducibility, which are fundamental tenets of research. Our findings highlight the need for a well-developed reporting guideline for survey research – possibly an extension of the guideline for observational studies in epidemiology (STROBE) – that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.

Supporting Information

Table S1. Data abstraction tool items and overlap with STROBE. https://doi.org/10.1371/journal.pmed.1001069.s001

Table S2. Journals represented by 117 included articles. https://doi.org/10.1371/journal.pmed.1001069.s002

Text S1. Ovid MEDLINE search strategy. https://doi.org/10.1371/journal.pmed.1001069.s003

Acknowledgments

We thank Risa Shorr (Librarian, The Ottawa Hospital) for her assistance with designing the electronic search strategy used for this study.

Author Contributions

Conceived and designed the experiments: JG DM CB SK JB IG BP. Analyzed the data: CB SK JB DM JG. Contributed to the writing of the manuscript: CB SK JB IG DM BP JG. ICMJE criteria for authorship read and met: CB SK JB IG DM BP JG. Acquisition of data: CB SK.

References

1. Groves RM, Fowler FJ, Couper MP, Lepkowski JM, Singer E, et al. (2004) Survey Methodology. Hoboken (New Jersey): John Wiley & Sons, Inc.
2. Aday LA, Cornelius LJ (2006) Designing and Conducting Health Surveys. Hoboken (New Jersey): John Wiley & Sons, Inc.
6. EQUATOR Network. Introduction to Reporting Guidelines. Available: http://www.equator-network.org/resource-centre/library-of-health-research-reporting/reporting-guidelines/#what. Accessed 23 November 2009.
18. AAPOR. Home page of the American Association for Public Opinion Research (AAPOR). Available: http://www.aapor.org. Accessed 20 January 2009.
22. Johnson T, Owens L (2003) Survey Response Rate Reporting in the Professional Literature. Available: http://www.amstat.org/sections/srms/proceedings/y2003/Files/JSM2003-000638.pdf. Accessed 11 July 2011.
27. Dillman DA (2007) Mail and Internet Surveys: The Tailored Design Method. Hoboken (New Jersey): John Wiley & Sons, Inc.

Survey Research: Definition, Examples and Methods


Survey research is a quantitative research method used for collecting data from a set of respondents. It has long been one of the most widely used methodologies in industry because of the many benefits and advantages it offers for collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys are then statistically analyzed to draw meaningful research conclusions. Every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in multiple ways, but surveys have proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals; it consists of structured survey questions that motivate participants to respond. Credible survey research can give businesses access to a vast bank of information. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals and the collection and analysis of data. It is useful for researchers who aim to communicate new features or trends to their respondents.

Generally, survey research is the primary step toward obtaining quick information about mainstream topics; it can be followed by more rigorous, detailed quantitative research methods such as surveys/polls, or by qualitative research methods such as focus groups or on-call interviews. In many situations, researchers conduct research using a blend of qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be classified according to two critical factors: the survey research tool and the time involved in conducting the research. There are three main survey research methods, divided according to the medium used to conduct the survey:

  • Online/Email: Online survey research is one of the most popular survey research methods today. The cost per respondent of online surveys is minimal, and the responses gathered are highly accurate.
  • Phone: Survey research conducted over the telephone (CATI survey) can be useful for collecting data from a more extensive section of the target population, but both the money and the time required tend to be higher than for other mediums.
  • Face-to-face: Researchers conduct face-to-face in-depth interviews when there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research: Longitudinal survey research involves conducting a survey over a continuum of time, spread across years or decades. The data collected can be qualitative or quantitative; respondent behavior, preferences, and attitudes are observed continuously over time to analyze reasons for changes in behavior or preferences. For example, a researcher who intends to learn about the eating habits of teenagers will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study.
  • Cross-sectional survey research: Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular point in time. This survey research method is implemented in various sectors such as retail, education, healthcare, and SME businesses. Cross-sectional studies can be either descriptive or analytical; they are quick and help researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method when descriptive analysis of a subject is required.

Survey research is also classified according to the sampling method used to form samples: probability or non-probability sampling. Ideally, every individual in the population should have a known chance of being selected for the survey sample. Probability sampling is a sampling method in which the researcher chooses elements based on probability theory; methods include simple random sampling, systematic sampling, cluster sampling, and stratified random sampling (a small illustrative sketch follows below). Non-probability sampling is a sampling method in which the researcher uses his or her knowledge and experience to form samples.
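A minimal sketch of three of these probability sampling methods, using Python's standard library and an invented sampling frame:

```python
import random

random.seed(42)
frame = [f"respondent_{i}" for i in range(1, 1001)]  # hypothetical frame of 1,000

# Simple random sampling: every element has an equal chance of selection.
srs = random.sample(frame, k=100)

# Systematic sampling: every k-th element after a random start.
k = len(frame) // 100
start = random.randrange(k)
systematic = frame[start::k][:100]

# Stratified random sampling: sample within strata in proportion to their size.
strata = {"urban": frame[:700], "rural": frame[700:]}  # invented strata
stratified = []
for name, members in strata.items():
    n_stratum = round(100 * len(members) / len(frame))
    stratified.extend(random.sample(members, n_stratum))

print(len(srs), len(systematic), len(stratified))  # 100 100 100
```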


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling

Process of implementing survey research methods:

  • Decide survey questions: Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. In many surveys, the details of responses matter less than insights about what customers prefer from the provided options; in such situations, a researcher can include multiple-choice or closed-ended questions. If researchers need to obtain details about specific issues, the questionnaire can include open-ended questions. Ideally, a survey should include a smart balance of open-ended and closed-ended questions. Use question formats such as the Likert scale, semantic scale, and Net Promoter Score question to avoid fence-sitting.


  • Finalize a target audience: Send out relevant surveys to the target audience and filter out irrelevant questions as required. Drawing the sample from the target population is instrumental: this way, results reflect the desired market and can be generalized to the entire population.


  • Send out surveys via decided mediums: Distribute the surveys to the target audience and patiently wait for the feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results: Analyze the feedback in real time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP, TURF analysis, conjoint analysis, cross tabulation, and many other survey feedback analysis methods can be used to spot and shed light on respondent behavior (a small cross-tabulation sketch follows this list). Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.
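As a small illustration of the cross-tabulation step mentioned above, the sketch below breaks hypothetical satisfaction responses down by age group with pandas; the data and column names are invented.

```python
import pandas as pd

# Hypothetical survey responses: one row per respondent.
responses = pd.DataFrame({
    "age_group": ["18-29", "30-44", "18-29", "45-60", "30-44", "45-60"],
    "satisfied": ["yes", "no", "yes", "yes", "yes", "no"],
})

# Cross tabulation: satisfaction by age group, shown as row percentages.
table = pd.crosstab(responses["age_group"], responses["satisfied"],
                    normalize="index") * 100
print(table.round(1))
```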

Reasons to conduct survey research

The most crucial reason for conducting market research using surveys is that you can collect answers to specific, essential questions. You can ask these questions in multiple survey formats, depending on the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of the research so that the study can be structured, planned, and executed well.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries: If you have carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, be very clear about how secure their responses will be and how you will use the answers; this encourages them to be completely honest in their feedback, opinions, and comments. Online and mobile surveys have proven to protect privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service etc., can be put on the table for discussion. A way you can do it is by including open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements: An organization can establish the target audience's attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product or services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. This way, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale: The ratio scale is the most advanced measurement scale; its variables are labeled, ordered, and have a calculable difference between them. In addition to the properties of the interval scale, this scale has a fixed starting point, i.e., the actual zero value is present.
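One practical consequence of these levels of measurement is which summary statistics are meaningful for each scale. A minimal sketch with invented data:

```python
import statistics

# Nominal: labels only -- the mode is the meaningful summary.
blood_group = ["A", "O", "O", "B", "O", "AB"]
print(statistics.mode(blood_group))       # 'O'

# Ordinal: ranked labels -- medians and percentiles are meaningful.
likert = [1, 2, 2, 3, 4, 4, 5]            # 1 = strongly disagree ... 5 = strongly agree
print(statistics.median(likert))          # 3

# Interval: differences are meaningful but there is no true zero,
# so means are fine while ratios ("twice as hot") are not.
temps_celsius = [18.0, 21.5, 19.0, 22.5]
print(statistics.mean(temps_celsius))     # 20.25

# Ratio: a true zero exists -- ratios such as "twice as many" are meaningful.
clinic_visits = [0, 2, 4, 1, 3]
print(statistics.mean(clinic_visits))     # 2.0
```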

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers can benefit by gaining useful, trustworthy data that they can use to improve the organization's ROI.

Other benefits of survey research are:

  • Minimum investment: Mobile and online surveys have a minimal cost per respondent. Even with the gifts and other incentives provided to participants, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection: You can conduct surveys via various mediums, like online and mobile surveys. You can further classify them into qualitative mediums, like focus groups and interviews, and quantitative mediums, like customer-centric surveys. Thanks to offline response collection, researchers can conduct surveys in remote areas with limited internet connectivity, which makes data collection and analysis more convenient and extensive.
  • Reliable for respondents: Surveys are extremely secure, as respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking explicit responses for its survey research must state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design when cost is limited and details need to be gathered easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through a tactfully designed survey can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide the aim of the research: There can be multiple reasons for a researcher to conduct a survey, but a clear purpose must be decided. This is the primary stage of survey research, as it shapes the entire path of the survey and its results.
  • Filter the sample from the target population: “Who to target?” is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of the sample are and how useful their opinions are; the quality of respondents in a sample matters more than the quantity. For example, a researcher who wants to know whether a product feature will work well in the target market can conduct survey research with a group of market experts for that product or technology.
  • Zero in on a survey method: Many qualitative and quantitative research methods can be discussed and decided on. Focus groups, online interviews, surveys, polls, questionnaires, etc., can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire: What will the content of the survey be? A researcher must answer this question to design the survey effectively. What will the content of the cover letter be? What are the survey questions of the questionnaire? Understand the target market thoroughly to create a questionnaire that helps gain insights about the research topic.
  • Send out surveys and analyze results: Once the researcher decides which questions to include in the study, the survey can be sent to the selected sample. Answers obtained from the survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for your research. It is essential to choose the right topic, the right question types, and a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART Goals . What is that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey: Choose the 15-20 best, most relevant questions. Frame each question as the question type best suited to the kind of answer you would like to gather. Create a survey using different question types such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and the four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey: Once your survey is ready, it is time to share and distribute it to the right audience. You can distribute handouts and share the survey via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report: Now is the time to share your analysis. At this stage, you should present all the responses gathered from the survey in a fixed format. The reader/customer must also get clarity about the goal you were trying to achieve with the study, and about questions such as: Has the product or service been used and preferred? Do respondents prefer one product to another? Any recommendations?

Having a tool that helps you carry out all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards, advanced analysis tools, automation, and dedicated functions, in QuestionPro, you will find everything you need to execute your research projects effectively. Uncover insights that matter the most!


  • Search Menu
  • Sign in through your institution
  • Advance articles
  • Editor's Choice
  • Supplements
  • French Abstracts
  • Portuguese Abstracts
  • Spanish Abstracts
  • Author Guidelines
  • Submission Site
  • Open Access
  • About International Journal for Quality in Health Care
  • About the International Society for Quality in Health Care
  • Editorial Board
  • Advertising and Corporate Services
  • Journals Career Network
  • Self-Archiving Policy
  • Dispatch Dates
  • Contact ISQua
  • Journals on Oxford Academic
  • Books on Oxford Academic

Issue Cover

Article Contents

What is survey research, advantages and disadvantages of survey research, essential steps in survey research, research methods, designing the research tool, sample and sampling, data collection, data analysis.

  • < Previous

Good practice in the conduct and reporting of survey research

Kate Kelley, Belinda Clark, Vivienne Brown, John Sitzia, Good practice in the conduct and reporting of survey research, International Journal for Quality in Health Care, Volume 15, Issue 3, May 2003, Pages 261–266, https://doi.org/10.1093/intqhc/mzg031

Survey research is sometimes regarded as an easy research approach. However, as with any other research approach and method, it is easy to conduct a survey of poor quality rather than one of high quality and real value. This paper provides a checklist of good practice in the conduct and reporting of survey research. Its purpose is to assist the novice researcher to produce survey work to a high standard, meaning a standard at which the results will be regarded as credible. The paper first provides an overview of the approach and then guides the reader step-by-step through the processes of data collection, data analysis, and reporting. It is not intended to provide a manual of how to conduct a survey, but rather to identify common pitfalls and oversights to be avoided by researchers if their work is to be valid and credible.

Survey research is common in studies of health and health services, although its roots lie in the social surveys conducted in Victorian Britain by social reformers to collect information on poverty and working class life (e.g. Charles Booth [ 1 ] and Joseph Rowntree [ 2 ]), and indeed survey research remains most used in applied social research. The term ‘survey’ is used in a variety of ways, but generally refers to the selection of a relatively large sample of people from a pre-determined population (the ‘population of interest’; this is the wider group of people in whom the researcher is interested in a particular study), followed by the collection of a relatively small amount of data from those individuals. The researcher therefore uses information from a sample of individuals to make some inference about the wider population.

Data are collected in a standardized form. This is usually, but not necessarily, done by means of a questionnaire or interview. Surveys are designed to provide a ‘snapshot of how things are at a specific time’ [ 3 ]. There is no attempt to control conditions or manipulate variables; surveys do not allocate participants into groups or vary the treatment they receive. Surveys are well suited to descriptive studies, but can also be used to explore aspects of a situation, or to seek explanation and provide data for testing hypotheses. It is important to recognize that ‘the survey approach is a research strategy, not a research method’ [ 3 ]. As with any research approach, a choice of methods is available and the one most appropriate to the individual project should be used. This paper will discuss the most popular methods employed in survey research, with an emphasis upon difficulties commonly encountered when using these methods.

Descriptive research

Descriptive research is a most basic type of enquiry that aims to observe (gather information on) certain phenomena, typically at a single point in time: the ‘cross-sectional’ survey. The aim is to examine a situation by describing important factors associated with that situation, such as demographic, socio-economic, and health characteristics, events, behaviours, attitudes, experiences, and knowledge. Descriptive studies are used to estimate specific parameters in a population (e.g. the prevalence of infant breast feeding) and to describe associations (e.g. the association between infant breast feeding and maternal age).

Analytical studies

Analytical studies go beyond simple description; their intention is to illuminate a specific problem through focused data analysis, typically by looking at the effect of one set of variables upon another set. These are longitudinal studies, in which data are collected at more than one point in time with the aim of illuminating the direction of observed associations. Data may be collected from the same sample on each occasion (cohort or panel studies) or from a different sample at each point in time (trend studies).

Evaluation research

This form of research collects data to ascertain the effects of a planned change.

Advantages:

The research produces data based on real-world observations (empirical data).

The breadth of coverage of many people or events means that it is more likely than some other approaches to obtain data based on a representative sample, and can therefore be generalizable to a population.

Surveys can produce a large amount of data in a short time for a fairly low cost. Researchers can therefore set a finite time-span for a project, which can assist in planning and delivering end results.

Disadvantages:

The significance of the data can become neglected if the researcher focuses too much on the range of coverage to the exclusion of an adequate account of the implications of those data for relevant issues, problems, or theories.

The data that are produced are likely to lack details or depth on the topic being investigated.

Securing a high response rate to a survey can be hard to control, particularly when it is carried out by post, but is also difficult when the survey is carried out face-to-face or over the telephone.

Research question

Good research has the characteristic that its purpose is to address a single clear and explicit research question; conversely, the end product of a study that aims to answer a number of diverse questions is often weak. Weakest of all, however, are those studies that have no research question at all and whose design simply is to collect a wide range of data and then to ‘trawl’ the data looking for ‘interesting’ or ‘significant’ associations. This is a trap novice researchers in particular fall into. Therefore, in developing a research question, the following aspects should be considered [ 4 ]:

Be knowledgeable about the area you wish to research.

Widen the base of your experience, explore related areas, and talk to other researchers and practitioners in the field you are surveying.

Consider using techniques for enhancing creativity, for example brainstorming ideas.

Avoid the pitfalls of: allowing a decision regarding methods to decide the questions to be asked; posing research questions that cannot be answered; asking questions that have already been answered satisfactorily.

The survey approach can employ a range of methods to answer the research question. Common survey methods include postal questionnaires, face-to-face interviews, and telephone interviews.

Postal questionnaires

This method involves sending questionnaires to a large sample of people covering a wide geographical area. Postal questionnaires are usually received ‘cold’, without any previous contact between researcher and respondent. The response rate for this type of method is usually low, ∼20%, depending on the content and length of the questionnaire. As response rates are low, a large sample is required when using postal questionnaires, for two main reasons: first, to ensure that the demographic profile of survey respondents reflects that of the survey population; and secondly, to provide a sufficiently large data set for analysis.

Face-to-face interviews

Face-to-face interviews involve the researcher approaching respondents personally, either in the street or by calling at people’s homes. The researcher then asks the respondent a series of questions and notes their responses. The response rate is often higher than that of postal questionnaires as the researcher has the opportunity to sell the research to a potential respondent. Face-to-face interviewing is a more costly and time-consuming method than the postal survey, however the researcher can select the sample of respondents in order to balance the demographic profile of the sample.

Telephone interviews

Telephone surveys, like face-to-face interviews, allow a two-way interaction between researcher and respondent. Telephone surveys are quicker and cheaper than face-to-face interviewing. Whilst resulting in a higher response rate than postal surveys, telephone surveys often attract a higher level of refusals than face-to-face interviews as people feel less inhibited about refusing to take part when approached over the telephone.

Whether using a postal questionnaire or interview method, the questions asked have to be carefully planned and piloted. The design, wording, form, and order of questions can affect the type of responses obtained, and careful design is needed to minimize bias in results. When designing a questionnaire or question route for interviewing, the following issues should be considered: (1) planning the content of a research tool; (2) questionnaire layout; (3) interview questions; (4) piloting; and (5) covering letter.

Planning the content of a research tool

The topics of interest should be carefully planned and relate clearly to the research question. It is often useful to involve experts in the field, colleagues, and members of the target population in question design in order to ensure the validity of the coverage of questions included in the tool (content validity).

Researchers should conduct a literature search to identify existing, psychometrically tested questionnaires. A well designed research tool is simple, appropriate for the intended use, acceptable to respondents, and should include a clear and interpretable scoring system. A research tool must also demonstrate the psychometric properties of reliability (consistency from one measurement to the next), validity (accurate measurement of the concept), and, if a longitudinal study, responsiveness to change [ 5 ]. The development of research tools, such as attitude scales, is a lengthy and costly process. It is important that researchers recognize that the development of the research tool is equal in importance—and deserves equal attention—to data collection. If a research instrument has not undergone a robust process of development and testing, the credibility of the research findings themselves may legitimately be called into question and may even be completely disregarded. Surveys of patient satisfaction and similar are commonly weak in this respect; one review found that only 6% of patient satisfaction studies used an instrument that had undergone even rudimentary testing [ 6 ]. Researchers who are unable or unwilling to undertake this process are strongly advised to consider adopting an existing, robust research tool.

Questionnaire layout

Questionnaires used in survey research should be clear and well presented. The use of capital (upper case) letters only should be avoided, as this format is hard to read. Questions should be numbered and clearly grouped by subject. Clear instructions should be given and headings included to make the questionnaire easier to follow.

The researcher must think about the form of the questions, avoiding ‘double-barrelled’ questions (two or more questions in one, e.g. ‘How satisfied were you with your personal nurse and the nurses in general?’), questions containing double negatives, and leading or ambiguous questions. Questions may be open (where the respondent composes the reply) or closed (where pre-coded response options are available, e.g. multiple-choice questions). Closed questions with pre-coded response options are most suitable for topics where the possible responses are known. Closed questions are quick to administer and can be easily coded and analysed. Open questions should be used where possible replies are unknown or too numerous to pre-code. Open questions are more demanding for respondents but if well answered can provide useful insight into a topic. Open questions, however, can be time consuming to administer and difficult to analyse. Whether using open or closed questions, researchers should plan clearly how answers will be analysed.

Interview questions

Open questions are used more frequently in unstructured interviews, whereas closed questions typically appear in structured interview schedules. A structured interview is like a questionnaire that is administered face to face with the respondent. When designing the questions for a structured interview, the researcher should consider the points highlighted above regarding questionnaires. The interviewer should have a standardized list of questions, each respondent being asked the same questions in the same order. If closed questions are used the interviewer should also have a range of pre-coded responses available.

If carrying out a semi-structured interview, the researcher should have a clear, well thought out set of questions; however, the questions may take an open form and the researcher may vary the order in which topics are considered.

Piloting

A research tool should be tested on a pilot sample of members of the target population. This process will allow the researcher to identify whether respondents understand the questions and instructions, and whether the meaning of questions is the same for all respondents. Where closed questions are used, piloting will highlight whether sufficient response categories are available, and whether any questions are systematically missed by respondents.

When conducting a pilot, the same procedure as that to be used in the main survey should be followed; this will highlight potential problems such as poor response.

Covering letter

All participants should be given a covering letter that includes information such as the organization behind the study, the contact name and address of the researcher, details of how and why the respondent was selected, the aims of the study, any potential benefits or harms resulting from the study, and what will happen to the information provided. The covering letter should both encourage the respondent to participate in the study and meet the requirements of informed consent (see below).

The concept of sample is intrinsic to survey research. Usually, it is impractical and uneconomical to collect data from every single person in a given population; a sample of the population has to be selected [ 7 ]. This is illustrated in the following hypothetical example. A hospital wants to conduct a satisfaction survey of the 1000 patients discharged in the previous month; however, as it is too costly to survey each patient, a sample has to be selected. In this example, the researcher will have a list of the population members to be surveyed (the sampling frame). It is important to ensure that this list is both up to date and obtained from a reliable source.

The method by which the sample is selected from a sampling frame is integral to the external validity of a survey: the sample has to be representative of the larger population to obtain a composite profile of that population [ 8 ].

There are methodological factors to consider when deciding who will be in a sample: How will the sample be selected? What is the optimal sample size to minimize sampling error? How can response rates be maximized?

The survey methods discussed below influence how a sample is selected and the size of the sample. There are two categories of sampling: random and non-random sampling, with a number of sampling selection techniques contained within the two categories. The principal techniques are described here [ 9 ].

Random sampling

Generally, random sampling is employed when quantitative methods are used to collect data (e.g. questionnaires). Random sampling allows the results to be generalized to the larger population and statistical analysis performed if appropriate. The most stringent technique is simple random sampling. Using this technique, each individual within the chosen population is selected by chance and is equally as likely to be picked as anyone else. Referring back to the hypothetical example, each patient is given a serial identifier and then an appropriate number of the 1000 population members are randomly selected. This is best done using a random number table, which can be generated using computer software (a free on-line randomizer can be found at http://www.randomizer.org/index.htm ).

Alternative random sampling techniques are briefly described. In systematic sampling, individuals to be included in the sample are chosen at equal intervals from the population; using the earlier example, every fifth patient discharged from hospital would be included in the survey. In stratified sampling, the population is first divided into subgroups (strata) and a random sample is then drawn within the stratum or strata of interest; using our example, the hospital may decide to survey only older surgical patients. Bigger surveys may employ cluster sampling, which randomly selects groups (clusters) from a large population and then surveys everyone within the selected groups, a technique often used in national-scale studies.
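
To make these ideas concrete, here is a minimal Python sketch of simple random and systematic sampling for the hypothetical 1000-patient frame described above; the sample size of 200 and the fixed seed are illustrative choices, not recommendations:

```python
import random

# Hypothetical sampling frame: serial identifiers for the 1000 discharged patients.
frame = list(range(1, 1001))

# Simple random sampling: every patient is equally likely to be picked.
random.seed(42)  # fixed seed so the draw can be reproduced
simple_sample = random.sample(frame, 200)

# Systematic sampling: every fifth patient, from a random starting point.
start = random.randint(0, 4)
systematic_sample = frame[start::5]

print(len(simple_sample), len(systematic_sample))  # 200 patients in each sample
```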

Non-random sampling

Non-random sampling is commonly applied when qualitative methods (e.g. focus groups and interviews) are used to collect data, and is typically used for exploratory work. Non-random sampling deliberately targets individuals within a population. There are three main techniques. (1) Purposive sampling: a specific population is identified and only its members are included in the survey; using our example above, the hospital may decide to survey only patients who had an appendectomy. (2) Convenience sampling: the sample is made up of the individuals who are the easiest to recruit. (3) Snowball sampling: the sample is identified as the survey progresses; as each individual is surveyed, he or she is invited to recommend others to be surveyed.

It is important to use the right method of sampling and to be aware of the limitations and statistical implications of each. The need to ensure that the sample is representative of the larger population was highlighted earlier and, alongside the sampling method, the degree of sampling error should be considered. Sampling error is the probability that any one sample is not completely representative of the population from which it has been drawn [ 9 ]. Although sampling error cannot be eliminated entirely, the sampling technique chosen will influence the extent of the error. Simple random sampling will give a closer estimate of the population than a convenience sample of individuals who just happened to be in the right place at the right time.

Sample size

What sample size is required for a survey? There is no definitive answer to this question: large samples with rigorous selection are more powerful as they will yield more accurate results, but data collection and analysis will be proportionately more time consuming and expensive. Essentially, the target sample size for a survey depends on three main factors: the resources available, the aim of the study, and the statistical quality needed for the survey. For ‘qualitative’ surveys using focus groups or interviews, the sample size needed will be smaller than if quantitative data are collected by questionnaire. If statistical analysis is to be performed on the data then sample size calculations should be conducted. This can be done using computer packages such as G*Power [ 10 ]; however, those with little statistical knowledge should consult a statistician. For practical recommendations on sample size, the set of survey guidelines developed by the UK Department of Health [ 11 ] should be consulted.

Larger samples give a better estimate of the population, but it can be difficult to obtain an adequate number of responses. It is rare that everyone asked to participate in the survey will reply. To ensure a sufficient number of responses, include an estimated non-response rate in the sample size calculations (see the sketch below).
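
As a rough illustration of such a calculation, the sketch below uses the standard formula for estimating a proportion, n = z^2 * p * (1 - p) / e^2, and inflates the result by the expected response rate. It is a simplified example under assumed defaults (95% confidence, plus or minus 5% margin, 65% expected response), not a substitute for the guidelines or the statistician mentioned above:

```python
import math

def sample_size(p=0.5, margin=0.05, z=1.96, expected_response_rate=0.65):
    """Approximate sample size for estimating a proportion, inflated for non-response.

    p: anticipated proportion (0.5 is the most conservative choice)
    margin: desired margin of error
    z: z-score for the confidence level (1.96 for 95%)
    expected_response_rate: anticipated usable-response fraction
    """
    n = (z ** 2) * p * (1 - p) / margin ** 2      # classic formula for a proportion
    return math.ceil(n / expected_response_rate)  # invite more to offset non-response

# 95% confidence, +/-5% margin, 65% expected response: invite 592 people
# to end up with roughly 385 usable questionnaires.
print(sample_size())
```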

Low response rates are a potential source of bias. The results from a survey with a large non-response rate could be misleading and representative only of those who replied. French [ 12 ] reported that non-responders to patient satisfaction surveys are less likely to be satisfied than people who reply. It is unwise to define a level above which a response rate is acceptable, as this depends on many local factors; however, an achievable and acceptable rate is ∼75% for interviews and 65% for self-completion postal questionnaires [ 9 , 13 ]. In any study, the final response rate should be reported with the results; potential differences between the respondents and non-respondents should be explicitly explored and their implications discussed.

There are techniques to increase response rates. A questionnaire must be concise and easy to understand, reminders should be sent out, and method of recruitment should be carefully considered. Sitzia and Wood [ 13 ] found that participants recruited by mail or who had to respond by mail had a lower mean response rate (67%) than participants who were recruited personally (mean response 76.7%). A most useful review of methods to maximize response rates in postal surveys has recently been published [ 14 ].

Researchers should approach data collection in a rigorous and ethical manner. The following information must be clearly recorded:

How, where, how many times, and by whom potential respondents were contacted.

How many people were approached and how many of those agreed to participate.

How those who agreed to participate differed from those who refused with regard to characteristics of interest in the study, for example their gender, age, and features of their illness or health care.

How was the survey administered (e.g. telephone interview).

What was the response rate (i.e. the number of usable data sets as a proportion of the number of people approached).
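
For clarity, the definition of response rate given above amounts to a single division; the numbers in this short sketch are hypothetical:

```python
# Response rate: usable data sets as a proportion of the people approached.
approached = 400        # hypothetical number of people contacted
usable_datasets = 260   # hypothetical number of completed, usable questionnaires

response_rate = usable_datasets / approached
print(f"Response rate: {response_rate:.0%}")  # Response rate: 65%
```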

The purpose of all analyses is to summarize data so that they are easily understood and provide the answers to our original questions: ‘In order to do this researchers must carefully examine their data; they should become friends with their data’ [ 15 ]. Researchers must prepare to spend substantial time on the data analysis phase of a survey (and this should be built into the project plan). When analysis is rushed, important aspects of the data are often missed and sometimes the wrong analyses are conducted, leading to both inaccurate results and misleading conclusions [ 16 ]. However, and this point cannot be stressed strongly enough, researchers must not engage in data dredging, a practice that can arise especially in studies in which large numbers of dependent variables (outcomes) can be related to large numbers of independent variables. When large numbers of possible associations in a dataset are reviewed at P < 0.05, one in 20 of the associations will appear ‘statistically significant’ by chance; in datasets where only a few real associations exist, testing at this significance level will result in the large majority of findings still being false positives [ 17 ].
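
The one-in-20 arithmetic can be verified with a small simulation. This sketch fabricates purely null comparisons (both groups are drawn from the same population, so no real effect exists), meaning every ‘significant’ result it finds is a false positive; the group size and number of tests are arbitrary illustrative choices:

```python
import math
import random
import statistics

random.seed(1)  # fixed seed so the run can be reproduced

def fake_study(n=50):
    # Two groups drawn from the SAME population: there is no real difference.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    # |z| > 1.96 approximates 'significant at P < 0.05' for a two-sided test.
    return abs(diff / se) > 1.96

significant = sum(fake_study() for _ in range(1000))
print(f"{significant} of 1000 null associations looked 'significant'")  # roughly 1 in 20
```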

The method of data analysis will depend on the design of the survey and should have been carefully considered in the planning stages of the survey. Data collected by qualitative methods should be analysed using established methods such as content analysis [ 18 ], and where quantitative methods have been used appropriate statistical tests can be applied. Describing methods of analysis here would be unproductive as a multitude of introductory textbooks and on-line resources are available to help with simple analyses of data (e.g. [ 19 , 20 ]). For advanced analysis a statistician should be consulted.

When reporting survey research, it is essential that a number of key points are covered (though the length and depth of reporting will be dependent upon journal style). These key points are presented as a ‘checklist’ below:

Explain the purpose or aim of the research, with the explicit identification of the research question.

Explain why the research was necessary and place the study in context, drawing upon previous work in relevant fields (the literature review).

State the chosen research method or methods, and justify why this method was chosen.

Describe the research tool. If an existing tool is used, briefly state its psychometric properties and provide references to the original development work. If a new tool is used, you should include an entire section describing the steps undertaken to develop and test the tool, including results of psychometric testing.

Describe how the sample was selected and how data were collected, including:

How were potential subjects identified?

How many and what type of attempts were made to contact subjects?

Who approached potential subjects?

Where were potential subjects approached?

How was informed consent obtained?

How many agreed to participate?

How did those who agreed differ from those who did not agree?

What was the response rate?

Describe and justify the methods and tests used for data analysis.

Present the results of the research. The results section should be clear, factual, and concise.

Interpret and discuss the findings. This ‘discussion’ section should not simply reiterate results; it should provide the author’s critical reflection upon both the results and the processes of data collection. The discussion should assess how well the study met the research question, should describe the problems encountered in the research, and should honestly judge the limitations of the work.

Present conclusions and recommendations.

The researcher needs to tailor the research report to meet:

The expectations of the specific audience for whom the work is being written.

The conventions that operate at a general level with respect to the production of reports on research in the social sciences.

Anyone involved in collecting data from patients has an ethical duty to respect each individual participant’s autonomy. Any survey should be conducted in an ethical manner and one that accords with best research practice. Two important ethical issues to adhere to when conducting a survey are confidentiality and informed consent.

The respondent’s right to confidentiality should always be respected and any legal requirements on data protection adhered to. In the majority of surveys, the patient should be fully informed about the aims of the survey, and the patient’s consent to participate in the survey must be obtained and recorded.

The professional bodies listed below, among many others, provide guidance on the ethical conduct of research and surveys.

American Psychological Association: http://www.apa.org

British Psychological Society: http://www.bps.org.uk

British Medical Association: http://www.bma.org.uk

UK General Medical Council: http://www.gmc-uk.org

American Medical Association: http://www.ama-assn.org

UK Royal College of Nursing: http://www.rcn.org.uk

UK Department of Health: http://www.doh.gov

Survey research demands the same standards in research practice as any other research approach, and journal editors and the broader research community will judge a report of survey research with the same level of rigour as any other research report. This is not to say that survey research need be particularly difficult or complex; the point to emphasize is that researchers should be aware of the steps required in survey research, and should be systematic and thoughtful in the planning, execution, and reporting of the project. Above all, survey research should not be seen as an easy, ‘quick and dirty’ option; such work may adequately fulfil local needs (e.g. a quick survey of hospital staff satisfaction), but will not stand up to academic scrutiny and will not be regarded as having much value as a contribution to knowledge.

Address reprint requests to John Sitzia, Research Department, Worthing Hospital, Lyndhurst Road, Worthing BN11 2DH, West Sussex, UK. E-mail: [email protected]

References

1. London School of Economics, UK. http://booth.lse.ac.uk/ (accessed 15 January 2003).
2. Vernon A. A Quaker Businessman: Biography of Joseph Rowntree (1836–1925). London: Allen & Unwin, 1958.
3. Denscombe M. The Good Research Guide: For Small-scale Social Research Projects. Buckingham: Open University Press, 1998.
4. Robson C. Real World Research: A Resource for Social Scientists and Practitioner-researchers. Oxford: Blackwell Publishers, 1993.
5. Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to their Development and Use. Oxford: Oxford University Press, 1995.
6. Sitzia J. How valid and reliable are patient satisfaction data? An analysis of 195 studies. Int J Qual Health Care 1999; 11: 319–328.
7. Bowling A. Research Methods in Health: Investigating Health and Health Services. Buckingham: Open University Press, 2002.
8. American Statistical Association, USA. http://www.amstat.org (accessed 9 December 2002).
9. Arber S. Designing samples. In: Gilbert N, ed. Researching Social Life. London: SAGE Publications, 2001.
10. Heinrich Heine University, Düsseldorf, Germany. http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/index.html (accessed 12 December 2002).
11. Department of Health, England. http://www.doh.gov.uk/acutesurvey/index.htm (accessed 12 December 2002).
12. French K. Methodological considerations in hospital patient opinion surveys. Int J Nurs Stud 1981; 18: 7–32.
13. Sitzia J, Wood N. Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care 1998; 10: 311–317.
14. Edwards P, Roberts I, Clarke M et al. Increasing response rates to postal questionnaires: systematic review. Br Med J 2002; 324: 1183.
15. Wright DB. Making friends with our data: improving how statistical results are reported. Br J Educ Psychol 2003; in press.
16. Wright DB, Kelley K. Analysing and reporting data. In: Michie S, Abraham C, eds. Health Psychology in Practice. London: SAGE Publications, 2003; in press.
17. Davey Smith G, Ebrahim S. Data dredging, bias, or confounding. Br Med J 2002; 325: 1437–1438.
18. Morse JM, Field PA. Nursing Research: The Application of Qualitative Approaches. London: Chapman and Hall, 1996.
19. Wright DB. Understanding Statistics: An Introduction for the Social Sciences. London: SAGE Publications, 1997.
20. Sportscience, New Zealand. http://www.sportsci.org/resource/stats/index.html (accessed 12 December 2002).

The state of AI in early 2024: Gen AI adoption spikes and starts to generate value

If 2023 was the year the world discovered generative AI (gen AI), 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year, with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.

About the authors

This article is a collaborative effort by Alex Singla , Alexander Sukharevsky , Lareina Yee , and Michael Chui , with Bryce Hall , representing views from QuantumBlack, AI by McKinsey, and McKinsey Digital.

Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.

AI adoption surges

Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; however, this year more than two-thirds of respondents in nearly every region say their organizations are using AI. Organizations based in Central and South America are the exception, with 58 percent of respondents there reporting AI adoption. Looking by industry, the biggest increase in adoption can be found in professional services (here including organizations focused on human resources, legal services, management consulting, market research, R&D, tax preparation, and training).

Also, responses suggest that companies are now using AI in more parts of the business. Half of respondents say their organizations have adopted AI in two or more business functions, up from less than a third of respondents in 2023 (Exhibit 2).

Gen AI adoption is most common in the functions where it can create the most value

Most respondents now report that their organizations—and they as individuals—are using gen AI. Sixty-five percent of respondents say their organizations are regularly using gen AI in at least one business function, up from one-third last year. The average organization using gen AI is doing so in two functions, most often in marketing and sales and in product and service development—two functions in which previous research determined that gen AI adoption could generate the most value (“The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023)—as well as in IT (Exhibit 3). The biggest increase from 2023 is found in marketing and sales, where reported adoption has more than doubled. Yet across functions, only two use cases, both within marketing and sales, are reported by 15 percent or more of respondents.

Gen AI also is weaving its way into respondents’ personal lives. Compared with 2023, respondents are much more likely to be using gen AI at work and even more likely to be using gen AI both at work and in their personal lives (Exhibit 4). The survey finds upticks in gen AI use across all regions, with the largest increases in Asia–Pacific and Greater China. Respondents at the highest seniority levels, meanwhile, show larger jumps in the use of gen AI tools for work and outside of work compared with their midlevel-management peers. Looking at specific industries, respondents working in energy and materials and in professional services report the largest increase in gen AI use.

Investments in gen AI and analytical AI are beginning to create value

The latest survey also shows how different industries are budgeting for gen AI. Responses suggest that, in many industries, organizations are about equally as likely to be investing more than 5 percent of their digital budgets in gen AI as they are in nongenerative, analytical-AI solutions (Exhibit 5). Yet in most industries, larger shares of respondents report that their organizations spend more than 20 percent on analytical AI than on gen AI. Looking ahead, most respondents—67 percent—expect their organizations to invest more in AI over the next three years.

Where are those investments paying off? For the first time, our latest survey explored the value created by gen AI use by business function. The function in which the largest share of respondents report seeing cost decreases is human resources. Respondents most commonly report meaningful revenue increases (of more than 5 percent) in supply chain and inventory management (Exhibit 6). For analytical AI, respondents most often report seeing cost benefits in service operations—in line with what we found last year —as well as meaningful revenue increases from AI use in marketing and sales.

Inaccuracy: The most recognized and experienced risk of gen AI use

As businesses begin to see the benefits of gen AI, they’re also recognizing the diverse risks associated with the technology. These can range from data management risks such as data privacy, bias, or intellectual property (IP) infringement to model management risks, which tend to focus on inaccurate output or lack of explainability. A third big risk category is security and incorrect use.

Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk (Exhibit 7).

Conversely, respondents are less likely than they were last year to say their organizations consider workforce and labor displacement to be relevant risks and are not increasing efforts to mitigate them.

In fact, inaccuracy—which can affect use cases across the gen AI value chain, ranging from customer journeys and summarization to coding and creative content—is the only risk that respondents are significantly more likely than last year to say their organizations are actively working to mitigate.

Some organizations have already experienced negative consequences from the use of gen AI, with 44 percent of respondents saying their organizations have experienced at least one consequence (Exhibit 8). Respondents most often report inaccuracy as a risk that has affected their organizations, followed by cybersecurity and explainability.

Our previous research has found that there are several elements of governance that can help in scaling gen AI use responsibly, yet few respondents report having these risk-related practices in place (see “Implementing generative AI with speed and safety,” McKinsey Quarterly, March 13, 2024). For example, just 18 percent say their organizations have an enterprise-wide council or board with the authority to make decisions involving responsible AI governance, and only one-third say gen AI risk awareness and risk mitigation controls are required skill sets for technical talent.

Bringing gen AI capabilities to bear

The latest survey also sought to understand how, and how quickly, organizations are deploying these new gen AI tools. We have found three archetypes for implementing gen AI solutions: takers use off-the-shelf, publicly available solutions; shapers customize those tools with proprietary data and systems; and makers develop their own foundation models from scratch (see “Technology’s generational moment with generative AI: A CIO and CTO guide,” McKinsey, July 11, 2023). Across most industries, the survey results suggest that organizations are finding off-the-shelf offerings applicable to their business needs—though many are pursuing opportunities to customize models or even develop their own (Exhibit 9). About half of reported gen AI uses within respondents’ business functions are utilizing off-the-shelf, publicly available models or tools, with little or no customization. Respondents in energy and materials, technology, and media and telecommunications are more likely to report significant customization or tuning of publicly available models or developing their own proprietary models to address specific business needs.

Respondents most often report that their organizations required one to four months from the start of a project to put gen AI into production, though the time it takes varies by business function (Exhibit 10). It also depends upon the approach for acquiring those capabilities. Not surprisingly, reported uses of highly customized or proprietary models are 1.5 times more likely than off-the-shelf, publicly available models to take five months or more to implement.

Gen AI high performers are excelling despite facing challenges

Gen AI is a new technology, and organizations are still early in the journey of pursuing its opportunities and scaling it across functions. So it’s little surprise that only a small subset of respondents (46 out of 876) report that a meaningful share of their organizations’ EBIT can be attributed to their deployment of gen AI. Still, these gen AI leaders are worth examining closely. These, after all, are the early movers, who already attribute more than 10 percent of their organizations’ EBIT to their use of gen AI. Forty-two percent of these high performers say more than 20 percent of their EBIT is attributable to their use of nongenerative, analytical AI, and they span industries and regions—though most are at organizations with less than $1 billion in annual revenue. The AI-related practices at these organizations can offer guidance to those looking to create value from gen AI adoption at their own organizations.

To start, gen AI high performers are using gen AI in more business functions—an average of three functions, while others average two. They, like other organizations, are most likely to use gen AI in marketing and sales and product or service development, but they’re much more likely than others to use gen AI solutions in risk, legal, and compliance; in strategy and corporate finance; and in supply chain and inventory management. They’re more than three times as likely as others to be using gen AI in activities ranging from processing of accounting documents and risk assessment to R&D testing and pricing and promotions. While, overall, about half of reported gen AI applications within business functions are utilizing publicly available models or tools, gen AI high performers are less likely to use those off-the-shelf options than to either implement significantly customized versions of those tools or to develop their own proprietary foundation models.

What else are these high performers doing differently? For one thing, they are paying more attention to gen-AI-related risks. Perhaps because they are further along on their journeys, they are more likely than others to say their organizations have experienced every negative consequence from gen AI we asked about, from cybersecurity and personal privacy to explainability and IP infringement. Given that, they are more likely than others to report that their organizations consider those risks, as well as regulatory compliance, environmental impacts, and political stability, to be relevant to their gen AI use, and they say they take steps to mitigate more risks than others do.

Gen AI high performers are also much more likely to say their organizations follow a set of risk-related best practices (Exhibit 11). For example, they are nearly twice as likely as others to involve the legal function and embed risk reviews early on in the development of gen AI solutions—that is, to “shift left.” They’re also much more likely than others to employ a wide range of other best practices, from strategy-related practices to those related to scaling.

In addition to experiencing the risks of gen AI adoption, high performers have encountered other challenges that can serve as warnings to others (Exhibit 12). Seventy percent say they have experienced difficulties with data, including defining processes for data governance, developing the ability to quickly integrate data into AI models, and an insufficient amount of training data, highlighting the essential role that data play in capturing value. High performers are also more likely than others to report experiencing challenges with their operating models, such as implementing agile ways of working and effective sprint performance management.

About the research

The online survey was in the field from February 22 to March 5, 2024, and garnered responses from 1,363 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 981 said their organizations had adopted AI in at least one business function, and 878 said their organizations were regularly using gen AI in at least one function. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
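
The write-up gives only this one sentence about the weighting, so the following is a loose illustration of the general idea (weighting respondents by their nation's GDP contribution relative to its share of responses), not McKinsey's actual procedure; the nations, GDP shares, and toy sample are entirely made up:

```python
import pandas as pd

# Hypothetical respondents, each tagged with a nation.
respondents = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5],
    "nation": ["US", "US", "India", "Germany", "Germany"],
})

# Made-up shares of global GDP; a real analysis would use published figures.
gdp_share = {"US": 0.25, "India": 0.08, "Germany": 0.04}

# Weight = nation's GDP share divided by the nation's share of responses,
# so over-represented nations are weighted down and under-represented up.
response_share = respondents["nation"].value_counts(normalize=True)
respondents["weight"] = respondents["nation"].map(
    lambda n: gdp_share[n] / response_share[n]
)
print(respondents)
```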

Alex Singla and Alexander Sukharevsky are global coleaders of QuantumBlack, AI by McKinsey, and senior partners in McKinsey’s Chicago and London offices, respectively; Lareina Yee is a senior partner in the Bay Area office, where Michael Chui, a McKinsey Global Institute partner, is a partner; and Bryce Hall is an associate partner in the Washington, DC, office.

They wish to thank Kaitlin Noe, Larry Kanter, Mallika Jhamb, and Shinjini Srivastava for their contributions to this work.

This article was edited by Heather Hanselman, a senior editor in McKinsey’s Atlanta office.

Google Forms: How to use this free Google Workspace tool to create surveys, quizzes, and questionnaires

  • Google Forms is a free online software for creating surveys and questionnaires.
  • You need a Google account to create a Google Form, but anyone can fill out a Google Form.
  • You can personalize your Google Form with question types, header images, and color themes.

Google Forms is free online software that allows you to create surveys, quizzes, and more. 

Google Forms is part of Google's web-based apps suite, which also includes Google Docs, Google Sheets, Google Slides , and more. It's a versatile tool that can be used for various applications, from gathering RSVPs for an event to creating a pop quiz. You'll need a Google account to create a Google Form, but you can adjust the form settings so that recipients can fill it out regardless of whether they have a Google account.

Currently, Google Forms does not offer a native mobile app, but you can access it through a web browser on your desktop or mobile device.

Here's everything else you need to know about Google Forms.

How can I create a Google Form?

Google Forms differentiates itself from similar online software through its library of customization options. When creating your new form, you'll have the ability to select from a series of templates or design your very own. 

If you choose to make a new template, consider adding your logo and photos, and watch Google generate a custom color set to match.

Here's how to do it: 

  • Go to docs.google.com/forms
  • Click Blank form to create a new form, or choose a pre-made template to kick-start the process. Google has a number of helpful template options, including feedback forms, order forms, job applications, worksheets, registration forms, and even "Find a Time" forms if you're trying to schedule an event or Google Meet conference call.

With the Q&A format at the heart of Google Forms, the Workspace tool offers various question and response options, including multiple-choice, dropdown, linear scale, and multiple-choice and tick-box grid.

With each new question, you can integrate multimedia, such as images or YouTube videos, or add text descriptions that offer hints or expound on the question.

If you're a Google Classroom user, you can use Google Forms to create quiz assignments for your students.

How can I customize or organize my Google Form?

In the Settings tab, you can customize options in the Responses dropdown, like Collect email addresses.

You can choose to require respondents to enter an email address to submit the Form by selecting Responder input or force respondents to sign into their Google accounts to respond by selecting Verified . You can also let respondents submit anonymously by choosing Do not collect .

In the Presentation dropdown below, you can click boxes to include a progress bar, shuffle the order of the questions, and set a custom confirmation message that respondents will receive upon submitting the Form.

In the Quizzes dropdown, you can turn your form into a quiz.

Organizational features let you determine the order of your queries through a drag-and-drop tool or randomize the answer order for specific questions through the form's settings.

Another way to organize your form is through Google Forms' section tool. These can be helpful for longer surveys, as they break questions up into manageable chunks. To create a section, click the Add section icon (two vertically stacked rectangles) on the right toolbar. It's located on the same toolbar as the "+" for adding a question.

Once you're ready to share your Google Form, clicking the Send button at the top right of the screen will let you send the Form via email, copy a link, or copy an embedded HTML code to add the form to your website or blog.

How to navigate Google Forms responses

Once your Google Form is published and you've shared it using any of its public and private share options, it will automatically collect responses as people fill out and submit the form. Answers gathered by a Google Form are viewable only to you, the creator, and any collaborators you add.

To view responses for your Google Form, open your Google Form and navigate to the Responses tab. Here, you will see a summary of the responses collected. Click the green Google Sheets icon to create a spreadsheet that displays all of the information gathered from the Form, which will automatically update as people submit your Google Form.

In the Responses tab, you can also elect to get email notifications for new responses, select a response destination (either a new or existing spreadsheet), download, or print the answers by clicking the three dots next to the Google Sheets icon. There's also an option to delete all replies, which can be useful in deleting responses collected when testing your sheet.

America’s best decade, according to data

One simple variable, more than anything, determines when you think the nation peaked.

How do you define the good old days?

Department of Data

The plucky poll slingers at YouGov, who are consistently willing to use their elite-tier survey skills in service of measuring the unmeasurable, asked 2,000 adults which decade had the best and worst music, movies, economy and so forth, across 20 measures. But when we charted them, no consistent pattern emerged.

We did spot some peaks: When asked which decade had the most moral society, the happiest families or the closest-knit communities, White people and Republicans were about twice as likely as Black people and Democrats to point to the 1950s. The difference probably depends on whether you remember that particular decade for “Leave it to Beaver,” drive-in theaters and “12 Angry Men” — or the Red Scare, the murder of Emmett Till and massive resistance to school integration.

“This was a time when Repubs were pretty much running the show and had reason to be happy,” pioneering nostalgia researcher Morris Holbrook told us via email. “Apparently, you could argue that nostalgia is colored by political preferences. Surprise, surprise.”

And he’s right! But any political, racial or gender divides were dwarfed by what happened when we charted the data by generation. Age, more than anything, determines when you think America peaked.

So, we looked at the data another way, measuring the gap between each person’s birth year and their ideal decade. The consistency of the resulting pattern delighted us: It shows that Americans feel nostalgia not for a specific era, but for a specific age.
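
In code, that reframing is a one-line transformation. This hypothetical sketch computes each respondent's age at the midpoint of their chosen decade, using made-up birth years and picks rather than YouGov's actual data:

```python
import pandas as pd

# Hypothetical rows: each respondent's birth year and the decade they picked
# as having, say, the best music.
df = pd.DataFrame({
    "birth_year": [1950, 1962, 1975, 1988, 1996],
    "best_music_decade": [1960, 1980, 1990, 2000, 2010],
})

# Age at the midpoint of the chosen decade, rather than a fixed "golden era".
df["age_at_ideal_decade"] = (df["best_music_decade"] + 5) - df["birth_year"]
print(df["age_at_ideal_decade"].describe())  # in this toy data, clusters in the teens
```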

The good old days when America was “great” aren’t the 1950s. They’re whatever decade you were 11, your parents knew the correct answer to any question, and you’d never heard of war crimes tribunals, microplastics or improvised explosive devices. Or when you were 15 and athletes and musicians still played hard and hadn’t sold out.

Not every flavor of nostalgia peaks as sharply as music does. But by distilling them to the most popular age for each question, we can chart a simple life cycle of nostalgia.

The closest-knit communities were those in our childhood, ages 4 to 7. The happiest families, most moral society and most reliable news reporting came in our early formative years — ages 8 through 11. The best economy, as well as the best radio, television and movies, happened in our early teens — ages 12 through 15.

Slightly spendier activities such as fashion, music and sporting events peaked in our late teens — ages 16 through 19 — matching research from the University of South Australia’s Ehrenberg-Bass Institute, which shows music nostalgia centers on age 17.

YouGov didn’t just ask about the best music and the best economy. The pollsters also asked about the worst music and the worst economy. But almost without exception, if you ask an American when times were worst, the most common response will be “right now!”

This holds true even when “now” is clearly not the right answer. For example, when we ask which decade had the worst economy, the most common answer is today. The Great Depression — when, for much of a decade, unemployment exceeded what we saw in the worst month of pandemic shutdowns — comes in a grudging second.

To be sure, other forces seem to be at work. Democrats actually thought the current economy wasn’t as bad as the Great Depression. Republicans disagreed. In fact, measure after measure, Republicans were more negative about the current decade than any other group — even low-income folks in objectively difficult situations.

So, we called the brilliant Joanne Hsu, director of the University of Michigan’s Surveys of Consumers who regularly wrestles with partisan bias in polling.

Hsu said that yes, she sees a huge partisan split in the economy, and yes, Republicans are far more negative than Democrats. But it hasn’t always been that way.

“People whose party is in the White House always have more favorable sentiment than people who don’t,” she told us. “And this has widened over time.”

In a recent analysis, Hsu — who previously worked on some of our favorite surveys at the Federal Reserve — found that while partisanship drove wider gaps in economic expectations than did income, age or education even in the George W. Bush and Barack Obama years, they more than doubled under Donald Trump as Republicans’ optimism soared and Democrats’ hopes fell.

Our attitudes reversed almost the instant President Biden took office, but the gap remains nearly as wide. That is to say, if we’d asked the same questions about the worst decades during the Trump administration, Hsu’s work suggests the partisan gap could have shriveled or even flipped eyeglasses over teakettle.

To understand the swings, Hsu and her friends spent the first part of 2024 asking 2,400 Americans where they get their information about the economy. In a new analysis, she found Republicans who listen to partisan outlets are more likely to be negative, and Democrats who listen to their own version of such news are more positive — and that Republicans are a bit more likely to follow partisan news.

But while Fox and friends drive some negativity, only a fifth of Republicans get their economic news from partisan outlets. And Democrats and independents give a thumbs down to the current decade, too, albeit at much lower rates.

There’s clearly something more fundamental at work. As YouGov’s Carl Bialik points out, when Americans were asked last year which decade they’d most want to live in, the most common answer was now. At some level then, it seems unlikely that we truly believe this decade stinks by almost every measure.

A deeper explanation didn’t land in our laps until halfway through a Zoom call with four well-caffeinated Australian marketing and consumer-behavior researchers: the Ehrenberg-Bass folks behind the music study we cited above. (Their antipodean academic institute has attracted massive sponsorships by replacing typical corporate marketing fluffery with actual evidence.)

Their analysis began when Callum Davies needed to better understand the demographics of American music tastes to interpret streaming data for his impending dissertation. Since they were already asking folks about music, Davies and his colleagues decided they might as well seize the opportunity to update landmark research from Holbrook and Robert Schindler about music nostalgia.

Building on the American scholars’ methods, they asked respondents to listen to a few seconds each of 34 songs, including Justin Timberlake’s “Sexy Back” and Johnny Preston’s “Running Bear.” Then respondents were asked to rate each song on a zero-to-10 scale. (In the latter case, we can’t imagine the high end of the scale got much use, especially if the excerpt included that song’s faux-tribal “hooga-hooga” chant and/or its climactic teen drownings.)

Together, the songs represented top-10 selections from every even-numbered year from 1950 (Bing and Gary Crosby’s “Play a Simple Melody”) to 2016 (Rihanna’s “Work”), allowing researchers to gather our preferences for music released throughout our lives.

Like us, they found that you’ll forever prefer the music of your late teens. But their results show one big difference: There’s no sudden surge of negative ratings for the most recent music.

Marketing researcher Bill Page said that by broadly asking when music, sports or crime were worst, instead of getting ratings for specific years or items, YouGov got answers to a question they didn’t ask.

“When you ask about ‘worst,’ you’re not asking for an actual opinion,” Page said. “You’re asking, ‘Are you predisposed to think things get worse?’”

“There’s plenty of times surveys unintentionally don’t measure what they claim to,” his colleague Zac Anesbury added.

YouGov actually measured what academics call “declinism,” his bigwig colleague Carl Driesener explained. He looked a tiny bit offended when we asked if that was a real term or slang they’d coined on the spot. But in our defense, only a few minutes had passed since they had claimed “cozzie livs” was Australian for “the cost of living crisis.”

Declinists believe the world keeps getting worse. It’s often the natural result of rosy retrospection, or the idea that everything — with the possible exception of “Running Bear” — looks better in memory than it did at the time. This may happen in part because remembering the good bits of the past can help us through difficult times, Page said.

It’s a well-established phenomenon in psychology, articulated by Leigh Thompson, Terence Mitchell and their collaborators in a set of analyses. They found that when asked to rate a trip mid-vacation, we often sound disappointed. But after we get home — when the lost luggage has been found and the biting-fly welts have stopped itching — we’re as positive about the trip as we were in the early planning stage. Sometimes even more so.

So saying the 2020s are the worst decade ever is akin to sobbing about “the worst goldang trip ever” at 3 a.m. in a sketchy flophouse full of Russian-speaking truckers after you’ve run out of cash and spent three days racing around Urumqi looking for the one bank in Western China that takes international cards.

A few decades from now, our memories shaped by grainy photos of auroras and astrolabes, we’ll recall only the bread straight from streetside tandoor-style ovens and the locals who went out of their way to bail out a couple of distraught foreigners.

In other words, the 2020s will be the good old days.

Greetings! The Department of Data curates queries. What are you curious about: How many islands have been completely de-ratted? Where is America’s disc-golf heartland? Who goes to summer camp? Just ask!

If your question inspires a column, we’ll send you an official Department of Data button and ID card. This week’s buttons go to YouGov’s Taylor Orth, who correctly deduced we’d be fascinated by decade-related polls, and Stephanie Killian in Kennesaw, Ga., who also got a button for our music column, with her questions about how many people cling to the music of their youth.


A critical look at online survey or questionnaire-based research studies during COVID-19

In view of restrictions imposed to control the COVID-19 pandemic, there has been a surge in online survey-based studies because of their ability to collect data with greater ease and speed than traditional methods. However, there are important concerns about the validity and generalizability of findings obtained using online survey methodology. Further, there are data privacy concerns and ethical issues unique to these studies owing to the electronic and online nature of survey data. Here, we describe some of the important issues associated with the poor scientific quality of online survey findings and provide suggestions for addressing them in future studies.

1. Introduction

Online survey or questionnaire-based studies collect information from participants who respond to a study link using internet-based communication technology (e.g., email or an online survey platform). There has been growing interest among researchers in internet-based data collection methods during the COVID-19 pandemic, reflected in the rising number of studies employing online surveys since the pandemic began (Akintunde et al., 2021). This could be due to the relative ease of online data collection over traditional face-to-face interviews while complying with the travel restrictions and distancing guidelines imposed to control the spread of COVID-19. Further, it offers a more cost-effective and faster means of data collection (with no interviewer requirement and automatic data entry) than other means of remote data collection (e.g., telephonic interviews) (Hlatshwako et al., 2021), both of which are important for getting rapid results to guide the development and implementation of public-health interventions for preventing and/or mitigating the harms related to the COVID-19 pandemic (e.g., mental health effects of COVID-19, misconceptions related to the spread of COVID-19, factors affecting vaccine hesitancy). However, several concerns have been raised about the validity and generalizability of findings obtained from online survey studies (Andrade, 2020; Sagar et al., 2020). Here, we describe some of the important issues affecting the scientific quality of online survey findings and provide suggestions to address them in future studies. The data privacy concerns and ethical issues unique to these studies, arising from the electronic and online nature of survey data, are also briefly discussed.

2. Limited generalizability of online survey sample to the target general population

The findings obtained from online surveys need to be generalizable to the target population in the real world. For this, the online survey population needs to be clearly defined and should be as representative of the target population as possible. This would be possible if there were a reliable sampling frame for online surveys and participants could be selected using a randomized or probability sampling method. However, online surveys are often conducted via email or an online survey platform, with the survey link shared on social media platforms, on websites, or through a directory of email addresses accessed by the researchers. Participants might also be asked to share the survey link further with their eligible contacts. As a result, the population from which the study sample is drawn is often not clearly defined, and information about response rates (i.e., of the total number of people who viewed the survey link, how many actually responded) is seldom available to the researcher. This makes generalization of study findings unreliable.

This problem may be addressed by sending the survey link individually to all the people comprising the study population via email and/or telephone message (e.g., all the members of a professional society through its membership directory, or all the residents of a housing society through official records), with a request not to share the survey link with anyone else. Alternatively, the required number of people could be randomly selected from the entire list of potential subjects and approached by telephone for consent. Basic socio-demographic details could be obtained from those who refuse to participate, and the survey link shared with those who agree. However, if response rates are low, or the socio-demographic details of non-responders differ significantly from those of responders, then the online survey sample is unlikely to be representative of the target study population. Further, this is a more resource-intensive strategy and might not always be feasible (as it requires contact details for the entire study population before data collection begins). In certain situations, when the area of research is relatively new and/or needs urgent exploration for hypothesis generation or to guide an immediate response, the online survey study should describe all attempts made to achieve a representative sample and clearly acknowledge this as a limitation when discussing the findings (Zhou et al., 2021).

A more recent, innovative solution to this problem involves a partnership between academic institutions (the University of Maryland and Carnegie Mellon University) and Facebook for conducting online COVID-19 related research (Barkay et al., 2020). The COVID-19 Symptom Survey (CSS), conducted in more than 200 countries since April 2020 using this approach, involves an exchange of information between the researchers and Facebook without compromising the privacy of data collected from survey participants. The survey link is shared on Facebook, and users voluntarily choose to participate in the study. Facebook’s active user base is leveraged to provide a reliable sampling frame for the CSS. The researchers select random ID numbers for the users who completed the survey and calculate survey weights for each of them on a given day. Survey weights adjust for both non-response errors (making the sample more representative of Facebook users) and coverage errors (allowing findings obtained from the Facebook active user base to be generalized to the general population) (Barkay et al., 2020). A respondent belonging to a demographic group with a high likelihood of responding to the survey might get a weight of 10, whereas another respondent belonging to a demographic group less likely to respond might get a weight of 50. The weights also account for the proportion or density of Facebook or internet users in a given geographical area. Thus, findings obtained using this approach can be used to draw inferences about the target general population. The survey weights to be used for weighted analysis of global CSS findings for different geographical regions are available to researchers upon request from either of the two above-mentioned academic institutions. For example, spatio-temporal trends in COVID-19 vaccine hesitancy across different states of India were estimated by a group of Indian researchers using this approach (Chowdhury et al., 2021).
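The logic behind such weights is inverse-probability weighting: each respondent counts in proportion to how unlikely members of their demographic group were to respond. The following is a minimal sketch of that idea in Python; the groups, response probabilities, and outcomes are hypothetical illustrations, not the actual CSS weighting pipeline (which additionally adjusts for coverage).

```python
# Minimal sketch of inverse-probability survey weighting.
# Groups, response probabilities, and outcomes are hypothetical,
# for illustration only -- this is not the actual CSS pipeline.

respondents = [
    {"group": "young_urban", "vaccinated": 1},
    {"group": "young_urban", "vaccinated": 0},
    {"group": "older_rural", "vaccinated": 1},
]

# Assumed probability that a member of each demographic group responds.
response_rate = {"young_urban": 0.10, "older_rural": 0.02}

# Weight = 1 / response probability, so an under-represented group counts
# more (cf. the weight-10 vs. weight-50 example in the text).
for r in respondents:
    r["weight"] = 1.0 / response_rate[r["group"]]

total = sum(r["weight"] for r in respondents)
vaccinated = sum(r["weight"] * r["vaccinated"] for r in respondents)
print(f"Weighted vaccination estimate: {vaccinated / total:.2f}")
```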

3. Survey fraud and participant disinterest

Survey fraud occurs when a person takes the online survey more than once, with or without malicious intent (e.g., to earn repeated monetary compensation, or to help the researchers collect the requisite number of responses). A related problem arises when a participant responds to some or all of the survey questions casually, without actually attempting to read and/or understand them, for reasons such as disinterest or survey fatigue. This affects the representativeness and validity of online survey findings and is increasingly recognized as an important challenge for researchers (Chandler et al., 2020). While providing monetary incentives improves low response rates, it also increases the risk of survey fraud. Similarly, a shorter survey with a few simple questions decreases the chances of survey fatigue, but limits the researchers’ ability to obtain meaningful information about relatively complex issues. A researcher can take different approaches to address these concerns. Simpler ones include requesting people not to participate more than once, offering different kinds of incentives (e.g., a donation to a charity instead of payment to the participant), or manually checking survey responses for inconsistent answers (e.g., age and date of birth that do not match) or implausible response patterns (e.g., average daily smartphone use of greater than 24 h, or “all or none” response patterns). More complex approaches involve computer software or online survey platform features that block multiple entries by the same person via IP address and/or internet cookie checks, or that analyze response time, latency, or the total time taken to complete the survey in order to detect fraudulent responses. Several ways to detect fraudulent or inattentive survey responses have been described in the literature, along with the merits and demerits of each (Teitcher et al., 2015). However, no single method is completely foolproof, and a combination of methods is recommended to ensure adequate data quality in online surveys.
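As a concrete illustration of the simpler screening checks described above (consistency checks, implausible values, duplicate entries, and completion-time analysis), here is a minimal sketch in Python. The column names, thresholds, and data are hypothetical; a real study would tune such rules and combine them with platform-level safeguards.

```python
# Minimal sketch of screening survey responses for fraud or inattention.
# Column names, thresholds, and data are hypothetical illustrations of
# the checks described above, not a validated screening protocol.
import pandas as pd

df = pd.DataFrame({
    "ip":            ["1.2.3.4", "1.2.3.4", "5.6.7.8"],
    "age":           [25, 25, 40],
    "birth_year":    [1999, 1985, 1984],   # row 2 conflicts with age -> flag
    "phone_hours":   [3.0, 30.0, 5.0],     # >24 h/day is implausible
    "seconds_taken": [400, 35, 520],       # too fast to have read the items
})

survey_year = 2024
df["dup_ip"]       = df.duplicated("ip", keep=False)          # possible repeat entry
df["age_mismatch"] = (survey_year - df["birth_year"] - df["age"]).abs() > 1
df["implausible"]  = df["phone_hours"] > 24
df["too_fast"]     = df["seconds_taken"] < 60                 # assumed cutoff

df["suspect"] = df[["dup_ip", "age_mismatch", "implausible", "too_fast"]].any(axis=1)
print(df[["ip", "suspect"]])
```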

4. Possible bias introduced in results by the online survey administration mode

One of the reasons contributing to the surge in online survey studies assessing mental health during the COVID-19 pandemic is the general perception that psychiatric research can easily be accomplished through scales or questionnaires administered online, especially since it relies little, if at all, on physical examination and other investigation findings. However, the reliability and validity of the scales or instruments used in online surveys have traditionally been established in studies administering them in face-to-face settings (often in pen/pencil-and-paper format) rather than online. Different survey administration modes can introduce variation in the results, often described as the measurement effect (Jäckle et al., 2010). This could be due to differences in participants’ level of engagement, understanding of the questions, or social desirability bias across administration methods. A few studies using the same study sample or sampling frame have compared results across administration modes (i.e., traditional face-to-face [paper format] vs. online survey), with mixed findings ranging from large significant differences to small or no significant differences (Determann et al., 2017; Norman et al., 2010; Saloniki et al., 2019). This suggests the need for further studies before arriving at a final conclusion. Hence, we need to be careful when interpreting the results of online survey studies. Ideally, online survey findings should be compared with those obtained using traditional administration modes, and validation studies should be conducted to establish the psychometric properties of these scales for the online mode.
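For illustration, the core of a mode-effect check of the kind run in the comparison studies cited above can be as simple as comparing score distributions from the two modes. The sketch below applies a Mann-Whitney U test to hypothetical instrument scores; a real study would also match or adjust for sample composition.

```python
# Minimal sketch of testing for a survey-mode effect on the same instrument.
# Scores are hypothetical; a real mode-effect study would also control for
# differences in sample composition between modes.
from scipy.stats import mannwhitneyu

online_scores       = [12, 15, 14, 18, 11, 16, 13, 17]   # online administration
face_to_face_scores = [14, 17, 16, 19, 15, 18, 16, 20]   # paper, face-to-face

stat, p = mannwhitneyu(online_scores, face_to_face_scores, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```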

5. Inadequately described online survey methodology

A recent systematic review assessing the quality of 80 published online survey studies on the mental health impact of the COVID-19 pandemic reported that a large majority did not adhere to the CHERRIES (Checklist for Reporting Results of Internet E-Surveys) guideline aimed at improving the quality of online surveys (Eysenbach, 2004; Sharma et al., 2021). Information on parameters such as the view rate (ratio of unique survey visitors to unique site visitors), participation rate (ratio of unique visitors who agreed to participate to unique first survey page visitors), and completion rate (ratio of users who finished the survey to users who agreed to participate), which indicates the representativeness of the online study sample as described previously, was not reported in about two-thirds of the studies. Similarly, information about steps taken to prevent multiple entries by the same participant, or about analysis of atypical timestamps to check for fraudulent and inattentive responses, was provided by fewer than 5% of studies. Thus, it is imperative to popularize and emphasize the use of these reporting guidelines for online survey studies to improve the scientific value of findings obtained from internet-based research.
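Since the three CHERRIES rates are simple ratios of counts that any survey platform logs, computing and reporting them costs almost nothing. A minimal sketch follows; the counts are hypothetical, and unique survey visitors are treated as equivalent to first-survey-page visitors for simplicity.

```python
# Minimal sketch computing the three CHERRIES rates defined above.
# Counts are hypothetical; in practice they come from survey-platform logs.

def cherries_rates(site_visitors, survey_visitors, agreed, completed):
    """Return (view rate, participation rate, completion rate)."""
    view_rate          = survey_visitors / site_visitors  # survey / site visitors
    participation_rate = agreed / survey_visitors         # agreed / first-page visitors
    completion_rate    = completed / agreed               # finished / agreed
    return view_rate, participation_rate, completion_rate

view, part, comp = cherries_rates(site_visitors=10_000, survey_visitors=1_200,
                                  agreed=900, completed=720)
print(f"View {view:.1%}, participation {part:.1%}, completion {comp:.1%}")
```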

6. Data privacy and ethics of online survey studies

Lastly, most online survey studies either do not mention at all, or mention only in passing, how the anonymity and confidentiality of the information obtained are maintained. Details about the specific steps or precautions taken by researchers to ensure data safety and privacy are seldom reported (e.g., de-identification of data, encryption or password-protected data storage, use of a HIPAA-compliant online survey form/platform). The details and limitations of the safety steps taken, and the possibility of a data leak, should be clearly communicated to participants at the time of taking informed consent (rather than simply stating that the anonymity and confidentiality of the information obtained will be ensured, as is done in offline studies). Moreover, obtaining ethical approval prior to conducting online survey studies is a must. The various ethical concerns unique to online survey methodology (e.g., issues with data protection, the informed consent process, survey fraud, and online survey administration) should be adequately described in the protocol and deliberated upon by the review boards (Buchanan and Hvizdak, 2009; Gupta, 2017).

In conclusion, there is an urgent need to consider the issues described above when planning and conducting online surveys, and when reviewing the findings obtained from such studies, in order to improve the overall quality and utility of internet-based research during the COVID-19 era and beyond.

Financial disclosure

The authors did not receive any funding for this work.

Conflict of interest

The authors have no conflict of interest to declare.

  • Akintunde T.Y., Musa T.H., Musa H.H., Musa I.H., Chen S., Ibrahim E., Tassang A.E., Helmy M. Bibliometric analysis of global scientific literature on effects of COVID-19 pandemic on mental health. Asian J. Psychiatry. 2021;63. doi: 10.1016/j.ajp.2021.102753.
  • Andrade C. The limitations of online surveys. Indian J. Psychol. Med. 2020;42(6):575–576. doi: 10.1177/0253717620957496.
  • Barkay N., Cobb C., Eilat R., Galili T., Haimovich D., LaRocca S., …, Sarig T. Weights and methodology brief for the COVID-19 symptom survey by University of Maryland and Carnegie Mellon University, in partnership with Facebook. 2020. arXiv preprint arXiv:2009.14675.
  • Buchanan E.A., Hvizdak E.E. Online survey tools: ethical and methodological concerns of human research ethics committees. J. Empir. Res. Hum. Res. Ethics. 2009;4(2):37–48. doi: 10.1525/jer.2009.4.2.37.
  • Chandler J., Sisso I., Shapiro D. Participant carelessness and fraud: consequences for clinical research and potential solutions. J. Abnorm. Psychol. 2020;129(1):49–55. doi: 10.1037/abn0000479.
  • Chowdhury S.R., Motheram A., Pramanik S. Covid-19 vaccine hesitancy: trends across states, over time. Ideas For India, 14 April 2021. Available from: https://www.ideasforindia.in/topics/governance/covid-19-vaccine-hesitancy-trends-across-states-over-time.html (Accessed 4 August 2021).
  • Determann D., Lambooij M.S., Steyerberg E.W., de Bekker-Grob E.W., de Wit G.A. Impact of survey administration mode on the results of a health-related discrete choice experiment: online and paper comparison. Value Health. 2017;20(7):953–960. doi: 10.1016/j.jval.2017.02.007.
  • Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J. Med. Internet Res. 2004;6(3):e34. doi: 10.2196/jmir.6.3.e34.
  • Gupta S. Ethical issues in designing internet-based research: recommendations for good practice. J. Res. Pract. 2017;13(2): Article D1.
  • Hlatshwako T.G., Shah S.J., Kosana P., Adebayo E., Hendriks J., Larsson E.C., Hensel D.J., Erausquin J.T., Marks M., Michielsen K., Saltis H., Francis J.M., Wouters E., Tucker J.D. Online health survey research during COVID-19. Lancet Digit. Health. 2021;3(2):e76–e77. doi: 10.1016/S2589-7500(21)00002-9.
  • Jäckle A., Roberts C., Lynn P. Assessing the effect of data collection mode on measurement. Int. Stat. Rev. 2010;78(1):3–20. doi: 10.1111/j.1751-5823.2010.00102.x.
  • Norman R., King M.T., Clarke D., Viney R., Cronin P., Street D. Does mode of administration matter? Comparison of online and face-to-face administration of a time trade-off task. Qual. Life Res. 2010;19(4):499–508. doi: 10.1007/s11136-010-9609-5.
  • Sagar R., Chawla N., Sen M.S. Is it correct to estimate mental disorder through online surveys during COVID-19 pandemic? Psychiatry Res. 2020;291. doi: 10.1016/j.psychres.2020.113251.
  • Saloniki E.C., Malley J., Burge P., Lu H., Batchelder L., Linnosmaa I., Trukeschitz B., Forder J. Comparing internet and face-to-face surveys as methods for eliciting preferences for social care-related quality of life: evidence from England using the ASCOT service user measure. Qual. Life Res. 2019;28(8):2207–2220. doi: 10.1007/s11136-019-02172-2.
  • Sharma R., Tikka S.K., Bhute A.R., Bastia B.K. Adherence of online surveys on mental health during the early part of the COVID-19 outbreak to standard reporting guidelines: a systematic review. Asian J. Psychiatry. 2021;65. doi: 10.1016/j.ajp.2021.102799.
  • Teitcher J.E., Bockting W.O., Bauermeister J.A., Hoefer C.J., Miner M.H., Klitzman R.L. Detecting, preventing, and responding to “fraudsters” in internet research: ethics and tradeoffs. J. Law Med. Ethics. 2015;43(1):116–133. doi: 10.1111/jlme.12200.
  • Zhou T., Chen W., Liu X., Wu T., Wen L., Yang X., Hou Z., Chen B., Zhang T., Zhang C., Xie C., Zhou X., Wang L., Hua J., Tang Q., Zhao M., Hong X., Liu W., Du C., Li Y., Yu X. Children of parents with mental illness in the COVID-19 pandemic: a cross-sectional survey in China. Asian J. Psychiatry. 2021;64. doi: 10.1016/j.ajp.2021.102801.
