• Open access
  • Published: 11 January 2010

Questionnaires in clinical trials: guidelines for optimal design and administration

  • Phil Edwards

Trials volume 11, Article number: 2 (2010)


A good questionnaire design for a clinical trial will minimise bias and maximise precision in the estimates of treatment effect within budget. Attempts to collect more data than will be analysed may risk reducing recruitment (reducing power) and increasing losses to follow-up (possibly introducing bias). The mode of administration can also impact on the cost, quality and completeness of data collected. There is good evidence for design features that improve data completeness but further research is required to evaluate strategies in clinical trials. Theory-based guidelines for style, appearance, and layout of self-administered questionnaires have been proposed but require evaluation.


Introduction

With fixed trial resources there will usually be a trade-off between the number of participants that can be recruited into a trial and the quality and quantity of information that can be collected from each participant [ 1 ]. Although half a century ago there was little empirical evidence for optimal questionnaire design, Bradford Hill suggested that for every question asked of a study participant the investigator should be required to answer three himself, perhaps to encourage the investigator to keep the number of questions to a minimum [ 2 ].

To assess the empirical evidence for how questionnaire length and other design features might influence data completeness in a clinical trial, a systematic review of randomised controlled trials (RCTs) was conducted, and has recently been updated [ 3 ]. The strategies found to be effective in increasing response to postal and electronic questionnaires are summarised in the section on increasing data completeness below.

Clinical trial investigators have also relied on principles of questionnaire design that do not have an established empirical basis, but which are nonetheless considered to present 'good practice', based on expert opinion. The section on questionnaire development below includes some of that advice and presents general guidelines for questionnaire development which may help investigators who are about to design a questionnaire for a clinical trial.

As this paper concerns the collection of outcome data by questionnaire from trial participants (patients, carers, relatives or healthcare professionals) it begins by introducing the regulatory guidelines for data collection in clinical trials. It does not address the parallel (and equally important) needs of data management, cleaning, validation or processing required in the creation of the final clinical database.

Regulatory guidelines

The International Conference on Harmonisation (ICH) of technical requirements for registration of pharmaceuticals for human use states:

'The collection of data and transfer of data from the investigator to the sponsor can take place through a variety of media, including paper case record forms, remote site monitoring systems, medical computer systems and electronic transfer. Whatever data capture instrument is used, the form and content of the information collected should be in full accordance with the protocol and should be established in advance of the conduct of the clinical trial. It should focus on the data necessary to implement the planned analysis, including the context information (such as timing assessments relative to dosing) necessary to confirm protocol compliance or identify important protocol deviations. 'Missing values' should be distinguishable from the 'value zero' or 'characteristic absent'...' [ 4 ].

This suggests that the choice of variables that are to be measured by the questionnaire (or case report form) is constrained by the trial protocol, but that the mode of data collection is not. The trial protocol is unlikely, however, to list all of the variables that may be required to evaluate the safety of the experimental treatment. The choice of variables to assess safety will depend on the possible consequences of treatment, on current knowledge of possible adverse effects of related treatments, and on the duration of the trial [ 5 ]. In drug trials there may be many possible reactions due to the pharmacodynamic properties of the drug. The Council for International Organisations of Medical Sciences (CIOMS) advises that:

'Safety data that cannot be categorized and succinctly collected in predefined data fields should be recorded in the comment section of the case report form when deemed important in the clinical judgement of the investigator' [ 5 ].

Safety data can therefore initially be captured on a questionnaire as text responses to open-ended questions that will subsequently be coded using a common adverse event dictionary, such as the Medical Dictionary for Regulatory Activities (MedDRA). The coding of text responses should be performed by personnel who are blinded to treatment allocation. Both ICH and CIOMS warn against collecting data that will not be analysed, which potentially wastes time and resources, reduces the rate of recruitment, and increases losses to follow-up.

Before questionnaire design begins, the trial protocol should be available at least in draft. This will state which outcomes are to be measured and which parameters are of interest (for example, percentage, mean, and so on). Preferably, a statistical analysis plan will also be available that makes explicit how each variable will be analysed, including how precisely each is to be measured and how each variable will be categorised in analysis. If these requirements are known in advance, the questionnaire can be designed in such a way that will reduce the need for data to be coded once questionnaires have been completed and returned.

Questionnaire development

If a questionnaire has previously been used in similar trials to the one planned, its use will bring the added advantage that the results will be comparable and may be combined in a meta-analysis. However, if the mode of administration of the questionnaire will change (for example, questions developed for administration by personal interview are to be included in a self-administered questionnaire), the questionnaire should be piloted before it is used (see section on piloting below). To encourage the consistent reporting of serious adverse events across trials, the CIOMS Working Group has prepared an example of the format and content of a possible questionnaire [ 5 ].

If a new questionnaire is to be developed, testing will establish that it measures what is intended to be measured, and that it does so reliably. The validity of a questionnaire may be assessed in a validation study that quantifies the agreement (or correlation) between the outcome measured using the questionnaire and that measured using the 'gold standard'. However, this will not be possible if there is no recognised gold standard measurement for the outcome. The reliability of a questionnaire may be assessed by quantifying the strength of agreement between the outcomes measured using the questionnaire on the same patients at different times. The methods for conducting studies of validity and reliability are covered in depth elsewhere [ 6 ]. If new questions are to be developed, the reading ease of the questions can be assessed using the Flesch reading ease score. This score assesses the number of words in sentences, and the number of syllables in words. Higher Flesch reading ease scores indicate material that is easier to read [ 7 ].
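As a rough illustration, the Flesch formula combines average sentence length (words per sentence) and average word length (syllables per word). The sketch below uses a crude vowel-group heuristic to count syllables, not the published counting rules, so scores are approximate:

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups; real tools use
    # pronunciation dictionaries or the published counting rules.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # drop a typical silent final 'e'
    return max(n, 1)

def flesch_reading_ease(text):
    # Flesch (1948): 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

# A short, plain question scores higher (easier to read) than a wordy one:
print(flesch_reading_ease("How old are you?"))
print(flesch_reading_ease("Please indicate the approximate duration of your symptomatology."))
```

Such a check can be applied to draft questions before piloting, flagging any that fall well below the score typical of plain English.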

Types of questions

Open-ended questions offer participants a space into which they can answer by writing text. These can be used when there are a large number of possible answers and it is important to capture all of the detail in the information provided. If answers are not factual, open-ended questions might increase the burden on participants. The text responses will subsequently need to be reviewed by the investigator, who will (whilst remaining blind to treatment allocation) assign one or more codes that categorise the response (for example, applying an adverse event dictionary) before analysis. Participants will need sufficient space so that full and accurate information can be provided.

Closed-ended questions contain either mutually exclusive response options only, or must include a clear instruction that participants may select more than one response option (for example, 'tick all that apply'). There is some evidence that answers to closed questions are influenced by the values chosen by investigators for each response category offered and that respondents may avoid extreme categories [ 8 ]. Closed-ended questions where participants are asked to 'tick all that apply' can alternatively be presented as separate questions, each with a 'yes' or 'no' response option (this design may be suitable if the analysis planned will treat each response category as a binary variable).

Asking participants subsidiary questions (that is, 'branching off') depending on their answers to core questions will provide further detail about outcomes, but will increase questionnaire length and could make a questionnaire harder to follow. Similarly 'matrix' style questions (that is, multiple questions with common response option categories) might seem complicated to some participants, adding to the data collection burden [ 9 ].

Style, appearance and layout

The way that a self-administered questionnaire looks is considered to be as important as the questions that are asked [ 9 , 10 ]. There is good evidence that in addition to the words that appear on the page (verbal language) the questionnaire communicates meaning and instructions to participants via symbols and graphical features (non-verbal language). The evidence from several RCTs of alternative question response styles and layouts suggests that participants view the middle (central) response option as the one that represents the midpoint of an outcome scale. Participants then expect response options to appear in an order of increasing or decreasing progression, beginning with the leftmost or uppermost category; and they expect response options that are closer to each other to also have values that are 'conceptually closer'. The order, spacing and grouping of response options are therefore important design features, as they will affect the quality of data provided on the questionnaire, and the time taken by participants to provide it [ 10 ].

Some attempts have been made to develop theory-based guidelines for self-administered questionnaire design [ 11 ]. Based on a review of psychological and sociological theories about graphic language, cognition, visual perception and motivation, five principles have been derived:

'Use the visual elements of brightness, colour, shape, and location in a consistent manner to define the desired navigational path for respondents to follow when answering the questionnaire;

When established format conventions are changed in the midst of a questionnaire use prominent visual guides to redirect respondents;

Place directions [instructions] where they are to be used and where they can be seen;

Present information in a manner that does not require respondents to connect information from separate locations in order to comprehend it;

Ask people to answer only one question at a time' [ 11 ].

Adherence to these principles may help to ensure that when participants complete a questionnaire they understand what is being asked, how to give their response, and which question to answer next. This will help participants to give all the information being sought and reduce the chances that they become confused or frustrated when completing the questionnaire. These principles require evaluation in RCTs.

Font size and colour may further affect the legibility of a questionnaire, which may also impact on data quality and completeness. Questionnaires for trials that enrol older participants may therefore require the use of a larger font (for example, 11 or 12 point minimum) than those for trials including younger participants. The legibility and comprehension of the questionnaire can be assessed during the pilot phase (see section on piloting below).

Perhaps most difficult to define are the factors that make a questionnaire more aesthetically pleasing to participants, and that may potentially increase compliance. The use of space, graphics, underlining, bold type, colour and shading, and other qualities of design may affect how participants react and engage with a questionnaire. Edward Tufte's advice for achieving graphical excellence [ 12 ] might be adapted to consider how to achieve excellence in questionnaire design, viz.: ask the participant the simplest, clearest questions in the shortest time using the fewest words on the fewest pages; above all else ask only what you need to know.

Further research is therefore needed (as will be seen in the section on increasing data completeness) into the types of question and the aspects of style, appearance and layout of questionnaires that are effective in increasing data quality and completeness.

Mode of administration

Self-administered questionnaires are usually cheaper to use as they require no investigator input other than that for their distribution. Mailed questionnaires require correct addresses to be available for each participant, and resources to cover the costs of delivery. Electronically distributed questionnaires require correct email addresses as well as access to computers and the internet. Mailed and electronically distributed questionnaires have the advantage that they give participants time to think about their responses to questions, but they may require assistance to be available for participants (for example, a telephone helpline).

As self-administered questionnaires have least investigator involvement they are less susceptible to information bias (for example, social desirability bias) and interviewer effects, but are more susceptible to item non-response [ 8 ]. Evidence from a systematic review of 57 studies comparing self-reported versus clinically verified compliance with treatment suggests that questionnaires and diaries may be more reliable than interviews [ 13 ].

In-person administration allows a rapport with participants to be developed, for example through eye contact, active listening and body language. It also allows interviewers to clarify questions and to check answers. Telephone administration may still provide the aural dimension (active listening) of an in-person interview. A possible disadvantage of telephone interviews is that participants may become distracted by other things going on around them, or decide to end the call [ 9 ].

A mixture of modes of administration may also be considered: for example, participant follow-up might commence with postal or email administration of the questionnaire, with subsequent telephone calls to non-respondents. The offer of an in-person interview may also be necessary, particularly if translation to a second language is required, or if participants are not sufficiently literate. Such approaches may risk introducing selection bias if participants in one treatment group are more or less likely than the other group to respond to one mode of administration used (for example, telephone follow-up in patients randomised to a new type of hearing aid) [ 14 ].

An advantage of electronic and web-based questionnaires is that they can be designed automatically to screen and filter participant responses. Movement from one question to the next can then appear seamless, reducing the data collection burden on participants who are only asked questions relevant to previous answers. Embedded algorithms can also check the internal consistency of participant responses so that data are internally valid when submitted, reducing the need for data queries to be resolved later. However, collection of data from participants using electronic means may discriminate against participants without access to a computer or the internet. Choice of mode of administration must therefore take into account its acceptability to participants and any potential for exclusion of eligible participants that may result.
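As an illustration of such embedded consistency checks, a validation routine might flag internally inconsistent responses before a submission is accepted. Every field name and rule in the sketch below is hypothetical, not taken from any real trial system:

```python
def validate(resp):
    # Hypothetical consistency checks for an electronic follow-up form;
    # the field names and rules are illustrative only.
    errors = []
    # A participant who reports being a non-smoker should not also
    # report a positive daily cigarette count:
    if resp.get("smoker") == "no" and resp.get("cigarettes_per_day", 0) > 0:
        errors.append("Non-smoker reports cigarettes per day > 0")
    # Discharge cannot precede admission (ISO dates compare lexically):
    if resp.get("admission_date") and resp.get("discharge_date"):
        if resp["discharge_date"] < resp["admission_date"]:
            errors.append("Discharge date precedes admission date")
    return errors

# An inconsistent response is flagged for the participant to correct:
print(validate({"smoker": "no", "cigarettes_per_day": 10}))
```

Running such checks at the point of entry resolves queries while the participant is still present, rather than weeks later by correspondence.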

Piloting

Piloting is a process whereby new questionnaires are tested, revised and tested further before they are used in the main trial. It is an iterative process that usually begins by asking other researchers who have some knowledge and experience in a similar field to comment on the first draft of the questionnaire. Once the questionnaire has been revised, it can then be piloted in a non-expert group, such as among colleagues. A further revision of the questionnaire can be piloted with individuals who are representative of the population who will complete it in the main trial. In-depth 'cognitive interviewing' might also provide insights into how participants comprehend questions, process and recall information, and decide what answers to give [ 15 ]. Here participants are read each question and are either asked to 'think aloud' as they consider what their answer will be, or are asked further 'probing' questions by the interviewer.

For international multicentre trials it will be necessary to translate the questionnaire. Although a simple forward translation into, and back-translation from, the second language might be sufficient, further piloting and cognitive interviews may be required to identify and correct any cultural differences in interpretation of the translated questionnaire. Translation into other languages may alter the layout and formatting of words on the page from the original design, and so further redesign of the questionnaire may be required. If a questionnaire is to be developed for a clinical trial, sufficient resources are therefore required for its design, piloting and revision.

Increasing data completeness

Loss to follow-up will reduce statistical power by reducing the effective sample size. Losses may also introduce bias if the trial treatment is an effect modifier for the association between outcome and participation at follow-up [ 16 ].
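The effect of attrition on power can be illustrated with a normal-approximation sketch. All numbers below are hypothetical; a real calculation would use the trial's own design parameters:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(delta, sd, n_per_arm, z_crit=1.959964):
    # Approximate power of a two-sided two-sample z-test for a mean
    # difference `delta`, common SD `sd`, and `n_per_arm` analysable
    # participants per arm (5% significance level by default).
    se = sd * sqrt(2.0 / n_per_arm)
    return norm_cdf(abs(delta) / se - z_crit)

# 200 randomised per arm for a standardised effect of 0.3 gives ~85% power;
# a 20% loss to follow-up (160 analysable per arm) cuts this to ~77%.
print(round(power_two_sample(0.3, 1.0, 200), 2))
print(round(power_two_sample(0.3, 1.0, 160), 2))
```

The calculation shows only the loss of power; any bias introduced when losses differ between arms cannot be recovered by a larger sample.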

There may be exceptional circumstances in which participants are allowed to skip certain questions (for example, sensitive questions on sexual lifestyle) to ensure that the remainder of the questionnaire is still completed; the data that are provided may then be used to impute the values of the variables that were not. Although the impact of missing outcome data and missing covariates on study results can be reduced through the use of multiple imputation techniques, no method of analysis can be expected to overcome them completely [ 17 ].

Longer and more demanding tasks might be expected to have fewer volunteers than shorter, easier tasks. The evidence from randomised trials of questionnaire length in a range of settings seems to support the notion that when it comes to questionnaire design 'shorter is better' [ 18 ]. Recent evidence that a longer questionnaire achieved the same high response proportion as that of a shorter alternative might cast doubt on the importance of the number of questions included in a questionnaire [ 19 ]. However, under closer scrutiny the results of this study (96.09% versus 96.74%) are compatible with an average 2% reduction in odds of response for each additional page added to the shorter version [ 18 ]. The main lesson seems to be that when the baseline response proportion is very high (for example, over 95%) then few interventions are likely to have effects large enough to increase it further.
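The reconciliation of these two apparently high response proportions can be reproduced with simple odds arithmetic. The number of extra pages used below is a hypothetical figure for illustration only:

```python
def odds(p):
    # Convert a proportion into odds
    return p / (1.0 - p)

# Observed response proportions for the longer and shorter versions:
or_total = odds(0.9609) / odds(0.9674)  # ~0.83: lower odds for the longer form
# Assuming, hypothetically, that the longer version added 8 pages,
# the implied odds ratio per additional page is:
per_page = or_total ** (1 / 8)  # ~0.98, i.e. roughly 2% lower odds per page
print(round(or_total, 3), round(per_page, 3))
```

A seemingly negligible difference of 0.65 percentage points is thus entirely compatible with a small per-page penalty, because when response is near-universal the odds are very sensitive to small changes in the proportion.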

There is a trade-off between increased measurement error from using a simplified outcome scale and increased power from achieving measurement on a larger sample of participants (from fewer losses to follow-up). If a shorter version of an outcome scale provides measures of an outcome that are highly correlated with those of the longer version, then it will be more efficient for the trial to use the shorter version [ 1 ]. A moderate reduction in the length of a short questionnaire will do more to reduce losses to follow-up than the same reduction in the length of a long questionnaire [ 18 ].

In studies that seek to collect information on many outcomes, questionnaire length will necessarily be determined by the number of items required from each participant. In very compliant populations there may be little lost by using a longer questionnaire. However, using a longer questionnaire to measure more outcomes may also increase the risk of false positive findings that result from multiple testing (for example, measuring 100 outcomes may produce 5 that are significantly associated with treatment by chance alone) [ 4 , 20 ].
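The multiple-testing arithmetic behind this warning is straightforward; assuming independent tests at the conventional 5% significance level:

```python
n_outcomes, alpha = 100, 0.05

# Expected number of false-positive findings if no outcome is truly
# associated with treatment:
expected_false_pos = n_outcomes * alpha  # 5.0

# Probability of at least one false positive across all tests
# (assuming the tests are independent):
p_at_least_one = 1 - (1 - alpha) ** n_outcomes  # ~0.994

print(expected_false_pos, round(p_at_least_one, 3))
```

With 100 outcomes, some spurious 'significant' associations are almost guaranteed, which is why analyses should be restricted to the outcomes pre-specified in the protocol.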

Other strategies to increase completeness

A recently updated Cochrane systematic review presents evidence from RCTs of methods to increase response to postal and electronic questionnaires in a range of health and non-health settings [ 3 ]. The review includes 481 trials that evaluated 110 different methods for increasing response to postal questionnaires and 32 trials that evaluated 27 methods for increasing response to electronic questionnaires. The trials evaluate aspects of questionnaire design, the introductory letter, packaging and methods of delivery that might influence the tendency for participants to open the envelope (or email) and to engage with its contents. A summary of the results follows.

What participants are offered

Postal questionnaires

The evidence favours offering monetary incentives and suggests that money is more effective than other types of incentive (for example, tokens, lottery tickets, pens, and so on). The relationship between the amount of monetary incentive offered and questionnaire response is non-linear with diminishing marginal returns for each additional amount offered [ 21 ]. Unconditional incentives appear to be more effective, as are incentives offered with the first rather than a subsequent mailing. There is less evidence for the effects of offering the results of the study (when complete) or offering larger non-monetary incentives.

Electronic questionnaires

The evidence favours non-monetary incentives (for example, Amazon.com gift cards), immediate notification of lottery results, and offering study results. Less evidence exists for the effect of offering monetary rather than non-monetary incentives.

How questionnaires look

Postal questionnaires

The evidence favours using personalised materials, a handwritten address, and printing single sided rather than double sided. There is also evidence that inclusion of a participant's name in the salutation at the start of the cover letter increases response and that the addition of a handwritten signature on letters will further increase response [ 22 ]. There is less evidence for positive effects of using coloured or higher quality paper, identifying features (for example, identity number), study logos, brown envelopes, coloured ink, coloured letterhead, booklets, larger paper, larger fonts, pictures in the questionnaire, matrix style questions, or questions that require recall in order of time period.

Electronic questionnaires

The evidence favours using a personalised approach, a picture in emails, a white background for emails, a simple header, and a textual rather than a visual presentation of response categories. Response may be reduced when 'survey' is mentioned in the subject line. Less evidence exists for sending emails in text format or HTML, including a topic in email subject lines, or including a header in emails.

How questionnaires are received or returned

The evidence favours sending questionnaires by first class or recorded delivery, using stamped return envelopes, and using several stamps. There is less evidence for effects of mailing soon after discharge from hospital, mailing or delivering on a Monday, sending to work addresses, using stamped outgoing envelopes (rather than franked), using commemorative or first class stamps on return envelopes, including a prepaid return envelope, using window or larger envelopes, or offering the option of response by internet.

Methods and number of requests for participation

The evidence favours contacting participants before sending questionnaires, follow-up contact with non-responders, providing another copy of the questionnaire at follow-up and sending text message reminders rather than postcards. There is less evidence for effects of precontact by telephone rather than by mail, telephone follow-up rather than by mail, and follow-up within a month rather than later.

Nature and style of questions included

Postal questionnaires

The evidence favours placing more relevant questions and easier questions first, user friendly and more interesting or salient questionnaires, horizontal orientation of response options rather than vertical, factual questions only, and including a 'teaser'. Response may be reduced when sensitive questions are included or when a questionnaire for carers or relatives is included. There is less evidence for asking general questions or asking for demographic information first, using open-ended rather than closed questions, using open-ended questions first, including 'don't know' boxes, asking participants to 'circle answer' rather than 'tick box', presenting response options in increasing order, using a response scale with 5 levels rather than 10 levels, or including a supplemental questionnaire or a consent form.

Electronic questionnaires

The evidence favours using a more interesting or salient e-questionnaire.

Who sent the questionnaire

Postal questionnaires

The evidence favours questionnaires that originate from a university rather than a government department or commercial organisation. Less evidence exists for the effects of precontact by a medical researcher (compared with a non-medical researcher), letters signed by more senior or well known people, sending questionnaires in university-printed envelopes, questionnaires that originate from a doctor rather than a research group, names that are ethnically identifiable, or questionnaires that originate from male rather than female investigators.

Electronic questionnaires

The evidence suggests that response is reduced when e-questionnaires are signed by male rather than female investigators. There is less evidence for the effectiveness of e-questionnaires originating from a university or sent by more senior or well known people.

What participants are told

Postal questionnaires

The evidence favours assuring confidentiality and mentioning an obligation to respond in follow-up letters. Response may be reduced when the questionnaire is endorsed by an 'eminent professional' or when participants are asked not to remove ID codes. Less evidence exists for the effects of stating that others have responded, offering a choice to opt out of the study, providing instructions, giving a deadline, providing an estimate of completion time, requesting a telephone number, stating that participants will be contacted if they do not respond, requesting an explanation for non-participation, an appeal or plea, requesting a signature, stressing benefits to the sponsor, participants or society, or assuring anonymity rather than participants being identifiable.

Electronic questionnaires

The evidence favours stating that others have responded and giving a deadline. There is less evidence for the effect of an appeal (for example, a 'request for help') in the subject line of an email.

So although uncertainty remains about whether some strategies increase data completeness, there is sufficient evidence to produce some guidelines. Where there is a choice, a shorter questionnaire can reduce the size of the task and the burden on respondents. Begin a questionnaire with the easiest and most relevant questions, and make it user friendly and interesting for participants. A monetary incentive can be included as a small, unexpected 'thank you for your time'. Participants are more likely to respond if given advance warning (by letter, email or phone call before the questionnaire is sent): a simple courtesy that alerts participants that they are soon to be given a task, and that they may need to set some time aside to complete it. The relevance and importance of participation in the trial can be emphasised by addressing participants by name, signing letters by hand, and using first class postage or recorded delivery. University sponsorship may add credibility, as might the assurance of confidentiality. Follow-up contact and reminders to non-responders are likely to be beneficial; include another copy of the questionnaire to save participants having to remember where they put the original, or in case they have thrown it away.

The effects of some strategies to increase questionnaire response may differ when used in a clinical trial compared with a non-health setting. Around half of the trials included in the Cochrane review were health related (patient groups, population health surveys and surveys of healthcare professionals). The other included trials were conducted among business professionals, consumers, and the general population. Assessing whether the size of the effect of each strategy on questionnaire response differs in health settings will require a sufficiently sophisticated analysis that controls for covariates (for example, number of pages in the questionnaire, use of incentives, and so on). Unfortunately, these details are seldom included by investigators in published reports [ 3 ].

However, a review of 15 RCTs of methods to increase response in healthcare professionals and patients found evidence for using some strategies (for example, shorter questionnaires and sending reminders) in the health-related setting [ 23 ]. There is also evidence that incentives do improve questionnaire response in clinical trials [ 24 , 25 ]. The offer of monetary incentives to participants for completion of a questionnaire may, however, be unacceptable to some ethics committees if they are deemed likely to exert pressure on individuals to participate [ 26 ]. Until further studies establish whether other strategies are also effective in the clinical trial setting, the results of the Cochrane review may be used as guidelines for improving data completeness. More discussion on the design and administration of questionnaires is available elsewhere [ 27 ].

Risk factors for loss to follow-up

Irrespective of questionnaire design it is possible that some participants will not respond because: (a) they have never received the questionnaire or (b) they no longer wish to participate in the study. An analysis of the information collected at randomisation can be used to identify any factors (for example, gender, severity of condition) that are predictive of loss to follow-up [ 28 ]. Follow-up strategies can then be tailored for those participants most at risk of becoming lost (for example, additional incentives for 'at risk' participants). Interviews with a sample of responders and non-responders may also identify potential improvements to the questionnaire design, or to participant information. The need for improved questionnaire saliency, explanations of trial procedures, and stressing the importance of responding have all been identified using this method [ 29 ].

Further research

Few clinical trials appear to have nested within them trials of methods that might increase the quality and quantity of the data collected by questionnaire, or participation in trials more generally. Trials of alternative strategies that may increase the quality and quantity of data collected by questionnaire in clinical trials are needed. Reports of these trials should include details of the alternative instruments used (for example, number of items, number of pages, opportunity to save data electronically and resume completion at another time), mean or median time to completion of electronic questionnaires, material costs, and the amount of staff time required. Data collection in clinical trials is costly, and so care is needed to design data collection instruments that provide sufficiently reliable measures of outcomes whilst ensuring high levels of follow-up. Whether shorter 'quick and dirty' outcome measures (for example, a few simple questions) are better than more sophisticated questionnaires will require assessment of their impact on bias, precision, trial completion time, and overall costs.

A good questionnaire design for a clinical trial will minimise bias and maximise precision in the estimates of treatment effect within budget. Attempts to collect more data than will be analysed may risk reducing recruitment (reducing power) and increasing losses to follow-up (possibly introducing bias). Questionnaire design remains as much an art as a science, but the evidence base for improving the quality and completeness of data collection in clinical trials is growing.

References

1. Armstrong BG: Optimizing power in allocating resources to exposure assessment in an epidemiologic study. Am J Epidemiol. 1996, 144: 192-197.
2. Hill AB: Observation and experiment. N Engl J Med. 1953, 248: 995-1001. 10.1056/NEJM195306112482401.
3. Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009, 3: MR000008.
4. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use: ICH harmonised tripartite guideline, statistical principles for clinical trials E9. http://www.ich.org/LOB/media/MEDIA485.pdf
5. CIOMS: Management of safety information from clinical trials: report of CIOMS working group VI. 2005, Geneva, Switzerland: Council for International Organisations of Medical Sciences (CIOMS).
6. Streiner DL, Norman GR: Health measurement scales: a practical guide to their development and use. 2004, Oxford University Press, 3.
7. Farr JN, Jenkins JJ, Paterson DG: Simplification of Flesch reading ease formula. J Appl Psychol. 1951, 35: 333-337. 10.1037/h0062427.
8. Armstrong BK, White E, Saracci R: Principles of exposure measurement in epidemiology. Monographs in Epidemiology and Biostatistics. 1995, New York, NY: Oxford University Press, 21.
9. Nieuwenhuijsen M: Design of exposure questionnaires for epidemiological studies. Occup Environ Med. 2005, 62: 272-280. 10.1136/oem.2004.015206.
10. Tourangeau R, Couper MP, Conrad F: Spacing, position, and order: interpretive heuristics for visual features of survey questions. Pub Opin Quart. 2004, 68: 368-393. 10.1093/poq/nfh035.
11. Jenkins CR, Dillman DA: Towards a theory of self-administered questionnaire design. http://www.census.gov/srd/papers/pdf/sm95-06.pdf
12. Tufte E: The visual display of quantitative information. 1999, Cheshire, CT: Graphics Press.
13. Garber MC, Nau DP, Erickson SR, Aikens JE, Lawrence JB: The concordance of self-report with other measures of medication adherence: a summary of the literature. Med Care. 2004, 42: 649-652. 10.1097/01.mlr.0000129496.05898.02.
14. Heerwegh D: Mode differences between face-to-face and web surveys: an experimental investigation of data quality and social desirability effects. Int J Pub Opin Res. 2009, 21: 111-121. 10.1093/ijpor/edn054.
15. Willis GB: Cognitive interviewing: a how-to guide. http://www.appliedresearch.cancer.gov/areas/cognitive/interview.pdf
16. Greenland S: Response and follow-up bias in cohort studies. Am J Epidemiol. 1977, 106: 184-187.
17. Kenward MG, Carpenter J: Multiple imputation: current perspectives. Stat Methods Med Res. 2007, 16: 199-218. 10.1177/0962280206075304.
18. Edwards P, Roberts I, Sandercock P, Frost C: Follow-up by mail in clinical trials: does questionnaire length matter? Contr Clin Trials. 2004, 25: 31-52. 10.1016/j.cct.2003.08.013.
19. Rothman K, Mikkelsen EM, Riis A, Sørensen HT, Wise LA, Hatch EE: Randomized trial of questionnaire length. Epidemiology. 2009, 20: 154. 10.1097/EDE.0b013e31818f2e96.
20. Sterne JAC, Davey Smith G: Sifting the evidence - what's wrong with significance tests? BMJ. 2001, 322: 226-231. 10.1136/bmj.322.7280.226.
21. Edwards P, Cooper R, Roberts I, Frost C: Meta-analysis of randomised trials of monetary incentives and response to mailed questionnaires. J Epidemiol Comm Health. 2005, 59: 987-999. 10.1136/jech.2005.034397.
22. Scott P, Edwards P: Personally addressed hand-signed letters increase questionnaire response: a meta-analysis of randomised controlled trials. BMC Health Serv Res. 2006, 6: 111. 10.1186/1472-6963-6-111.
23. Nakash RA, Hutton JL, Jørstad-Stein EC, Gates S, Lamb SE: Maximising response to postal questionnaires - a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006, 6: 5. 10.1186/1471-2288-6-5.
24. Kenyon S, Pike K, Jones D, Taylor D, Salt A, Marlow N, Brocklehurst P: The effect of a monetary incentive on return of a postal health and development questionnaire: a randomised trial. BMC Health Serv Res. 2005, 5: 55. 10.1186/1472-6963-5-55.
25. Gates S, Williams MA, Withers E, Williamson E, Mt-Isa S, Lamb SE: Does a monetary incentive improve the response to a postal questionnaire in a randomised controlled trial? The MINT incentive study. Trials. 2009, 10: 44. 10.1186/1745-6215-10-44.
26. McColl E: Commentary: methods to increase response rates to postal questionnaires. Int J Epidemiol. 2007, 36: 968.
27. McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, Thomas R, Harvey E, Garratt A, Bond J: Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Technol Assess. 2001, 5: 1-256.
28. Edwards P, Fernandes J, Roberts I, Kuppermann N: Young men were at risk of becoming lost to follow-up in a cohort of head-injured adults. J Clin Epidemiol. 2007, 60: 417-424. 10.1016/j.jclinepi.2006.06.021.
29. Nakash R, Hutton JL, Lamb SE, Gates S, Fisher J: Response and non-response to postal questionnaire follow-up in a clinical trial - a qualitative study of the patient's perspective. J Eval Clin Prac. 2008, 14: 226-235. 10.1111/j.1365-2753.2007.00838.x.


Acknowledgements

I would like to thank Lambert Felix for his help with updating the Cochrane review summarised in this article, and Graham Try for his comments on earlier drafts of the manuscript.

Author information

Phil Edwards, Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK

Correspondence to Phil Edwards.

Competing interests

The author declares that he has no competing interests.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article: Edwards, P. Questionnaires in clinical trials: guidelines for optimal design and administration. Trials 11, 2 (2010). https://doi.org/10.1186/1745-6215-11-2

Received: 29 July 2009. Accepted: 11 January 2010. Published: 11 January 2010.

Keywords

  • Monetary Incentive
  • Questionnaire Design
  • Electronic Questionnaire
  • Text Response
  • Flesch Reading Ease


Selecting, designing, and developing your questionnaire


This article has a correction. Please see:

  • Use of automated external defibrillator by first responders in out of hospital cardiac arrest: prospective controlled trial - February 12, 2004
Petra M Boynton, lecturer in health services research (p.boynton{at}pcps.ucl.ac.uk), and Trisha Greenhalgh, professor of primary health care, Department of Primary Care and Population Sciences, University College London, Archway Campus, London N19 5LW

Correspondence to: P M Boynton

Accepted 17 March 2004

Anybody can write down a list of questions and photocopy it, but producing worthwhile and generalisable data from questionnaires needs careful planning and imaginative design

The great popularity of questionnaires is that they provide a “quick fix” for research methodology. No single method has been so abused. 1

Questionnaires offer an objective means of collecting information about people's knowledge, beliefs, attitudes, and behaviour. 2 3 Do our patients like our opening hours? What do teenagers think of a local antidrugs campaign and has it changed their attitudes? Why don't doctors use computers to their maximum potential? Questionnaires can be used as the sole research instrument (such as in a cross sectional survey) or within clinical trials or epidemiological studies.

Randomised trials are subject to strict reporting criteria, 4 but there is no comparable framework for questionnaire research. Hence, despite a wealth of detailed guidance in the specialist literature, 1-3 5 w1-w8 elementary methodological errors are common. 1 Inappropriate instruments and lack of rigour inevitably lead to poor quality data, misleading conclusions, and woolly recommendations. w8 In this series we aim to present a practical guide that will enable research teams to do questionnaire research that is well designed, well managed, and non-discriminatory and which contributes to a generalisable evidence base. We start with selecting and designing the questionnaire.

What information are you trying to collect?

You and your co-researchers may have different assumptions about precisely what information you would like your study to generate. A formal scoping exercise will ensure that you clarify goals and if necessary reach an agreed compromise. It will also flag up potential practical problems—for example, how long the questionnaire will be and how it might be administered.

As a rule of thumb, if you are not familiar enough with the research area or with a particular population subgroup to predict the range of possible responses, and especially if such details are not available in the literature, you should first use a qualitative approach (such as focus groups) to explore the territory and map key areas for further study. 6

Is a questionnaire appropriate?

People often decide to use a questionnaire for research questions that need a different method. Sometimes, a questionnaire will be appropriate only if used within a mixed methodology study—for example, to extend and quantify the findings of an initial exploratory phase. Table A on bmj.com gives some real examples where questionnaires were used inappropriately. 1

Box 1: Pitfalls of designing your own questionnaire

Natasha, a practice nurse, learns that staff at a local police station have a high incidence of health problems, which she believes are related to stress at work. She wants to test the relation between stress and health in these staff to inform the design of advice services. Natasha designs her own questionnaire. Had she completed a thorough literature search for validated measures, she would have found several high quality questionnaires that measure stress in public sector workers. 8 Natasha's hard work produces only a second rate study that she is unable to get published.

Research participants must be able to give meaningful answers (with help from a professional interviewer if necessary). Particular physical, mental, social, and linguistic needs are covered in the third article of this series. 7

Could you use an existing instrument?

Using a previously validated and published questionnaire will save you time and resources; you will be able to compare your own findings with those from other studies, you need only give outline details of the instrument when you write up your work, and you may find it easier to get published (box 1).

Increasingly, health services research uses standard questionnaires designed for producing data that can be compared across studies. For example, clinical trials routinely include measures of patients' knowledge about a disease, 9 satisfaction with services, 10 or health related quality of life. 11-13 w3 w9 The validity (see below) of this approach depends on whether the type and range of closed responses reflects the full range of perceptions and feelings that people in all the different potential sampling frames might hold. Importantly, health status and quality of life instruments lose their validity when used beyond the context in which they were developed. 12 14 15 w3 w10-12

If there is no “off the peg” questionnaire available, you will have to construct your own. Using one or more standard instruments alongside a short bespoke questionnaire could save you the need to develop and validate a long list of new items.

Is the questionnaire valid and reliable?

A valid questionnaire measures what it claims to measure. In reality, many fail to do this. For example, a self completion questionnaire that seeks to measure people's food intake may be invalid because it measures what they say they have eaten, not what they have actually eaten. 16 Similarly, responses on questionnaires that ask general practitioners how they manage particular clinical conditions differ significantly from actual clinical practice. w13 An instrument developed in a different time, country, or cultural context may not be a valid measure in the group you are studying. For example, the item “I often attend gay parties” may have been a valid measure of a person's sociability level in the 1950s, but the wording has a very different connotation today.

Reliable questionnaires yield consistent results from repeated samples and different researchers over time. Differences in results come from differences between participants, not from inconsistencies in how the items are understood or how different observers interpret the responses. A standardised questionnaire is one that is written and administered so that all participants are asked precisely the same questions in an identical format and responses are recorded in a uniform manner. Standardising a measure increases its reliability.

Just because a questionnaire has been piloted on a few of your colleagues, used in previous studies, or published in a peer reviewed journal does not mean it is either valid or reliable. The detailed techniques for achieving validity, reliability, and standardisation are beyond the scope of this series. If you plan to develop or modify a questionnaire yourself, you must consult a specialist text on these issues. 2 3
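Internal consistency is one routinely reported aspect of reliability, and its most common index, Cronbach's alpha, follows a standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of totals). The sketch below is an illustration with invented item scores, not a substitute for the specialist texts cited above:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a multi-item scale.

    item_scores: one list of scores per item (columns), all the same
    length (one entry per respondent).
    """
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]       # per-respondent totals
    sum_item_var = sum(variance(col) for col in item_scores) # sum of item variances
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Invented 3-item scale completed by five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # → 0.86
```

Values above roughly 0.7 are conventionally taken to suggest acceptable internal consistency, although the threshold depends on the scale's purpose.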

How should you present your questions?

Questionnaire items may be open or closed ended and be presented in various formats (figure). Table B on bmj.com examines the pros and cons of the two approaches. Two words that are often used inappropriately in closed question stems are frequently and regularly. A poorly designed item might read, “I frequently engage in exercise,” and offer a Likert scale giving responses from “strongly agree” through to “strongly disagree.” But “frequently” implies frequency, so a frequency based rating scale (with options such as at least once a day, twice a week, and so on) would be more appropriate. “Regularly,” on the other hand, implies a pattern. One person can regularly engage in exercise once a month whereas another person can regularly do so four times a week. Other weasel words to avoid in question stems include commonly, usually, many, some, and hardly ever. 17 w14

Examples of formats for presenting questionnaire items


Box 2: A closed ended design that produced misleading information

Customer: I'd like to discontinue my mobile phone rental please.

Company employee: That's fine, sir, but I need to complete a form for our records on why you've made that decision. Is it (a) you have moved to another network; (b) you've upgraded within our network; or (c) you can't afford the payments?

Customer: It isn't any of those. I've just decided I don't want to own a mobile phone any more. It's more hassle than it's worth.

Company employee: [after a pause] In that case, sir, I'll have to put you down as “can't afford the payments.”

Closed ended designs enable researchers to produce aggregated data quickly, but the range of possible answers is set by the researchers not respondents, and the richness of potential responses is lower. Closed ended items often cause frustration, usually because researchers have not considered all potential responses (box 2). 18

Ticking a particular box, or even saying yes, no, or maybe can make respondents want to explain their answer, and such free text annotations may add richly to the quantitative data. You should consider inserting a free text box at the end of the questionnaire (or even after particular items or sections). Note that participants need instructions (perhaps with examples) on how to complete free text items in the same way as they do for closed questions.

If you plan to use open ended questions or invite free text comments, you must plan in advance how you will analyse these data (drawing on the skills of a qualitative researcher if necessary). 19 You must also build into the study design adequate time, skills, and resources for this analysis; otherwise you will waste participants' and researchers' time. If you do not have the time or expertise to analyse free text responses, do not invite any.

Some respondents (known as yea sayers) tend to agree with statements rather than disagree. For this reason, do not present your items so that strongly agree always links to the same broad attitude. For example, on a patient satisfaction scale, if one question is “my GP generally tries to help me out,” another question should be phrased in the negative, such as “the receptionists are usually impolite.”
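Mixing positively and negatively worded items means the negatively phrased ones must be reverse-coded before summing. A minimal sketch, with invented item names and a hypothetical five-point scale (5 = strongly agree):

```python
def score_likert(answers, reverse_items, points=5):
    """Sum a Likert scale, flipping negatively worded items.

    answers: dict of item -> response (1..points);
    reverse_items: items phrased in the negative (e.g. "the
    receptionists are usually impolite"), whose scores are flipped
    (1 becomes `points`, and so on) before summing.
    """
    return sum(
        (points + 1 - v) if item in reverse_items else v
        for item, v in answers.items()
    )

# Invented responses: high totals mean high satisfaction.
responses = {"gp_helps": 4, "receptionists_impolite": 2, "waits_too_long": 1}
print(score_likert(responses, reverse_items={"receptionists_impolite", "waits_too_long"}))  # → 13
```

Reverse-coding in the analysis script, rather than by hand, also leaves an auditable record of which items were flipped.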

Apart from questions, what else should you include?

A common error by people designing questionnaires for the first time is simply to hand out a list of the questions they want answered. Table C on bmj.com gives a checklist of other things to consider. It is particularly important to provide an introductory letter or information sheet for participants to take away after completing the questionnaire.

What should the questionnaire look like?

Researchers rarely spend sufficient time on the physical layout of their questionnaire, believing that the science lies in the content of the questions and not in such details as the font size or colour. Yet empirical studies have repeatedly shown that low response rates are often due to participants being unable to read or follow the questionnaire (box 3). 3 w6 In general, questions should be short and to the point (around 12 words or less), but for issues of a sensitive and personal nature, short questions can be perceived as abrupt and threatening, and longer sentences are preferred. w6
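One way to check that question wording stays short and readable is a readability index such as the Flesch Reading Ease score: 206.835 - 1.015 * (words per sentence) - 84.6 * (syllables per word), where higher scores are easier to read. The sketch below uses a deliberately crude syllable counter (runs of vowel letters), so its scores are rough approximations only:

```python
import re

def naive_syllables(word):
    """Very rough syllable count: runs of vowel letters. This is an
    assumption for illustration, not a dictionary-based count (it
    overcounts silent-e words such as "smoke")."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease using the standard published constants."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

short = "Do you smoke? Have you smoked before?"
print(round(flesch_reading_ease(short), 1))
```

Scores in the 60-70 range are conventionally read as "plain English"; dedicated readability libraries give more defensible syllable counts than this sketch.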

How should you select your sample?

Different sampling techniques will affect the questions you ask and how you administer your questionnaire (see table D on bmj.com). For more detailed advice on sampling, see Bowling 20 and Sapsford. 3

If you are collecting quantitative data with a view to testing a hypothesis or assessing the prevalence of a disease or problem (for example, about intergroup differences in particular attitudes or health status), seek statistical advice on the minimum sample size. 3
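By way of illustration, the usual normal-approximation formula for comparing two proportions shows the kind of calculation that advice would involve; treat the figures as a sketch, not a substitute for statistical input on your actual design:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided test comparing two
    proportions, using the standard normal-approximation formula
    (no continuity correction)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_b = NormalDist().inv_cdf(power)          # power quantile
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# e.g. to detect a difference between 60% and 70% prevalence
print(n_per_group(0.60, 0.70))  # → 356 per group
```

A continuity-corrected or exact calculation, and an allowance for expected non-response, would both increase this figure.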

What approvals do you need before you start?

Unlike other methods, questionnaires require relatively little specialist equipment or materials, which means that inexperienced and unsupported researchers sometimes embark on questionnaire surveys without completing the necessary formalities. In the United Kingdom, a research study on NHS patients or staff must be:

Formally approved by the relevant person in an organisation that is registered with the Department of Health as a research sponsor (typically, a research trust, university or college) 21

Consistent with data protection law and logged on the organisation's data protection files (see next article in series) 19

Accordant with research governance frameworks 21

Approved by the appropriate research ethics committee (see below).

Box 3: Don't let layout let you down

Meena, a general practice tutor, wanted to study her fellow general practitioners' attitudes to a new training scheme in her primary care trust. She constructed a series of questions, but when they were written down, they covered 10 pages, which Meena thought looked off putting. She reduced the font and spacing of her questionnaire, and printed it double sided, until it was only four sides in length. But many of her colleagues refused to complete it, telling her they found it too hard to read and work through. She returned the questionnaire to its original 10 page format, which made it easier and quicker to complete, and her response rate increased greatly.

Summary points

Questionnaire studies often fail to produce high quality generalisable data

When possible, use previously validated questionnaires

Questions must be phrased appropriately for the target audience and information required

Good explanations and design will improve response rates

In addition, if your questionnaire study is part of a formal academic course (for example, a dissertation), you must follow any additional regulations such as gaining written approval from your supervisor.

A study is unethical if it is scientifically unsound, causes undue offence or trauma, breaches confidentiality, or wastes people's time or money. Written approval from a local or multicentre NHS research ethics committee (more information at www.corec.org.uk ) is essential but does not in itself make a study ethical. Those working in non-NHS institutions or undertaking research outside the NHS may need to submit an additional (non-NHS) ethical committee application to their own institution or research sponsor.

The committee will require details of the study design, copies of your questionnaire, and any accompanying information or covering letters. If the questionnaire is likely to cause distress, you should include a clear plan for providing support to both participants and researchers. Remember that just because you do not find a question offensive or distressing does not mean it will not upset others. 6

As we have shown above, designing a questionnaire study that produces usable data is not as easy as it might seem. Awareness of the pitfalls is essential both when planning research and appraising published studies. Table E on bmj.com gives a critical appraisal checklist for evaluating questionnaire studies. In the following two articles we will discuss how to select a sample, pilot and administer a questionnaire, and analyse data and approaches for groups that are hard to research.

This is the first in a series of three articles on questionnaire research

Acknowledgments

Susan Catt supplied additional references and feedback. We also thank Alicia O'Cathain, Jill Russell, Geoff Wong, Marcia Rigby, Sara Shaw, Fraser MacFarlane, and Will Callaghan for feedback on earlier versions. Numerous research students and conference delegates provided methodological questions and case examples of real life questionnaire research, which provided the inspiration and raw material for this series. We also thank the hundreds of research participants who over the years have contributed data and given feedback to our students and ourselves about the design, layout, and accessibility of instruments.

Contributors and sources PMB and TG have taught research methods in a primary care setting for the past 13 years, specialising in practical approaches and using the experiences and concerns of researchers and participants as the basis of learning. This series of papers arose directly from questions asked about real questionnaire studies. To address these questions we explored a wide range of sources from the psychological and health services research literature.

References w1-w17, further illustrative examples, and checklists are on bmj.com

Competing interests None declared.



MRC questionnaire (MRCQ) on respiratory symptoms


J. E. Cotes, D. J. Chinn, MRC questionnaire (MRCQ) on respiratory symptoms, Occupational Medicine , Volume 57, Issue 5, August 2007, Page 388, https://doi.org/10.1093/occmed/kqm051


The Medical Research Council Questionnaire (MRCQ) was developed by researchers at the Medical Research Council, UK, as a tool to study respiratory epidemiology in communities and occupational groups [ 1 ]. It reliably relates symptoms and lung function and has been in use for almost 50 years. The 1976 version is reproduced in a current publication [ 2 ]. Instructions on its use can be obtained from the authors. A subsequent version includes questions that are directed to identifying asthma [ 3 ].

In its usual form, the MRCQ comprises 17 questions on respiratory symptoms (cough, phlegm, breathlessness, wheeze and chest illnesses, now and during the past 2 years), detailed questions on smoking history and a check-list on past illnesses.

Wording of questions, follow-up questions, definitions and interpretation of responses are standardized and alternative questions have been prepared for special circumstances, for example, shift working. The intensity of symptoms is not covered but can be scored separately [ 4 ].

The MRCQ provides a system for scoring respiratory symptoms [ 5 ] and identifying underlying factors including smoking, previous chest illnesses and occupational dusts and vapours. Reproducibility is achieved by having the questions asked by an observer who had previously used the training manual and cassette. However, a version for self-administration is also available.

The questions on breathlessness are widely used for grading this symptom, but there is more than one scoring system, so the grades should be defined. In subjects with chronic respiratory disorders, the grades of breathlessness are weakly correlated with forced expiratory volume (FEV1). However, the correlation is higher with ventilation during sub-maximal exercise [ 6 ] and with quality of life as assessed by a quality of life questionnaire [ 7 ]. These two features appear to be co-linear and this possibility should be explored further.

The MRCQ is recommended for use in epidemiological and occupational respiratory surveys and as part of a consultation for respiratory symptoms or assessment of lung function. Where appropriate, the screening can be expanded with additional questions on ischaemic heart disease [ 8 ], asthma [ 9 ] or exposure to occupational respiratory hazards such as coal or cotton dust, asbestos fibres or fumes from welding [ 10 ].
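As noted above, more than one scoring system exists for breathlessness, so any report should define its grades. For illustration only, the sketch below encodes the widely used five-grade MRC breathlessness scale; the grade descriptions are paraphrased from general knowledge rather than quoted from the instrument, so the questionnaire itself should be consulted for exact wording:

```python
# Paraphrased (not verbatim) descriptions of the five-grade MRC
# breathlessness scale, included here only as an illustration of
# defining grades explicitly in an analysis script.
MRC_BREATHLESSNESS = {
    1: "Breathless only on strenuous exercise",
    2: "Short of breath when hurrying on the level or up a slight hill",
    3: "Walks slower than contemporaries, or stops for breath at own pace",
    4: "Stops for breath after about 100 m on level ground",
    5: "Too breathless to leave the house, or breathless when dressing",
}

def grade_label(grade):
    """Return a defined, self-documenting label for a recorded grade."""
    if grade not in MRC_BREATHLESSNESS:
        raise ValueError("MRC breathlessness grade must be 1-5")
    return f"Grade {grade}: {MRC_BREATHLESSNESS[grade]}"

print(grade_label(3))
```

Storing the grade definitions alongside the data, rather than only the bare numbers, avoids ambiguity when different scoring systems are in circulation.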


Questionnaire – Definition, Types, and Examples

Questionnaire

Definition:

A Questionnaire is a research tool or survey instrument that consists of a set of questions or prompts designed to gather information from individuals or groups of people.

It is a standardized way of collecting data from a large number of people by asking them a series of questions related to a specific topic or research objective. The questions may be open-ended or closed-ended, and the responses can be quantitative or qualitative. Questionnaires are widely used in research, marketing, social sciences, healthcare, and many other fields to collect data and insights from a target population.

History of Questionnaire

The history of questionnaires can be traced back to the ancient Greeks, who used questionnaires as a means of assessing public opinion. However, the modern history of questionnaires began in the late 19th century with the rise of social surveys.

The first social survey was conducted in the United States in 1874 by Francis A. Walker, who used a questionnaire to collect data on labor conditions. In the early 20th century, questionnaires became a popular tool for conducting social research, particularly in the fields of sociology and psychology.

One of the most influential figures in the development of the questionnaire was the psychologist Raymond Cattell, who in the 1940s and 1950s developed the personality questionnaire, a standardized instrument for measuring personality traits. Cattell’s work helped establish the questionnaire as a key tool in personality research.

In the 1960s and 1970s, the use of questionnaires expanded into other fields, including market research, public opinion polling, and health surveys. With the rise of computer technology, questionnaires became easier and more cost-effective to administer, leading to their widespread use in research and business settings.

Today, questionnaires are used in a wide range of settings, including academic research, business, healthcare, and government. They continue to evolve as a research tool, with advances in computer technology and data analysis techniques making it easier to collect and analyze data from large numbers of participants.

Types of Questionnaire

Types of Questionnaires are as follows:

Structured Questionnaire

This type of questionnaire has a fixed format with predetermined questions that the respondent must answer. The questions are usually closed-ended, which means that the respondent must select a response from a list of options.

Unstructured Questionnaire

An unstructured questionnaire does not have a fixed format or predetermined questions. Instead, the interviewer or researcher can ask open-ended questions to the respondent and let them provide their own answers.

Open-ended Questionnaire

An open-ended questionnaire allows the respondent to answer the question in their own words, without any pre-determined response options. The questions usually start with phrases like “how,” “why,” or “what,” and encourage the respondent to provide more detailed and personalized answers.

Closed-ended Questionnaire

In a closed-ended questionnaire, the respondent is given a set of predetermined response options to choose from. This type of questionnaire is easier to analyze and summarize, but may not provide as much insight into the respondent’s opinions or attitudes.

Mixed Questionnaire

A mixed questionnaire is a combination of open-ended and closed-ended questions. This type of questionnaire allows for more flexibility in terms of the questions that can be asked, and can provide both quantitative and qualitative data.

Pictorial Questionnaire

In a pictorial questionnaire, instead of using words to ask questions, the questions are presented in the form of pictures, diagrams or images. This can be particularly useful for respondents who have low literacy skills, or for situations where language barriers exist. Pictorial questionnaires can also be useful in cross-cultural research where respondents may come from different language backgrounds.

Types of Questions in Questionnaire

The types of Questions in Questionnaire are as follows:

Multiple Choice Questions

These questions have several options for participants to choose from. They are useful for getting quantitative data and can be used to collect demographic information.

  • a. Red  b. Blue  c. Green  d. Yellow

Rating Scale Questions

These questions ask participants to rate something on a scale (e.g. from 1 to 10). They are useful for measuring attitudes and opinions.

  • On a scale of 1 to 10, how likely are you to recommend this product to a friend?

Open-Ended Questions

These questions allow participants to answer in their own words and provide more in-depth and detailed responses. They are useful for getting qualitative data.

  • What do you think are the biggest challenges facing your community?

Likert Scale Questions

These questions ask participants to rate how much they agree or disagree with a statement. They are useful for measuring attitudes and opinions.

How strongly do you agree or disagree with the following statement:

“I enjoy exercising regularly.”

  • a. Strongly Agree
  • b. Agree
  • c. Neither Agree nor Disagree
  • d. Disagree
  • e. Strongly Disagree
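Before analysis, Likert responses are usually converted to numeric codes. A minimal Python sketch (the 1–5 coding convention and the function name are illustrative assumptions, not part of any particular tool):

```python
# Map Likert labels to numeric codes; the 1-5 convention is one common choice.
LIKERT_SCORES = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neither Agree nor Disagree": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def mean_likert_score(responses):
    """Return the mean numeric score for a list of Likert labels."""
    scores = [LIKERT_SCORES[r] for r in responses]
    return sum(scores) / len(scores)

print(mean_likert_score(["Agree", "Strongly Agree",
                         "Neither Agree nor Disagree", "Agree"]))  # 4.0
```

Note that reverse-worded items (negatively phrased statements) would need their codes reversed before averaging.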

Demographic Questions

These questions ask about the participant’s personal information such as age, gender, ethnicity, education level, etc. They are useful for segmenting the data and analyzing results by demographic groups.

  • What is your age?

Yes/No Questions

These questions only have two options: Yes or No. They are useful for getting simple, straightforward answers to a specific question.

Have you ever traveled outside of your home country?

Ranking Questions

These questions ask participants to rank several items in order of preference or importance. They are useful for measuring priorities or preferences.

Please rank the following factors in order of importance when choosing a restaurant:

  • a. Quality of Food
  • b. Ambiance
  • c. Location
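Ranking responses are often summarised by each item's average rank position across respondents. A brief Python sketch (the response data and function name are invented for illustration):

```python
from collections import defaultdict

def average_ranks(rankings):
    """rankings: one ordered list per respondent, most important item first.
    Returns each item's mean rank position (1 = most important)."""
    totals = defaultdict(int)
    for ranking in rankings:
        for position, item in enumerate(ranking, start=1):
            totals[item] += position
    return {item: total / len(rankings) for item, total in totals.items()}

print(average_ranks([
    ["Quality of Food", "Ambiance", "Location"],
    ["Quality of Food", "Location", "Ambiance"],
]))  # {'Quality of Food': 1.0, 'Ambiance': 2.5, 'Location': 2.5}
```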

Matrix Questions

These questions present a matrix or grid of options that participants can choose from. They are useful for getting data on multiple variables at once.

Dichotomous Questions

These questions present two options that are opposite or contradictory. They are useful for measuring binary or polarized attitudes.

Do you support the death penalty?

How to Make a Questionnaire

Step-by-Step Guide for Making a Questionnaire:

  • Define your research objectives: Before you start creating questions, you need to define the purpose of your questionnaire and what you hope to achieve from the data you collect.
  • Choose the appropriate question types: Based on your research objectives, choose the appropriate question types to collect the data you need. Refer to the types of questions mentioned earlier for guidance.
  • Develop questions: Develop clear and concise questions that are easy for participants to understand. Avoid leading or biased questions that might influence the responses.
  • Organize questions: Organize questions in a logical and coherent order, starting with demographic questions followed by general questions, and ending with specific or sensitive questions.
  • Pilot the questionnaire: Test your questionnaire on a small group of participants to identify any flaws or issues with the questions or the format.
  • Refine the questionnaire: Based on feedback from the pilot, refine and revise the questionnaire as necessary to ensure that it is valid and reliable.
  • Distribute the questionnaire: Distribute the questionnaire to your target audience using a method that is appropriate for your research objectives, such as online surveys, email, or paper surveys.
  • Collect and analyze data: Collect the completed questionnaires and analyze the data using appropriate statistical methods. Draw conclusions from the data and use them to inform decision-making or further research.
  • Report findings: Present your findings in a clear and concise report, including a summary of the research objectives, methodology, key findings, and recommendations.
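Because closed-ended questions carry a fixed set of options, a questionnaire administered electronically can validate responses on entry. A minimal sketch of that idea (the field names and questions are illustrative assumptions):

```python
# A questionnaire as a list of question definitions; closed-ended questions
# carry their allowed options so responses can be checked as they arrive.
questionnaire = [
    {"id": "q1", "text": "What is your age?", "type": "open"},
    {"id": "q2", "text": "Have you ever traveled outside of your home country?",
     "type": "closed", "options": ["Yes", "No"]},
]

def validate_response(question, answer):
    """Closed-ended answers must be one of the predefined options;
    open-ended answers just need to be non-empty text."""
    if question["type"] == "closed":
        return answer in question["options"]
    return bool(answer.strip())

print(validate_response(questionnaire[1], "Yes"))   # True
print(validate_response(questionnaire[1], "Maybe")) # False
```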

Questionnaire Administration Modes

There are several modes of questionnaire administration. The choice of mode depends on the research objectives, sample size, and available resources. Some common modes of administration include:

  • Self-administered paper questionnaires: Participants complete the questionnaire on paper, either in person or by mail. This mode is relatively low cost and easy to administer, but it may result in lower response rates and greater potential for errors in data entry.
  • Online questionnaires: Participants complete the questionnaire on a website or through email. This mode is convenient for both researchers and participants, as it allows for fast and easy data collection. However, it may be subject to issues such as low response rates, lack of internet access, and potential for fraudulent responses.
  • Telephone surveys: Trained interviewers administer the questionnaire over the phone. This mode allows for a large sample size and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Face-to-face interviews: Trained interviewers administer the questionnaire in person. This mode allows for a high degree of control over the survey environment and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Mixed-mode surveys: Researchers use a combination of two or more modes to administer the questionnaire, such as using online questionnaires for initial screening and following up with telephone interviews for more detailed information. This mode can help overcome some of the limitations of individual modes, but it requires careful planning and coordination.

Example of Questionnaire

Title of the Survey: Customer Satisfaction Survey

Introduction:

We appreciate your business and would like to ensure that we are meeting your needs. Please take a few minutes to complete this survey so that we can better understand your experience with our products and services. Your feedback is important to us and will help us improve our offerings.

Instructions:

Please read each question carefully and select the response that best reflects your experience. If you have any additional comments or suggestions, please feel free to include them in the space provided at the end of the survey.

1. How satisfied are you with our product quality?

  • Very satisfied
  • Somewhat satisfied
  • Somewhat dissatisfied
  • Very dissatisfied

2. How satisfied are you with our customer service?

3. How satisfied are you with the price of our products?

4. How likely are you to recommend our products to others?

  • Very likely
  • Somewhat likely
  • Somewhat unlikely
  • Very unlikely

5. How easy was it to find the information you were looking for on our website?

  • Very easy
  • Somewhat easy
  • Somewhat difficult
  • Very difficult

6. How satisfied are you with the overall experience of using our products and services?

7. Is there anything that you would like to see us improve upon or change in the future?

…………………………………………………………………………………………………………………………..

Conclusion:

Thank you for taking the time to complete this survey. Your feedback is valuable to us and will help us improve our products and services. If you have any further comments or concerns, please do not hesitate to contact us.
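Closed-ended answers like those in the sample survey above are typically summarised as counts and percentages per option. A short Python sketch (the response data are invented for illustration):

```python
from collections import Counter

def tally(responses):
    """Return each option's share of responses as a percentage."""
    counts = Counter(responses)
    total = len(responses)
    return {option: 100 * n / total for option, n in counts.items()}

# Hypothetical answers to "How satisfied are you with our product quality?"
q1 = ["Very satisfied", "Somewhat satisfied",
      "Very satisfied", "Somewhat dissatisfied"]
print(tally(q1))
# {'Very satisfied': 50.0, 'Somewhat satisfied': 25.0, 'Somewhat dissatisfied': 25.0}
```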

Applications of Questionnaire

Some common applications of questionnaires include:

  • Research: Questionnaires are commonly used in research to gather information from participants about their attitudes, opinions, behaviors, and experiences. This information can then be analyzed and used to draw conclusions and make inferences.
  • Healthcare: In healthcare, questionnaires can be used to gather information about patients’ medical history, symptoms, and lifestyle habits. This information can help healthcare professionals diagnose and treat medical conditions more effectively.
  • Marketing: Questionnaires are commonly used in marketing to gather information about consumers’ preferences, buying habits, and opinions on products and services. This information can help businesses develop and market products more effectively.
  • Human Resources: Questionnaires are used in human resources to gather information from job applicants, employees, and managers about job satisfaction, performance, and workplace culture. This information can help organizations improve their hiring practices, employee retention, and organizational culture.
  • Education: Questionnaires are used in education to gather information from students, teachers, and parents about their perceptions of the educational experience. This information can help educators identify areas for improvement and develop more effective teaching strategies.

Purpose of Questionnaire

Some common purposes of questionnaires include:

  • To collect information on attitudes, opinions, and beliefs: Questionnaires can be used to gather information on people’s attitudes, opinions, and beliefs on a particular topic. For example, a questionnaire can be used to gather information on people’s opinions about a particular political issue.
  • To collect demographic information: Questionnaires can be used to collect demographic information such as age, gender, income, education level, and occupation. This information can be used to analyze trends and patterns in the data.
  • To measure behaviors or experiences: Questionnaires can be used to gather information on behaviors or experiences such as health-related behaviors or experiences, job satisfaction, or customer satisfaction.
  • To evaluate programs or interventions: Questionnaires can be used to evaluate the effectiveness of programs or interventions by gathering information on participants’ experiences, opinions, and behaviors.
  • To gather information for research: Questionnaires can be used to gather data for research purposes on a variety of topics.

When to use Questionnaire

Here are some situations when questionnaires might be used:

  • When you want to collect data from a large number of people: Questionnaires are useful when you want to collect data from a large number of people. They can be distributed to a wide audience and can be completed at the respondent’s convenience.
  • When you want to collect data on specific topics: Questionnaires are useful when you want to collect data on specific topics or research questions. They can be designed to ask specific questions and can be used to gather quantitative data that can be analyzed statistically.
  • When you want to compare responses across groups: Questionnaires are useful when you want to compare responses across different groups of people. For example, you might want to compare responses from men and women, or from people of different ages or educational backgrounds.
  • When you want to collect data anonymously: Questionnaires can be useful when you want to collect data anonymously. Respondents can complete the questionnaire without fear of judgment or repercussions, which can lead to more honest and accurate responses.
  • When you want to save time and resources: Questionnaires can be more efficient and cost-effective than other methods of data collection such as interviews or focus groups. They can be completed quickly and easily, and can be analyzed using software to save time and resources.

Characteristics of Questionnaire

Here are some of the characteristics of questionnaires:

  • Standardization: Questionnaires are standardized tools that ask the same questions in the same order to all respondents. This ensures that all respondents are answering the same questions and that the responses can be compared and analyzed.
  • Objectivity: Questionnaires are designed to be objective, meaning that they do not contain leading questions or bias that could influence the respondent’s answers.
  • Predefined responses: Questionnaires typically provide predefined response options for the respondents to choose from, which helps to standardize the responses and make them easier to analyze.
  • Quantitative data: Questionnaires are designed to collect quantitative data, meaning that they provide numerical or categorical data that can be analyzed using statistical methods.
  • Convenience: Questionnaires are convenient for both the researcher and the respondents. They can be distributed and completed at the respondent’s convenience and can be easily administered to a large number of people.
  • Anonymity: Questionnaires can be anonymous, which can encourage respondents to answer more honestly and provide more accurate data.
  • Reliability: Questionnaires are designed to be reliable, meaning that they produce consistent results when administered multiple times to the same group of people.
  • Validity: Questionnaires are designed to be valid, meaning that they measure what they are intended to measure and are not influenced by other factors.
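Reliability of a multi-item questionnaire is commonly quantified with Cronbach's alpha, which compares the sum of the individual item variances to the variance of the respondents' total scores: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals), for k items. A minimal sketch (the sample scores are invented):

```python
def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, aligned by respondent.
    Uses population variance throughout, which keeps the formula simple."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(item_scores)
    totals = [sum(per_item) for per_item in zip(*item_scores)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

scores = [[1, 2, 3], [2, 2, 4], [3, 4, 5]]  # 3 items x 3 respondents
print(round(cronbach_alpha(scores), 3))  # 0.964
```

Values closer to 1 indicate that the items measure a single underlying construct more consistently.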

Advantage of Questionnaire

Some Advantage of Questionnaire are as follows:

  • Standardization: Questionnaires allow researchers to ask the same questions to all participants in a standardized manner. This helps ensure consistency in the data collected and eliminates potential bias that might arise if questions were asked differently to different participants.
  • Efficiency: Questionnaires can be administered to a large number of people at once, making them an efficient way to collect data from a large sample.
  • Anonymity: Participants can remain anonymous when completing a questionnaire, which may make them more likely to answer honestly and openly.
  • Cost-effective: Questionnaires can be relatively inexpensive to administer compared to other research methods, such as interviews or focus groups.
  • Objectivity: Because questionnaires are typically designed to collect quantitative data, they can be analyzed objectively without the influence of the researcher’s subjective interpretation.
  • Flexibility: Questionnaires can be adapted to a wide range of research questions and can be used in various settings, including online surveys, mail surveys, or in-person interviews.

Limitations of Questionnaire

Limitations of Questionnaire are as follows:

  • Limited depth: Questionnaires are typically designed to collect quantitative data, which may not provide a complete understanding of the topic being studied. Questionnaires may miss important details and nuances that could be captured through other research methods, such as interviews or observations.
  • Response bias: Participants may not always answer questions truthfully or accurately, either because they do not remember or because they want to present themselves in a particular way. This can lead to response bias, which can affect the validity and reliability of the data collected.
  • Limited flexibility: While questionnaires can be adapted to a wide range of research questions, they may not be suitable for all types of research. For example, they may not be appropriate for studying complex phenomena or for exploring participants’ experiences and perceptions in-depth.
  • Limited context: Questionnaires typically do not provide a rich contextual understanding of the topic being studied. They may not capture the broader social, cultural, or historical factors that may influence participants’ responses.
  • Limited control: Researchers may not have control over how participants complete the questionnaire, which can lead to variations in response quality or consistency.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer



21 Questionnaire Templates: Examples and Samples


Questionnaire: Definition

A questionnaire is defined as a market research instrument that consists of questions or prompts to elicit and collect responses from a sample of respondents. A questionnaire is typically a mix of open-ended questions and close-ended questions; the former allowing respondents to express their views in detail.

A questionnaire can be used in both, qualitative market research as well as quantitative market research with the use of different types of questions .


Types of Questionnaires

We have learnt that a questionnaire can be either structured or free-flowing. To explain this better:

  • Structured Questionnaires: A structured questionnaire helps collect quantitative data. In this case, the questionnaire is designed to collect a very specific type of information. It can be used to initiate a formal enquiry or to collect data to prove or disprove a prior hypothesis.
  • Unstructured Questionnaires: An unstructured questionnaire collects qualitative data. The questionnaire in this case has a basic structure and some branching questions, but nothing that limits the responses of a respondent. The questions are more open-ended.


Types of Questions used in a Questionnaire

A questionnaire can consist of many types of questions . Some of the commonly and widely used question types though, are:

  • Open-Ended Questions: One of the most commonly used question types in a questionnaire is the open-ended question. These questions help collect in-depth data from a respondent, as there is wide scope to respond in detail.
  • Dichotomous Questions: The dichotomous question is a “yes/no” close-ended question. It is generally used when basic validation is needed, and it is the easiest question type in a questionnaire.
  • Multiple-Choice Questions: A question type that is easy both to administer and to respond to is the multiple-choice question. These are close-ended questions in either a single-select or a multiple-select format. Each multiple-choice question consists of an incomplete stem (the question), the right answer or answers, close alternatives, distractors, and incorrect answers. Depending on the objective of the research, a mix of these option types can be used.
  • Net Promoter Score (NPS) Question: Another commonly used question type is the Net Promoter Score (NPS) question, in which a single question measures how likely respondents are to recommend the subject of the research.
  • Scaling Questions: Scaling questions are widely used in questionnaires because they make responding very easy. These questions are based on the principles of the four measurement scales: nominal, ordinal, interval, and ratio.
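The NPS question is scored with a standard formula: the percentage of promoters (ratings 9–10) minus the percentage of detractors (ratings 0–6), with passives (7–8) counted only in the total. A brief sketch (the ratings are invented for illustration):

```python
def net_promoter_score(ratings):
    """ratings: 0-10 answers to the NPS question.
    NPS = % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # 0.0 (2 promoters, 2 detractors)
```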

Questionnaires help enterprises collect valuable data to make well-informed business decisions. There are powerful tools available in the market that allow using multiple question types, ready-to-use survey templates, robust analytics, and many more features to conduct comprehensive market research.


For example, suppose an enterprise wants to conduct market research to understand what pricing would best capture a higher market share for its new product. In such a case, a questionnaire for competitor analysis can be sent to the target audience using market research survey software, helping the enterprise conduct 360-degree market research and make strategic business decisions.

Now that we have learned what a questionnaire is and its use in market research , some examples and samples of widely used questionnaire templates on the QuestionPro platform are as below:


Customer Questionnaire Templates: Examples and Samples

QuestionPro specializes in end-to-end customer questionnaire templates that can be used to evaluate the customer journey, from first engaging with a brand to continued use and recommendation of the brand. These templates are excellent samples from which to form your own questionnaire and begin testing customer satisfaction and experience based on customer feedback.


Employee & Human Resource (HR) Questionnaire Templates: Examples and Samples

QuestionPro has built a huge repository of employee questionnaires and HR questionnaires that can be readily deployed to collect feedback from an organization’s workforce on multiple parameters, such as employee satisfaction, benefits evaluation, manager evaluation, and exit formalities. These templates provide a holistic approach to collecting actionable data from employees.

Community Questionnaire Templates: Examples and Samples

The QuestionPro repository of community questionnaires helps collect varied data on all community aspects. This template library includes popular questionnaires such as community service, demographic questionnaires, psychographic questionnaires, personal questionnaires and much more.

Academic Evaluation Questionnaire Templates: Examples and Samples

Another widely used section of QuestionPro questionnaire templates is the academic evaluation questionnaires. These questionnaires are crafted to collect in-depth data about academic institutions, the quality of teaching provided, and extra-curricular activities, as well as feedback about other educational activities.


77 interesting medical research topics for 2024

Last updated: 25 November 2023

Reviewed by: Brittany Ferri, PhD, OTR/L

Medical research is the gateway to improved patient care and expanding our available treatment options. However, finding a relevant and compelling research topic can be challenging.

Use this article as a jumping-off point to select an interesting medical research topic for your next paper or clinical study.

How to choose a medical research topic

When choosing a research topic , it’s essential to consider a couple of things. What topics interest you? What unanswered questions do you want to address? 

During the decision-making and brainstorming process, here are a few helpful tips to help you pick the right medical research topic:

Focus on a particular field of study

The best medical research is specific to a particular area. Generalized studies are often too broad to produce meaningful results, so we advise picking a specific niche early in the process. 

Maybe a certain topic interests you, or your industry knowledge reveals areas of need.

Look into commonly researched topics

Once you’ve chosen your research field, do some preliminary research. What have other academics done in their papers and projects? 

From this list, you can focus on specific topics that interest you without accidentally creating a copycat project. This groundwork will also help you uncover any literature gaps—those may be beneficial areas for research.

Get curious and ask questions

Now you can get curious. Ask questions that start with why, how, or what. These questions are the starting point of your project design and will act as your guiding light throughout the process. 

For example: 

  • What impact does pollution have on children’s lung function in inner-city neighborhoods?
  • Why is pollution-based asthma on the rise?
  • How can we address pollution-induced asthma in young children?

77 medical research topics worth exploring in 2024

Need some research inspiration for your upcoming paper or clinical study? We’ve compiled a list of 77 topical and in-demand medical research ideas. Let’s take a look. 

Exciting new medical research topics

If you want to study cutting-edge topics, here are some exciting options:

COVID-19 and long COVID symptoms

Since 2020, COVID-19 has been a hot-button topic in medicine, along with the long-term symptoms in those with a history of COVID-19. 

Examples of COVID-19-related research topics worth exploring include:

  • The long-term impact of COVID-19 on cardiac and respiratory health
  • COVID-19 vaccination rates
  • The evolution of COVID-19 symptoms over time
  • New variants and strains of the COVID-19 virus
  • Changes in social behavior and public health regulations amid COVID-19

Vaccinations

Finding ways to cure or reduce the disease burden of chronic infectious diseases is a crucial research area. Vaccination is a powerful option and a great topic to research. 

Examples of vaccination-related research topics include:

  • mRNA vaccines for viral infections
  • Biomaterial vaccination capabilities
  • Vaccination rates based on location, ethnicity, or age
  • Public opinion about vaccination safety

Artificial tissues fabrication

With the need for donor organs increasing, finding ways to fabricate artificial bioactive tissues (and possibly organs) is a popular research area. 

Examples of artificial tissue-related research topics you can study include:

  • The viability of artificially printed tissues
  • Tissue substrate and building block material studies
  • The ethics and efficacy of artificial tissue creation

Medical research topics for medical students

For many medical students, research is a big driver for entering healthcare. If you’re a medical student looking for a research topic, here are some great ideas to work from:

Sleep disorders

Poor sleep quality is a growing problem, and it can significantly impact a person’s overall health. 

Examples of sleep disorder-related research topics include:

  • How stress affects sleep quality
  • The prevalence and impact of insomnia on patients with mental health conditions
  • Possible triggers for sleep disorder development
  • The impact of poor sleep quality on psychological and physical health
  • How melatonin supplements impact sleep quality

Alzheimer’s and dementia 

Cognitive conditions like dementia and Alzheimer’s disease are on the rise worldwide. They currently have no cure. As a result, research about these topics is in high demand. 

Examples of dementia-related research topics you could explore include:

  • The prevalence of Alzheimer’s disease in a chosen population
  • Early onset symptoms of dementia
  • Possible triggers or causes of cognitive decline with age
  • Treatment options for dementia-like conditions
  • The mental and physical burden of caregiving for patients with dementia

Lifestyle habits and public health

Modern lifestyles have profoundly impacted the average person’s daily habits, and plenty of interesting topics explore its effects. 

Examples of lifestyle and public health-related research topics include:

The nutritional intake of college students

The impact of chronic work stress on overall health

The rise of upper back and neck pain from laptop use

Prevalence and cause of repetitive strain injuries (RSI)

  • Controversial medical research paper topics

Medical research is a hotbed of controversial topics, content, and areas of study. 

If you want to explore a more niche (and attention-grabbing) concept, here are some controversial medical research topics worth looking into:

The benefits and risks of medical cannabis

Depending on where you live, the legalization and use of cannabis for medical conditions is controversial for the general public and healthcare providers.

Examples of medical cannabis-related research topics that might grab your attention include:

The legalization process of medical cannabis

The impact of cannabis use on developmental milestones in youth users

Cannabis and mental health diagnoses

CBD’s impact on chronic pain

Prevalence of cannabis use in young people

The impact of maternal cannabis use on fetal development 

Understanding how THC impacts cognitive function

Human genetics

The Human Genome Project identified, mapped, and sequenced all human DNA genes. Its completion in 2003 opened up a world of exciting and controversial studies in human genetics.

Examples of human genetics-related research topics worth delving into include:

Medical genetics and the incidence of genetic-based health disorders

Behavioral genetics differences between identical twins

Genetic risk factors for neurodegenerative disorders

Machine learning technologies for genetic research

Sexual health studies

Human sexuality and sexual health are important (yet often stigmatized) medical topics that need new research and analysis.

As a diverse field ranging from sexual orientation studies to sexual pathophysiology, examples of sexual health-related research topics include:

The incidence of sexually transmitted infections within a chosen population

Mental health conditions within the LGBTQIA+ community

The impact of untreated sexually transmitted infections

Access to safe sex resources (condoms, dental dams, etc.) in rural areas

  • Health and wellness research topics

Human wellness and health are trendy topics in modern medicine as more people are interested in finding natural ways to live healthier lifestyles. 

If this field of study interests you, here are some big topics in the wellness space:

Gluten sensitivity

Gluten allergies and intolerances have risen over the past few decades. If you’re interested in exploring this topic, reactions range in severity from mild gastrointestinal symptoms to full-blown anaphylaxis. 

Some examples of gluten sensitivity-related research topics include:

The pathophysiology and incidence of Celiac disease

Early onset symptoms of gluten intolerance

The prevalence of gluten allergies within a set population

Gluten allergies and the incidence of other gastrointestinal health conditions

Pollution and lung health

Living in large urban cities means regular exposure to high levels of pollutants. 

As more people become interested in protecting their lung health, examples of impactful lung health and pollution-related research topics include:

The extent of pollution in densely packed urban areas

The prevalence of pollution-based asthma in a set population

Lung capacity and function in young people

The benefits and risks of steroid therapy for asthma

Pollution risks based on geographical location

Plant-based diets

Plant-based diets, such as vegan and vegetarian diets, are emerging trends in healthcare, though the research supporting them is still limited. 

If you’re interested in learning more about the potential benefits or risks of holistic, diet-based medicine, examples of plant-based diet research topics to explore include:

Vegan and plant-based diets as part of disease management

Potential risks and benefits of specific plant-based diets

Plant-based diets and their impact on body mass index

The effect of diet and lifestyle on chronic disease management

Health supplements

Supplements are a multi-billion dollar industry. Many health-conscious people take supplements, including vitamins, minerals, herbal medicine, and more. 

Examples of health supplement-related research topics worth investigating include:

Omega-3 fish oil safety and efficacy for cardiac patients

The benefits and risks of regular vitamin D supplementation

Health supplementation regulation and product quality

The impact of social influencer marketing on consumer supplement practices

Analyzing added ingredients in protein powders

  • Healthcare research topics

Working within the healthcare industry means you have insider knowledge and opportunity. Maybe you’d like to research the overall system, administration, and inherent biases that disrupt access to quality care. 

While these topics are essential to explore, it is important to note that these studies usually require approval and oversight from an Institutional Review Board (IRB). This ensures the study is ethical and does not harm any subjects. 

For this reason, the IRB sets protocols that require additional planning, so consider this when mapping out your study’s timeline. 

Here are some examples of trending healthcare research areas worth pursuing:

The pros and cons of electronic health records

The rise of electronic healthcare charting and records has forever changed how medical professionals and patients interact with their health data. 

Examples of electronic health record-related research topics include:

The number of medication errors reported during a software switch

Nurse sentiment analysis of electronic charting practices

Ethical and legal studies into encrypting and storing personal health data

Inequities within healthcare access

Many barriers inhibit people from accessing the quality medical care they need. These issues result in health disparities and injustices. 

Examples of research topics about health inequities include:

The impact of social determinants of health in a set population

Early- and late-stage cancer diagnosis in urban vs. rural populations

Affordability of life-saving medications

Health insurance limitations and their impact on overall health

Diagnostic and treatment rates across ethnicities

People who belong to an ethnic minority are more likely to experience barriers and restrictions when trying to receive quality medical care. This is due to systemic healthcare racism and bias. 

As a result, diagnostic and treatment rates in minority populations are a hot-button field of research. Examples of ethnicity-based research topics include:

Cancer biopsy rates in BIPOC women

The prevalence of diabetes in Indigenous communities

Access inequalities in women’s health preventative screenings

The prevalence of undiagnosed hypertension in Black populations

  • Pharmaceutical research topics

Large pharmaceutical companies are incredibly interested in investing in research to learn more about potential cures and treatments for diseases. 

If you’re interested in building a career in pharmaceutical research, here are a few examples of in-demand research topics:

Cancer treatment options

Clinical research is in high demand as pharmaceutical companies explore novel cancer treatment options outside of chemotherapy and radiation. 

Examples of cancer treatment-related research topics include:

Stem cell therapy for cancer

Oncogenic gene dysregulation and its impact on disease

Cancer-causing viral agents and their risks

Treatment efficacy based on early vs. late-stage cancer diagnosis

Cancer vaccines and targeted therapies

Immunotherapy for cancer

Pain medication alternatives

Historically, opioid medications were the primary treatment for short- and long-term pain. But, with the opioid epidemic getting worse, the need for alternative pain medications has never been more urgent. 

Examples of pain medication-related research topics include:

Opioid withdrawal symptoms and risks

Early signs of pain medication misuse

Anti-inflammatory medications for pain control

  • Identify trends in your medical research with Dovetail

Are you interested in contributing life-changing research? Today’s medical research is part of the future of clinical patient care. 

As your go-to resource for speedy and accurate data analysis, we are proud to partner with healthcare researchers to innovate and improve the future of healthcare.

20 Amazing health survey questions for questionnaires

Surveys are an excellent way to acquire data that isn’t revealed by lab results or casual conversation. Patients can be reluctant to offer personal feedback face to face, but surveys allow them to do so confidently. Online surveys encourage communication by collecting opinions from patients and staff.

The health assessment of a person plays a significant role in determining and assessing their health status. Healthcare organizations frequently use health assessment survey questions to gather patient data more effectively, quickly, and conveniently. This article will explain what a health survey is, how you can create one quickly on forms.app, and examples of health survey questions you can use in your survey.

  • What is a health survey?

Health surveys are a crucial and practical decision-making tool when creating a health plan. Health studies provide detailed information about the chronic illnesses that patients have, as well as about patient perspectives on health trends, lifestyle, and use of healthcare services.

A patient satisfaction survey is a collection of questions designed to get feedback from patients and gauge their satisfaction with the service and quality of their healthcare provider. The patient satisfaction survey questionnaire assists in identifying critical indicators for patient care that aid medical institutions in understanding the quality of treatment offered and potential service issues.


  • How to write better questions in your health survey

The proper application of a health survey is its most crucial component, and timing is critical. Patients in the hospital rarely have the uninterrupted time needed to complete survey questions, so it is usually better to ask them to complete the survey after their visit. Here are some tips on how to write good health survey questions:

1. Ask clear questions

In general, people avoid completing long and obscure surveys. Patients want to understand the questions clearly when sharing their views and ideas. If you keep the health survey questions clear and short, you can increase the number of respondents and get more effective results. To make your questions clear, you can add descriptions under question titles.


2. Use visual power

The use of visuals in surveys positively affects the number of participants. By using images in health surveys, you can enable patients to respond more quickly and accurately. For example, in a question asking patients in which region they feel pain, an image of the body can make it easier for them to answer.


3. Reserve a section in the questionnaire for patient suggestions

Opinions and suggestions of patients are essential to improving treatment, health, and hospital systems. In the last part of the questionnaire, you can ask patients to share their opinions and suggestions. In this way, patients feel that their views matter, and you can learn from them more effectively.


4. Include an 'other' option in the answer choices

There may not be a suitable option for every patient among the answer choices. This can cause patients to leave the question blank or give an inaccurate answer. In this case, you can let patients write their own reply by adding an 'other' option.


  • 20 excellent health survey question examples

A health survey question asks respondents about their general health and condition. Researchers can use these questions to gather data about public health, disease risk factors, patients’ feelings about their medical care, and other relevant information. 

A health survey effectively gathers information from a large population or a specific target group. You can collect critical data by asking the appropriate questions at the right time. Below are 20 great health survey question examples:

1  - How healthy do you feel on a scale of 1 to 10?

2  - How often do you go to the hospital?

  a) Once a week

  b) Once every two weeks

  c) Once a month

  d) Once every three months

  e) Once a year

  f) Other (Please write your answer)

3  - Do you have any chronic diseases?

  a) Yes 

  b) No 

4  - Do you have any genetic diseases?

  a) Diabetes

  b) High blood pressure

  c) Huntington’s disease

  d) Thalassemia

  e) Hemophilia

  f) Other (Please specify)

5  - Do you regularly use alcohol and/or drugs?

  a) Yes, both

  b) Drugs only

  c) Alcohol only

  d) No

6  - How frequently do you get your health checkup?

  a) Once in 2 months

  b) Once in 6 months

  c) Once a year

  d) Only when needed

  e) Never get it done

7  - Does anyone in your family have a hereditary disease?

  a) Yes

  b) No

8  - How often do you exercise?

  a) Every day

  b) Once in two days

  c) Once a week

  d) Once a month

  e) Never

9  - Have you had an allergic reaction or received treatment for it?

  a) Yes, and I received treatment

  b) Yes, but I did not receive treatment

  c) No, I've never had one

10  - At what level of function can you carry out routine tasks?

  a) Excellent level

  b) Good level

  c) Intermediate level

  d) Bad level

  e) Terrible level

11  - Have you experienced depression or psychological distress in the last four weeks?

  a) Yes, very much

  b) Sometimes

  c) Never

12  - How much have your emotional issues impacted your interactions with friends and family over the past four weeks?

  a) Not at all

  b) Very little

  c) Moderately

  d) Quite a bit

  e) Very much

13  - How would you rate your treatment process?

  a) Wonderful

  b) Above average

  c) Average

  d) Below average

  e) Very poor

14  - Do you use any medication regularly?

15  - Which medications have you used in the last 24 hours?

16  - How was the doctor's attitude towards you on a scale of 1 to 10?

17  - How do you rate the local hospitals in your area?

  a) Excellent

  b) Good

  c) Poor

18  - Please rate (1-10) your agreement with the following: Health insurance is affordable.

19  - In which of the following have you experienced pain in the past month?

  a) Heart

  b) Kidney

  c) Lung

  d) Stomach

  e) Other (Please specify)

20  - Would you recommend this health facility to your family and friends?

  a) Definitely yes

  b) Yes

  c) No  

  d) Definitely not

  • How to create a health survey on forms.app

forms.app is one of the best survey makers. It offers its users a wide variety of ready-to-use forms, surveys, and quizzes. The free health survey template on forms.app is easy to use. The steps below explain how to use forms.app to create a health questionnaire.

1  - Sign up or log in to forms.app : To create a health survey quickly and easily, you must first log in to forms.app. If you do not have an account, you can register for free.


2  - Choose a template or start from scratch :  On forms.app, you can select from a wide range of templates covering many topics. You can edit an existing survey template by selecting it and making the necessary changes, or you can start with a blank form and add fields as you see fit.


3  - Select a theme or manually customize your form : You can also select a different theme from the many options offered by forms.app.


4  - Complete the settings : Finish the settings and save. After completing all the settings, the survey is ready to use! You can save it and share it with participants.


Free health survey templates

A hospital or health center can gather patients' feedback on their care and services by conducting a health survey. You can quickly and efficiently get answers from patients using the forms.app questionnaire you created. This survey tool enables medical professionals to pinpoint risk factors in the community around hospitals or healthcare facilities, including prevalent health practices such as drug use, smoking, poor dietary choices, and inactivity.

Hospitals can determine whether patients' diagnoses are accurate and whether their medications are sufficient to treat them. Surveys that ask each patient the right questions will move more quickly and contribute to improving health services. You can get started using the free templates below.

Mental Health Quiz

Mental Health Evaluation Form

Telemental Health Consent Form Template

Sena is a content writer at forms.app. She likes to read and write articles on different topics. Sena also likes to learn about different cultures and travel. She likes to study and learn different languages. Her specialty is linguistics, surveys, survey questions, and sampling methods.


Respiratory questionnaire and instructions to interviewers (1986)

The Medical Research Council produced this questionnaire and its instructions in 1986 to research chronic bronchitis in large-scale studies.


Questionnaire on respiratory symptoms (1986) (PDF)

PDF, 87 KB


Questionnaire on respiratory symptoms (1986) instructions to interviewers (PDF)

PDF, 143 KB

The questionnaire was first published in 1960 under the approval of the MRC Committee on the Aetiology of Chronic Bronchitis. This was revised and a new version published in 1966.

When the committee disbanded, responsibility for it passed to the newly formed MRC Committee for Research into Chronic Bronchitis, which again revised it in 1976. When this committee disbanded, responsibility for the questionnaire passed to the Committee on Environmental and Occupational Health (CEOH), which reviewed it and issued what remains the most recent version in 1986.

The questionnaire on respiratory symptoms was designed to be used only in large-scale epidemiological studies (100 to 1,000 people). It cannot be used on an individual basis.


Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based off that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.
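The pilot-to-closed-ended workflow described above can be sketched in a few lines of Python. This is a minimal illustration, not any survey platform's actual code; the function name and sample responses are hypothetical.

```python
from collections import Counter

def top_answer_choices(pilot_responses, k=5):
    # Normalize free-text pilot answers, tally them, and return the
    # k most common as candidate options for the closed-ended question.
    counts = Counter(r.strip().lower() for r in pilot_responses)
    return [answer for answer, _ in counts.most_common(k)]

# Hypothetical open-ended pilot data:
pilot = ["The economy", "the economy", "Health care",
         "health care", "terrorism", "the economy"]
print(top_answer_choices(pilot, k=3))
# ['the economy', 'health care', 'terrorism']
```

In practice the raw responses would be coded by human analysts rather than a simple lowercase normalization, but the principle is the same: the most frequent pilot answers become the closed-ended answer choices.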

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
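Per-respondent randomization of answer choices can be sketched as below. This is an illustration of the idea, not Pew's implementation, and the issue labels are placeholders rather than the actual 2008 question wording.

```python
import random

def randomized_options(options, rng=None):
    # Return a fresh, independently shuffled copy of the answer
    # choices for one respondent. Shuffling does not remove primacy
    # or recency effects, but it spreads that bias evenly across
    # all options instead of always favoring the same ones.
    rng = rng or random.Random()
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

# Hypothetical issue list; each respondent sees it in a different order.
issues = ["the economy", "health care", "terrorism",
          "energy policy", "the war in Iraq"]
for _ in range(3):
    print(randomized_options(issues))
```

Note that the original list is left untouched, so the canonical option order is still available when tabulating results.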

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
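The two practices just described — fully shuffling unordered response options while only reversing ordinal scales for a random half of the sample — can be sketched as follows. The option lists are illustrative, not Pew Research Center’s actual wording:

```python
import random

# Illustrative option lists (hypothetical, not actual survey wording).
ISSUE_OPTIONS = ["education", "health care", "the economy", "terrorism", "taxes"]
ORDINAL_SCALE = ["excellent", "good", "only fair", "poor"]

def options_for_respondent(rng):
    """Per-respondent presentation order: unordered lists are fully shuffled,
    while ordinal scales keep their order and are only reversed for a random
    half of the sample."""
    issues = ISSUE_OPTIONS[:]   # copy so the master list stays untouched
    rng.shuffle(issues)
    scale = ORDINAL_SCALE[:] if rng.random() < 0.5 else ORDINAL_SCALE[::-1]
    return issues, scale

rng = random.Random(7)          # seeded only to make the example reproducible
issues, scale = options_for_respondent(rng)
```

Every respondent sees the same set of choices, but each unordered list lands in a fresh permutation, while the ordinal scale is always a readable continuum, forward or reversed.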

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
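Because respondents are randomly assigned to the two forms, a difference between forms can be tested as a difference between two independent proportions. A minimal stdlib sketch using a pooled two-proportion z-test; the per-form sample sizes below are hypothetical, and only the 51%-vs-44% figures come from the text:

```python
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test for comparing form A vs. form B
    in a split-form wording experiment."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF,
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 383 of 750 (~51%) favored under wording A,
# 330 of 750 (44%) under wording B.
z, p = two_proportion_ztest(383, 750, 330, 750)
```

With samples of this size, a 7-point gap between forms is well beyond what random assignment alone would produce, which is how a wording effect is distinguished from noise.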


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when an interviewer is present than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP (American Trends Panel) is that demographic questions usually only need to be asked once a year, not in each survey.


Stop COVID Cohort: An Observational Study of 3480 Patients Admitted to the Sechenov University Hospital Network in Moscow City for Suspected Coronavirus Disease 2019 (COVID-19) Infection

Collaborators

  • Sechenov StopCOVID Research Team : Anna Berbenyuk ,  Polina Bobkova ,  Semyon Bordyugov ,  Aleksandra Borisenko ,  Ekaterina Bugaiskaya ,  Olesya Druzhkova ,  Dmitry Eliseev ,  Yasmin El-Taravi ,  Natalia Gorbova ,  Elizaveta Gribaleva ,  Rina Grigoryan ,  Shabnam Ibragimova ,  Khadizhat Kabieva ,  Alena Khrapkova ,  Natalia Kogut ,  Karina Kovygina ,  Margaret Kvaratskheliya ,  Maria Lobova ,  Anna Lunicheva ,  Anastasia Maystrenko ,  Daria Nikolaeva ,  Anna Pavlenko ,  Olga Perekosova ,  Olga Romanova ,  Olga Sokova ,  Veronika Solovieva ,  Olga Spasskaya ,  Ekaterina Spiridonova ,  Olga Sukhodolskaya ,  Shakir Suleimanov ,  Nailya Urmantaeva ,  Olga Usalka ,  Margarita Zaikina ,  Anastasia Zorina ,  Nadezhda Khitrina

Affiliations

  • 1 Department of Pediatrics and Pediatric Infectious Diseases, Institute of Child's Health, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.
  • 2 Inflammation, Repair, and Development Section, National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom.
  • 3 Soloviev Research and Clinical Center for Neuropsychiatry, Moscow, Russia.
  • 4 School of Physics, Astronomy, and Mathematics, University of Hertfordshire, Hatfield, United Kingdom.
  • 5 Biobank, Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.
  • 6 Institute for Regenerative Medicine, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.
  • 7 Chemistry Department, Lomonosov Moscow State University, Moscow, Russia.
  • 8 Department of Polymers and Composites, N. N. Semenov Institute of Chemical Physics, Moscow, Russia.
  • 9 Department of Clinical and Experimental Medicine, Section of Pediatrics, University of Pisa, Pisa, Italy.
  • 10 Institute of Social Medicine and Health Systems Research, Faculty of Medicine, Otto von Guericke University Magdeburg, Magdeburg, Germany.
  • 11 Institute for Urology and Reproductive Health, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.
  • 12 Department of Intensive Care, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.
  • 13 Clinic of Pulmonology, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.
  • 14 Department of Internal Medicine No. 1, Institute of Clinical Medicine, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.
  • 15 Department of Forensic Medicine, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.
  • 16 Department of Statistics, University of Oxford, Oxford, United Kingdom.
  • 17 Medical Research Council Population Health Research Unit, Nuffield Department of Population Health, University of Oxford, Oxford, United Kingdom.
  • 18 Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, United Kingdom.
  • 19 Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Oxford, United Kingdom.
  • 20 Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia.
  • PMID: 33035307
  • PMCID: PMC7665333
  • DOI: 10.1093/cid/ciaa1535

Background: The epidemiology, clinical course, and outcomes of patients with coronavirus disease 2019 (COVID-19) in the Russian population are unknown. Information on the differences between laboratory-confirmed and clinically diagnosed COVID-19 in real-life settings is lacking.

Methods: We extracted data from the medical records of adult patients who were consecutively admitted for suspected COVID-19 infection in Moscow between 8 April and 28 May 2020.

Results: Of the 4261 patients hospitalized for suspected COVID-19, outcomes were available for 3480 patients (median age, 56 years; interquartile range, 45-66). The most common comorbidities were hypertension, obesity, chronic cardiovascular disease, and diabetes. Half of the patients (n = 1728) had a positive reverse transcriptase-polymerase chain reaction (RT-PCR), while 1748 had a negative RT-PCR but had clinical symptoms and characteristic computed tomography signs suggestive of COVID-19. No significant differences in frequency of symptoms, laboratory test results, and risk factors for in-hospital mortality were found between those exclusively clinically diagnosed or with positive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RT-PCR. In a multivariable logistic regression model the following were associated with in-hospital mortality: older age (per 1-year increase; odds ratio, 1.05; 95% confidence interval, 1.03-1.06), male sex (1.71; 1.24-2.37), chronic kidney disease (2.99; 1.89-4.64), diabetes (2.1; 1.46-2.99), chronic cardiovascular disease (1.78; 1.24-2.57), and dementia (2.73; 1.34-5.47).

Conclusions: Age, male sex, and chronic comorbidities were risk factors for in-hospital mortality. The combination of clinical features was sufficient to diagnose COVID-19 infection, indicating that laboratory testing is not critical in real-life clinical practice.

Keywords: COVID-19; Russia; SARS-CoV-2; cohort; mortality risk factors.

© The Author(s) 2020. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: [email protected].

Publication types

  • Observational Study
  • Research Support, Non-U.S. Gov't

Grants and funding

  • 20-04-60063/Russian Foundation for Basic Research

• Open access
  • Published: 22 April 2024

Training nurses in an international emergency medical team using a serious role-playing game: a retrospective comparative analysis

  • Hai Hu 1 , 2 , 3   na1 ,
  • Xiaoqin Lai 2 , 4 , 5   na1 &
  • Longping Yan 6 , 7 , 8  

BMC Medical Education volume  24 , Article number:  432 ( 2024 ) Cite this article

37 Accesses

Metrics details

Although game-based applications have been used in disaster medicine education, no serious computer games have been designed specifically for training newly joined nurses in an international emergency medical team (IEMT) setting. To address this need, we developed a serious computer game called the IEMTtraining game. In this game, players assume the roles of IEMT nurses, assess patient injuries in a virtual environment, and provide suitable treatment options.

The design of this study is a retrospective comparative analysis of the pre-, post-, and final test scores of 209 nurses in the IEMT at one hospital. Data were collected during the 2019–2020 academic year. Additionally, a survey questionnaire was distributed to trainees to gather their views on the teaching methods, which were subsequently analyzed.

There was a significant difference in the overall test scores between the two groups, with the game group demonstrating superior performance compared to the control group (odds ratio = 1.363, p value = 0.010). The survey results indicated that the game group exhibited higher learning motivation scores and lower cognitive load compared with the lecture group.

Conclusions

The IEMT training game developed by the instructor team is a promising and effective method for training nurses in disaster rescue within IEMTs. The game equips the trainees with the necessary skills and knowledge to respond effectively to emergencies. It is easily comprehended, enhances knowledge retention and motivation to learn, and reduces cognitive load.

Peer Review reports

Since the beginning of the twenty-first century, the deployment of international emergency medical teams in disaster-stricken regions has increased worldwide [ 1 ]. To guarantee the competence of these teams, the World Health Organization (WHO) has introduced the International Emergency Medical Team (IEMT) initiative, and adequate education and training play a vital role in achieving this objective [ 2 ].

Nurses play a vital role in IEMTs by providing essential medical care and support to populations affected by disasters and emergencies. Training newly joined nurses is an integral part of IEMT training.

Typical training methods include lectures, field-simulation exercises, and tabletop exercises [ 3 , 4 , 5 ]. Lectures, despite requiring fewer teaching resources, are often perceived as boring and abstract, and may not be the most suitable method for training newly joined nurses in the complexities of international medical responses. Field-simulation exercises, by contrast, can be effective in mastering the knowledge and skills of disaster medicine response, but they come with significant costs and requirements, such as extended instructional periods, additional teachers or instructors, and thorough preparation. These high costs make it challenging to organize simulation exercises repeatedly, making them less suitable for training newly joined nurses [ 6 ].

Moreover, classic tabletop exercises that use simple props, such as cards in a classroom setting, have limitations. The rules of these exercises are typically simple, which makes it challenging to simulate complex disaster scenarios. In addition, these exercises cannot replicate real-life situations, making them too abstract for newly joined nurses to fully grasp [ 7 , 8 ].

Recently, game-based learning has gained increasing attention as an interactive teaching method [ 9 , 10 ]. Previous studies have validated the efficacy of game-based mobile applications [ 11 , 12 ]. Serious games that align with curricular objectives have shown potential to facilitate more effective learner-centered educational experiences for trainees [ 13 , 14 ]. Although game-based applications have been used in disaster medicine education, no serious computer games have been designed specifically for training newly joined nurses in an international IEMT setting.

Our team is an internationally certified IEMT organization verified by the WHO, underscoring the importance of providing training for newly joined nurses in international medical responses. To address this need, we organized training courses for them. As part of the training, we incorporated a serious computer game called the IEMTtraining game. In this game, players assume the roles of IEMT nurses, assess patient injuries in a virtual environment, and provide suitable treatment options. This study aims to investigate the effectiveness of the IEMTtraining game. To the best of our knowledge, this is the first serious game specifically designed to train newly joined nurses in an IEMT setting.


Study design

This study was conducted using data from the training records database of participants who had completed the training. The database includes comprehensive demographic information, exam scores, and detailed information from post-training questionnaires for all trainees. We reviewed the training scores and questionnaires of participants who took part in the training from Autumn 2019 to Spring 2020.

The local Institutional Review Committee approved the study and waived the requirement for informed consent due to the study design. The study complied with the international ethical guidelines for human research, such as the Declaration of Helsinki. The accessed data were anonymized.

Participants

A total of 209 newly joined nurses were required to participate in the training. Due to limitations in the size of the training venue, the trainees had to be divided into two groups. All trainees were required to choose a group and register online. The training team provided the schedule and topic of the two training sessions to all trainees before the training commenced, and each trainee could sign up based on their individual circumstances. Considering the dimensions of the venue, the training team set a maximum of 110 trainees per group, assigned on a first-come, first-served basis. If a group reached its capacity, any unregistered trainees were automatically assigned to the other group.

In the fall of 2019, 103 newly joined nurses opted for the lecture training course (lecture group). In this group, instructors solely used the traditional teaching methods of lectures and demonstrations. The remaining 106 newly joined nurses underwent game-based training (game group). In addition to the traditional lectures and demonstrations, the instructor incorporated an IEMTtraining game to enhance the training experience in the game group.

The IEMTtraining game

The IEMTtraining game, a role-playing game, was implemented using RPG Maker MV version 1.6.1 (Kadokawa Corporation, Tokyo, Japan). Players assumed the roles of rescuers in a fictional earthquake setting (Part 1 of the Supplemental Digital Content).

The storyline revolves around an earthquake scenario, with the main character being an IEMT nurse. The simulated scenario contains 1000 patients, and the objective for each player is to treat as many patients as possible to earn more experience points than the other players. Within the game scene, multiple nonplayer characters play the role of injured patients. The player navigates the main character using a computer mouse. Upon encountering an injured person, the player can view the injury information by clicking on the patient and selecting a triage tag, and can then select the necessary medical supplies from the kit to provide treatment. Additionally, the player is required to act according to the minimum standards for IEMTs, such as registering with the IEMT coordination cell and reporting injury information following the minimum data set (MDS) designed by the WHO [ 15 , 16 ]. This portion of the training content imposes uniform requirements on all IEMT members, hence it is necessary for IEMT nurses to learn it. All correct choices accumulate experience points; the game duration can be set by the instructor, and the player with the highest experience points at the end of the game wins.
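The scoring rule described — one experience point per correct triage, treatment, or reporting choice — could be sketched as follows. All names here are illustrative; the game’s actual implementation is not published:

```python
# Hypothetical sketch of the experience-point scoring rule.
def score_actions(actions, answer_key):
    """One experience point per correct choice.

    actions:    list of (patient_id, choice) tuples made by the player.
    answer_key: mapping of patient_id -> correct choice for that patient.
    """
    return sum(1 for patient_id, choice in actions
               if answer_key.get(patient_id) == choice)

answer_key = {1: "red tag", 2: "green tag"}
# Player tags patient 1 correctly and patient 2 incorrectly.
xp = score_actions([(1, "red tag"), (2, "yellow tag")], answer_key)
```

At the end of a session, comparing each player’s accumulated points against the others’ determines the winner.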

Measurement

We collected the trainees’ test scores from our training database to assess their knowledge mastery, and collected post-training questionnaire data to investigate their learning motivation, cognitive load, and technology acceptance.

Pre-test, post-test, and final test

All trainees were tested on three separate occasions: (1) a “pre-test” before the educational intervention, (2) a “post-test” following the intervention, and (3) a “final test” at the end of the term (six weeks after the intervention). Each test comprised 20 multiple-choice questions (0.5 points per item, for a maximum of 10 points) assessing the trainees’ mastery of crucial points in their knowledge and decision-making; higher scores indicate better mastery.

Questionnaires

The questionnaires used in this study can be found in Part 2 of the Supplemental Digital Content .

The learning motivation questionnaire used in this study was based on the measure developed by Hwang and Chang [ 17 ]. It comprises seven items rated on a six-point scale. The reliability of the questionnaire, as indicated by Cronbach’s alpha, was 0.79.

The cognitive load questionnaire was adapted from the questionnaire developed by Hwang et al [ 18 ]. It consisted of five items for assessing “mental load” and three items for evaluating “mental effort.” The items were rated using a six-point Likert scale. The Cronbach’s alpha values for the two parts of the questionnaire were 0.86 and 0.85, respectively.

The technology acceptance questionnaire, which was administered only to the game group because it specifically focused on novel teaching techniques and lacked relevance to the lecture group, was derived from the measurement instrument developed by Chu et al [ 19 ]. It comprised seven items for measuring “perceived ease of use” and six items for assessing “perceived usefulness.” The items were rated on a six-point Likert scale. The Cronbach’s alpha values for the two parts of the questionnaire were 0.94 and 0.95, respectively.
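The Cronbach’s alpha values reported for these questionnaires can be computed from item-level variances. A minimal stdlib sketch of the standard formula, α = k/(k−1) · (1 − Σ item variances / variance of total scores):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores: one inner list per questionnaire item, each holding the
    scores of all respondents on that item.
    """
    k = len(item_scores)
    totals = [sum(per_item) for per_item in zip(*item_scores)]  # per-respondent totals
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Two perfectly correlated items: alpha is exactly 1.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

Values around 0.8 or higher, like those reported above, are conventionally read as good internal consistency.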

The lecture group received 4 hours of traditional lectures. Additionally, 1 week before the lecture, the trainees were provided with a series of references related to the topic and were required to preview the content before the class. A pre-test was conducted before the lecture to assess the trainees’ prior knowledge, followed by a post-test immediately after the lecture, and a final test 6 weeks after training.

In the game group, the delivery and requirements for references were the same as those in the lecture group. However, the training format differed. The game group received a half-hour lecture introducing general principles, followed by 3 hours of gameplay. The last half hour was dedicated to summarizing the course and addressing questions or concerns. Similar to the lecture group, the trainees in this group also completed pre-, post-, and final tests. Additionally, a brief survey of the teaching methods was conducted at the end of the final test (see Fig.  1 ).

Fig. 1 General overview of the teaching procedure. The diagram shows the teaching and testing processes for the two groups of trainees. Q&A: questions and answers

Data analysis

All data were analyzed using IBM SPSS Statistics (version 20.0; IBM Corp., Armonk, NY, USA). Only trainees who participated in all three tests were included in the analysis. Of the 209 trainees in total, 11 (6 from the lecture group and 5 from the game group) were excluded due to incomplete data; the data of the remaining 198 trainees were included in the analysis.

Normally distributed measurement data were described as mean (standard deviation, SD); measurement data with non-normal distributions were expressed as median [first quartile, third quartile]; and categorical data were described using composition ratios (counts and percentages).
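This reporting convention can be sketched as follows (a stdlib-only illustration: the `describe` helper and the sample values are our own, and in practice the normal/non-normal decision would come from a formal normality test rather than a flag):

```python
# Sketch of the descriptive-statistics convention described above:
# mean (SD) for normally distributed data, median [Q1, Q3] otherwise.
from statistics import mean, stdev, quantiles

def describe(values, normal):
    """Summarise a variable the way the paper reports it."""
    if normal:
        return f"{mean(values):.2f} ({stdev(values):.2f})"
    q1, q2, q3 = quantiles(values, n=4)   # quartiles (exclusive method)
    return f"{q2:g} [{q1:g}, {q3:g}]"

ages = [25, 27, 28, 26, 30, 29, 27]       # illustrative, not study data
print(describe(ages, normal=True))        # mean (SD)
ratings = [3, 4, 4, 5, 3, 4, 6, 4]        # illustrative six-point ratings
print(describe(ratings, normal=False))    # median [Q1, Q3]
```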

A generalized estimating equation (GEE) was employed to compare the groups’ pre-, post-, and final test scores, and the Mann–Whitney U test was used to compare the questionnaire scores between the two groups. Statistical significance was set at p < 0.05.
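The GEE itself would come from a statistics package (SPSS here, or, say, statsmodels in Python), but the Mann–Whitney U statistic used for the between-group questionnaire comparisons is simple enough to sketch directly. The rating vectors below are invented for illustration:

```python
# Mann-Whitney U for two independent samples: each pair where the first
# sample's value is larger counts 1, each tie counts 0.5. (The p-value
# would normally come from a package such as scipy.stats.mannwhitneyu.)
def mann_whitney_u(x, y):
    return sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in x for b in y)

lecture = [4, 3, 4, 3, 4]   # illustrative six-point ratings, not study data
game    = [5, 4, 5, 6, 4]
print(mann_whitney_u(game, lecture))   # -> 22.0 (out of a maximum of 25)
```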

Results

Among the trainees included in the analysis, 97 (48.99%) were in the lecture group and 101 (51.01%) were in the game group.

The number of male trainees in the lecture and game groups was 30 (30.93%) and 33 (32.67%), respectively. The mean age of participants in the lecture group was 27.44 ± 4.31 years, whereas that of the game group was 28.05 ± 4.29 years. There were no significant differences in sex or age (Table  1 ). Regarding the test scores, no significant differences were found between the two groups in the pre- and post-tests. However, a significant difference was observed in the final test scores conducted 6 weeks later (Table 1 ).

According to the GEE analysis, the overall scores for the post-test and final test were higher than the pre-test scores. Additionally, there was a significant difference in the overall test scores between the two groups, with the game group demonstrating superior performance compared to the lecture group (odds ratio = 1.363, p value = 0.010). Further details of the GEE results can be found in Part 3 of the supplementary materials.

Table  2 presents the results of the questionnaire ratings for the two groups. The median [first quartile, third quartile] of the learning motivation questionnaire ratings were 4 [3, 4] for the lecture group and 5 [4, 5] for the game group. There were significant differences between the questionnaire ratings of the two groups ( p  < 0.001), indicating that the game group had higher learning motivation for the learning activity.

The median [first quartile, third quartile] of the overall cognitive load ratings were 3 [3, 4] and 4 [4, 5] for the game and lecture groups, respectively. There was a significant difference between the cognitive load ratings of the two groups ( p  < 0.001).

This study further compared two aspects of cognitive load: mental load and mental effort. The median [first quartile, third quartile] for the mental effort dimension were 3 [2, 3] and 4 [4, 5] for the game and lecture groups, respectively (p < 0.001). For mental load, the median [first quartile, third quartile] were 4 [3, 4] and 4 [3, 4] for the game and lecture groups, respectively. There was no significant difference in the mental load ratings between the two groups ( p  = 0.539).

To better understand the trainees’ perceptions of the use of the serious game, this study collected the feedback of the trainees in the game group regarding “perceived usefulness” and “perceived ease of use,” as shown in Table 2 . Most trainees provided positive feedback on the two dimensions of the serious game.

Discussion

To the best of our knowledge, this IEMT training game is the first serious game intended for newly joined nurses of IEMTs. Therefore, this study presents an initial investigation into the applicability of serious games in this setting.

Both lectures and serious games improved post-class test scores to the same level, consistent with previous studies. Krishnan et al. found that an educational game on hepatitis significantly improved knowledge scores [ 20 ]. Additionally, our study showed higher knowledge retention in the game group after 6 weeks, in line with previous studies on serious games. In a study on sexually transmitted diseases, game-based instruction was found to improve knowledge retention for resident physicians compared with traditional teaching methods [ 21 ]. The IEMT training game, designed as a role-playing game, is more likely to enhance knowledge retention in newly joined nurses in the long term. Therefore, serious games should be included in IEMT training.

This study demonstrated improved learning motivation in the game group, consistent with previous research indicating that game-based learning enhances motivation due to the enjoyable and challenging nature of the games [ 22 , 23 ]. A systematic review by Allan et al. further supports the positive impact of game-based learning tools on the motivation, attitudes, and engagement of healthcare trainees [ 24 ].

As serious games are a novel learning experience for trainees, it is worth investigating the cognitive load they experience. Our study found that serious games effectively reduce trainees’ overall cognitive load, particularly in terms of lower mental effort. Mental effort refers to the cognitive capacity used to handle task demands, reflecting the cognitive load associated with organizing and presenting learning content, as well as guiding student learning strategies [ 25 , 26 ]. This reduction in cognitive load is a significant advantage of serious gaming, as it helps learners better understand and organize their knowledge. However, our study did not find a significant difference in mental load between the two groups. Mental load considers the interaction between task and subject characteristics, based on students’ understanding of tasks and subject characteristics [ 18 ]. This finding is intriguing: it aligns with similar observations in game-based education for elementary and secondary school students [ 27 ], but is, to our knowledge, the first such report in the context of nursing training.

In our survey of the game group participants, we found that their feedback regarding the perceived ease of use and usefulness of the game was overwhelmingly positive. This indicates that the designed game was helpful to learners during the learning process. Moreover, the game’s mechanics were easily understood by the trainees, without requiring them to invest significant time and effort in learning the game rules and controls.

This study had some limitations. First, this retrospective observational study may have been susceptible to sampling bias owing to the non-random grouping of trainees, and it only reviewed existing data from the training database; prospective randomized controlled trials are required to validate our findings. Second, the serious game is currently available only in Chinese. We are developing an English version to better align with the training requirements of international IEMT nurses. Third, the development of such serious games can be time-consuming. To address this problem, we propose a meta-model to help researchers and instructors select appropriate game development models to implement effective serious games.

Conclusions

An IEMT training game for newly joined nurses is a highly promising training method. Its potential lies in its ability to offer engaging and interactive learning experiences, thereby effectively enhancing the training process. Furthermore, the game improved knowledge retention, increased motivation to learn, and reduced cognitive load. In addition, the game’s mechanics are easily understood by trainees, which further enhances its effectiveness as a training instrument.

Availability of data and materials

The data and materials used in this study are available from the corresponding author on reasonable request; the author's email address is provided in the article.

Abbreviations

WHO: World Health Organization

IEMT: International Emergency Medical Team

MDS: Minimum Data Set

GEE: Generalized estimating equation

SD: Standard deviation

World Health Organization. Classification and minimum standards for emergency medical teams. https://apps.who.int/iris/rest/bitstreams/1351888/retrieve . Published 2021. Accessed May 6, 2023.

World Health Organization. Classification and Minimum Standards for Foreign Medical Teams in Sudden Onset Disasters. https://cdn.who.int/media/docs/default-source/documents/publications/classification-and-minimum-standards-for-foreign-medical-teams-in-suddent-onset-disasters65829584-c349-4f98-b828-f2ffff4fe089.pdf?sfvrsn=43a8b2f1_1&download=true . Published 2013. Accessed May 6, 2023.

Brunero S, Dunn S, Lamont S. Development and effectiveness of tabletop exercises in preparing health practitioners in violence prevention management: a sequential explanatory mixed methods study. Nurse Educ Today. 2021;103:104976. https://doi.org/10.1016/j.nedt.2021.104976 .


Sena A, Forde F, Yu C, Sule H, Masters MM. Disaster preparedness training for emergency medicine residents using a tabletop exercise. Med Ed PORTAL. 2021;17:11119. https://doi.org/10.15766/mep_2374-8265.11119 .

Moss R, Gaarder C. Exercising for mass casualty preparedness. Br J Anaesth. 2022;128(2):e67–70. https://doi.org/10.1016/j.bja.2021.10.016 .

Hu H, Liu Z, Li H. Teaching disaster medicine with a novel game-based computer application: a case study at Sichuan University. Disaster Med Public Health Prep. 2022;16(2):548–54. https://doi.org/10.1017/dmp.2020.309 .

Chi CH, Chao WH, Chuang CC, Tsai MC, Tsai LM. Emergency medical technicians' disaster training by tabletop exercise. Am J Emerg Med. 2001;19(5):433–6. https://doi.org/10.1053/ajem.2001.24467 .

Hu H, Lai X, Li H, et al. Teaching disaster evacuation management education to nursing students using virtual reality Mobile game-based learning. Comput Inform Nurs. 2022;40(10):705–10. https://doi.org/10.1097/CIN.0000000000000856 .

van Gaalen AEJ, Brouwer J, Schönrock-Adema J, et al. Gamification of health professions education: a systematic review. Adv Health Sci Educ Theory Pract. 2021;26(2):683–711. https://doi.org/10.1007/s10459-020-10000-3 .

Adjedj J, Ducrocq G, Bouleti C, et al. Medical student evaluation with a serious game compared to multiple choice questions assessment. JMIR Serious Games. 2017;5(2):e11. https://doi.org/10.2196/games.7033 .

Hu H, Xiao Y, Li H. The effectiveness of a serious game versus online lectures for improving medical students' coronavirus disease 2019 knowledge. Games Health J. 2021;10(2):139–44. https://doi.org/10.1089/g4h.2020.0140 .

Pimentel J, Arias A, Ramírez D, et al. Game-based learning interventions to foster cross-cultural care training: a scoping review. Games Health J. 2020;9(3):164–81. https://doi.org/10.1089/g4h.2019.0078 .

Hu H, Lai X, Yan L. Improving nursing students' COVID-19 knowledge using a serious game. Comput Inform Nurs. 2021;40(4):285–9. https://doi.org/10.1097/CIN.0000000000000857 .

Menin A, Torchelsen R, Nedel L. An analysis of VR technology used in immersive simulations with a serious game perspective. IEEE Comput Graph Appl. 2018;38(2):57–73. https://doi.org/10.1109/MCG.2018.021951633 .

Kubo T, Chimed-Ochir O, Cossa M, et al. First activation of the WHO emergency medical team minimum data set in the 2019 response to tropical cyclone Idai in Mozambique. Prehosp Disaster Med. 2022;37(6):727–34.

Jafar AJN, Sergeant JC, Lecky F. What is the inter-rater agreement of injury classification using the WHO minimum data set for emergency medical teams? Emerg Med J. 2020;37(2):58–64. https://doi.org/10.1136/emermed-2019-209012 .

Hwang GJ, Chang HF. A formative assessment-based mobile learning approach to improving the learning attitudes and achievements of students. Comput Educ. 2011;56(4):1023–31. https://doi.org/10.1016/j.compedu.2010.12.002 .

Hwang GJ, Yang LH, Wang SY. Concept map-embedded educational computer game for improving students’ learning performance in natural science courses. Comput Educ. 2013;69:121–30.

Chu HC, Hwang GJ, Tsai CC, et al. A two-tier test approach to developing location-aware mobile learning system for natural science course. Comput Educ. 2010;55(4):1618–27. https://doi.org/10.1016/j.compedu.2010.07.004 .

Krishnan S, Blebil AQ, Dujaili JA, Chuang S, Lim A. Implementation of a hepatitis-themed virtual escape room in pharmacy education: A pilot study. Educ Inf Technol (Dordr). 2023;5:1–13. https://doi.org/10.1007/s10639-023-11745-1 . Epub ahead of print. PMID: 37361790; PMCID: PMC10073791

Butler SK, Runge MA, Milad MP. A game show-based curriculum for teaching principles of reproductive infectious disease (GBS PRIDE trial). South Med J. 2020;113(11):531–7. https://doi.org/10.14423/SMJ.0000000000001165 . PMID: 33140104

Haruna H, Hu X, Chu SKW, et al. Improving sexual health education programs for adolescent students through game-based learning and gamification. Int J Environ Res Public Health. 2018;15(9):2027. https://doi.org/10.3390/ijerph15092027 .

Rewolinski JA, Kelemen A, Liang Y. Type I diabetes self-management with game-based interventions for pediatric and adolescent patients. Comput Inform Nurs. 2020;39(2):78–88. https://doi.org/10.1097/CIN.0000000000000646 .

Allan R, McCann L, Johnson L, Dyson M, Ford J. A systematic review of 'equity-focused' game-based learning in the teaching of health staff. Public Health Pract (Oxf). 2023;27(7):100462. https://doi.org/10.1016/j.puhip.2023.100462 . PMID: 38283754; PMCID: PMC10820634

Zumbach J, Rammerstorfer L, Deibl I. Cognitive and metacognitive support in learning with a serious game about demographic change. Comput Hum Behav. 2020;103:120–9. https://doi.org/10.1016/j.chb.2019.09.026 .

Chang C-C, Liang C, Chou P-N, et al. Is game-based learning better in flow experience and various types of cognitive load than non-game-based learning? Perspective from multimedia and media richness. Comput Hum Behav. 2017;71:218–27. https://doi.org/10.1016/j.chb.2017.01.031 .

Kalmpourtzis G, Romero M. Constructive alignment of learning mechanics and game mechanics in serious game design in higher education. Int J Serious Games. 2020;7(4):75–88. https://doi.org/10.17083/ijsg.v7i4.361 .


Acknowledgements

We would like to thank all the staff who contributed to the database. We would like to thank Editage ( www.editage.cn ) for English language editing, and Dr. Yong Yang for help with the statistics. We also thank The 10th Sichuan University Higher Education Teaching Reform Research Project (No. SCU10170) and the West China School of Medicine (2023-2024) Teaching Reform Research Project (No. HXBK-B2023016) for their support.

Author information

Both Hai Hu and Xiaoqin Lai contributed equally to this work and should be regarded as co-first authors.

Authors and Affiliations

Emergency Management Office of West China Hospital, Sichuan University, No. 37 Guoxue Road, Chengdu City, Sichuan Province, China

China International Emergency Medical Team (Sichuan), Chengdu City, Sichuan Province, China

Hai Hu & Xiaoqin Lai

Emergency Medical Rescue Base, Sichuan University, Chengdu City, Sichuan Province, China

Day Surgery Center, West China Hospital, Sichuan University, Chengdu City, Sichuan Province, China

Xiaoqin Lai

Department of Thoracic Surgery, West China Tianfu Hospital, Sichuan University, Chengdu City, Sichuan Province, China

West China School of Nursing, Sichuan University, Chengdu City, Sichuan Province, China

Longping Yan

West China School of Public Health, Sichuan University, Chengdu, Sichuan, China

West China Fourth Hospital, Sichuan University, Chengdu, Sichuan, China


Contributions

HH conceived the study, designed the trial, and obtained research funding. XL supervised the conduct of the data collection from the database and managed the data, including quality control. HH and LY provided statistical advice on study design and analyzed the data. All authors drafted the manuscript and contributed substantially to its revision. HH takes responsibility for the paper as a whole.

Corresponding author

Correspondence to Hai Hu .

Ethics declarations

Ethics approval and consent to participate

The local institutional review committee approved the study and waived the need for informed consent from the participants owing to the study design.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Hu, H., Lai, X. & Yan, L. Training nurses in an international emergency medical team using a serious role-playing game: a retrospective comparative analysis. BMC Med Educ 24 , 432 (2024). https://doi.org/10.1186/s12909-024-05442-x


Received : 05 November 2023

Accepted : 17 April 2024

Published : 22 April 2024

DOI : https://doi.org/10.1186/s12909-024-05442-x


Keywords

  • Rescue work
  • Gamification
  • Simulation training


medical research questionnaire

Stanford Woods Institute for the Environment

Planet versus Plastics

Plastic waste has infiltrated every corner of our planet, from oceans and waterways to the food chain and even our bodies. Only 9% of plastic is recycled due to factors including poor infrastructure, technical challenges, lack of incentives, and low market demand.   

“We need legislation that disincentivizes big oil from producing plastic in the first place, coupled with enforced single use plastic taxes and fines,” says Desiree LaBeaud , professor of pediatric infectious diseases and senior fellow at   Stanford Woods Institute for the Environment . “We also need truly compostable alternatives that maintain the convenient lifestyle that plastic allows us now."

Plastic presents a problem like no other. Stanford scholars are approaching it from many angles: exploring the connection between plastic and disease, rethinking how plastic could be reused, and uncovering new ways of breaking down waste. In honor of Earth Day and this year’s theme – Planet vs. Plastics – we’ve highlighted stories about promising solutions to the plastics challenge. 

Environmental changes are altering the risk for mosquito-borne diseases

medical research questionnaire

Our changing climate is dramatically altering the landscape for mosquito-borne diseases, but other changes to the physical environment - like the proliferation of plastic trash - also make an impact, as mosquitos can breed in the plastic waste we discard. 

Since this study published, HERI-Kenya , a nonprofit started by Stanford infectious disease physician Desiree LaBeaud , has launched HERI Hub , a brick and mortar education hub that educates, empowers and inspires community members to improve the local environment to promote health.

Using plastic waste to build roads, buildings, and more

medical research questionnaire

Stanford engineers  Michael Lepech  and  Zhiye Li  have a unique vision of the future: buildings and roads made from plastic waste. In this story, they discuss obstacles, opportunities, and other aspects of transforming or upcycling plastic waste into valuable materials. 

Since this white paper was published, students in Lepech's  life cycle assessment course  have explored the environmental and economic impacts of waste management, emissions, and energy efficiency of building materials for the San Francisco Museum of Modern Arts. In addition to recycled plastic, they proposed a photovoltaic system and conducted comparison studies to maximize the system’s life cycle. This work is being translated into an upcoming publication.

Stanford researchers show that mealworms can safely consume toxic additive-containing plastic

medical research questionnaire

Mealworms are not only able to eat various forms of plastic, as previous research has shown, they can also consume potentially toxic plastic additives in polystyrene with no ill effects. The worms can then be used as a safe, protein-rich feed supplement.

Since this study published, it has inspired students across the world to learn about and experiment with mealworms and plastic waste. Stanford researchers involved with this and related studies have been inundated with requests for more information and guidance from people inspired by the potential solution.

Grants tackle the plastics problem

Stanford Woods Institute has awarded more than $23 million in funding to research projects that seek to identify solutions to pressing environment and sustainability challenges, including new approaches to plastic waste management. 

Converting polyethylene into palm oil

medical research questionnaire

This project is developing a new technology to convert polyethylene — by far the most discarded plastic — into palm oil. The approach could add value to the plastic waste management chain while sourcing palm oil through a less destructive route.

Improving plastic waste management

Plastic bottles in a trash pile

This project aims to radically change the way plastic waste is processed via a new biotechnology paradigm: engineering highly active enzymes and microbes capable of breaking down polyesters in a decentralized network of “living” waste receptacles. 

More stories from Stanford

Eight simple but meaningful things you can do for the environment.

medical research questionnaire

A new, artistic perspective on plastic waste

medical research questionnaire

Whales eat colossal amounts of microplastics

medical research questionnaire

Event | Pollution and Health

medical research questionnaire

A greener future begins with small steps

medical research questionnaire

Mosquito diseases on the move

medical research questionnaire

Last straw: The path to reducing plastic pollution

medical research questionnaire

Plastic ingestion by fish a growing problem

medical research questionnaire

Stanford infectious disease expert Desiree LaBeaud talks trash, literally, on Stanford Engineering's The Future of Everything podcast. 

U.S. flag

An official website of the United States government

Here’s how you know

Official websites use .gov A .gov website belongs to an official government organization in the United States.

Secure .gov websites use HTTPS A lock ( Lock A locked padlock ) or https:// means you’ve safely connected to the .gov website. Share sensitive information only on official, secure websites.

Change Healthcare Cybersecurity Incident Frequently Asked Questions

Updated as of April 19, 2024

A: Given the unprecedented magnitude of this cyberattack, its widespread impact on patients and health care providers nationwide, and in the interest of patients and health care providers, OCR issued the Dear Colleague letter addressing the following:

  • OCR confirmed that it prioritized and opened an investigation of Change Healthcare and UnitedHealth Group (UHG), focused on whether a breach of protected health information (PHI) occurred and on the entities’ compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Rules . OCR did this because of the cyberattack’s unprecedented impact on patient care and privacy.
  • OCR’s investigation interests in other entities that partnered with Change Healthcare and UHG is secondary. This would include those covered entities that have business associate relationships with Change Healthcare and UHG, and those organizations that are business associates to Change Healthcare and UHG.
  • However, OCR reminded all of these entities of their HIPAA obligations to have business associate agreements in place and to ensure that timely breach notification to the Department of Health and Human Services (HHS) and affected individuals occurs.
  • OCR HIPAA Security Rule Guidance Material – This webpage provides educational materials to learn more about the HIPAA Security Rule and other sources of standards for safeguarding electronic protected health information. Materials include a Recognized Security Practices Video, Security Rule Education Paper Series, HIPAA Security Rule Guidance, OCR Cybersecurity Newsletters, and more.
  • OCR Video on How the HIPAA Security Rule Protects Against Cyberattacks – This video discusses how the HIPAA Security Rule can help covered entities and business associates defend against cyberattacks. Topics include breach trends, common attack vectors, and findings from OCR investigations.
  • OCR Webinar on HIPAA Security Rule Risk Analysis Requirement – This webinar discusses the HIPAA Security Rule requirements for conducting an accurate and thorough assessment of potential risks and vulnerabilities to electronic protect health information and reviews common risk analysis deficiencies OCR has identified in its investigations.
  • HHS Security Risk Assessment Tool – This tool is designed to assist small- to medium-sized entities in conducting an internal security risk assessment to aid in meeting the security risk analysis requirements of the HIPAA Security Rule.
  • Factsheet: Ransomware and HIPAA – This resource provides information on what is ransomware, what covered entities and business associates should do if their information systems are infected, and HIPAA breach reporting requirements.
  • Healthcare and Public Health (HPH) Cybersecurity Performance Goals – These voluntary, health care specific cybersecurity performance goals can help health care organizations strengthen cyber preparedness, improve cyber resiliency, and protect patient health information and safety.

A: Ensuring continuity of care and patient privacy is the utmost priority. In the interest of patients and health care providers who are reeling from the impact of this cyberattack of unprecedented magnitude, OCR initiated investigations of Change Healthcare and UHG. The investigation is primarily focused on whether a breach of unsecured PHI occurred and on Change Healthcare’s and UHG’s compliance with the HIPAA Rules . 

A:  No. Covered entities have up to 60 calendar days from the date of discovery of a breach of unsecured protected health information to file breach reports to OCR’s breach portal for breaches affecting 500 or more individuals.

OCR’s breach portal contains a list of all reported breaches of unsecured PHI affecting 500 or more individuals.

A: No. Before a breach is posted on the HHS Breach Portal, OCR verifies the report it receives. OCR discusses the breach reported with the regulated entity that reported the breach and verifies that the information in the breach report is accurate. Once breach verification is completed, the breach report will be posted on the HHS Breach Portal. The amount of time that the breach verification process takes can vary depending on the circumstances, but generally the verification process is completed within 14 days.

A: Yes. OCR’s ransomware guidance provides specific information on the steps covered entities and business associates should take to determine if a ransomware incident is a HIPAA breach. A breach, under the HIPAA Rules, is defined as, “…the acquisition, access, use, or disclosure of [PHI] in a manner not permitted under the [ HIPAA Privacy Rule ] which compromises the security or privacy of the PHI.” See 45 CFR 164.402. Whether the presence of ransomware would be a breach under the HIPAA Rules is a fact-specific determination. 

A: Yes, a breach of PHI is presumed to have occurred unless the covered entity can demonstrate that there is a “…low probability that the PHI has been compromised,” based on the factors in the Breach Notification Rule . The covered entity must comply with the applicable breach notification requirements, including notification to affected individuals without unreasonable delay, to the HHS Secretary, and to the media (for breaches affecting over 500 individuals). See 45 CFR 164.400-414.

The required breach notification to an individual must include to the extent possible: a brief description of the breach; a description of the types of information that were involved in the breach; the steps affected individuals should take to protect themselves from potential harm; a brief description of what the covered entity is doing to investigate the breach, mitigate the harm, and prevent further breaches; and contact information for the covered entity (or business associate, as applicable).  

A: Following a breach of unsecured PHI, covered entities must provide notification of the breach to affected individuals, the HHS Secretary, and, in certain circumstances, to the media. In addition, business associates must notify covered entities if a breach occurs at or by the business associate.

Please visit the OCR Breach Notification webpage for detailed guidance. Please visit the Breach Reporting webpage for instructions on how to submit a breach notification to the HHS Secretary and to access the electronic breach notification form.

Below is a summary of breach notification requirements and reporting procedures for covered entities:

Breach Notification for Covered Entities (See 45 CFR 164.404 and 164.408)

A covered entity’s breach notification obligations differ depending on whether the breach affects 500 or more individuals or fewer than 500 individuals.

Covered Entities: Submitting a Notice for a Breach Affecting 500 or More Individuals

If a breach of unsecured PHI affects 500 or more individuals, a covered entity must notify the HHS Secretary of the breach without unreasonable delay and in no case later than 60 calendar days from the discovery of the breach. The covered entity must submit the notice electronically via the HHS breach notification form, completing all of the required fields.

Covered Entities: Submitting a Notice for a Breach Affecting Fewer than 500 Individuals

If a breach of unsecured PHI affects fewer than 500 individuals, a covered entity must notify the HHS Secretary of the breach within 60 calendar days of the end of the calendar year in which the breach was discovered. (A covered entity is not required to wait until the end of the reporting period to report breaches affecting fewer than 500 individuals; a covered entity may report these breaches at the time they are discovered.) The covered entity may report all of its breaches affecting fewer than 500 individuals on one date, but must complete a separate notice for each breach incident. The covered entity must submit the notice electronically via the HHS breach notification form, completing all of the required fields.

Number Uncertain: If the number of individuals affected by a breach is uncertain at the time of notification submission, the covered entity should provide an estimate, and, if it discovers additional information, submit updates in the manner specified below. If only one option is available in a particular submission category, the covered entity should pick the best option, and may provide additional details in the free text portion of the submission.

Additional Information Discovered: If a covered entity discovers additional information that supplements, modifies, or clarifies a previously submitted notice to the HHS Secretary, it may submit an additional form by checking the appropriate box to indicate that it is an addendum to the initial report, using the transaction number provided after its submission of the initial breach report.

Covered Entities: Media Notice (See 45 CFR 164.406)

Covered entities that experience a breach affecting more than 500 residents of a State or jurisdiction are, in addition to notifying the affected individuals, required to provide notice to prominent media outlets serving the State or jurisdiction. Covered entities may provide this notification in the form of a press release to appropriate media outlets serving the affected area. Like individual notice, this media notification must be provided without unreasonable delay and in no case later than 60 days following the discovery of a breach and must include the same information required for the individual notice.

Covered Entities: Substitute Notice (See 45 CFR 164.404(d)(2))

The HIPAA Breach Notification Rule allows for the use of substitute notice to affected individuals where there is insufficient or out-of-date contact information that precludes written notification to the individual. In such instances, a substitute form of notice reasonably calculated to reach the individual shall be provided. Substitute notice can be provided in the following ways:

(i) In the case in which there is insufficient or out-of-date contact information for fewer than 10 individuals , then such substitute notice may be provided by an alternative form of written notice, telephone, or other means.

(ii) In the case in which there is insufficient or out-of-date contact information for 10 or more individuals , then such substitute notice shall:

(A) Be in the form of either a conspicuous posting for a period of 90 days on the home page of the Web site of the covered entity involved, or conspicuous notice in major print or broadcast media in geographic areas where the individuals affected by the breach likely reside; and

(B) Include a toll-free phone number that remains active for at least 90 days where an individual can learn whether the individual's unsecured protected health information may be included in the breach.
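The covered-entity duties summarised above (45 CFR 164.400-414) can be condensed into a small decision helper. The sketch below is illustrative only, not legal guidance; the function and field names are hypothetical, and it encodes just the thresholds stated in this summary (500-or-more for HHS timing, more-than-500 residents of one State for media notice, 10-or-more for the website/media form of substitute notice).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BreachNotice:
    notify_individuals: bool          # always required for a breach of unsecured PHI
    hhs_deadline: str                 # timing of the notice to the HHS Secretary
    media_notice: bool                # press release to prominent media outlets
    substitute_notice: Optional[str]  # used when written contact information fails

def required_notifications(total_affected: int,
                           max_in_one_state: Optional[int] = None,
                           bad_contact_info: int = 0) -> BreachNotice:
    """Map a breach's size to the notification duties described above.

    max_in_one_state: largest number of affected residents of any single
    State/jurisdiction; the media-notice duty turns on that count
    (more than 500), not on the national total.
    """
    if max_in_one_state is None:
        max_in_one_state = total_affected
    if total_affected >= 500:
        hhs = "without unreasonable delay, within 60 days of discovery"
    else:
        hhs = "within 60 days of the end of the calendar year of discovery"
    if bad_contact_info >= 10:
        substitute = ("90-day conspicuous website posting or major media notice, "
                      "plus a toll-free number active for at least 90 days")
    elif bad_contact_info > 0:
        substitute = "alternative written notice, telephone, or other means"
    else:
        substitute = None
    return BreachNotice(True, hhs, max_in_one_state > 500, substitute)
```

For example, `required_notifications(750)` triggers the 60-day HHS notice and media notice, while `required_notifications(200, bad_contact_info=12)` allows year-end HHS reporting but requires the website/media form of substitute notice.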

A: Breach Notification for Business Associates (See 45 CFR 164.410)

If a breach of unsecured PHI occurs at or by a business associate , the business associate must notify the covered entity following the discovery of the breach. This notice must be provided without unreasonable delay and no later than 60 calendar days from the discovery of the breach . To the extent possible, the business associate must provide the covered entity with the identification of each individual affected by the breach, or each individual reasonably believed to have been affected, as well as any other available information required to be provided by the covered entity in its notification to affected individuals. 

Additionally, with respect to a breach at or by a business associate, while the covered entity is ultimately responsible for ensuring individuals are notified, the covered entity may delegate the responsibility of providing individual notices to the business associate . Covered entities and business associates should consider which entity is in the best position to provide notice to the individual, which may vary, depending on the circumstances, such as the functions the business associate performs on behalf of the covered entity and which entity has the relationship with the individual. 

A: HIPAA regulated entities affected by this incident should contact Change Healthcare and UHG with any questions on how HIPAA breach notification will occur.

A: OCR plans to update this page as needed.


Risk factors for post-COVID-19 condition in previously hospitalised children using the ISARIC Global follow-up protocol: a prospective cohort study

Ismail M. Osmanov

1 Z.A. Bashlyaeva Children's Municipal Clinical Hospital, Moscow, Russia

2 Pirogov Russian National Research Medical University, Moscow, Russia

29 These authors contributed equally to this article

Ekaterina Spiridonova

3 Dept of Paediatrics and Paediatric Infectious Diseases, Institute of Child's Health, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia

Polina Bobkova

Aysylu Gamirova, Anastasia Shikhaleva, Margarita Andreeva, Oleg Blyuss

4 School of Physics, Astronomy and Mathematics, University of Hertfordshire, Hatfield, UK

Yasmin El-Taravi

Audrey DunnGalvin

5 School of Applied Psychology, University College Cork, Cork City, Ireland

Pasquale Comberiati

6 Dept of Clinical and Experimental Medicine, Section of Pediatrics, University of Pisa, Pisa, Italy

Diego G. Peroni

Christian Apfelbacher

7 Institute of Social Medicine and Health Systems Research, Faculty of Medicine, Otto von Guericke University Magdeburg, Magdeburg, Germany

Jon Genuneit

8 Pediatric Epidemiology, Dept of Pediatrics, Medical Faculty, Leipzig University, Leipzig, Germany

Lyudmila Mazankova

9 Russian Medical Academy of Continuous Professional Education of the Ministry of Healthcare of the Russian Federation, Moscow, Russia

Alexandra Miroshina

Evgeniya Chistyakova

10 Dept of Paediatrics and Paediatric Rheumatology, Institute of Child's Health, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia

Elmira Samitova

Svetlana Borzakova

11 Research Institute for Healthcare Organization and Medical Management of Moscow Healthcare Dept, Moscow, Russia

Elena Bondarenko

Anatoliy A. Korsunskiy, Irina Konova, Sarah Wulf Hanson

12 Institute for Health Metrics and Evaluation, University of Washington, Seattle, WA, USA

Gail Carson

13 ISARIC Global Support Centre, Nuffield Dept of Medicine, University of Oxford, Oxford, UK

Louise Sigfrid

Janet T. Scott

14 MRC-University of Glasgow Centre for Virus Research, Glasgow, UK

Matthew Greenhawt

15 Dept of Pediatrics, Section of Allergy/Immunology, Children's Hospital Colorado, University of Colorado School of Medicine, Aurora, CO, USA

Elizabeth A. Whittaker

16 Paediatric Infectious Diseases, Imperial College Healthcare NHS Trust, London, UK

Elena Garralda

17 Division of Psychiatry, Imperial College London, London, UK

Olivia V. Swann

18 Dept of Child Life and Health, University of Edinburgh, Edinburgh, UK

19 Paediatric Infectious Diseases, Royal Hospital for Children, Glasgow, UK

Danilo Buonsenso

20 Dept of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy

21 Dipartimento di Scienze Biotecnologiche di Base, Cliniche Intensivologiche e Perioperatorie, Università Cattolica del Sacro Cuore, Rome, Italy

22 Center for Global Health Research and Studies, Università Cattolica del Sacro Cuore, Rome, Italy

Dasha E. Nicholls

Frances Simpson

23 Coventry University, Coventry, UK

Christina Jones

24 School of Psychology, Faculty of Health and Medical Sciences, University of Surrey, Guildford, UK

Malcolm G. Semple

25 Health Protection Research Unit in Emerging and Zoonotic Infections, Institute of Infection, Veterinary and Ecological Sciences, Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK

26 Dept of Respiratory Medicine, Alder Hey Children's Hospital, Liverpool, UK

John O. Warner

27 Inflammation, Repair and Development Section, National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, UK

Piero Olliaro

Daniel Munblit

28 Research and Clinical Center for Neuropsychiatry, Moscow, Russia

Associated Data

Please note: supplementary material is not edited by the Editorial Office, and is uploaded as it has been supplied by the author.

Supplementary material ERJ-01341-2021.Supplement


The long-term sequelae of coronavirus disease 2019 (COVID-19) in children remain poorly characterised. This study aimed to assess long-term outcomes in children previously hospitalised with COVID-19 and associated risk factors.

This is a prospective cohort study of children (≤18 years old) admitted to hospital with confirmed COVID-19. Children admitted between 2 April 2020 and 26 August 2020 were included. Telephone interviews used the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC) COVID-19 Health and Wellbeing Follow-up Survey for Children. Persistent symptoms (>5 months) were further categorised by system(s) involved.

518 out of 853 (61%) eligible children were available for the follow-up assessment and included in the study. Median (interquartile range (IQR)) age was 10.4 (3–15.2) years and 270 (52.1%) were girls. Median (IQR) follow-up since hospital discharge was 256 (223–271) days. At the time of the follow-up interview 126 (24.3%) participants reported persistent symptoms, among which fatigue (53, 10.7%), sleep disturbance (36, 6.9%) and sensory problems (29, 5.6%) were the most common. Multiple symptoms were experienced by 44 (8.4%) participants. Risk factors for persistent symptoms were: older age “6–11 years” (OR 2.74, 95% CI 1.37–5.75) and “12–18 years” (OR 2.68, 95% CI 1.41–5.4), and a history of allergic diseases (OR 1.67, 95% CI 1.04–2.67).

Conclusions

A quarter of children experienced persistent symptoms months after hospitalisation with acute COVID-19 infection, with almost one in 10 experiencing multisystem involvement. Older age and allergic diseases were associated with higher risk of persistent symptoms at follow-up.

Short abstract

A quarter of children experienced persistent symptoms months after COVID-19 infection, with almost one in 10 experiencing multisystem involvement. Older age and allergic diseases were associated with higher risk of persistent symptoms at follow-up. https://bit.ly/3vqeEmZ

Introduction

Emerging data suggest that a substantial proportion of people experience ongoing symptoms including fatigue and muscle weakness, breathlessness, and neurological problems more than 6 months after the acute phase of coronavirus disease 2019 (COVID-19) [ 1 , 2 ]. This phenomenon is commonly referred to as “long COVID”, a term defined by patient groups, and also known as post-COVID-19 syndrome, the post-COVID-19 condition [ 3 ] or “COVID long-haulers” [ 4 , 5 ]. Recent population data from the UK reported that the highest prevalence of long COVID after 12 weeks was among those aged 25–34 years (18.2%) and lowest in those aged 2–11 years (7.4%) [ 6 ].

Evidence on post-acute COVID-19 condition and long-term outcomes in children is still limited to small studies, with more than half having at least one persisting symptom 4 months after COVID-19 infection [ 7 ]. However, a recent publication from Australia by Say et al. [ 8 ] suggested that only 8% of children aged 0–19 years (median 3 years) had ongoing symptoms 3–6 months after predominantly mild COVID-19 infection. A limitation of that study, acknowledged by its authors, was the low median age of the cohort, which mandates the inclusion of larger numbers, particularly of older children, in future studies [ 8 ].

There is a need to assess the long-term consequences of COVID-19 in paediatric populations [ 9 ], to inform clinicians, researchers and public health experts, to address the impacts of this condition on those affected and their families, and to inform discussions on vaccination of children. This cohort study aimed to investigate the incidence of and risk factors for long-term COVID-19 outcomes in children post-hospital discharge. We used the standardised follow-up data collection protocol developed by the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC) Global Paediatric COVID-19 Follow-up Working Group [ 10 ].

Study design, setting and participants

This is a prospective cohort study of children (≤18 years old) admitted with suspected or confirmed COVID-19 to Z.A. Bashlyaeva Children's Municipal Clinical Hospital in Moscow, Russia. This large tertiary university hospital can accommodate up to 980 children at a time and served as the primary COVID-19 hospital for children residing in Moscow city. Children admitted to the hospital during the first wave of the pandemic, between 2 April 2020 and 26 August 2020, with reverse transcriptase PCR-confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection were included. The parents of these children were contacted between 31 January 2021 and 27 February 2021 to complete a follow-up survey for this study.

The acute-phase dataset included demographics, symptoms, comorbidities, chest computed tomography, supportive care and clinical outcomes at discharge. This study was approved by the Moscow City Independent Ethics Committee (protocol number 74). Parental consent was sought during hospital admission and consent for the follow-up interview was sought via verbal confirmation during the telephone interview.

Interviews were undertaken by a team of medical students with experience gained in previous COVID-19 research [ 2 , 11 ] who underwent standardised training in telephone assessment, REDCap data entry and data security. Assessments were conducted via interviews with the parents/carers. Nonresponders were contacted by telephone three times before considering them lost to follow-up. Information about the current condition and persisting symptoms was collected using version 1.0 of the ISARIC COVID-19 Health and Wellbeing Follow-Up Survey for Children, to assess patients’ physical and psychosocial wellbeing and behaviour, with local adaptations (additional questions related to the presence and duration of signs/symptoms were included), translated into Russian. The protocol was registered at The Open Science Framework [ 12 ]. The follow-up survey documented data on demographics, parental perception of changes in their child's emotional and behavioural status (including reasons for change: COVID-19, pandemic or both), previous vaccination history, hospital stay and readmissions, mortality (after the initial index event), history of newly developed symptoms between discharge and the follow-up assessment, including symptom onset and duration, and overall health condition compared with prior to the child's COVID-19 onset ( supplementary material ). To assess the prevalence of symptoms over time parents were asked the following questions: “Within the last seven days , has your child had any of these symptoms, which were NOT present prior to their Covid-19 illness? (If yes, please indicate below and the duration of the symptom/s)” and “Please report any symptoms that have been bothering your child since discharge that are not present today. Please specify the time of onset and duration of these symptoms”.

Data management

REDCap electronic data capture tools (www.project-redcap.org) hosted at Sechenov University (Moscow, Russia) and Microsoft Excel (Microsoft, Redmond, WA, USA) were used for data collection, storage and management [ 13 , 14 ]. The baseline characteristics, including demographics, symptoms on admission and comorbidities, were extracted from electronic medical records and entered into REDCap.

Exposure and outcome variables

For the purposes of this study, we defined “persistent symptoms” as symptoms present at the time of the follow-up interview and lasting for >5 months. These were subcategorised into respiratory, neurological, sensory, sleep, gastrointestinal, dermatological, cardiovascular, fatigue and musculoskeletal ( supplementary table S1 ), as informed by previously published literature [ 15 , 16 ] and ISARIC Global Paediatric COVID-19 Follow-up Working Group discussions.

Allergic diseases were defined as the presence of any of the following: asthma, allergic rhinitis, eczema or food allergy. Participants’ age categories were based on Eunice Kennedy Shriver National Institute of Child Health and Human Development Pediatric Terminology [ 17 ]. Severe disease was defined as having received noninvasive ventilation, invasive ventilation or admission to the paediatric intensive care unit (PICU) during hospital admission.

Health status before COVID-19 and at the time of the interview was assessed using a 0–100 wellness scale [ 18 ], where 0 was the worst possible health and 100 was the best possible health.

Statistical analysis

Descriptive statistics were calculated for baseline characteristics. Continuous variables were summarised as median (interquartile range (IQR)) and categorical variables as frequency (percentage). The Chi-squared test or Fisher's exact test was used for testing hypotheses on differences in proportions between groups. The Wilcoxon rank-sum test was used for testing hypotheses on differences between groups.
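For a 2×2 comparison of proportions like those described above, the Pearson chi-squared statistic has a closed form, and with one degree of freedom the p-value can be computed from the complementary error function. The study itself used R; the stdlib-only Python sketch below (function name is my own) is just an illustration of the test, without Yates' continuity correction.

```python
import math

def chi2_2x2(a: int, b: int, c: int, d: int):
    """Pearson chi-squared test (1 df, no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]].  With one degree of freedom
    the p-value has the closed form erfc(sqrt(statistic / 2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, math.erfc(math.sqrt(stat / 2))
```

For example, `chi2_2x2(10, 20, 20, 10)` gives a statistic of about 6.67 and p ≈ 0.01, so the difference in proportions would be declared significant at the 0.05 level used in this paper.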

We performed multivariable logistic regression to investigate associations of demographic characteristics, comorbidities (limited to those reported in ≥5% of participants), presence of pneumonia and severity of COVID-19 during acute infection with persistent symptom categories present at the time of the follow-up interview. We included all participants for whom the variables of interest were available in the final analysis, without imputing missing data. The differing denominators used indicate missing data. Odds ratios were calculated together with 95% confidence intervals.
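The odds ratios with Wald 95% confidence intervals reported in this paper are obtained by exponentiating a fitted logistic-regression coefficient and the endpoints of its interval on the log-odds scale. A minimal sketch follows; the coefficient and standard error below are illustrative values chosen so the output lands near the paper's allergy estimate (OR 1.67, 95% CI 1.04–2.67), and are not taken from the study's fitted model.

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Exponentiate a logistic-regression coefficient (log-odds scale)
    and its Wald interval endpoints to get an odds ratio with a 95% CI."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative inputs only: beta = 0.513, se = 0.240 yields an OR near
# the allergic-disease estimate of 1.67 (95% CI 1.04-2.67).
or_, lo, hi = odds_ratio_ci(0.513, 0.240)
```

Because the interval is symmetric on the log scale, the resulting OR interval is asymmetric around the point estimate, which is why the published intervals (e.g. 2.68, 95% CI 1.41–5.4) stretch further above the OR than below it.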

UpSet plots were used to present the coexistence of persistent symptom categories. Two-sided p-values were reported for all statistical tests; a p-value <0.05 was considered to be statistically significant. Statistical analysis was performed using R version 3.5.1 ( https://cran.r-project.org ). Packages used included dplyr, lubridate, ggplots2, plotrix and UpSetR.

Patient and public involvement

The survey was developed by the ISARIC Global Paediatric COVID-19 Follow-up Working Group and informed by a wide range of global stakeholders with expertise in infectious diseases, critical care, paediatrics, epidemiology, allergy/immunology, respiratory medicine, psychiatry, psychology and methodology, and patient representatives. The survey was distributed to the members of the patient group and suggestions from parents/carers were implemented.

All 853 children hospitalised with suspected COVID-19 between 2 April 2020 and 26 August 2020 were discharged alive ( figure 1 ). Of 836 patients with accurate contact information, parents of 518 PCR-positive children agreed to be interviewed (response rate 62%) and were included in the analysis.

Figure 1

Flow diagram of patients with COVID-19 admitted to Z.A. Bashlyaeva Children's Municipal Clinical Hospital between 2 April 2020 and 26 August 2020. # : relatives unable to describe the child's health; relatives not willing to refer interviewers to the child's parents/carers; inability to speak Russian.

Median (IQR) age was 10.4 (3–15.2) years (range 2 days–18 years) and 272 (52.1%) were girls. Median (IQR) follow-up time since hospital admission was 268 (233–284) days. Children had a median (IQR) of 8 (4–9) years of formal school education and a median (IQR) of 4 (3–5) family members were residing in the household ( table 1 ).

Demographic characteristics of patients with COVID-19 admitted to Z.A. Bashlyaeva Children's Municipal Clinical Hospital

Data are presented as n/N (%) or median (interquartile range), excluding missing values. PICU: paediatric intensive care unit. # : all cases of diabetes were type 1.

The most common pre-existing comorbidity in this cohort was food allergy (13% (67/514)), followed by allergic rhinitis and asthma (9.7% (50/514)), gastrointestinal problems (9.3% (48/514)), eczema (8.8% (45/514)), and neurological conditions (8.8% (45/514)). Parents of 55.3% (284/514) of children did not report any comorbidities. Fever (83.6% (427/511)), cough (55.7% (284/510)), rhinorrhoea (54.3% (278/512)) and fatigue (38.9% (197/506)) were the most common presenting symptoms at the time of hospital admission ( supplementary table S2 ). 37.3% (192/515) of patients had pneumonia during the hospital stay; 2.7% (14/515) had severe disease, which required noninvasive ventilation/invasive ventilation or admission to the PICU. Treatments received during hospital admission are presented in supplementary table S3 .

At the time of the follow-up interview, parents of 24.7% (128) of children reported at least one persistent symptom, with fatigue 10.6% (53/496), insomnia 5.19% (26/501), disturbed smell 4.7% (22/467) and headache 3.5% (17/486) being the most common. Detailed information on symptoms and duration is presented in supplementary table S4 .

The prevalence of the symptoms present at the time of discharge declined over time ( figure 2 ). The number of children with fatigue fell from 15.8% (82/518) at the time of discharge to 8.8% (45/513) 6–7 months later, altered sense of smell from 8.7% (45/518) to 4.7% (24/514), sleep disturbance from 7.5% (39/518) to 5.8% (30/515), altered sense of taste from 5.6% (29/518) to 3.1% (16/515), headache from 4.6% (24/518) to 3.5% (18/517) and breathing difficulties from 3.9% (20/518) to 1% (5/517). The prevalence of the most common symptoms, including symptoms that developed some time after discharge, is shown in supplementary figure S1 .

Figure 2

Duration of the most common symptoms (post-discharge) in children who experienced symptoms at the time of discharge. The calculations are based on responses to the questions: “Within the last seven days , has your child had any of these symptoms, which were NOT present prior to their Covid-19 illness? (If yes, please indicate below and the duration of the symptom/s)” and “Please report any symptoms that have been bothering your child since discharge that are not present today. Please specify the time of onset and duration of these symptoms”.

With regard to persistent symptom categories ( supplementary table S1 ), fatigue was the most commonly reported in 10.6% (53/498) of patients at the time of assessment, followed by sleep disturbance 7.2% (36/501), sensory 6.2% (29/467), gastrointestinal 4.4% (22/499) and dermatological 3.6% (18/496) problems. A smaller number of patients experienced neurological 3% (14/465), respiratory 2.5% (12/489), cardiovascular 1.9% (9/470) and musculoskeletal 1.8% (9/489) problems long-term.

A total of 8.5% (44) of participants reported persistent symptoms from more than one category at the time of the follow-up assessment. The most commonly co-occurring categories were fatigue and sleep problems in 1.9% (10) of children, and fatigue and sensory problems were present in 1.5% (8) of children. 2.7% (14) of children had persistent symptoms from three or more different categories. Coexistence of persistent symptom categories at the time of follow-up is presented in the UpSet plot in figure 3 .

Figure 3

UpSet plot representing the coexistence of persistent symptom (present at the time of follow-up interview and lasting for >5 months) categories at follow-up assessment. The values represent the number of individuals experiencing a persistent symptom category or combination of categories. Dark blue lines link multiple symptoms indicated by dark blue circles.

Compared with pre-COVID-19 values, scores on the wellness scale declined significantly both for children with one persistent symptom (from 90 (80–100) to 82.5 (70–93.8)) and for children with two or more (from 90 (80–95) to 70 (60–80)) (p<0.001 for all comparisons). Children who did not experience any persistent symptoms did not report any significant changes in wellness when asked to compare with how they felt before their acute COVID-19 illness. We also assessed emotional difficulties, social relationships and activity levels in children ( supplementary tables S4 and S5 ). Parents related the following changes to COVID-19 illness and not to the pandemic in general: less eating in 4.5% (23/512) of children, less sleeping in 3.5% (18/511) and more sleeping in 2% (10/511), reduced physical activity in 4.7% (24/512), and child becoming less emotional in 4.3% (22/511). In contrast, parents attributed changes in social activities to the pandemic in general rather than to COVID-19 illness: 12% (58/485) of children were spending less time with their friends in person, while 13% (61/470) were spending more time with friends remotely, with less than 1% of parents attributing these changes to COVID-19 illness. 23% (110/478) of children were spending more time watching television, playing video/computer games or using social media for educational purposes, with 92.9% of parents associating these changes with the pandemic in general rather than COVID-19 illness.

In multivariable regression analysis, older age was associated with persistent symptoms ( figure 4a ). When compared with children <2 years of age, those aged 6–11 years had OR 2.57 (95% CI 1.29–5.36) for persistent symptoms and those aged 12–18 years had OR 2.52 (95% CI 1.34–5.01). Another predictor associated with persistent symptoms was allergic diseases (OR 1.67, 95% CI 1.04–2.67). Similar patterns were seen for children with coexistence of persistent symptoms from two or more categories: 6–11 years of age OR 2.49 (95% CI 1.02–6.72) and 12–18 years of age OR 3.18 (95% CI 1.43–8.11), both versus <2 years of age ( figure 4b ).

Figure 4

Multivariable logistic regression model to identify pre-existing risk factors for long COVID. Odds ratios (with 95% confidence intervals) for the presence of a) any category of persistent symptoms (n=127) at the time of follow-up and b) two or more coexisting categories of persistent symptoms (n=73) at the time of follow-up. Neurological conditions and allergic diseases are specified in table 1 . Odds ratios are plotted on a log scale.

We ran an additional regression analysis using age as a continuous variable, which gave similar results ( supplementary figure S2 ). When subgroup analyses were performed in the age group ≥6 years, severe acute COVID-19 was associated with persistent symptoms (OR 6.14, 95% CI 1.27–43.94) and excessive weight and obesity with coexistence of persistent symptoms from two or more categories (OR 2.89, 95% CI 1.12–7.15) ( supplementary figure S3 ).

To the best of our knowledge, this is the largest prospective paediatric cohort study with the longest follow-up assessing symptom prevalence and duration of long COVID in children and adolescents with laboratory-confirmed SARS-CoV-2 infection post-hospital discharge. We found that a quarter of children and adolescents had persistent symptoms at the time of follow-up, with fatigue, sleep disturbance and sensory problems being the most common. Almost one in 10 reported multisystem impacts with two or more categories of persistent symptoms at the time of follow-up. Children in mid-childhood and adolescence (aged 6–18 years) were at higher risk of persistent symptoms at the time of follow-up. Although prevalence of symptoms declined over time, a substantial proportion experienced problems many months after discharge.

Although many children experienced symptoms such as fatigue, disturbed smell and taste, sleep and respiratory problems, hair loss, and headaches at the time of hospital discharge, we witnessed a steady decline in symptom prevalence over time. This was particularly evident for fatigue and smell disturbance. Prevalence of some symptoms such as headache and sleep problems declined more slowly, which may be driven by psychological mechanisms rather than pathophysiological effects of the virus infection [ 19 ]. A limitation of these findings is that symptom onset and duration were recalled at a single follow-up interview; this may be overcome with repeated follow-ups at appropriate intervals to limit potential recall imprecision. There are very few studies assessing long COVID in children and adolescents; a previous smaller study from Italy found similar persisting symptoms during a shorter follow-up [ 7 ]. In line with our results, previous research demonstrated symptoms fading over time in adults [ 15 ]; however, data are still limited as most of the published cohort studies do not measure the duration of symptoms, but rather assess their presence at a single follow-up.

We found that almost one in 10 children had multisystem impacts with two or more categories of persistent symptoms present at the time of follow-up. Similar numbers were previously reported in the Russian adult population [ 2 ] and patients with clusters of different symptoms were described in the UK [ 20 ]. Patients with multisystem involvement will represent the primary target for the development of future research and intervention strategies.

Age was significantly associated with persistent symptoms at the time of follow-up, with children aged ≥6 years at higher risk. To the best of our knowledge, risk factors for long COVID in children have not been investigated in previous studies, so we can draw comparisons only with data from adult cohorts. Previous data suggest that long COVID is prevalent in adults [ 1 , 2 , 20 – 23 ] and that age is associated with a higher risk of long COVID [ 20 , 22 ]. An Australian follow-up study of 151 children (median age 3 years) who had predominantly mild acute COVID-19 found only 8% with ongoing long COVID symptoms [ 8 ]. As acknowledged by the authors of that study, the low median age may be the main reason for the low long COVID prevalence, and our study substantiates this. We also found that in children aged ≥6 years, severe acute COVID-19 was associated with persistent symptoms, and excess weight and obesity with multisystem involvement; however, confidence intervals were wide and these findings require confirmation in a larger sample before firm conclusions can be drawn.

We found that allergic diseases in children were also associated with a higher risk of long COVID. This is in agreement with adult studies from Russia [ 2 ] and the UK [ 20 ] reporting asthma to be associated with the development of long COVID. Recent data suggest that COVID-19 consequences may be linked with mast cell activation syndrome [ 24 ], and the T-helper type 2-biased immunological response in children with allergic diseases may be responsible for an increased risk of long-term consequences of the infection. This highlights the importance of further research on potential underlying immunological and autoimmune mechanisms of long COVID [ 25 ].

Apart from physical symptoms, we also assessed emotional and behavioural changes. Although most parents reported no changes, one in 20 noticed changes in their children that they attributed to COVID-19 illness rather than to the general situation during the pandemic. These included changes in eating, sleeping, emotional wellbeing and physical activity. Over one in 10 parents noted that their children were spending less time in face-to-face communication, more time interacting with friends remotely, and more time online for both educational and noneducational purposes. These changes were largely attributed to the general situation during the pandemic rather than to COVID-19 illness. The “lockdown” measures were implemented in Moscow in mid-March 2020 and lasted until June 2020. Restrictions included self-isolation, closure of public places (including schools and universities), and social distancing. The pandemic resulted in increased anxiety levels in the population, which was associated with increased media consumption [ 26 ]. The effects of the pandemic, the illness, or both should be examined in future research.

A major strength of this study is that it was based on the ISARIC COVID-19 Health and Wellbeing Follow-Up Survey for Children, which will assist with data harmonisation and comparison with other international studies in the future. Other strengths are the large sample size of children with confirmed SARS-CoV-2 infection and the longest follow-up assessment of hospitalised children to date. Stratification to determine whether symptoms were persistent following COVID-19 and assessment of trends over time were further novel aspects of the study. At the same time, this cohort study has several limitations. First, the study population included only patients within Moscow, although regional clustering is common to many cohort studies published during the COVID-19 pandemic. Second, it included only hospitalised children, who are not representative of the general paediatric population. Third, we did not have a control group of previously hospitalised children who had not experienced COVID-19. Fourth, some patients may have developed additional comorbidities or complications since hospital discharge that were not captured and could potentially affect wellbeing and symptom prevalence and persistence. Fifth, parents/caregivers were interviewed rather than the children themselves. Finally, there is a risk of selection bias, both from recruitment of a hospitalised population and because those with symptoms may have been more likely to agree to the survey, as well as a risk of recall bias in reporting symptoms that were no longer present at the time of follow-up.

The study used to generate these data, within the ISARIC WHO Clinical Characterisation Protocol initiative, is a prospective pandemic preparedness protocol that is agnostic to disease and has a pragmatic design to allow recruitment during pandemic conditions. The reality of conducting research in outbreak conditions does not allow for co-enrolment of an appropriate control group. An issue not yet addressed in clinical research is which individuals admitted to hospital during a period when hospitals were overwhelmed with COVID-19 cases could provide a valid control group. The design of this study allows us only to describe the features of COVID-19 survivors and cannot involve a control group. COVID-19 is not just a respiratory tract infection, so there is no “one size fits all” control group. At present, to the best of our knowledge, all major publications on long COVID are uncontrolled cohorts, owing to the difficulty of ascertaining data from controls matched for age and sex and, most importantly, for the same experiences during the pandemic aside from confirmed COVID-19 illness.

Our findings have implications for further research. Longer follow-up duration and repeated assessments, combined with control groups and biological sampling for further studies into the pathophysiology and immunology of post-COVID-19 sequelae, are needed to inform case definitions and intervention trials aimed at improving long-term outcomes.

Although symptoms present at discharge diminished over time, many children experienced persistent symptoms even 8 months after hospital discharge, with fatigue, sensory changes and sleep problems being the most common sequelae. One in 10 children experienced multisystem involvement at the time of follow-up. Older age and allergic diseases were the main risk factors for persistent symptoms. Future work should be multidisciplinary and prospective, preferably with a control cohort, repeated sampling, and the ability for children to report their health and wellbeing themselves, accompanied by biological sample collection to establish causative mechanisms, for a better understanding of COVID-19 sequelae and to help with phenotype/endotype categorisation.

Supplementary material

Shareable PDF

Acknowledgements

We are very grateful to the Z.A. Bashlyaeva Children's Municipal Clinical Hospital clinical staff and to the patients, parents, carers and families for their kindness and understanding during these difficult times of the COVID-19 pandemic. We would like to express our very great appreciation to the ISARIC Global COVID-19 Follow-up Working Group for survey development. We would like to thank Maksim Kholopov (Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia) for providing technical support in data collection and database administration. We are very thankful to Eat & Talk, Luch, Black Market, FLIP and Academia (Moscow, Russia) for providing us the workspace in a time of need and their support of COVID-19 research. Finally, we would like to extend our gratitude to the ISARIC Global team, the ISARIC Global Adult and Paediatric COVID-19 Follow-up Working Group, and the ISARIC Global Support Centre for their continuous support, expertise and for the development of the outbreak-ready standardised protocols for the data collection.

Sechenov Stop COVID Research Team: Elina Abdeeva, Nikol Alekseeva, Anastasiia Bairashevskaia, Dina Baimukhambetova, Lusine Baziyants, Anna Berbenyuk, Tatiana Bezbabicheva, Julia Chayka, Salima Deunezhewa, Yulia Filippova, Anastasia Gorina, Cyrill Gorlenko, Margarita Kalinina, Bogdan Kirillov, Herman Kiseljow, Natalya Kogut, Mariia Korgunova, Anastasia Kotelnikova, Alexandra Krupina, Anna Kuznetsova, Anastasia Kuznetsova, Veronika Laukhina, Baina Lavginova, Elza Lidjieva, Nadezhda Markina, Daria Nikolaeva, Georgiy Novoselov, Polina Petrova, Erika Porubayeva, Kristina Presnyakova, Anna Pushkareva, Mikhail Rumyantsev, Ilona Sarukhanyan, Jamilya Shatrova, Nataliya Shishkina, Anastasia Shvedova, Valeria Ustyan, Maria Varaksina, Ekaterina Varlamova, Margarita Yegiyan and Elena Zuykova (Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia); Svetlana Gadetskaya and Yulia V. Ivanova (Dept of Paediatrics and Paediatric Infectious Diseases, Institute of Child's Health, Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia).

This article has an editorial commentary: https://doi.org/10.1183/13993003.02245-2021

Conflict of interest: J. Genuneit reports working as a project manager of unrestricted research grants on the composition of breast milk to Ulm University and Leipzig University with funding from Danone Nutricia Research. M.G. Semple reports grants from the Dept of Health and Social Care National Institute of Health Research UK, grants from the Medical Research Council UK, and grants from the Health Protection Research Unit in Emerging & Zoonotic Infections, University of Liverpool, outside the submitted work; he also reports minority ownership of Integrum Scientific LLC (Greensboro, NC, USA), outside the submitted work. T. Vos reports personal fees for work on the Global Burden of Disease Study from the Bill and Melinda Gates Foundation, outside the submitted work. C. Apfelbacher has received lecture fees from AstraZeneca, and is a member of a group developing a core outcome set for long COVID, outside the submitted work. All other authors report no relevant conflicts of interest.
