Grad Coach

Research Question Examples 🧑🏻‍🏫

25+ Practical Examples & Ideas To Help You Get Started 

By: Derek Jansen (MBA) | October 2023

A well-crafted research question (or set of questions) sets the stage for a robust study and meaningful insights. But if you’re new to research, it’s not always clear what exactly constitutes a good research question. In this post, we’ll provide you with clear examples of quality research questions across various disciplines, so that you can approach your research project with confidence!

Research Question Examples

  • Psychology research questions
  • Business research questions
  • Education research questions
  • Healthcare research questions
  • Computer science research questions

Examples: Psychology

Let’s start by looking at some examples of research questions that you might encounter within the discipline of psychology.

How does sleep quality affect academic performance in university students?

This question is specific to a population (university students) and looks at a direct relationship between sleep and academic performance, both of which are quantifiable and measurable variables.

What factors contribute to the onset of anxiety disorders in adolescents?

The question narrows down the age group and focuses on identifying multiple contributing factors. It could be approached in various ways from a methodological standpoint, either qualitatively or quantitatively.

Do mindfulness techniques improve emotional well-being?

This is a focused research question aiming to evaluate the effectiveness of a specific intervention.

How does early childhood trauma impact adult relationships?

This research question targets a clear cause-and-effect relationship over a long timescale, making it focused but comprehensive.

Is there a correlation between screen time and depression in teenagers?

This research question focuses on a topical, current issue and a specific demographic, allowing for a focused investigation. The key variables are clearly stated within the question and can be measured and analysed (i.e., high feasibility).
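To make the feasibility point concrete, here is a minimal sketch in Python (using entirely simulated data; the variable names and numbers are hypothetical, not real measurements) of how the correlation between the two variables could be tested:

```python
# Minimal sketch: testing a correlation between two measured variables.
# All data here are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
screen_time = rng.normal(loc=5.0, scale=1.5, size=200)         # hours/day (hypothetical)
depression = 0.4 * screen_time + rng.normal(0, 1.0, size=200)  # synthetic scale scores

r, p = stats.pearsonr(screen_time, depression)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```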

Examples: Business/Management

Next, let’s look at some examples of well-articulated research questions within the business and management realm.

How do leadership styles impact employee retention?

This is an example of a strong research question because it directly looks at the effect of one variable (leadership styles) on another (employee retention), allowing for a strongly aligned methodological approach.

What role does corporate social responsibility play in consumer choice?

Current and precise, this research question can reveal how social concerns are influencing buying behaviour by way of a qualitative exploration.

Does remote work increase or decrease productivity in tech companies?

Focused on a particular industry and a hot topic, this research question could yield timely, actionable insights that would have high practical value in the real world.

How do economic downturns affect small businesses in the homebuilding industry?

Vital for policy-making, this highly specific research question aims to uncover the challenges faced by small businesses within a certain industry.

Which employee benefits have the greatest impact on job satisfaction?

By being straightforward and specific, answering this research question could provide tangible insights to employers.

Examples: Education

Next, let’s look at some potential research questions within the education, training and development domain.

How does class size affect students’ academic performance in primary schools?

This example research question targets two clearly defined variables, which can be measured and analysed relatively easily.

Do online courses result in better retention of material than traditional courses?

Timely, specific and focused, answering this research question can help inform educational policy and personal choices about learning formats.

What impact do US public school lunches have on student health?

Targeting a specific, well-defined context, the research could lead to direct changes in public health policies.

To what degree does parental involvement improve academic outcomes in secondary education in the Midwest?

This research question focuses on a specific context (secondary education in the Midwest) and has clearly defined constructs.

What are the negative effects of standardised tests on student learning within Oklahoma primary schools?

This research question has a clear focus (negative outcomes) and is narrowed into a very specific context.


Examples: Healthcare

Shifting to a different field, let’s look at some examples of research questions within the healthcare space.

What are the most effective treatments for chronic back pain amongst UK senior males?

Specific and solution-oriented, this research question focuses on clear variables and a well-defined context (senior males within the UK).

How do different healthcare policies affect patient satisfaction in public hospitals in South Africa?

This question has clearly defined variables and is narrowly focused in terms of context.

Which factors contribute to obesity rates in urban areas within California?

This question is focused yet broad, aiming to reveal several contributing factors for targeted interventions.

Does telemedicine provide the same perceived quality of care as in-person visits for diabetes patients?

Ideal for a qualitative study, this research question explores a single construct (perceived quality of care) within a well-defined sample (diabetes patients).

Which lifestyle factors have the greatest effect on the risk of heart disease?

This research question aims to uncover modifiable factors, offering preventive health recommendations.

Examples: Computer Science

Last but certainly not least, let’s look at a few examples of research questions within the computer science world.

What are the perceived risks of cloud-based storage systems?

Highly relevant in our digital age, this research question would align well with a qualitative interview approach to better understand what users feel the key risks of cloud storage are.

Which factors affect the energy efficiency of data centres in Ohio?

With a clear focus, this research question lays a firm foundation for a quantitative study.

How do TikTok algorithms impact user behaviour amongst new graduates?

While this research question is more open-ended, it could form the basis for a qualitative investigation.

What are the perceived risks and benefits of open-source software within the web design industry?

Practical and straightforward, the results could guide both developers and end-users in their choices.

Remember, these are just examples…

In this post, we’ve tried to provide a wide range of research question examples to help you get a feel for what research questions look like in practice. That said, it’s important to remember that these are just examples and don’t necessarily equate to good research topics. If you’re still trying to find a topic, check out our topic megalist for inspiration.


Understanding the Difference between Research Questions and Objectives

January 13, 2023

When conducting research, clearly understanding the difference between research questions and objectives is important. While these terms are often used interchangeably, they refer to two distinct aspects of the research process.

Research questions are broad statements that guide the overall direction of the research. They identify the main problem or area of inquiry that the research will address. For example, a research question might be, "What is the impact of social media on teenage mental health?" This question sets the stage for the research and helps to define the scope of the study.

  • Research questions are more general and open-ended, while objectives are specific and measurable.
  • Research questions identify the main problem or area of inquiry, while objectives define the specific outcomes that the researcher is looking to achieve.
  • Research questions help define the study's scope, while objectives help guide the research process.
  • Research questions are often used to generate hypotheses or identify gaps in existing knowledge, while objectives are used to establish clear and achievable targets for the research.
  • Research questions and objectives are not mutually exclusive, but well-defined research questions should lead to specific objectives necessary to answer the question.

On the other hand, research objectives are specific, measurable goals that the research aims to achieve. They guide the research process and define the specific outcomes that the researcher is looking to achieve. For example, an objective for the above research question might be "To determine the correlation between social media usage and rates of depression in teenagers." This objective is more specific and measurable than the research question itself.

It is important to note that research questions and objectives are not mutually exclusive; a study can have one or several questions and objectives. A well-defined research question should lead to specific objectives necessary to answer the question.

In summary, research questions and objectives are two distinct aspects of the research process. Research questions are broad statements that guide the overall direction of the research, while research objectives are specific, measurable goals that the research aims to achieve. Understanding the difference between these two terms is essential for conducting effective and meaningful research.

The Ohio State University

Research Questions & Hypotheses

Generally, in quantitative studies, reviewers expect hypotheses rather than research questions. However, research questions and hypotheses serve different purposes and can be beneficial when used together.

Research Questions

Clarify the research’s aim (Farrugia et al., 2010).

  • Research often begins with an interest in a topic, but a deep understanding of the subject is crucial to formulate an appropriate research question.
  • Descriptive: “What factors most influence the academic achievement of senior high school students?”
  • Comparative: “What is the performance difference between teaching methods A and B?”
  • Relationship-based: “What is the relationship between self-efficacy and academic achievement?”
  • Increasing knowledge about a subject can be achieved through systematic literature reviews, in-depth interviews with patients (and proxies), focus groups, and consultations with field experts.
  • Some funding bodies, like the Canadian Institutes of Health Research, recommend conducting a systematic review or a pilot study before seeking grants for full trials.
  • The presence of multiple research questions in a study can complicate the design, statistical analysis, and feasibility.
  • It’s advisable to focus on a single primary research question for the study.
  • The primary question, clearly stated at the end of a grant proposal’s introduction, usually specifies the study population, intervention, and other relevant factors.
  • The FINER criteria (Feasible, Interesting, Novel, Ethical, Relevant) underscore aspects that can enhance the chances of a successful research project, including specifying the population of interest, aligning with scientific and public interest, clinical relevance, and contribution to the field, while complying with ethical and national research standards.
  • The PICOT approach is crucial in developing the study’s framework and protocol, influencing inclusion and exclusion criteria and identifying patient groups for inclusion.
  • Defining the specific population, intervention, comparator, and outcome helps in selecting the right outcome measurement tool.
  • The more precise the population definition and stricter the inclusion and exclusion criteria, the more significant the impact on the interpretation, applicability, and generalizability of the research findings.
  • A restricted study population enhances internal validity but may limit the study’s external validity and generalizability to clinical practice.
  • A broadly defined study population may better reflect clinical practice but could increase bias and reduce internal validity.
  • An inadequately formulated research question can negatively impact study design, potentially leading to ineffective outcomes and affecting publication prospects.

Checklist: Good research questions for social science projects (Panke, 2018)

Research Hypotheses

Present the researcher’s predictions as specific statements.

  • These statements define the research problem or issue and indicate the direction of the researcher’s predictions.
  • Formulating the research question and hypothesis from existing data (e.g., a database) can lead to multiple statistical comparisons and potentially spurious findings due to chance.
  • The research or clinical hypothesis, derived from the research question, shapes the study’s key elements: sampling strategy, intervention, comparison, and outcome variables.
  • Hypotheses can express a single outcome or multiple outcomes.
  • After statistical testing, the null hypothesis is either rejected or not rejected based on whether the study’s findings are statistically significant.
  • Hypothesis testing helps determine if observed findings are due to true differences and not chance.
  • Hypotheses can be 1-sided (specifying the direction of the difference) or 2-sided (positing a difference without specifying its direction).
  • 2-sided hypotheses are generally preferred unless there’s a strong justification for a 1-sided hypothesis (see the sketch after this list).
  • A solid research hypothesis, informed by a good research question, influences the research design and paves the way for defining clear research objectives.
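To ground the testing vocabulary above (null hypothesis, statistical significance, 1-sided vs 2-sided tests), here is a minimal sketch in Python with SciPy. The group names, effect sizes, and data are simulated and purely illustrative:

```python
# Minimal sketch: 2-sided vs 1-sided independent-samples t-tests.
# Simulated data; group labels and numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
method_a = rng.normal(loc=75, scale=10, size=40)  # e.g., scores under method A
method_b = rng.normal(loc=70, scale=10, size=40)  # e.g., scores under method B

# 2-sided: H1 is "the group means differ" (no direction specified).
t2, p2 = stats.ttest_ind(method_a, method_b)

# 1-sided: H1 is "method A's mean is greater than method B's".
t1, p1 = stats.ttest_ind(method_a, method_b, alternative="greater")

alpha = 0.05
print(f"2-sided: t = {t2:.2f}, p = {p2:.4f}, reject H0: {p2 < alpha}")
print(f"1-sided: t = {t1:.2f}, p = {p1:.4f}, reject H0: {p1 < alpha}")
```

When the observed difference points in the hypothesised direction, the 1-sided p-value is half the 2-sided one, which is exactly why a 1-sided test needs strong prior justification.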

Types of Research Hypothesis

  • In a Y-centered research design, the focus is on the dependent variable (DV) which is specified in the research question. Theories are then used to identify independent variables (IV) and explain their causal relationship with the DV.
  • Example: “An increase in teacher-led instructional time (IV) is likely to improve student reading comprehension scores (DV), because extensive guided practice under expert supervision enhances learning retention and skill mastery.”
  • Hypothesis Explanation: The dependent variable (student reading comprehension scores) is the focus, and the hypothesis explores how changes in the independent variable (teacher-led instructional time) affect it.
  • In X-centered research designs, the independent variable is specified in the research question. Theories are used to determine potential dependent variables and the causal mechanisms at play.
  • Example: “Implementing technology-based learning tools (IV) is likely to enhance student engagement in the classroom (DV), because interactive and multimedia content increases student interest and participation.”
  • Hypothesis Explanation: The independent variable (technology-based learning tools) is the focus, with the hypothesis exploring its impact on a potential dependent variable (student engagement).
  • Probabilistic hypotheses suggest that changes in the independent variable are likely to lead to changes in the dependent variable in a predictable manner, but not with absolute certainty.
  • Example: “The more teachers engage in professional development programs (IV), the more their teaching effectiveness (DV) is likely to improve, because continuous training updates pedagogical skills and knowledge.”
  • Hypothesis Explanation: This hypothesis implies a probable relationship between the extent of professional development (IV) and teaching effectiveness (DV).
  • Deterministic hypotheses state that a specific change in the independent variable will lead to a specific change in the dependent variable, implying a more direct and certain relationship.
  • Example: “If the school curriculum changes from traditional lecture-based methods to project-based learning (IV), then student collaboration skills (DV) are expected to improve because project-based learning inherently requires teamwork and peer interaction.”
  • Hypothesis Explanation: This hypothesis presumes a direct and definite outcome (improvement in collaboration skills) resulting from a specific change in the teaching method.
  • Example: “Students who identify as visual learners will score higher on tests that are presented in a visually rich format compared to tests presented in a text-only format.”
  • Explanation: This hypothesis aims to describe the potential difference in test scores between visual learners taking visually rich tests and text-only tests, without implying a direct cause-and-effect relationship.
  • Example: “Teaching method A will improve student performance more than method B.”
  • Explanation: This hypothesis compares the effectiveness of two different teaching methods, suggesting that one will lead to better student performance than the other. It implies a direct comparison but does not necessarily establish a causal mechanism.
  • Example: “Students with higher self-efficacy will show higher levels of academic achievement.”
  • Explanation: This hypothesis predicts a relationship between the variable of self-efficacy and academic achievement. Unlike a causal hypothesis, it does not necessarily suggest that one variable causes changes in the other, but rather that they are related in some way.

Tips for developing research questions and hypotheses for research studies

  • Perform a systematic literature review (if one has not been done) to increase knowledge and familiarity with the topic and to assist with research development.
  • Learn about current trends and technological advances on the topic.
  • Seek careful input from experts, mentors, colleagues, and collaborators, as this will aid in refining your research question and will help guide the research study.
  • Use the FINER criteria in the development of the research question.
  • Ensure that the research question follows PICOT format.
  • Develop a research hypothesis from the research question.
  • Ensure that the research question and objectives are answerable, feasible, and clinically relevant.

If your research hypotheses are derived from your research questions, particularly when multiple hypotheses address a single question, it’s recommended to use both research questions and hypotheses. However, if this isn’t the case, using hypotheses over research questions is advised. It’s important to note these are general guidelines, not strict rules. If you opt not to use hypotheses, consult with your supervisor for the best approach.

Farrugia, P., Petrisor, B. A., Farrokhyar, F., & Bhandari, M. (2010). Practical tips for surgical research: Research questions, hypotheses and objectives. Canadian Journal of Surgery, 53(4), 278–281.

Hulley, S. B., Cummings, S. R., Browner, W. S., Grady, D., & Newman, T. B. (2007). Designing clinical research. Philadelphia: Lippincott Williams & Wilkins.

Panke, D. (2018). Research design & method selection: Making good choices in the social sciences. Sage.

Enago Academy

How to Develop a Good Research Question? — Types & Examples

Cecilia is living through a tough situation in her research life. Figuring out where to begin, how to start her research study, and how to pose the right question for her research quest is driving her insane. Well, questions, if not asked correctly, have a tendency to send us spiralling!

Image Source: https://phdcomics.com/

Questions lead everyone to answers, and research is a quest to find answers: not the vague questions Cecilia has been asking, but more focused questions that define your research. Therefore, asking the appropriate question becomes an important matter of discussion.

A well-begun research process requires a strong research question. It directs the research investigation and provides a clear goal to focus on. Understanding the characteristics of a good research question will generate new ideas and help you discover new methods in research.

In this article, we aim to help researchers understand what a research question is and how to write one, with examples.

What Is a Research Question?

A good research question defines your study and helps you seek an answer to your research. Moreover, a clear research question guides the research paper or thesis to define exactly what you want to find out, giving your work its objective. Learning to write a research question is the beginning of any thesis, dissertation, or research paper. Furthermore, the question addresses issues or problems that are answered through the analysis and interpretation of data.

Why Is a Research Question Important?

A strong research question guides the design of a study. Moreover, it helps determine the type of research and identify specific objectives. Research questions state the specific issue you are addressing and focus the research on outcomes that individuals can learn from. Therefore, they help break the study into easy steps for completing the objectives and answering the initial question.

Types of Research Questions

Research questions can be categorized into different types, depending on the type of research you want to undertake. Furthermore, knowing the type of research will help a researcher determine the best type of research question to use.

1. Qualitative Research Question

Qualitative questions concern broad areas or more specific areas of research. However, unlike quantitative questions, qualitative research questions are adaptable, non-directional and more flexible. Qualitative research questions focus on discovering, explaining, elucidating, and exploring.

i. Exploratory Questions

This form of question looks to understand something without influencing the results. The objective of exploratory questions is to learn more about a topic without attributing bias or preconceived notions to it.

Research Question Example: Asking how a chemical is used or perceptions around a certain topic.

ii. Predictive Questions

Predictive research questions are defined as survey questions that automatically predict the best possible response options based on the text of the question. Moreover, these questions seek to understand the intent or future outcome surrounding a topic.

Research Question Example: Asking why a consumer behaves in a certain way or chooses a certain option over another.

iii. Interpretive Questions

This type of research question allows for the study of people in their natural setting. The questions help understand how a group makes sense of shared experiences with regard to various phenomena. These studies gather feedback on a group’s behavior without affecting the outcome.

Research Question Example: How do you feel about AI assisting the publishing process in your research?

2. Quantitative Research Question

Quantitative questions prove or disprove a researcher’s hypothesis through descriptions, comparisons, and relationships. These questions are beneficial when choosing a research topic or when posing follow-up questions that garner more information.

i. Descriptive Questions

It is the most basic type of quantitative research question, and it seeks to explain when, where, why, or how something occurred. Moreover, descriptive questions use data and statistics to describe an event or phenomenon.

Research Question Example: How many generations of genes influence a future generation?

ii. Comparative Questions

Sometimes it’s beneficial to compare one occurrence with another. Therefore, comparative questions are helpful when studying groups with dependent variables.

Example: Do men and women have comparable metabolisms?

iii. Relationship-Based Questions

This type of research question examines the influence of one variable on another. Therefore, experimental studies mostly use this type of research question.

Example: How do drought conditions affect a region’s probability of wildfires?

How to Write a Good Research Question?

1. Select a Topic

The first step towards writing a good research question is to choose a broad topic of research. You could choose a research topic that interests you, because the complete study will progress from the research question. Therefore, make sure to choose a topic that you are passionate about, to make your research study more enjoyable.

2. Conduct Preliminary Research

After finalizing the topic, read up on the research studies conducted in the field so far. Furthermore, this will help you find articles that point to topics that are yet to be explored. You could then explore the topics that earlier research has not studied.

3. Consider Your Audience

The most important aspect of writing a good research question is to find out if there is an audience interested in knowing the answer to the question you are proposing. Moreover, determining your audience will assist you in refining your research question and focusing on aspects that relate to defined groups.

4. Generate Potential Questions

The best way to generate potential questions is to ask open-ended questions. Questioning broader topics will allow you to narrow down to specific questions. Identifying the gaps in the literature could also give you topics to frame the research question around. Moreover, you could also challenge existing assumptions or use personal experiences to redefine issues in research.

5. Review Your Questions

Once you have listed a few of your questions, evaluate them to find out if they are effective research questions. Moreover, while reviewing, go through the finer details of each question and its probable outcome, and find out if the question meets the research question criteria.

6. Construct Your Research Question

There are two frameworks for constructing your research question (a short sketch of the PICOT structure follows the lists below). The first is the PICOT framework, which stands for:

  • Population or problem
  • Intervention or indicator being studied
  • Comparison group
  • Outcome of interest
  • Time frame of the study.

The second framework is PEO, which stands for:

  • Population being studied
  • Exposure to preexisting conditions
  • Outcome of interest.
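To illustrate how the PICOT elements fit together, here is a minimal sketch in Python; the class, field names, and example values are hypothetical, for illustration only, not a prescribed tool:

```python
# Minimal sketch: a PICOT-framed research question as a structured record.
# All names and values below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PicotQuestion:
    population: str    # P: population or problem
    intervention: str  # I: intervention or indicator being studied
    comparison: str    # C: comparison group
    outcome: str       # O: outcome of interest
    timeframe: str     # T: time frame of the study

    def as_sentence(self) -> str:
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome} "
                f"over {self.timeframe}?")

q = PicotQuestion(
    population="adults with type 2 diabetes",
    intervention="telemedicine follow-up",
    comparison="in-person visits",
    outcome="perceived quality of care",
    timeframe="12 months",
)
print(q.as_sentence())
```

Filling in every field forces the question to be explicit about who is studied, what is compared, and over what period, which is precisely what the framework is designed to do.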

Research Question Examples

  • How might the discovery of a genetic basis for alcoholism impact triage processes in medical facilities?
  • How do ecological systems respond to chronic anthropogenic disturbance?
  • What are demographic consequences of ecological interactions?
  • What roles do fungi play in wildfire recovery?
  • How do feedbacks reinforce patterns of genetic divergence on the landscape?
  • What educational strategies help encourage safe driving in young adults?
  • What makes a grocery store easy for shoppers to navigate?
  • What genetic factors predict if someone will develop hypothyroidism?
  • Does contemporary evolution along the gradients of global change alter ecosystem function?

How did you write your first research question? What were the steps you followed to create a strong research question? Do write to us or comment below.

Frequently Asked Questions

How do you write qualitative research questions?

Qualitative research questions aim to explore the richness and depth of participants’ experiences and perspectives. They should guide your research and allow for in-depth exploration of the phenomenon under investigation. After identifying the research topic and the purpose of your research:

  • Begin with Broad Inquiry: Start with a general research question that captures the main focus of your study. This question should be open-ended and allow for exploration.
  • Break Down the Main Question: Identify specific aspects or dimensions related to the main research question that you want to investigate.
  • Formulate Sub-questions: Create sub-questions that delve deeper into each specific aspect or dimension identified in the previous step.
  • Ensure Open-endedness: Make sure your research questions are open-ended and allow for varied responses and perspectives. Avoid questions that can be answered with a simple “yes” or “no.” Encourage participants to share their experiences, opinions, and perceptions in their own words.
  • Refine and Review: Review your research questions to ensure they align with your research purpose, topic, and objectives. Seek feedback from your research advisor or peers to refine and improve your research questions.




KENPRO

1.3 Research Objectives and Research Questions

By Anthony M. Wanjohi

A good research problem is one that generates a number of other research questions or objectives. After stating the research problem, you should go ahead and generate research questions or objectives. You may choose to use either research questions or objectives, especially if they refer to one and the same phenomenon.

Research questions refer to the questions which the researcher would like answered by carrying out the proposed study. The only difference between research questions and objectives is that research questions are stated in question form while objectives are stated in statement form. For an objective to be good, it should be SMART: Specific, Measurable, Achievable, Relevant and Time-bound.

The importance of research objectives lies in the fact that they determine:

  • The kind of questions to be asked. In other words, research questions are derived from the objectives.
  • The data collection and analysis procedure to be used. Data collection tools are developed from the research objectives.
  • The design of the proposed study. Various research designs have different research objectives.

Using the study on Teacher and Parental Factors Affecting Students’ Academic Performance in Private Secondary Schools in Embu Municipality, Kenya as an example, you may state your specific research objectives as follows:

  • To find out the teacher factors influencing the students’ academic performance in private secondary schools in Embu Municipality
  • To find out the parental factors influencing the students’ academic performance in private secondary schools in Embu Municipality
  • To determine the extent to which teacher/parental factors affect the students’ academic performance in private secondary schools in Embu Municipality
  • To find out what measures can be put in place to improve the students’ academic performance in private secondary schools in Embu Municipality

Research Questions:

From the aforementioned research objectives, the following research questions can be stated:

  • What are the teacher factors influencing the students’ academic performance in private secondary schools in Embu Municipality?
  • What are the parental factors influencing the students’ academic performance in private secondary schools in Embu Municipality?
  • To what extent do teacher/parental factors affect the students’ academic performance in private secondary schools in Embu Municipality?
  • What measures can be put in place to improve students’ academic performance in private secondary schools in Embu Municipality?

Note that you can choose to use either research objectives or research questions if they are the same, as in the given examples. But in a situation where you derive two or more research questions from one objective, you can use both research objectives and research questions in your proposed study.

Suggested Citation (APA):

Wanjohi, A. M. (2012). Research objectives and research questions. Retrieved from www.kenpro.org/research-objectives-and-research-questions


Pew Research Center

Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as  focus groups , cognitive interviews, pretesting (often using an  online, opt-in sample ), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see  question wording  and  question order  for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services ) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)

Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents. Randomization of response items does not eliminate order effects, but it does ensure that this type of bias is spread randomly.
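As a minimal sketch of this idea (the item list and respondent IDs below are hypothetical, not Pew’s production survey code), per-respondent randomization can be as simple as an independent shuffle of the answer options:

```python
# Minimal sketch: randomizing response-option order per respondent so each
# option appears in every list position roughly equally often overall.
import random

ISSUES = ["the economy", "health care", "education", "terrorism", "energy"]

def options_for(respondent_id: int) -> list[str]:
    shuffled = ISSUES[:]                            # copy; master list unchanged
    random.Random(respondent_id).shuffle(shuffled)  # per-respondent random order
    return shuffled

for rid in range(3):
    print(rid, options_for(rid))
```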

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.

An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
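A minimal sketch of this random split-form assignment (hypothetical respondent IDs; not Pew’s production code):

```python
# Minimal sketch: randomly assigning respondents to one of two question
# forms so that wording is the only systematic difference between groups.
import random

respondents = [f"R{i:03d}" for i in range(1000)]  # hypothetical IDs
random.seed(7)                                    # reproducible assignment
random.shuffle(respondents)

half = len(respondents) // 2
form_a, form_b = respondents[:half], respondents[half:]
print(f"Form A: {len(form_a)} respondents; Form B: {len(form_b)} respondents")
```

Because assignment is random, the two groups are statistically interchangeable, so a significant difference in answers between the forms can be attributed to the wording itself.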

One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order in which questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see measuring change over time for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.



Published on 30.4.2024 in Vol 26 (2024)

Physician Versus Large Language Model Chatbot Responses to Web-Based Questions From Autistic Patients in Chinese: Cross-Sectional Comparative Analysis


Original Paper

  • Wenjie He1,2*, MSc
  • Wenyan Zhang3*, MSc
  • Ya Jin4*, MSc
  • Qiang Zhou2, BS
  • Huadan Zhang2, BE
  • Qing Xia1, MD

1 Tianjin University of Traditional Chinese Medicine, Tianjin, China

2 Dongguan Rehabilitation Experimental School, Dongguan, China

3 Lanzhou University Second Hospital, Lanzhou University, Lanzhou, China

4 Dongguan Songshan Lake Central Hospital, Guangdong Medical University, Dongguan, China

*these authors contributed equally

Corresponding Author:

Qing Xia, MD

Tianjin University of Traditional Chinese Medicine

10 Poyang Lake Road

Tuanpo New Town West, Jinghai District

Tianjin, 301617

Phone: 86 13820689541

Email: [email protected]

Background: There is a dearth of feasibility assessments regarding using large language models (LLMs) for responding to inquiries from autistic patients within a Chinese-language context. Despite Chinese being one of the most widely spoken languages globally, the predominant research focus on applying these models in the medical field has been on English-speaking populations.

Objective: This study aims to assess the effectiveness of LLM chatbots, specifically ChatGPT-4 (OpenAI) and ERNIE Bot (version 2.2.3; Baidu, Inc), one of the most advanced LLMs in China, in addressing inquiries from autistic individuals in a Chinese setting.

Methods: For this study, we gathered data from DXY—a widely acknowledged, web-based, medical consultation platform in China with a user base of over 100 million individuals. A total of 100 patient consultation samples were rigorously selected from January 2018 to August 2023, amounting to 239 questions extracted from publicly available autism-related documents on the platform. To maintain objectivity, both the original questions and responses were anonymized and randomized. An evaluation team of 3 chief physicians assessed the responses across 4 dimensions (relevance, accuracy, usefulness, and empathy), completing 717 evaluations in total. For each evaluation, the team first identified the best response and then rated all responses on a 5-category Likert scale, with each category representing a distinct level of quality. Finally, we compared the responses collected from different sources.

Results: Among the 717 evaluations conducted, 46.86% (95% CI 43.21%-50.51%) favored physicians’ responses, 34.87% (95% CI 31.38%-38.36%) favored ChatGPT, and 18.27% (95% CI 15.44%-21.10%) favored ERNIE Bot. The average relevance scores for physicians, ChatGPT, and ERNIE Bot were 3.75 (95% CI 3.69-3.82), 3.69 (95% CI 3.63-3.74), and 3.41 (95% CI 3.35-3.46), respectively. Physicians (3.66, 95% CI 3.60-3.73) and ChatGPT (3.73, 95% CI 3.69-3.77) demonstrated higher accuracy ratings compared to ERNIE Bot (3.52, 95% CI 3.47-3.57). In terms of usefulness scores, physicians (3.54, 95% CI 3.47-3.62) received higher ratings than ChatGPT (3.40, 95% CI 3.34-3.47) and ERNIE Bot (3.05, 95% CI 2.99-3.12). Finally, concerning the empathy dimension, ChatGPT (3.64, 95% CI 3.57-3.71) outperformed physicians (3.13, 95% CI 3.04-3.21) and ERNIE Bot (3.11, 95% CI 3.04-3.18).

Conclusions: In this cross-sectional study, physicians’ responses exhibited superiority in the present Chinese-language context. Nonetheless, LLMs can provide valuable medical guidance to autistic patients and may even surpass physicians in demonstrating empathy. However, it is crucial to acknowledge that further optimization and research are imperative prerequisites before the effective integration of LLMs in clinical settings across diverse linguistic environments can be realized.

Trial Registration: Chinese Clinical Trial Registry ChiCTR2300074655; https://www.chictr.org.cn/bin/project/edit?pid=199432

Introduction

Artificial intelligence (AI) has revolutionized human-computer interaction, reshaping communication, learning, and creativity paradigms [1,2]. A significant advancement in this realm is the emergence of large language models (LLMs), which have enabled the development of versatile digital assistants capable of understanding and generating human language [3-5]. Through extensive training on textual data, LLMs have acquired profound knowledge across diverse domains, facilitating coherent and contextually relevant interactions in natural language conversations. These models find applications in various domains, including natural language processing, question answering, language generation, and interactive dialogues [6-10]. Moreover, several studies have documented the use of LLMs in the medical field, such as medication consultation [11], health education [12], and medical guidance [13,14].

Autism spectrum disorder (ASD) is a lifelong neurodevelopmental condition characterized by profound social and psychological challenges [15,16]. The estimated prevalence of ASD is approximately 1 in 36 children, with China reporting a prevalence of 1% [17-19], making it a significant public health concern. However, constraints on health care infrastructure development in China have led to resource shortages in numerous regions [20,21], exacerbating the burden on families and society. AI assistants represent underused resources for enhancing diagnosis and treatment efficiency in health care [22]. ChatGPT (OpenAI) [23] and ERNIE Bot (Baidu, Inc) [24] represent AI technologies powered by advancements in LLMs. ChatGPT is a model with 20 billion parameters [25]. ERNIE Bot’s training data, as promoted at the ERNIE Bot launch event, includes trillions of web pages, billions of search and image data points, tens of billions of daily voice call data points, and a knowledge graph of 550 billion facts, which underpins Baidu’s distinctive Chinese-language processing capabilities [26]. While ChatGPT gained widespread recognition for its ability to generate humanlike text across diverse topics [27,28], ERNIE Bot represents the forefront of AI technology in China [29]. Despite their original non–health care focus, their potential to assist in addressing patient inquiries remains largely unexplored [30-32]. Meanwhile, the tiered diagnosis and treatment systems implemented to optimize medical resource use may limit patients’ access to high-quality health care [33].

This study aims to investigate the performance of 2 conversational agents, ERNIE Bot and ChatGPT, in supporting individuals with ASD during web-based interactions. Our hypotheses are 2-fold. First, we hypothesize that ERNIE Bot, developed in China and trained on a data set that includes more Chinese text, may exhibit superior performance compared to ChatGPT, particularly regarding cultural relevance and linguistic nuances. Second, we anticipate that both ERNIE Bot and ChatGPT will demonstrate efficacy in assisting individuals with ASD, as evidenced by their ability to engage users effectively and provide helpful responses during conversational exchanges. Researchers have conducted numerous studies in English evaluating the benefits of LLMs in the medical field. Given the global significance of the Chinese language, this study aims to assess the capability of LLMs to provide high-quality and empathetic responses to health care queries posed by autistic individuals in China.

Data Source

This cross-sectional study aimed to construct a database of inquiries from autistic individuals by aggregating publicly available data from the web-based medical consultation platform DXY [34]. In China, chatbots are not permitted in clinical settings due to existing regulations, prompting the consideration of DXY as a feasible substitute. DXY is a prominent digital health care technology company with a 2-decade track record. The company offers a range of health-related applications, including high-quality health information dissemination, general knowledge services, a web-based medical consultation platform, health product e-commerce, and offline medical treatment. DXY caters to more than 100 million general users and has a user base of 5.5 million professionals, including 2.1 million physicians, constituting approximately 71% of the total number of physicians in the country.

The objective of this cross-sectional study was to analyze 200 cases to detect a 10% disparity (45% vs 55%) in responses provided by physicians and chatbots, with an assumed statistical power of 80%. We planned to use publicly accessible autism-related consultation records from the DXY website. Our sample comprised 100 randomly selected patients from the consultation records from January 2018 to August 2023. Each patient posed 1 to 3 questions, resulting in a total collection of 239 consultation queries. The qualifications of the responding health care professionals ranged from general to chief physicians.
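For readers who want to reproduce this sample size target, the calculation can be approximated with standard power-analysis tools. The sketch below (Python with statsmodels; our illustration, not the authors’ original computation) solves for the per-group n needed to detect a 45% versus 55% split at 80% power with a two-sided α of .05, which comes out to roughly 196, in line with the planned 200 cases.

    # Sketch: approximate power calculation for detecting a 10-point
    # disparity (45% vs 55%) with 80% power, two-sided alpha = .05.
    # This reconstruction uses standard tools, not the authors' code.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    effect = proportion_effectsize(0.55, 0.45)  # Cohen's h for the gap
    n_per_group = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
    )
    print(f"required n per group ≈ {n_per_group:.0f}")  # ≈ 196, i.e. ~200 cases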

Ethical Considerations

We adhered strictly to the terms and conditions of DXY for all analyses, and the medical ethics committee of Lanzhou University Second Hospital approved them (approval 2023A-420). The study used publicly available data from the consultation platform, did not involve personal patient information or direct test subjects, and thus did not require informed consent. We registered the study on the Chinese Clinical Trial Registry (ChiCTR2300074655).

Text Generation With an LLM Chatbot

To closely simulate an authentic medical consultation process, each original question was introduced into a new chatbot conversation between August 16 and August 30, 2023, so that previously posed questions could not influence the outcomes. Both GPT-4 and ERNIE Bot (version 2.2.3) were used for this purpose. After expressions indicative of AI authorship were eliminated, all responses were systematically collected and organized into a structured question-and-answer data set. The consultation content, including regional dialects and typographical errors, was carefully preserved, and questions were quoted verbatim as prompts, to replicate a medical consultation authentically. The chatbot was prompted to mimic a physician’s responses while concealing its AI identity.

Expert Evaluation

A team of 3 chief physicians specializing in child psychiatry and pediatric health care, each from a distinct hospital, comprehensively reviewed the original questions and the physicians’ and chatbots’ responses. The evaluators were presented with the complete patient questions and the physician and chatbot responses. The responders’ identities were anonymized, randomized, and labeled as responses 1, 2, or 3 to ensure that evaluators remained blinded. The evaluators were instructed to thoroughly examine the entire patient question and all 3 responses before assessing the quality of the interactions. The evaluation process commenced with identifying the superior response, followed by evaluating the responses across 4 dimensions using a Likert scale: relevance, correctness, usefulness, and humaneness. The Likert-scale options for each dimension were as follows: relevance (irrelevant, somewhat relevant, partially relevant, relevant, or very relevant), correctness (incorrect, primarily incorrect, partially correct, correct, or very correct), usefulness (useless, of limited use, somewhat useful, useful, or very useful), and humaneness (lacking, slightly humane, moderately humane, humane, or very humane). Researchers assigned ratings on a 1-5 scale, with 1 representing the lowest quality and 5 the highest. Finally, a comparative assessment of the 3 responses was performed, with the quality dimensions for response evaluation detailed in Textbox 1.

Textbox 1. Quality dimensions for response evaluation.

Relevance

  • This dimension evaluates the alignment of responses with test results, emphasizing the system’s ability to generate appropriate text addressing specific issues rather than diverging into unrelated scenarios.

Correctness

  • The correctness dimension focuses exclusively on the accuracy of information within the response, irrespective of the patient’s question. It gauges the scientific and technical precision of explanations based on best medical evidence and practices.

Usefulness

  • This dimension combines the relevance and correctness of the system and evaluates its ability to provide non-obvious insights to patients, non-professionals, and laypersons. It includes providing appropriate recommendations, supplying relevant and accurate information, enhancing patient understanding of test results, and advising actions that optimize health care service use for the patient’s benefit.

Empathy

  • Empathy involves demonstrating abundant respect, effective communication, compassion, and seeking emotional connections with patients. It encompasses recognizing and empathizing with their experience, respecting their thoughts, addressing their concerns patiently, and sincerely promoting their physical and mental well-being. Additionally, empathy entails humanely fulfilling patients’ and their families’ physical, psychological, social, and spiritual needs.

Data Analysis

We used the Kruskal-Wallis H test to assess and compare the quality of the responses provided by physicians, ChatGPT, and ERNIE Bot along 4 dimensions: relevance, correctness, usefulness, and empathy. We present the distribution of responses from each source, including preferences for physicians, ChatGPT, and ERNIE Bot. Furthermore, we examined the proportion of responses that exceeded or fell below critical score thresholds on relevance, correctness, and usefulness, and compared these proportions among responses from physicians, ChatGPT, and ERNIE Bot. All statistical analyses were performed using SPSS (version 27.0; IBM), with a significance level set at P<.05 (2-tailed).
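A minimal sketch of this analysis, assuming hypothetical 1-5 Likert ratings and using Python’s scipy in place of SPSS, is shown below. The post hoc step uses pairwise Mann-Whitney U tests with a Bonferroni correction, one common follow-up to a significant Kruskal-Wallis result (the paper does not name its post hoc test, so this choice is our assumption).

    # Sketch only: Kruskal-Wallis comparison across the 3 sources with
    # Bonferroni-corrected pairwise follow-ups. Scores are made up; the
    # paper's actual analysis was run in SPSS 27.
    from itertools import combinations
    from scipy.stats import kruskal, mannwhitneyu

    scores = {  # hypothetical 1-5 Likert ratings, one list per source
        "physician": [4, 5, 3, 4, 4, 2, 5, 3],
        "chatgpt": [4, 4, 3, 3, 4, 3, 5, 4],
        "ernie_bot": [3, 2, 4, 3, 3, 2, 4, 3],
    }

    h_stat, p_value = kruskal(*scores.values())
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

    # Post hoc: pairwise Mann-Whitney U tests (our assumed follow-up),
    # with p-values multiplied by the number of comparisons (Bonferroni).
    pairs = list(combinations(scores, 2))
    for a, b in pairs:
        _, p = mannwhitneyu(scores[a], scores[b])
        print(f"{a} vs {b}: adjusted p = {min(p * len(pairs), 1.0):.4f}")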

Preferred Responses

This study included 717 evaluations of 239 randomly selected consultation questions. Evaluators indicated their preferences for physicians, ChatGPT, or ERNIE Bot at proportions of 46.86% (336/717; 95% CI 43.21%-50.51%), 34.87% (250/717; 95% CI 31.38%-38.36%), and 18.27% (131/717; 95% CI 15.44%-21.10%), respectively.
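The reported intervals are consistent with the standard normal approximation for a proportion, p ± 1.96·sqrt(p(1−p)/n). As a quick check (our sketch, not the authors’ code), the physician figure can be reproduced from the reported counts:

    # Sketch: recomputing a reported 95% CI for a preference proportion
    # with the normal approximation, p ± 1.96 * sqrt(p * (1 - p) / n).
    import math

    favored, total = 336, 717  # evaluations favoring physician responses
    p = favored / total
    half_width = 1.96 * math.sqrt(p * (1 - p) / total)
    print(f"{p:.2%} (95% CI {p - half_width:.2%} to {p + half_width:.2%})")
    # -> 46.86% (95% CI 43.21% to 50.51%), matching the reported interval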

The distribution of relevance scores differed significantly among the 3 groups (H=111.67, P<.001), with physician responses demonstrating higher relevance than the chatbots (ChatGPT or ERNIE Bot). Specifically, the mean relevance score for physician responses was 3.75 (95% CI 3.69-3.82), whereas the mean relevance scores for ChatGPT and ERNIE Bot were 3.69 (95% CI 3.63-3.74) and 3.41 (95% CI 3.35-3.46), respectively (Figure 1). The proportion of responses rated as off topic (score <4) was lower for physicians (176/717, 24.55%; 95% CI 21.40%-27.70%) than for ChatGPT (258/717, 35.98%; 95% CI 32.47%-39.49%) and ERNIE Bot (366/717, 51.05%; 95% CI 47.39%-54.71%). Post hoc pairwise comparisons using the Bonferroni correction revealed statistically significant differences in relevance scores between all 3 groups: physicians and ChatGPT (P<.001), physicians and ERNIE Bot (P<.001), and ChatGPT and ERNIE Bot (P<.001).


The mean correctness scores for physicians, ChatGPT, and ERNIE Bot were 3.66 (95% CI 3.60-3.73), 3.73 (95% CI 3.69-3.77), and 3.52 (95% CI 3.47-3.57), respectively (Figure 2). Physicians and ChatGPT achieved higher correctness scores than ERNIE Bot, and the distribution of correctness scores differed significantly among the 3 groups (H=49.99, P<.001). In pairwise comparisons, the difference between physicians and ChatGPT (P=.58) was not statistically significant, whereas the differences between physicians and ERNIE Bot (P<.001) and between ChatGPT and ERNIE Bot (P<.001) were both statistically significant. The proportion of responses with errors (score <4) was similar for physicians (196/717, 27.34%; 95% CI 24.08%-30.60%) and ChatGPT (211/717, 29.43%; 95% CI 26.09%-32.77%) and higher for ERNIE Bot (309/717, 43.10%; 95% CI 39.48%-46.72%).


Among the 3 response groups, physician responses exhibited higher levels of usefulness than the chatbots (ChatGPT or ERNIE Bot). Specifically, the mean usefulness score for physician responses was 3.54 (95% CI 3.47-3.62), whereas the mean usefulness scores for ChatGPT and ERNIE Bot were 3.40 (95% CI 3.34-3.47) and 3.05 (95% CI 2.99-3.12), respectively (Figure 3). The proportion of responses rated as useful (score ≥4) was higher for physicians (428/717, 59.69%; 95% CI 56.10%-63.28%) than for the chatbots (ChatGPT: 362/717, 50.49%; 95% CI 46.83%-54.15%; ERNIE Bot: 215/717, 29.99%; 95% CI 26.64%-33.34%). The distribution of usefulness scores differed significantly among the 3 groups (H=135.81, P<.001), and all 3 Bonferroni-adjusted pairwise comparisons were statistically significant: physicians and ChatGPT (P<.001), physicians and ERNIE Bot (P<.001), and ChatGPT and ERNIE Bot (P<.001).


The mean empathy score for ChatGPT was 3.64 (95% CI 3.57-3.71), whereas the mean empathy scores for physicians and ERNIE Bot were 3.13 (95% CI 3.04-3.21) and 3.11 (95% CI 3.04-3.18), respectively (Figure 4); ChatGPT’s responses thus received higher empathy scores than those of physicians and ERNIE Bot. The proportion of responses displaying empathy (score ≥4) was higher for ChatGPT (447/717, 62.34%; 95% CI 58.79%-65.89%) than for physicians (312/717, 43.51%; 95% CI 39.88%-47.14%) and ERNIE Bot (258/717, 35.98%; 95% CI 32.47%-39.49%). The distribution of empathy scores differed significantly among the 3 groups (H=118.58, P<.001). In pairwise comparisons, the differences between physicians and ChatGPT (P<.001) and between ChatGPT and ERNIE Bot (P<.001) were statistically significant, whereas the difference between physicians and ERNIE Bot (P=.14) was not.

The evaluators performed a reliability assessment, which revealed robust repeatability. The intraclass correlation coefficient values for the 4 response categories (relevance, correctness, usefulness, and empathy) were 0.812 (P<.001), 0.831 (P<.001), 0.818 (P<.001), and 0.863 (P<.001), respectively.
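For reference, intraclass correlation coefficients of this kind can be computed from the ratings arranged in long format (one row per response-rater pair). The sketch below uses the pingouin library with toy data; the paper does not state which software or ICC form was used, so this is an assumption-laden illustration only.

    # Sketch: inter-rater reliability via intraclass correlation using
    # pingouin. Toy ratings; the paper does not specify its ICC form or
    # software, so treat this as an illustration of the computation.
    import pandas as pd
    import pingouin as pg

    ratings = pd.DataFrame({
        "response": [1, 1, 1, 2, 2, 2, 3, 3, 3],  # rated response ID
        "rater": ["A", "B", "C"] * 3,             # 3 chief physicians
        "score": [4, 4, 5, 3, 3, 3, 2, 3, 2],     # 1-5 Likert score
    })

    icc = pg.intraclass_corr(
        data=ratings, targets="response", raters="rater", ratings="score"
    )
    print(icc[["Type", "ICC", "pval"]])  # reports ICC1 through ICC3k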


Principal Findings

This study evaluated the capabilities of LLMs, such as ChatGPT and ERNIE Bot, in delivering quality and empathetic responses to medical queries from Chinese autistic individuals. To simulate genuine clinical scenarios, publicly accessible data from web-based medical consultation platforms were used in this cross-sectional investigation. Notably, prevailing regulations in China strictly prohibit the use of AI-generated prescriptions. The study results revealed that expert evaluators favored responses from physicians over those generated by chatbots (ChatGPT or ERNIE Bot). In contrast to previous ophthalmology research that found chatbots outperforming physicians, our findings indicated that physicians received higher scores in relevance, correctness, and usefulness, trailing ChatGPT only on empathy. ERNIE Bot obtained the lowest scores across all 4 dimensions: relevance, correctness, usefulness, and empathy.

Bernstein et al [35] compared the occurrence of incorrect or inappropriate content, potential harm, and degree of harm in responses from chatbots and humans. The study indicated that chatbot responses exhibited a similar likelihood of containing incorrect or inappropriate material compared with human responses (prevalence ratio [PR] 0.92, 95% CI 0.77-1.10). Moreover, no significant differences were observed between chatbot and human responses regarding potential harm (PR 0.84, 95% CI 0.67-1.07) or the degree of harm (PR 0.99, 95% CI 0.80-1.22). These results suggested that LLMs can provide appropriate ophthalmological advice for patient questions of varying complexity. Shao et al [36] developed a set of 37 questions for patient education on thoracic surgery during the perioperative period, covering topics such as disease information, diagnostic procedures, perioperative complications, treatment measures, disease prevention, and perioperative care instructions. An assessment of responses in both English and Chinese contexts revealed that 92% (n=34) were considered appropriate and comprehensive, highlighting the potential feasibility of using ChatGPT for patient education in thoracic surgery in both English and Chinese settings. Zhu et al [37] investigated the application of ChatGPT as a mediator between physicians and patients in Chinese-speaking outpatient settings, focusing mainly on the Chinese Physician Qualification Examination. The study reported an average score of 72.4% in the clinical knowledge section, placing ChatGPT within the top 20th percentile. These findings suggested that ChatGPT can facilitate physician-patient communication in Chinese-speaking outpatient settings. Ayers et al [38] used publicly available data from a social media forum, Reddit, to compare physician and ChatGPT responses. They randomly selected 195 dialogues with questions answered by physicians and generated chatbot responses by inputting the original questions into a new ChatGPT session. Evaluators assessed the responses from both physicians and ChatGPT using a Likert scale, considering preferences, information quality, and empathy or interpersonal style. In 585 evaluations, 78.6% (95% CI 75%-81.8%) of evaluators preferred chatbot responses over those from physicians. These studies highlight the significant potential of chatbots in the field of medicine.

In our study, which consisted of 717 evaluations, evaluators preferred physician responses over those from the chatbots, namely ChatGPT or ERNIE Bot. The preferences for physicians, ChatGPT, and ERNIE Bot were 46.86% (n=336; 95% CI 43.21%-50.51%), 34.87% (n=250; 95% CI 31.38%-38.36%), and 18.27% (n=131; 95% CI 15.44%-21.10%), respectively. Physician responses achieved higher scores in relevance, accuracy, and usefulness, with the only exception being empathy, on which ChatGPT scored higher. Our cross-sectional study’s results differed from previous reports due to several factors. First, unlike previous research primarily conducted in English-speaking settings, this study was conducted in a Chinese-speaking environment. Second, we preserved the original medical questions from autistic patients without modification, including errors and nonstandard expressions such as misspellings and dialects, to simulate authentic clinical consultations. This contrasts with previous research, which often relies on professionally standardized queries. Furthermore, autistic patients’ queries frequently involve subjective matters, such as requests for recommendations of specialist physicians or autism-related resources, that lack standardized answers and reflect significant cultural variation. Finally, our study used samples from a paid web-based medical consultation platform, whereas Ayers et al [38] used dialogues from public social forums on Reddit. Physicians may exhibit more proactive and diligent engagement in paid consultations.

Our study compared the performance of ChatGPT and ERNIE Bot in physician-patient interactions, with ERNIE Bot trained primarily on Chinese text and ChatGPT primarily on English text. While one might assume that ERNIE Bot’s Chinese training would make it more empathetic toward Chinese-speaking users than ChatGPT, the results challenge this notion. The findings suggest that factors beyond the language of training influence the empathetic responsiveness of LLMs, highlighting the complexity of human-AI interactions and emphasizing the need for further exploration of the relationship between language and empathy. Physicians responded better when patients asked for recommendations of specific Chinese books about ASD. Physicians also effectively handled situations where the patient’s condition was misstated, whereas the LLMs provided inaccurate information. Additionally, creating user-friendly interfaces to accommodate patients with varying levels of technological proficiency could improve the accessibility and usability of AI models in health care settings.

Limitations

This study had several notable limitations. First, reliance on web-based consultation platform records constrained each autistic patient to a maximum of 3 questions per consultation, potentially limiting the ability to comprehensively replicate real-world patient-physician interactions. Moreover, the study exclusively examined text-based responses to patient inquiries, neglecting the potential for health care professionals to tailor their responses to individual patient characteristics such as occupation and emotional state; the extent to which clinical professionals adapt their responses in this way remains uncertain. Additionally, the study did not assess the chatbots’ capacity to extract information from health records, representing an area of potential improvement. Finally, although the evaluators were single blinded, they may have introduced bias into their assessments: as coauthors of the paper, they may have hoped for apparent differences between the groups and therefore assigned more extreme scores.

Conclusions

The findings of this cross-sectional study show that, in the present Chinese-language context, physicians’ responses to text-based inquiries from autistic patients outperform those of current state-of-the-art LLMs. Nevertheless, these LLMs, particularly ChatGPT-4, can offer medical guidance to autistic patients and demonstrate greater empathy than physicians. It is essential to emphasize that further refinement and comprehensive research are prerequisites before deploying LLMs effectively in clinical settings across diverse linguistic environments.

Acknowledgments

The authors extend their appreciation to the assessors for their professional contributions in conducting the measurements in this study. Special thanks are extended to Shanghai Tengyun Biotechnology Co, Ltd for developing the Hiplot Pro platform and their valuable technical assistance and tools for data visualization. This research received financial support from the National Natural Science Foundation of China (82105021). The funders were not involved in the study design, execution, or reporting. This study represents an inaugural assessment of the effectiveness of 2 prominent large language models, ChatGPT from the United States and ERNIE Bot from China, in providing high-quality and empathetic responses to medical inquiries from Chinese-speaking autistic patients.

Data Availability

All data generated or analyzed during this study are included in this published article. The data for this study are contained in Multimedia Appendix 1.

Authors' Contributions

QX initiated and designed this study. WH collected and analyzed the data and drafted the manuscript. WZ guided English writing. YJ created the figures and tables. QZ and HZ conducted literature searches and participated in participant recruitment. QX revised the manuscript. QX managed the project and secured funding. All the authors have reviewed and approved the manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1. The raw data.

References

  1. Edelmann A, Wolff T, Montagne D, Bail CA. Computational social science and sociology. Annu Rev Sociol. 2020;46(1):61-81.
  2. Cheng CY, Chiu IM, Hsu MY, Pan HY, Tsai CM, Lin CHR. Deep learning assisted detection of abdominal free fluid in Morison's pouch during focused assessment with sonography in trauma. Front Med (Lausanne). 2021;8:707437.
  3. Omiye JA, Gui H, Rezaei SJ, Zou J, Daneshjou R. Large language models in medicine: the potentials and pitfalls. Ann Intern Med. 2024;177(2):210-220.
  4. Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29(8):1930-1940.
  5. Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW, et al. Large language models encode clinical knowledge. Nature. 2023;620(7972):172-180.
  6. Iannantuono GM, Bracken-Clarke D, Floudas CS, Roselli M, Gulley JL, Karzai F. Applications of large language models in cancer care: current evidence and future perspectives. Front Oncol. 2023;13:1268915.
  7. Tian S, Jin Q, Yeganova L, Lai PT, Zhu Q, Chen X, et al. Opportunities and challenges for ChatGPT and large language models in biomedicine and health. Brief Bioinform. 2023;25(1):bbad493.
  8. Lai TM, Zhai C, Ji H. KEBLM: Knowledge-Enhanced Biomedical Language Models. J Biomed Inform. 2023;143:104392.
  9. Alqahtani T, Badreldin HA, Alrashed M, Alshaya AI, Alghamdi SS, Bin Saleh K, et al. The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research. Res Social Adm Pharm. 2023;19(8):1236-1242.
  10. Müller M, Salathé M, Kummervold PE. COVID-Twitter-BERT: a natural language processing model to analyse COVID-19 content on Twitter. Front Artif Intell. 2023;6:1023281.
  11. Hsu HY, Hsu KC, Hou SY, Wu CL, Hsieh YW, Cheng YD. Examining real-world medication consultations and drug-herb interactions: ChatGPT performance evaluation. JMIR Med Educ. 2023;9:e48433.
  12. Roster K, Kann RB, Farabi B, Gronbeck C, Brownstone N, Lipner SR. Readability and health literacy scores for ChatGPT-generated dermatology public education materials: cross-sectional analysis of sunscreen and melanoma questions. JMIR Dermatol. 2024;7:e50163.
  13. Karakas C, Brock DB, Lakhotia A. Leveraging ChatGPT in the pediatric neurology clinic: practical considerations for use to improve efficiency and outcomes. Pediatr Neurol. 2023;148:157-163.
  14. Yang J, Ardavanis KS, Slack KE, Fernando ND, Della Valle CJ, Hernandez NM. Chat Generative Pretrained Transformer (ChatGPT) and Bard: artificial intelligence does not yet provide clinically supported answers for hip and knee osteoarthritis. J Arthroplasty. 2024;39(5):1184-1190.
  15. Lord C, Charman T, Havdahl A, Carbone P, Anagnostou E, Boyd B, et al. The Lancet commission on the future of care and clinical research in autism. Lancet. 2022;399(10321):271-334.
  16. Maenner MJ, Warren Z, Williams AR, Amoakohene E, Bakian AV, Bilder DA, et al. Prevalence and characteristics of autism spectrum disorder among children aged 8 years—Autism and Developmental Disabilities Monitoring Network, 11 sites, United States, 2020. MMWR Surveill Summ. 2023;72(2):1-14.
  17. Elsabbagh M, Divan G, Koh YJ, Kim YS, Kauchali S, Marcín C, et al. Global prevalence of autism and other pervasive developmental disorders. Autism Res. 2012;5(3):160-179.
  18. Sun X, Allison C, Wei L, Matthews FE, Auyeung B, Wu YY, et al. Autism prevalence in China is comparable to western prevalence. Mol Autism. 2019;10:7.
  19. Luo Y, Pang L, Guo C, Zhang L, Wang Y, Zheng X. Urbanicity and autism of children in China. Psychiatry Res. 2020;286:112867.
  20. Liu J, Miao J, Zhang D. Dilemma of healthcare reform and invention of new discipline of health fiscalogy. Glob Health Res Policy. 2016;1:4.
  21. Feng J, Gong Y, Li H, Wu J, Lu Z, Zhang G, et al. Development trend of primary healthcare after health reform in China: a longitudinal observational study. BMJ Open. 2022;12(6):e052239.
  22. Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, et al. Multi-modality artificial intelligence in digital pathology. Brief Bioinform. 2022;23(6):bbac367.
  23. ChatGPT. OpenAI. URL: https://chat.openai.com/ [accessed 2024-04-26]
  24. ERNIE Bot. Baidu. URL: https://yiyan.baidu.com/ [accessed 2024-04-26]
  25. Singh M, Cambronero J, Gulwani S, Le V, Negreanu C, Verbruggen G. CodeFusion: a pre-trained diffusion model for code generation. ArXiv. Preprint posted online on October 26, 2023.
  26. Launch event for Baidu's ERNIE Bot. Quklive. Baidu, Inc. 2023. URL: https://cloud.quklive.com/cloud/a/embed/1678784107545129 [accessed 2024-04-16]
  27. Schukow C, Smith SC, Landgrebe E, Parasuraman S, Folaranmi OO, Paner GP, et al. Application of ChatGPT in routine diagnostic pathology: promises, pitfalls, and potential future directions. Adv Anat Pathol. 2024;31(1):15-21.
  28. Hwang T, Aggarwal N, Khan PZ, Roberts T, Mahmood A, Griffiths MM, et al. Can ChatGPT assist authors with abstract writing in medical journals? Evaluating the quality of scientific abstracts generated by ChatGPT and original abstracts. PLoS One. 2024;19(2):e0297701.
  29. Jia K, Kenney M, Mattila J, Seppala T. The application of artificial intelligence at Chinese digital platform giants: Baidu, Alibaba and Tencent. SSRN Journal. Preprint posted online on April 25, 2018.
  30. Arora A, Arora A. The promise of large language models in health care. Lancet. 2023;401(10377):641.
  31. Shah NH, Entwistle D, Pfeffer MA. Creation and adoption of large language models in medicine. JAMA. 2023;330(9):866-869.
  32. Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023;330(4):315-316.
  33. Dublin S, Greenwood-Hickman MA, Karliner L, Hsu C, Coley RY, Colemon L, et al. The electronic health record Risk of Alzheimer's and Dementia Assessment Rule (eRADAR) brain health trial: protocol for an embedded, pragmatic clinical trial of a low-cost dementia detection algorithm. Contemp Clin Trials. 2023;135:107356.
  34. Deng Z, Deng Z, Liu S, Evans R. Knowledge transfer between physicians from different geographical regions in China's online health communities. Inf Technol Manag. 2023:1-18.
  35. Bernstein IA, Zhang YV, Govil D, Majid I, Chang RT, Sun Y, et al. Comparison of ophthalmologist and large language model chatbot responses to online patient eye care questions. JAMA Netw Open. 2023;6(8):e2330320.
  36. Shao CY, Li H, Liu XL, Li C, Yang LQ, Zhang YJ, et al. Appropriateness and comprehensiveness of using ChatGPT for perioperative patient education in thoracic surgery in different language contexts: survey study. Interact J Med Res. 2023;12:e46900.
  37. Zhu Z, Ying Y, Zhu J, Wu H. ChatGPT's potential role in non-English-speaking outpatient clinic settings. Digit Health. 2023;9:20552076231184091.
  38. Ayers JW, Poliak A, Dredze M, Leas EC, Zhu Z, Kelley JB, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023;183(6):589-596.

Abbreviations

AI: artificial intelligence
ASD: autism spectrum disorder
LLM: large language model
PR: prevalence ratio

Edited by T Leung, S Ma; submitted 20.11.23; peer-reviewed by N Singh, J Li; comments to author 29.02.24; revised version received 20.03.24; accepted 02.04.24; published 30.04.24.

©Wenjie He, Wenyan Zhang, Ya Jin, Qiang Zhou, Huadan Zhang, Qing Xia. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 30.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

