Question: | How often do British university students use Facebook each week? |
Variable: | Weekly Facebook usage |
Group: | British university students |
Question: | How often do male and female British university students upload photos and comment on other users' photos on Facebook each week? |
Variable: | 1. Weekly photo uploads on Facebook 2. Weekly comments on other users' photos on Facebook |
Group: | 1. Male, British university students 2. Female, British university students |
Question: | What are the most important factors that influence the career choices of Australian university students? |
Variable: | Factors influencing career choices |
Group: | Australian university students |
In each of these example descriptive research questions, we are quantifying the variables we are interested in. However, the units used to quantify these variables will differ depending on what is being measured. For example, in the questions above, we are interested in frequencies (also known as counts), such as the number of calories, photos uploaded, or comments on other users' photos. In the case of the final question, What are the most important factors that influence the career choices of Australian university students?, we are interested in the number of times each factor (e.g., salary and benefits, career prospects, physical working conditions, etc.) was ranked on a scale of 1 to 10 (with 1 = least important and 10 = most important). We may then choose to examine these data by presenting the frequencies, as well as using a measure of central tendency and a measure of spread [see the section on Data Analysis to learn more about these and other statistical tests].
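To make these ideas concrete, here is a minimal Python sketch of how frequencies, a measure of central tendency, and a measure of spread might be computed for one factor. The rankings are invented for illustration only; any real analysis would use your collected data and the statistical tests described in the Data Analysis section.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical rankings (1 = least important, 10 = most important) that
# 12 students gave the factor "salary and benefits".
rankings = [8, 9, 7, 8, 10, 6, 9, 8, 7, 9, 8, 10]

frequencies = Counter(rankings)     # how often each ranking occurs (counts)
central_tendency = mean(rankings)   # one measure of central tendency
spread = stdev(rankings)            # one measure of spread

print(dict(sorted(frequencies.items())))
print(round(central_tendency, 2), round(spread, 2))
```

Other choices (median for central tendency, range or interquartile range for spread) may be more appropriate depending on how the data are distributed.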
However, it is also common when using descriptive research questions to measure percentages and proportions , so we have included some example descriptive research questions below that illustrate this.
Question: | What percentage of American men and women exceed their daily calorific allowance? |
Variable: | Daily calorific intake |
Group: | 1. American men 2. American women |
Question: | What proportion of British male and female university students use the top 5 social networks? |
Variable: | Use of top 5 social networks (i.e. Facebook, MySpace, Twitter, LinkedIn, and Classmates) |
Group: | 1. Male, British university students 2. Female, British university students |
In terms of the first descriptive research question about daily calorific intake, we are not necessarily interested in frequencies, or in using a measure of central tendency or measure of spread; instead, we want to understand what percentage of American men and women exceed their daily calorific allowance. In this respect, this descriptive research question differs from the earlier question that asked: How many calories do American men and women consume per day? Whilst that question simply aimed to measure the total number of calories (i.e., the How many calories part that starts the question), this question aims to measure excess; that is, what percentage of the two groups (i.e., American men and American women) exceed their daily calorific allowance, which is different for males (around 2,500 calories per day) and females (around 2,000 calories per day).
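The calculation behind such a percentage-based question is straightforward. A small Python sketch, using made-up intake figures and the approximate allowances given above, might look like:

```python
# Hypothetical daily intakes (kcal); allowances differ by group
# (men ~2,500 kcal, women ~2,000 kcal, as noted above).
men = [2300, 2700, 2600, 2400, 2900]
women = [1800, 2100, 1950, 2200, 1900]

def pct_exceeding(intakes, allowance):
    """Percentage of the group whose daily intake exceeds the allowance."""
    return 100 * sum(i > allowance for i in intakes) / len(intakes)

print(pct_exceeding(men, 2500))    # 60.0
print(pct_exceeding(women, 2000))  # 40.0
```

Note how the question shapes the computation: rather than summarising total intake, we classify each respondent against a group-specific threshold and report a proportion.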
If you are performing a piece of descriptive, quantitative research for your dissertation, you are likely to need to set quite a number of descriptive research questions. However, if you are using an experimental or quasi-experimental research design, or a more involved relationship-based research design, you are more likely to use just one or two descriptive research questions as a means of providing background to the topic you are studying, helping to give additional context for the comparative research questions and/or relationship-based research questions that follow.
Comparative research questions aim to examine the differences between two or more groups on one or more dependent variables (although often just a single dependent variable). Such questions typically start by asking, "What is the difference in ...?", where the gap is filled by a particular dependent variable (e.g., daily calorific intake) measured across two or more groups (e.g., American men and American women). Examples of comparative research questions include:
Question: | What is the difference in the daily calorific intake of American men and women? |
Dependent variable: | Daily calorific intake |
Groups: | 1. American men 2. American women |
Question: | What is the difference in the weekly photo uploads on Facebook between British male and female university students? |
Dependent variable: | Weekly photo uploads on Facebook |
Groups: | 1. Male, British university students 2. Female, British university students |
Question: | What are the differences in usage behaviour on Facebook between British male and female university students? |
Dependent variable: | Usage behaviour on Facebook (e.g. logins, weekly photo uploads, status changes, commenting on other users' photos, app usage, etc.) |
Groups: | 1. Male, British university students 2. Female, British university students |
Question: | What are the differences in perceptions towards Internet banking security between adolescents and pensioners? |
Dependent variable: | Perceptions towards Internet banking security |
Groups: | 1. Adolescents 2. Pensioners |
Question: | What are the differences in attitudes towards music piracy when pirated music is freely distributed or purchased? |
Dependent variable: | Attitudes towards music piracy |
Groups: | 1. Freely distributed pirated music 2. Purchased pirated music |
Groups reflect different categories of the independent variable you are measuring (e.g., American men and women = "gender"; Australian undergraduate and graduate students = "educational level"; pirated music that is freely distributed and pirated music that is purchased = "method of illegal music acquisition").
Comparative research questions also differ in terms of their relative complexity, by which we are referring to how many items/measures make up the dependent variable or how many dependent variables are investigated. Indeed, the examples highlight the difference between very simple comparative research questions, where the dependent variable involves just a single measure/item (e.g., daily calorific intake), and potentially more complex questions, where the dependent variable is made up of multiple items (e.g., Facebook usage behaviour, including a wide range of items such as logins, weekly photo uploads, status changes, etc.), in which case each of these items may need to be written out as a separate dependent variable.
Overall, whilst the dependent variable(s) highlight what you are interested in studying (e.g., attitudes towards music piracy, perceptions towards Internet banking security), comparative research questions are particularly appropriate if your dissertation aims to examine the differences between two or more groups (e.g., men and women, adolescents and pensioners, managers and non-managers, etc.).
Whilst we refer to this type of quantitative research question as a relationship-based research question, the word relationship should be treated simply as a useful way of describing the fact that these types of quantitative research question are interested in the causal relationships, associations, trends and/or interactions amongst two or more variables on one or more groups. We have to be careful when using the word relationship because, in statistics, a causal relationship can only be established using a particular type of research design, namely an experimental research design, where it is possible to measure the cause and effect between two or more variables; that is, where it is possible to say that variable A (e.g., study time) was responsible for an increase in variable B (e.g., exam scores). However, at the undergraduate and even master's level, dissertations rarely involve experimental research designs, but rather quasi-experimental and relationship-based research designs [see the section on Quantitative research designs]. This means that you often cannot find causal relationships between variables, but only associations or trends.
However, when we write a relationship-based research question, we do not have to make this distinction between causal relationships, associations, trends and interactions (i.e., it is just something that you should keep in the back of your mind). Instead, we typically start a relationship-based quantitative research question with the words "What is the relationship", usually followed by "between" or "amongst", then list the independent variable(s) (e.g., gender) and dependent variable(s) (e.g., attitudes towards music piracy), before stating the group(s) you are focusing on. Examples of relationship-based research questions are:
Question: | What is the relationship between gender and attitudes towards music piracy amongst adolescents? |
Dependent variable: | Attitudes towards music piracy |
Independent variable: | Gender |
Group: | Adolescents |
Question: | What is the relationship between study time and exam scores amongst university students? |
Dependent variable: | Exam scores |
Independent variable: | Study time |
Group: | University students |
Question: | What is the relationship amongst career prospects, salary and benefits, and physical working conditions on job satisfaction between managers and non-managers? |
Dependent variable: | Job satisfaction |
Independent variable: | 1. Career prospects 2. Salary and benefits 3. Physical working conditions |
Group: | 1. Managers 2. Non-managers |
As the examples above highlight, relationship-based research questions are appropriate to set when we are interested in the relationship, association, trend, or interaction between one or more dependent (e.g., exam scores) and independent (e.g., study time) variables, whether on one or more groups (e.g., university students).
The quantitative research design that we select subsequently determines whether we look for relationships , associations , trends or interactions . To learn how to structure (i.e., write out) each of these three types of quantitative research question (i.e., descriptive, comparative, relationship-based research questions), see the article: How to structure quantitative research questions .
The Ohio State University
Generally, in quantitative studies, reviewers expect hypotheses rather than research questions. However, both research questions and hypotheses serve different purposes and can be beneficial when used together.
Research questions clarify the research's aim (Farrugia et al., 2010).
A good research question meets the FINER criteria (Hulley et al., 2007):
- Feasible
- Interesting
- Novel
- Ethical
- Relevant
A focused research question can be framed using the PICOT elements (Farrugia et al., 2010):
- Population (patients)
- Intervention (for intervention studies only)
- Comparison group
- Outcome of interest
- Time
Hypotheses present the researcher's predictions based on specific statements.
If your research hypotheses are derived from your research questions, particularly when multiple hypotheses address a single question, it’s recommended to use both research questions and hypotheses. However, if this isn’t the case, using hypotheses over research questions is advised. It’s important to note these are general guidelines, not strict rules. If you opt not to use hypotheses, consult with your supervisor for the best approach.
Farrugia, P., Petrisor, B. A., Farrokhyar, F., & Bhandari, M. (2010). Practical tips for surgical research: Research questions, hypotheses and objectives. Canadian Journal of Surgery, 53(4), 278–281.
Hulley, S. B., Cummings, S. R., Browner, W. S., Grady, D., & Newman, T. B. (2007). Designing clinical research. Philadelphia.
Panke, D. (2018). Research design & method selection: Making good choices in the social sciences.
Research questions lie at the core of systematic investigation because recording accurate research outcomes depends on asking the right questions. Asking the right questions when conducting research helps you collect relevant and insightful information that ultimately has a positive influence on your work.
The right research questions are typically easy to understand, straight to the point, and engaging. In this article, we will share tips on how to create the right research questions and also show you how to create and administer an online questionnaire with Formplus .
A research question is a specific inquiry which the research seeks to provide a response to. It resides at the core of systematic investigation and it helps you to clearly define a path for the research process.
A research question is usually the first step in any research project. Basically, it is the primary interrogation point of your research and it sets the pace for your work.
Typically, a research question focuses on the research, determines the methodology and hypothesis, and guides all stages of inquiry, analysis, and reporting. With the right research questions, you will be able to gather useful information for your investigation.
Research questions are broadly categorized into two types: qualitative research questions and quantitative research questions. Qualitative and quantitative research questions can be used independently or together, in line with the overall focus and objectives of your research.
If your research aims at collecting quantifiable data, you will need to make use of quantitative research questions. On the other hand, qualitative questions help you to gather qualitative data concerning the perceptions and observations of your research subjects.
A qualitative research question is a type of systematic inquiry that aims at collecting qualitative data from research subjects. The aim of qualitative research questions is to gather non-statistical information pertaining to the experiences, observations, and perceptions of the research subjects in line with the objectives of the investigation.
As the name clearly suggests, ethnographic research questions are inquiries presented in ethnographic research. Ethnographic research is a qualitative research approach that involves observing variables in their natural environments or habitats in order to arrive at objective research outcomes.
These research questions help the researcher to gather insights into the habits, dispositions, perceptions, and behaviors of research subjects as they interact in specific environments.
Ethnographic research questions can be used in education, business, medicine, and other fields of study, and they are very useful in contexts aimed at collecting in-depth and specific information that are peculiar to research variables. For instance, asking educational ethnographic research questions can help you understand how pedagogy affects classroom relations and behaviors.
This type of research question can be administered physically through one-on-one interviews, naturalism (live and work), and participant observation methods. Alternatively, the researcher can ask ethnographic research questions via online surveys and questionnaires created with Formplus.
Examples of Ethnographic Research Questions
A case study is a qualitative research approach that involves carrying out a detailed investigation into a research subject(s) or variable(s). In the course of a case study, the researcher gathers a range of data from multiple sources of information via different data collection methods, and over a period of time.
The aim of a case study is to analyze specific issues within definite contexts and arrive at detailed research subject analyses by asking the right questions. This research method can be explanatory, descriptive , or exploratory depending on the focus of your systematic investigation or research.
An explanatory case study is one that seeks to gather information on the causes of real-life occurrences. This type of case study uses “how” and “why” questions in order to gather valid information about the causative factors of an event.
Descriptive case studies are typically used in business research, and they aim at analyzing the impact of changing market dynamics on businesses. On the other hand, exploratory case studies aim at providing answers to "who" and "what" questions using data collection tools like interviews and questionnaires.
Some questions you can include in your case studies are:
An interview is a qualitative research method that involves asking respondents a series of questions in order to gather information about a research subject. Interview questions can be close-ended or open-ended , and they prompt participants to provide valid information that is useful to the research.
An interview may also be structured, semi-structured , or unstructured , and this further influences the types of questions they include. Structured interviews are made up of more close-ended questions because they aim at gathering quantitative data while unstructured interviews consist, primarily, of open-ended questions that allow the researcher to collect qualitative information from respondents.
You can conduct interview research by scheduling a physical meeting with respondents, through a telephone conversation, and via digital media and video conferencing platforms like Skype and Zoom. Alternatively, you can use Formplus surveys and questionnaires for your interview.
Examples of interview questions include:
Quantitative research questions are questions that are used to gather quantifiable data from research subjects. These types of research questions are usually more specific and direct because they aim at collecting information that can be measured; that is, statistical information.
Descriptive research questions are inquiries that researchers use to gather quantifiable data about the attributes and characteristics of research subjects. These types of questions primarily seek responses that reveal existing patterns in the nature of the research subjects.
It is important to note that descriptive research questions are not concerned with the causative factors of the discovered attributes and characteristics. Rather, they focus on the “what”; that is, describing the subject of the research without paying attention to the reasons for its occurrence.
Descriptive research questions are typically closed-ended because they aim at gathering definite and specific responses from research participants. Also, they can be used in customer experience surveys and market research to collect information about target markets and consumer behaviors.
Descriptive Research Question Examples
A comparative research question is a type of quantitative research question that is used to gather information about the differences between two or more research subjects across different variables. These types of questions help the researcher to identify distinct features that mark one research subject from the other while highlighting existing similarities.
Asking comparative research questions in market research surveys can provide insights on how your product or service matches its competitors. In addition, it can help you to identify the strengths and weaknesses of your product for a better competitive advantage.
The 5 steps involved in the framing of comparative research questions are:
Comparative Research Question Samples
Just like the name suggests, a relationship-based research question is one that inquires into the nature of the association between two research subjects within the same demographic. These types of research questions help you to gather information pertaining to the nature of the association between two research variables.
Relationship-based research questions are also known as correlational research questions because they seek to clearly identify the link between two variables.
Examples of relationship-based research questions include:
Since research questions lie at the core of any systematic investigations, it is important to know how to frame a good research question. The right research questions will help you to gather the most objective responses that are useful to your systematic investigation.
A good research question is one that requires impartial responses and can be answered via existing sources of information. Also, a good research question seeks answers that actively contribute to a body of knowledge; hence, it is a question that is yet to be answered in your specific research context.
An open-ended question is a type of research question that does not restrict respondents to a set of premeditated answer options. In other words, it is a question that allows the respondent to freely express his or her perceptions and feelings towards the research subject.
Examples of Open-ended Questions
A close-ended question is a type of survey question that restricts respondents to a set of predetermined answers, such as multiple-choice questions. Close-ended questions often require yes or no answers and are commonly used in quantitative research to gather numerical data from research participants.
Examples of Close-ended Questions
A Likert scale question is a type of close-ended question that is structured as a 3-point, 5-point, or 7-point psychometric scale . This type of question is used to measure the survey respondent’s disposition towards multiple variables and it can be unipolar or bipolar in nature.
Example of Likert Scale Questions
A rating scale question is a type of close-ended question that seeks to associate a specific qualitative measure (rating) with the different variables in research. It is commonly used in customer experience surveys, market research surveys, employee reviews, and product evaluations.
Example of Rating Questions
Knowing what bad research questions are would help you avoid them in the course of your systematic investigation. These types of questions are usually unfocused and often result in research biases that can negatively impact the outcomes of your systematic investigation.
A loaded question is a question that subtly presupposes one or more unverified assumptions about the research subject or participant. This type of question typically boxes the respondent in a corner because it suggests implicit and explicit biases that prevent objective responses.
Example of Loaded Questions
A negative question is a type of question that is structured with an implicit or explicit negator. Negative questions can be misleading because they upturn the typical yes/no response order by requiring a negative answer for affirmation and an affirmative answer for negation.
Examples of Negative Questions
A leading question is a type of survey question that nudges the respondent towards an already-determined answer. It is highly suggestive in nature and typically consists of biases and unverified assumptions that point toward its premeditated responses.
Examples of Leading Questions
With Formplus, you can create and administer your online research questionnaire easily. In the form builder, you can add different form fields to your questionnaire and edit these fields to reflect specific research questions for your systematic investigation.
Here is a step-by-step guide on how to create an online research questionnaire with Formplus:
The success of your research starts with framing the right questions to help you collect the most valid and objective responses. Be sure to avoid bad research questions like loaded and negative questions that can be misleading and adversely affect your research data and outcomes.
Your research questions should clearly reflect the aims and objectives of your systematic investigation while laying emphasis on specific contexts. To help you seamlessly gather responses for your research questions, you can create an online research questionnaire on Formplus.
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Velentgas P, Dreyer NA, Nourjah P, et al., editors. Developing a Protocol for Observational Comparative Effectiveness Research: A User's Guide. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Jan.
Scott R. Smith, PhD.
The steps involved in the process of developing research questions and study objectives for conducting observational comparative effectiveness research (CER) are described in this chapter. It is important to begin with identifying decisions under consideration, determining who the decisionmakers and stakeholders in the specific area of research under study are, and understanding the context in which decisions are being made. Synthesizing the current knowledge base and identifying evidence gaps is the next important step in the process, followed by conceptualizing the research problem, which includes developing questions that address the gaps in existing evidence. Understanding the stage of knowledge that the study is designed to address will come from developing these initial questions. Identifying which questions are critical to reduce decisional uncertainty and minimize gaps in the current knowledge base is an important part of developing a successful framework. In particular, it is beneficial to look at what study populations, interventions, comparisons, outcomes, timeframe, and settings (PICOTS framework) are most important to decisionmakers in weighing the balance of harms and benefits of action. Some research questions are easier to operationalize than others, and study limitations should be recognized and accepted from an early stage. The level of new scientific evidence that is required by the decisionmaker to make a decision or to take action must be recognized. Lastly, the magnitude of effect must be specified. This can mean defining what is a clinically meaningful difference in the study endpoints from the perspective of the decisionmaker and/or defining what is a meaningful difference from the patient's perspective.
The foundation for designing a new research protocol is the study's objectives and the questions that will be investigated through its implementation. All aspects of study design and analysis are based on the objectives and questions articulated in a study's protocol. Consequently, it is exceedingly important that a study's objectives and questions be formulated meticulously and written precisely in order for the research to be successful in generating new knowledge that can be used to inform health care decisions and actions.
An important aspect of CER 1 and other forms of translational research is the potential for early involvement and inclusion of patients and other stakeholders to collaborate with researchers in identifying study objectives, key questions, major study endpoints, and the evidentiary standards that are needed to inform decisionmaking. The involvement of stakeholders in formulating the research questions increases the applicability of the study to the end-users and facilitates appropriate translation of the results into health care practice and use by patient communities. While stakeholders may be defined in multiple ways, for the purposes of this User's Guide , a broad definition will be used. Hence, stakeholders are defined as individuals or organizations that use scientific evidence for decisionmaking and therefore have an interest in the results of new research. Implicit in this definition of stakeholders is the importance for stakeholders to understand the scientific process, including considerations of bioethics and the limitations of research, particularly with regard to studies involving human subjects. Ideally, stakeholders also should express commitment to using objective scientific evidence to inform their decisionmaking and recognize that disregarding sound scientific methods often will undermine decisionmaking. For stakeholder organizations, it is also advantageous if the organization has well-established processes for transparently reviewing and incorporating research findings into decisions as well as organized channels for disseminating research results.
There are at least seven essential steps in the conceptualization and development of a research question or set of questions for an observational CER protocol. These steps are presented as a general framework in Table 1.1 below and elaborated upon in the subsequent sections of this chapter. The framework is based on the principle that researchers and stakeholders will work together to objectively lay out the research problems, research questions, study objectives, and key parameters for which scientific evidence is needed to inform decisionmaking or health care actions. The intent of this framework is to facilitate communication between researchers and stakeholders in conceptualizing the research problem and the design of a study (or a program of research involving a series of studies) in order to maximize the potential that new knowledge will be created from the research with results that can inform decisionmaking. To do this, research results must be relevant, applicable, unbiased and sufficient to meet the evidentiary threshold for decisionmaking or action by stakeholders. In order for the results to be valid and credible, all persons involved must be committed to protecting the integrity of the research from bias and conflicts of interest. Most importantly, the study must be designed to protect the rights, welfare, and well-being of subjects involved in the research.
Framework for developing and conceptualizing a CER protocol.
In order for research findings to be useful for decisionmaking, the study protocol should clearly articulate the decisions or actions for which stakeholders seek new scientific evidence. While only some studies may be sufficiently robust for making decisions or taking action, statements that describe the stakeholders' decisions will help those who read the protocol understand the rationale for the study and its potential for informing decisions or for translating the findings into changes in health care practices. This information also improves the ability of protocol readers to understand the purpose of the study so they can critically review its design and provide recommendations for ways it may be potentially improved. If stakeholders have a need to make decisions within a critical time frame for regulatory, ethical, or other reasons, this interval should be expressed to researchers and described in the protocol. In some cases, the time frame for decisionmaking may influence the choice of outcomes that can be studied and the study designs that can be used. For some stakeholders' questions, research and decisionmaking may need to be divided into stages, since it may take years for outcomes with long lag times to occur, and research findings will be delayed until they do.
In writing this section of the protocol, investigators should ask stakeholders to describe the context in which the decision will be made or actions will be taken. This context includes the background and rationale for the decision, key areas of uncertainty and controversies surrounding the decision, ways scientific evidence will be used to inform the decision, the process stakeholders will use to reach decisions based on scientific evidence, and a description of the key stakeholders who will use or potentially be affected by the decision. By explaining these contextual factors that surround the decision, investigators will be able to work with stakeholders to determine the study objectives and other major parameters of the study. This work also provides the opportunity to discuss how the tools of science can be applied to generate new evidence for informing stakeholder decisions and what limits may exist in those tools. In addition, this initial step begins to clarify the number of analyses necessary to generate the evidence that stakeholders need to make a decision or take other actions with sufficient certainty about the outcomes of interest. Finally, the contextual information facilitates advance planning and discussions by researchers and stakeholders about approaches to translation and implementation of the study findings once the research is completed.
In designing a new study, investigators should conduct a comprehensive review of the literature, critically appraise published studies, and synthesize what is known related to the research objectives. Specifically, investigators should summarize in the protocol what is known about the efficacy, effectiveness, and safety of the interventions and about the outcomes being studied. Furthermore, investigators should discuss measures used in prior research and whether these measures have changed over time. These descriptions will provide background on the knowledge base for the current protocol. It is equally important to identify which elements of the research problem are unknown because evidence is absent, insufficient, or conflicting.
For some research problems, systematic reviews of the literature may be available and can be useful resources to guide the study design. The AHRQ Evidence-based Practice Centers 2 and the Cochrane Collaboration 3 are examples of established programs that conduct thorough systematic reviews, technology assessments, and specialized comparative effectiveness reviews using standardized methods. When available, systematic reviews and technology assessments should be consulted as resources for investigators to assess the current knowledge base when designing new studies and working with stakeholders.
When reviewing the literature, investigators and stakeholders should identify the most relevant studies and guidelines about the interventions that will be studied. This will allow readers to understand how new research will add to the existing knowledge base. If guidelines are a source of information, then investigators should examine whether these guidelines have been updated to incorporate recent literature. In addition, investigators should assess the health sciences literature to determine what is known about expected effects of the interventions based on current understanding of the pathophysiology of the target condition. Furthermore, clinical experts should be consulted to help identify gaps in current knowledge based on their expertise and interactions with patients. Relevant questions to ask to assess the current knowledge base for development of an observational CER study protocol are:
In designing studies for addressing stakeholder questions, investigators should engage multiple stakeholders in discussions about how the research problem is conceptualized from the stakeholders' perspectives. These discussions will aid in designing a study that can be used to inform decisionmaking. Together, investigators and stakeholders should work collaboratively to determine the major objectives of the study based on the health care decisions facing stakeholders. As pointed out by Heckman, 4 research objectives should be formalized outside considerations of available data and the inferences that can be made from various statistical estimation approaches. Doing so will allow the study objectives to be determined by stakeholder needs rather than the availability of existing data. A thorough discussion of these considerations is beyond the scope of this chapter, but some important considerations are summarized in supplement 1 of this User's Guide.
In order to conceptualize the problem, stakeholders and other experts should be asked to describe the potential relationships between the intervention and important health outcomes. This description will help researchers develop preliminary hypotheses about the stated relationships. Likewise, stakeholders, researchers, and other experts should be asked to enumerate all major assumptions that affect the conceptualization of the research problem, but will not be directly examined in the study. These assumptions should be described in the study protocol and in reporting final study results. By clearly stating the assumptions, protocol reviewers will be better able to assess how the assumptions may influence the study results.
Based on the conceptualization of the research problem, investigators and stakeholders should make use of applicable scientific theory in designing the study protocol and developing the analytic plan. Research that is designed using a validated theory has a higher potential to reach valid conclusions and improve the overall understanding of a phenomenon. In addition, theory will aid in the interpretation of the study findings, since these results can be put in context with the theory and with past research. Depending on the nature of the inquiry, theory from specific disciplines such as health behavior, sociology, or biology could be the basis for designing the study. In addition, the research team should work with stakeholders to develop a conceptual model or framework to guide the implementation of the study. The protocol should also contain one or more figures that summarize the conceptual model or framework as it applies to the study. These figures will allow readers to understand the theoretical or conceptual basis for the study and how the theory is operationalized for the specific study. The figures should diagram relationships between study variables and outcomes to help readers of the protocol visualize relationships that will be examined in the study.
For research questions about causal associations between exposures and outcomes, causal models such as directed acyclic graphs (DAGs) may be useful tools in designing the conceptual framework for the study and developing the analytic plan. The value of DAGs in the context of refining study questions is that they make assumptions explicit in ways that can clarify gaps in knowledge. Free software such as DAGitty is available for creating, editing, and analyzing causal models. A thorough discussion of DAGs is beyond the scope of this chapter, but more information about DAGs is available in supplement 2 of this User's Guide.
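To make the idea concrete, a causal diagram can be encoded and checked programmatically. The sketch below is a minimal, hypothetical illustration in Python: the variable names and the diagram itself are invented for illustration and do not come from any particular study. It verifies that the diagram is acyclic and lists variables that are direct causes of both the exposure and the outcome, i.e., obvious confounding candidates. Full backdoor-adjustment identification is what dedicated tools such as DAGitty provide; this sketch covers only the simplest case.

```python
# Hypothetical causal diagram for a CER question, encoded as a mapping from
# each variable to its direct causes (parents). All names are illustrative.
dag = {
    "age": [],
    "disease_severity": ["age"],
    "treatment": ["age", "disease_severity"],       # exposure of interest
    "outcome": ["treatment", "disease_severity", "age"],
}

def is_acyclic(graph):
    """Confirm the diagram is a DAG via depth-first search for back edges."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    def visit(v):
        color[v] = GRAY
        for parent in graph[v]:
            if color[parent] == GRAY:
                return False          # back edge: cycle found
            if color[parent] == WHITE and not visit(parent):
                return False
        color[v] = BLACK
        return True
    return all(visit(v) for v in graph if color[v] == WHITE)

def common_causes(graph, exposure, outcome):
    """Variables that directly cause both exposure and outcome -- confounding
    candidates that the analytic plan must address."""
    return sorted(set(graph[exposure]) & set(graph[outcome]))

assert is_acyclic(dag)
print(common_causes(dag, "treatment", "outcome"))  # ['age', 'disease_severity']
```

Writing the diagram down this way forces the assumptions (which arrows exist, which are absent) to be explicit in the protocol, which is precisely the value of DAGs noted above.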
The following list of questions may be useful for defining and describing a study's conceptual framework in a CER protocol:
What is known about each element of the model?
Can relationships be expressed by causal diagrams?
The scientific method is a process of observation and experimentation through which the evidence base expands as new knowledge is developed. Therefore, stakeholders and investigators should consider whether a program of research comprising a sequential or concurrent series of studies, rather than a single study, is needed to adequately inform a decision. Staging the research into multiple studies and making interim decisions may improve the final decision and make judicious use of scarce research resources. In some cases, the results of preliminary studies, descriptive epidemiology, or pilot work may be helpful in making interim decisions and designing further research. Overall, a planned series of related studies may be needed to adequately address stakeholders' decisions.
An example of a structured program of research is the four phases of clinical studies used by the Food and Drug Administration (FDA) to reach a decision about whether or not a new drug is safe and efficacious for market approval in the United States. Using this analogy, the final decision about whether a drug is efficacious and safe to be marketed for specific medical indications is based upon the accumulation of scientific evidence from a series of studies (i.e., not from any individual study), which are conducted in multiple sequential phases. The evidence generated in each phase is reviewed to make interim decisions about the safety and efficacy of a new pharmaceutical until ultimately all the evidence is reviewed to make a final decision about drug approval.
Under the FDA model for decisionmaking, initial research involves laboratory and animal tests. If the evidence generated in these studies indicates that the drug is active and not toxic, the sponsor submits an application to the FDA for an “investigational new drug.” If the FDA approves, human testing for safety and efficacy can begin. The first phase of human testing is usually conducted in a limited number of healthy volunteers (phase 1). If these trials show evidence that the product is safe in healthy volunteers, then the drug is further studied in a small number of volunteers who have the targeted condition (phase 2). If phase 2 studies show that the drug has a therapeutic effect and lacks significant adverse effects, trials with large numbers of people are conducted to determine the drug's safety and efficacy (phase 3). Following these trials, all relevant scientific studies are submitted to the FDA for a decision about whether the drug should be approved for marketing. If there are additional considerations like special safety issues, observational studies may be required to assess the safety of the drug in routine clinical care after the drug is approved for marketing (phase 4). Overall, the decisionmaking and research are staged so that the cumulative findings from all studies are used by the FDA to make interim decisions until the final decision is made about whether a medical product will be approved for marketing.
While most decisions about the comparative effectiveness of interventions will not need such extensive testing, it still may be prudent to stage research in a way that allows for interim decisions and sequentially more rigorous studies. On the other hand, conditional approvals or interim decisions may risk confusing patients and other stakeholders about the extent to which current evidence indicates that a treatment is effective and safe for all individuals with a health condition. For instance, under this staged approach new treatments could rapidly diffuse into a market even when there is limited evidence of long-term effectiveness and safety for all potential users. An illustrative example is the case of lung-volume reduction surgery, which was increasingly being used to treat severe emphysema, despite limited evidence supporting its safety and efficacy, until new research raised questions about the safety of the procedure. 6
Below is one potential categorization for the stages of knowledge development as related to informing decisions about questions of comparative effectiveness:
The first stages (i.e., descriptive analysis, hypothesis generation, and feasibility studies) are not mutually exclusive and usually are not intended to provide conclusive results for most decisions. Instead, these stages provide preliminary evidence or feasibility testing before larger, more resource-intensive studies are launched. Results from these categories of studies may allow for interim decisionmaking (e.g., conditional approval for reimbursement of a treatment while further research is conducted). While a phased approach to research may postpone the time when a conclusive decision can be reached, it helps conserve resources, such as those consumed in launching a large multicenter study when a smaller study may be sufficient. Investigators will need to engage stakeholders to prioritize the stage of research that is most useful for the practical range of decisions to be made.
Investigators should discuss in the protocol what stage of knowledge the current study will fulfill in light of the actions available to different stakeholders. This will allow reviewers of the protocol to assess the degree to which the evidence generated in the study holds the potential to fill specific knowledge gaps. For studies that are described in the protocol as preliminary, this may also help readers understand other tradeoffs that were made in the design of the study, in terms of methodological limitations that were accepted a priori in order to gather preliminary information about the research questions.
As recommended in other AHRQ methods guides, 7 investigators should engage stakeholders in a dialogue in order to understand the objectives of the research in practical terms, particularly so that investigators know the types of decisions that the research may affect. In working with stakeholders to develop research questions that can be studied with scientific methods, investigators may ask stakeholders to identify six key components of the research questions that will form the basis for designing the study. These components are reflected in the PICOTS typology and are shown below in Table 1.2 . These components represent the critical elements that will help investigators design a study that will be able to address the stakeholders' needs. Additional references that expand upon how to frame research questions can be found in the literature. 8 - 9
PICOTS typology for developing research questions.
The PICOTS typology outlines the key parts of the research questions that the study will be designed to address. 10 As a new research protocol is developed these questions can be presented in preliminary form and refined as other steps in the process are implemented. After the preliminary questions are refined, investigators should examine the questions to make sure that they will meet the needs of the stakeholders. In addition, they should assess whether the questions can be answered within the timeframe allotted and with the resources that are available for the study.
Since stakeholders ultimately determine effectiveness, it is important for investigators to ensure that the study endpoints and outcomes will meet their needs. Stakeholders need to articulate to investigators the health outcomes that are most important for a particular stakeholder to make decisions about treatment or take other health care actions. The endpoints that stakeholders will use to determine effectiveness may vary considerably. Unlike efficacy trials, in which clinical endpoints and surrogate measures are frequently used to determine efficacy, effectiveness may need to be determined based on several measures, many of which are not biological. These endpoints may be categorized as clinical endpoints, patient-reported outcomes and quality of life, health resource utilization, and utility measures. Types of measures that could be used are mortality, morbidity and adverse effects, quality of life, costs, or multiple outcomes. Chapter 6 gives a more extensive discussion of potential outcome measures of effectiveness.
The reliability, validity, and accuracy of study instruments to validly measure the concepts they purport to measure will also need to be acceptable to stakeholders. For instance, if stakeholders are interested in quality of life as an outcome, but do not believe there is an adequate measure of quality of life, then measurement development may need to be done prior to study initiation or other measures will need to be identified by stakeholders.
Investigators and stakeholders should discuss the tradeoffs of different study designs that may be used for addressing the research questions. This dialogue will help researchers design a study that will be relevant and useful to the needs of stakeholders. All study designs have strengths and weaknesses, the latter of which may limit the conclusiveness of the final study results. Likewise, some decisions may require evidence that cannot be obtained from certain designs. In addition to design weaknesses, there are also practical tradeoffs that need to be considered in terms of research resources, like the time needed to complete the study, the availability of data, investigator expertise, subject recruitment, human subjects protection, research budget, difference to be detected, and lost-opportunity costs of doing the research instead of other studies that have priority for stakeholders. An important decision that will need to be made is whether or not randomization is needed for the questions being studied. There are several reasons why randomization might be needed, such as determining whether an FDA-approved drug can be used for a new use or indication that was not studied as part of the original drug approval process. A paper by Concato includes a thorough discussion of issues to consider when deciding whether randomization is necessary. 11
In discussing the tradeoffs of different study designs, researchers and stakeholders may wish to discuss the principal goals of research and ensure that researchers and stakeholders are aligned in their understanding of what is meant by scientific evidence. Fundamentally, research is a systematic investigation that uses scientific methods to measure, collect, and analyze data for the advancement of knowledge. This advancement is through the independent peer review and publication of study results, which are collectively referred to as scientific evidence. One definition of scientific evidence has been proposed by Normand and McNeil 12 as:
… the accumulation of information to support or refute a theory or hypothesis. … The idea is that assembling all the available information may reduce uncertainty about the effectiveness of the new technology compared to existing technologies in a setting where we believe particular relationships exist but are uncertain about their relevance …
While the primary aim of research is to produce new knowledge, the Normand and McNeil concept of evidence emphasizes that research creates knowledge by reducing uncertainty about outcomes. However, rarely, if ever, does research eliminate all uncertainty around a decision. In some cases, successful research will answer an important question and reduce the uncertainty related to that question, yet it may also increase uncertainty by leading to further, better informed questions about what remains unknown. As a result, nearly all decisions face some level of uncertainty, even in a field where a body of research has been completed. This distinction is critical because it separates the research itself from the subsequent actions that decisionmakers may take based on their assessment of the research results. Those actions may be informed by the research findings but will also be based on stakeholders' values and resources. Hence, as the definition by Normand and McNeil implies, research generates evidence, but stakeholders decide whether to act on the evidence. Scientific evidence informs decisions to the extent it can adequately reduce the uncertainty about the problem facing the stakeholder. Ultimately, treatment decisions are guided by an assessment of the certainty that a course of therapy will lead to the outcomes of interest and of the likelihood that this conclusion will be affected by the results of future studies.
In conceptualizing a study design, it is important for investigators to understand what constitutes sufficient and valid evidence from the stakeholder's perspective. In other words, what type of evidence will be required to inform the stakeholder's decision to act, or to make a conscious decision not to act? The evidence needed for action may vary by type of stakeholder and by the scope of decisions the stakeholder is making. For instance, a stakeholder making a population-based decision, such as whether to provide insurance coverage for a new medical device with many alternatives, may need highly robust research findings in order to take action and provide that coverage. In this example, the stakeholder may only accept as evidence a study with strong internal validity and generalizability (i.e., one conducted in a nationally representative sample of patients with the disease). On the other hand, a patient with a health condition for which there are few treatments may be willing to accept lower-quality evidence in order to decide whether to proceed with treatment, despite a higher level of uncertainty about the outcome.
In many cases, there may exist a gradient of actions that can be taken based on available evidence. Quanstrum and Hayward 13 have discussed this gradient and argued that health care decisionmaking is changing, partly because more information is available to patients and other stakeholders about treatment options. As shown in the upper panel (A) of Figure 1.1 , many people may currently believe that health care treatment decisions are basically uniform for most people under most circumstances. Panel A represents a hypothetical treatment for which there is an evidentiary threshold: above this point, treatment is always beneficial and should be recommended; below it, care provides no benefit and treatment should be discouraged. Quanstrum and Hayward argue that, increasingly, health care decisions are more like the lower panel (B). This panel portrays health care treatments as having a large zone of discretion in which benefits may be low or modest for most people. While above this zone treatment may always be recommended, individuals who fall within the zone may derive questionable health benefit from treatment. As a result, different decisionmakers may take different actions based on their individual preferences.
Conceptualization of clinical decisionmaking. See Quanstrum KH, Hayward RA (Reference #). This figure is copyrighted by the Massachusetts Medical Society and reprinted with permission.
In light of this illustration, the following questions are suggested for discussion with stakeholders to help elicit the amount of uncertainty that is acceptable so that the study design can reach an appropriate level of evidence for the decision at hand:
As mentioned earlier, different stakeholders may disagree on the usefulness of different research designs, but it should be pointed out that this disagreement may be because stakeholders have different scopes of decisions to make. For example, high-quality research that is conclusive may be needed to make a decision that will affect the entire nation. On the other hand, results with more uncertainty as to the magnitude of the effect estimate(s) may be acceptable in making some decisions such as those affecting fewer people or where the risks to health are low. Often this disagreement occurs when different stakeholders debate whether evidence is needed from a new randomized controlled trial or whether evidence can be obtained from an analysis of an existing database. In this debate, both sides need to clarify whether they are facing the same decision or the decisions are different, particularly in terms of their scope.
Groups committed to evidence-based decisionmaking recognize that scientific evidence is only one component of the process of making decisions. Evidence generation is the goal of research, but evidence alone is not the only facet of evidence-based decisionmaking. In addition to scientific evidence, decisionmaking involves the consideration of (a) values, particularly the values placed on benefits and harms, and (b) resources. 14 Stakeholder differences in values and resources may mean that different decisions are made based on the same scientific evidence. Moreover, differences in values may create conflict in the decisionmaking process. One stakeholder may believe a particular study outcome is most important from their perspective, while another stakeholder may believe a different outcome is the most important for determining effectiveness.
Likewise, there may be inherent conflicts in values between individual decisionmaking and population decisionmaking, even though these decisions are often interrelated. For example, an individual may have a higher tolerance for treatment risk in light of the expected treatment benefits for him or her. On the other hand, a regulatory health authority may determine that the population risk is too great without sufficient evidence that treatment provides benefits to the population. An example of this difference in perspective can be seen in how different decisionmakers responded to evidence about the drug Avastin ® (bevacizumab) for the treatment of metastatic breast cancer. In this case, the FDA revoked its approval of the breast cancer indication for Avastin after concluding that the drug had not been shown to be safe and effective for that use. Nonetheless, Medicare, the public insurance program for the elderly and disabled, continued to allow coverage when a physician prescribes the drug, even for breast cancer. Likewise, some patient groups were reported to be concerned by the decision, since it presumably would deny some women access to Avastin treatment. For a more thorough discussion of these differences in perspective, the reader is referred to an article by Atkins 15 and the examples in Table 1.3 below.
Examples of individual versus population decisions (Adapted from Atkins, 2007).
In order for decisions to be objective, it is important to hold an a priori discussion with stakeholders about the magnitude of effect that they believe represents a meaningful difference between treatment options. Researchers will be familiar with the basic tenet that statistically significant differences do not always represent clinically meaningful differences. Hence, researchers and stakeholders will need to know the instruments used to measure differences and the accuracy, limitations, and properties of those instruments. Three key questions are recommended for eliciting from stakeholders the effect sizes that are important to them for making a decision or taking action:
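The tenet that statistical significance need not imply clinical meaningfulness can be illustrated with a back-of-the-envelope calculation. The sketch below is purely hypothetical: the quality-of-life scale, the 10-point minimally important difference, the standard deviation, and the sample sizes are all assumed for illustration. With a very large cohort, a trivial 0.5-point difference is highly "significant" yet far below the threshold stakeholders defined as meaningful.

```python
import math

def two_sided_p(mean_diff, sd, n_per_arm):
    """Two-sided p-value for a two-sample z-test, assuming equal SDs and
    equal arm sizes (a simplification for illustration)."""
    se = math.sqrt(2 * sd**2 / n_per_arm)
    z = abs(mean_diff) / se
    return math.erfc(z / math.sqrt(2))

# Hypothetical 0-100 quality-of-life scale on which stakeholders have said a
# difference smaller than 10 points is not clinically meaningful.
minimally_important_difference = 10.0
observed_diff = 0.5       # tiny average difference between treatment arms
sd = 15.0                 # assumed variability on the scale
n_per_arm = 10_000        # a very large observational cohort

p = two_sided_p(observed_diff, sd, n_per_arm)
print(f"p = {p:.3f}")     # well below 0.05: statistically "significant"
print(observed_diff >= minimally_important_difference)   # False: not meaningful
```

The same 0.5-point difference with 100 patients per arm would not reach significance, which is why the magnitude of effect that matters to stakeholders, not the p-value alone, should drive the discussion.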
In developing CER study objectives and questions, there are some potential challenges that face researchers and stakeholders. The involvement of patients and other stakeholders in determining study objectives and questions is a relatively new paradigm, but one that is consistent with established principles of translational research. A key principle of translational research is that users need to be involved in research at the earliest stages for the research to be adopted. 16 In addition, most research is currently initiated by an investigator, and traditionally there have been few incentives (and some disincentives) to involving others in designing a new research study. Although the research paradigm is rapidly shifting, 17 there is little information about how to structure, process, and evaluate outcomes from initiatives that attempt to engage stakeholders in developing study questions and objectives with researchers. As different approaches are taken to involve stakeholders in the research process, researchers will learn how to optimize the process of stakeholder involvement and improve the applicability of research to the end-users.
Bringing stakeholders together may create some general challenges for the research team. For instance, it may be difficult to identify, engage, or manage all stakeholders who are interested in developing and using scientific evidence to address a problem. A process that allows public commenting on research protocols through Internet postings may help reach the widest network of interested stakeholders. Nevertheless, finding stakeholders who can represent all perspectives may not always be practical for the study team. In addition, competing interests and differing needs among stakeholders may make prioritization of research questions challenging. Nonetheless, as the science of translational research evolves, collaboration between researchers and stakeholders will likely become the standard of practice in designing new research.
To assist researchers and stakeholders with working together, AHRQ has published several online resources to facilitate the involvement of stakeholders in the research process. These include a brief guide for stakeholders that highlights opportunities for taking part in AHRQ's Effective Health Care Program, a facilitation primer with strategies for working with diverse stakeholder groups, a table of suggested tasks for researchers to involve stakeholders in the identification and prioritization of future research, and learning modules with slide presentations on engaging stakeholders in the Effective Health Care Program. 18 - 19 In addition, AHRQ supports the Evidence-based Practice Centers in working with various stakeholders to further develop and prioritize decisionmakers' future research needs, which are published in a series of reports on AHRQ's Web site and on the National Library of Medicine's open-access Bookshelf. 20
Likewise, AHRQ supports the active involvement of patients and other stakeholders in the AHRQ DEcIDE program, in which different models of engagement have been used. These models include hosting in-person meetings with stakeholders to create research agendas; 21 - 22 developing research based on questions posed by public payers such as Centers for Medicare and Medicaid Services; addressing knowledge gaps that have been identified in AHRQ systematic reviews through new research; and supporting five research consortia, each of which involves researchers, patients, and other stakeholders working together to develop, prioritize, and implement research studies.
This chapter provides a framework for formulating the study objectives and questions of a research protocol on a CER topic. Implementation of the framework involves collaboration between researchers and stakeholders in conceptualizing the research objectives and questions and the design of the study. In this process, there is a shared commitment to protect the integrity of the research results from bias and conflicts of interest, so that the results are valid for informing decisions and health care actions. Due to the complexity of some health care decisions, the evidence needed for decisionmaking or action may need to be developed from multiple studies, including preliminary research that becomes the underpinning for larger studies. The principles described in this chapter are intended to strengthen the writing of research protocols and enhance the results of the resulting studies for informing the important decisions facing patients, providers, and other stakeholders about health care treatments and new technologies. Subsequent chapters in this User's Guide provide specific principles for operationalizing the study objectives and research questions in writing a complete study protocol that can be executed as new research.
Guidance | Key Considerations | Check
---|---|---
Characterize the primary uses and users (stakeholders) of the scientific evidence that will be generated by the study, and explain how the evidence may be used. | | □
Articulate the main study objectives in terms of a highly specific research question or set of related questions that the study will answer. | | □
Synthesize the literature and characterize the known effects of the exposures and interventions on patient outcomes. | | □
Provide a conceptual framework. | | □
Delineate study limitations that stakeholders and investigators are willing to accept a priori. | | □
Describe the meaningful magnitude of change in the outcomes of interest as defined by stakeholders. | | □
Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide is copyrighted by the Agency for Healthcare Research and Quality (AHRQ). The product and its contents may be used and incorporated into other materials on the following three conditions: (1) the contents are not changed in any way (including covers and front matter), (2) no fee is charged by the reproducer of the product or its contents for its use, and (3) the user obtains permission from the copyright holders identified therein for materials noted as copyrighted by others. The product may not be sold for profit or incorporated into any profitmaking venture without the expressed written permission of AHRQ.
There are three basic types of questions that research projects can address: descriptive, relational, and causal.
The three question types can be viewed as cumulative. That is, a relational study assumes that you can first describe (by measuring or observing) each of the variables you are trying to relate. And, a causal study assumes that you can describe both the cause and effect variables and that you can show that they are related to each other. Causal studies are probably the most demanding of the three.
Comparative analysis is a method widely used in social science. It compares two or more items with the aim of uncovering and discovering new ideas about them, often comparing and contrasting social structures and processes around the world to grasp general patterns. Comparative analysis seeks to understand and explain every element of the data being compared.
Most social scientists are engaged in some form of comparative analysis. Macfarlane observed that in history the comparisons are typically across time, whereas in the other social sciences they are predominantly across space: the historian takes their own society, compares it with past societies, and analyzes how far they differ from each other.
The comparative method of social research is a product of 19th-century sociology and social anthropology. Sociologists such as Emile Durkheim, Herbert Spencer, and Max Weber used comparative analysis in their works. For example, Max Weber compared the Protestants of Europe with Catholics, and also compared them with other religions such as Islam, Hinduism, and Confucianism.
In social science, comparisons can be made in different ways, depending on the topic and the field of study. Emile Durkheim, for example, compared societies in terms of organic solidarity and mechanical solidarity, and he provides us with three different approaches to the comparative method, which include:

2. The unit of comparison

3. The motive of comparison
Comparative analysis is thus one method among others available to the social scientist. The researcher who uses it must know on what grounds the comparison is being made, must consider its strengths, limitations, and weaknesses, and must know how to carry out the analysis.
As mentioned earlier, the first step is to determine the unit of comparison for your study, considering all of its dimensions. This is where you set out the two things you need to compare so that you can properly analyze and compare them. It is not an easy step: it must be done systematically and scientifically, with proper methods and techniques. You have to define your objectives and variables, make some assumptions, ask yourself what you need to study, or form a hypothesis for your analysis.
The best frames of reference are built from specific sources rather than your own thoughts or perceptions. To do that, you can select some attributes of the societies, such as marriage, law, customs, and norms; by doing this you can easily compare and contrast the two societies you selected for your study. You can set questions such as: Are the marriage practices of Catholics different from those of Protestants? Do men and women get an equal voice in their choice of mate? You can set as many questions as you want, because they will explore the truth about that particular topic. A comparative analysis must have such attributes to study: a social scientist who wishes to compare must develop the research questions that come to mind, and a study without them will not be a fruitful one.
The grounds of comparison should be understandable to the reader. You must explain why you selected these units for your comparison. It is natural for a reader to ask: why this one and not another? What is the reason behind choosing this particular society? If a social scientist chooses a primitive Asian society and a primitive Australian society for comparison, he must explain the grounds of comparison to the readers. The comparison in your work must be self-explanatory, without complications.
The main element of the comparative analysis is the thesis or report. The report is the most important part: it must contain your whole frame of reference, including your research questions, the objectives of your topic, the characteristics of your two units of comparison, the variables in your study, and, last but not least, the findings and conclusion. The findings must be self-explanatory, because the reader must understand to what extent the units are connected and how they differ. For example, in his theory of the division of labour, Emile Durkheim distinguished organic solidarity from mechanical solidarity, characterizing primitive society by mechanical solidarity and modern society by organic solidarity. In the same way, you have to state your findings in the thesis.
Your paper must link each point in the argument; without that, the reader cannot follow the logical and rational progression of your analysis. In a comparative analysis you need to compare the 'x' and 'y' of your paper (x and y being the two units in your comparison). To do that you can use connectives such as 'likewise', 'similarly', and 'on the contrary'. For example, comparing primitive and modern society, we can say: in primitive society the division of labour is based on gender and age; in modern society, on the contrary, it is based on a person's skill and knowledge.
Comparative analysis is not always successful, and it has limitations. Its broad use can easily create the impression that it is a firmly established, smooth, and unproblematic method of investigation which, given its evident logical status, can produce dependable knowledge once certain technical preconditions are met.
One more basic issue, with broad ramifications, concerns the choice of the units being compared. Far from being an innocent or simple task, the choice of comparison units is a critical and tricky matter. The danger is that in such investigations the descriptions of the cases chosen for comparison with the principal one tend to become oversimplified, shallow, and stylized, with distorted arguments and conclusions as a result.
Nevertheless, comparative analysis remains a method with exceptional benefits, chiefly its capacity to make us perceive the limits of our own thinking and to guard against the weaknesses and harmful consequences of localism and parochialism. We may, however, have something to learn from historians' hesitation in using comparison, and from their respect for the uniqueness of contexts and of peoples' histories. Above all, by comparing we discover the underlying and undiscovered connections and differences that exist in society.
Comparative research involves comparing elements to better understand the similarities and differences between them, applying rigorous methods and analyzing the results to draw meaningful conclusions. It helps to expand knowledge and provides a basis for informed decisions.
Learn more about its features and how it can be done.
Comparative research is research designed to analyse and compare two or more elements or phenomena to identify similarities, differences, and patterns between them. It is used in various disciplines such as science, psychology, sociology and economics.
The main features of comparative research are:
Comparative research is used in a variety of situations and for a variety of purposes. Here are some examples where you can use this approach:
Conducting comparative research requires a few basic steps. Here is a simple explanation of how to do it:
Before you begin, you should be clear about what you want to achieve with comparative research. Clearly define the goal and the research questions you want to answer.
Determine the elements, phenomena, or groups you want to compare. These can be different countries, cultures, policies, products, groups of people, etc. Make sure they are comparable and that you can get relevant data for each item.
Make sure you have a clear understanding of the elements you want to compare and that they are relevant to your research objective. The selected elements should be comparable to each other. This means that they should have characteristics and properties that can be measured and compared in a meaningful way.
When selecting a sample of items for comparison, ensure that it is a representative sample of the population or group to which you want to generalize the results.
Use a variety of sources and methods to collect data about the items being compared. Identify appropriate data sources to collect information about the items being compared. These sources may include surveys, interviews, direct observations, databases, historical records, government reports, academic literature, media, and others.
Perform a quality check on the data collected. This includes checking the consistency, accuracy and completeness of the data. If necessary, perform additional checks or contact participants to clarify any ambiguities or errors in the data.
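As a minimal sketch of the quality check described above, the snippet below validates invented survey records for completeness and a plausible value range. The field names (`country`, `respondent_id`, `score`) and the 1–5 range are assumptions for illustration, not part of any real instrument.

```python
# Check completeness and consistency of collected records before comparison.
# Required fields and the valid score range are invented for this sketch.
REQUIRED = ("country", "respondent_id", "score")

def check_record(record):
    """Return a list of problems found in one survey record."""
    problems = []
    for field in REQUIRED:
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    score = record.get("score")
    if isinstance(score, (int, float)) and not 1 <= score <= 5:
        problems.append("score out of 1-5 range")
    return problems

records = [
    {"country": "BR", "respondent_id": 1, "score": 4},
    {"country": "US", "respondent_id": 2, "score": 9},     # inconsistent
    {"country": "US", "respondent_id": 3, "score": None},  # incomplete
]
for rec in records:
    print(rec["respondent_id"], check_record(rec) or "ok")
```

Records that fail the check can then be flagged for follow-up with participants, as suggested above.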
Review the data collected and conduct a comparative analysis. Identify similarities and differences between the elements being compared. You can use statistical analysis techniques and comparison graphs, or simply compare the data qualitatively.
You can use descriptive statistics to summarize and present quantitative data clearly and concisely. This can include measures of central tendency (such as the mean, median, or mode) and measures of dispersion (such as the standard deviation or range). With the help of descriptive statistics you can understand the main characteristics of the items being compared.
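As a concrete illustration of these summary measures, here is a minimal standard-library sketch; the two groups and their scores are invented.

```python
# Summarize two groups being compared with descriptive statistics,
# using only the Python standard library. Data are invented.
import statistics

group_a = [3.2, 3.8, 4.1, 3.5, 3.9, 4.0]  # e.g., scores from country A
group_b = [2.9, 3.1, 3.4, 3.0, 3.6, 3.2]  # e.g., scores from country B

def summarize(values):
    """Central tendency and dispersion for one group."""
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values),  # sample standard deviation
        "range": max(values) - min(values),
    }

for name, data in [("A", group_a), ("B", group_b)]:
    s = summarize(data)
    print(f"Group {name}: mean={s['mean']:.2f} median={s['median']:.2f} "
          f"stdev={s['stdev']:.2f} range={s['range']:.2f}")
```

Placing the two summaries side by side is often the first qualitative comparison step before any statistical testing.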
Based on the analyses carried out, interpret the results of the investigation. Identify patterns, trends, or causal relationships that emerge from the comparison. Explain the similarities and differences observed and look for possible explanations.
Try to find possible explanations for the results observed in your comparative research. Identify key variables that may influence the similarities and differences identified. Consider whether there are underlying causal factors or mediating variables that could explain the results obtained.
Draw relevant conclusions based on the interpretation of the results. Summarize the most important results of the comparative research and answer the research questions asked in the first step.
Reflect on the impact of your findings in the broader context. Explore how the results may contribute to existing knowledge on the topic and how they might impact practice. Additionally, identify the limitations of your comparative research, e.g., possible biases or limitations in the sample or methods used.
Communicate the results of your comparative research clearly and concisely. You can use written reports, visual presentations, charts, or comparative tables, whichever works best for your audience.
Comparative research is used to analyse and compare elements, phenomena or practices in order to understand differences, identify best practices, evaluate policies or programs and make informed decisions in various fields such as culture, economics, science, education, etc.
Remember that data collection is a crucial phase in comparative research. It is important that it is carried out carefully and accurately in order to obtain reliable and valid information that allows you to make meaningful comparisons between the selected items.
Online survey tools like QuestionPro help you with structured data collection. You can set up a free account to try out the basic features, or request a demo to discuss your research needs and learn more about the available products and licenses.
Although not everyone would agree, comparing is not always bad; it can bring real benefits. There are times in life when we feel lost: you may not be getting the job you want, or the fit body you have been working toward for a long time. Then you happen to cross paths with an old friend who got the job you always wanted. That scenario may lower your self-esteem, knowing this friend got what you want while you didn't. Or you can choose to see your friend as proof that your desire is attainable: come up with a plan to achieve your personal development goal, and perhaps ask for tips from this person or from the people who inspire you. According to an article posted on brit.co, licensed master social worker and therapist Kimberly Hershenson said that comparing yourself to someone successful can be excellent self-motivation to work on your goals.
Aside from self-improvement, as a researcher, you should know that comparison is an essential method in scientific studies, such as experimental research and descriptive research . Through this method, you can uncover the relationship between two or more variables of your project in the form of comparative analysis .
Aiming to compare two or more variables of an experimental project, experts usually apply comparative research in the social sciences to compare countries and cultures across a particular area or the entire world. Despite its proven effectiveness, keep in mind that some countries have different policies on sharing data; it therefore helps to consider such factors when gathering specific information.
In comparing variables, the statistical and mathematical data collection and analysis that quantitative research methodology naturally uses to uncover the correlational connection between variables can be essential. Additionally, since quantitative research requires a specific research question, this method can help you quickly come up with one particular comparative research question.
The goal of comparative research is to draw a solution out of the similarities and differences between the focused variables. Through non-experimental or qualitative research, you can include this type of research method in your comparative research design.
If you are going to write an essay for a comparative research paper, this section is for you. There are common mistakes that students make in essay writing; to avoid them, follow the pointers below.
One of the mistakes students make when writing a comparative essay is comparing the artists instead of their artworks. Unless your instructor asked for a biographical essay, focus your writing on the works of the artists you choose.
There is broad coverage of information on the internet for your project. Some students, however, choose their images randomly, and in doing so may not create a successful comparative study. We therefore recommend discussing your selections with your teacher.
It is common for students to repeat the ideas they have already listed in the comparison part. Keep in mind that the space for this activity is limited, so it is crucial to reserve each space for more thoroughly argued ideas.
Unless instructed otherwise, it is practical to include only a few items (artworks). In this way, you can focus on developing well-argued information for your study.
We get it: you are doing this project because your instructor told you to. However, you can make your study more valuable by understanding the goals of the project and how you can apply what you learn. You should also know the criteria your teachers use to assess your output; this gives you a chance to maximize the grade you can get from the project.
Comparing things is one way to know what to improve in various aspects. Whether you aim to attain a personal goal or to find a solution to a certain task, you can accomplish it by knowing how to conduct a comparative study. Use this content as a tool to expand your knowledge of this research methodology.
The frequently asked questions below are related to the NOFO:
RFA-AG-25-028 Mentored Career Enhancement Awards to Build Cross-Disciplinary Knowledge and Skills for Comparative Studies of Human and Nonhuman Primate Species with Differing Life Spans (K18) .
These FAQs will be updated periodically as additional questions arise from applicant inquiries.
On August 15, 2024, from 10:00 a.m. to 12:00 p.m. ET, NIA will host a pre-application webinar for this RFA.
On this page you will find FAQs on:
Eligibility.
Below are general information FAQs.
A. Yes. This K18 award is intended for experienced scientists with expertise in human and/or NHP studies, who either wish to broaden their scientific capabilities or to make changes in their research careers by cross-training in a pertinent field to acquire new research skills or knowledge.
A. No. Examples of cross-training discipline collaborations for scientists with expertise in human and/or nonhuman primate (NHP) studies that would be well-suited for this award include collaborations on factors related to differences in primate species' longevity, engaging cross-disciplinary combinations such as (but not limited to):
A. Yes. Section IV: Application and Submission Information of this NOFO contains specific instructions regarding application content. Please review Section V: Application Review Information of the NOFO for review criteria “Specific to this NOFO” which outlines additional evaluation questions to address.
A: There is a single receipt date: November 1, 2024. No late applications will be accepted.
A. Letters of Intent are not required, but they are recommended. Letters of Intent are due on September 20, 2024. It is also strongly encouraged that all applicants consult with the NIA program staff early in their planning process and not later than the Letter of Intent due date. This consultation is considered separate from the Letter of Intent. Inquiries should be sent to the NIA Scientific/Research Contact listed below. Inquiries will be directed to the appropriate NIA Division staff who can then advise whether proposed activities would be considered responsive to the objectives and goals of this FOA.
A: NIA intends to commit up to $1.5 million in FY 2025 to fund approximately 4 to 5 awards, contingent on funding availability.
A: Award budgets are composed of salary and other program-related expenses. NIA will contribute up to $100,000 in direct costs per year toward the research development costs of the award recipient. The total salary requested may not exceed the legislatively mandated salary cap (see NOT-OD-24-057 for further guidance).
A: Application Submission Contacts
eRA Service Desk (Questions regarding ASSIST, eRA Commons, application errors and warnings, documenting system problems that threaten submission by the due date, and post-submission issues)
Finding Help Online: http://grants.nih.gov/support/ (preferred method of contact) Telephone: 301-402-7469 or 866-504-9552
General Grants Information (Questions regarding application instructions, application processes, and NIH grant resources) Email: [email protected] (preferred method of contact) Telephone: 301-945-7573
Grants.gov Customer Support (Questions regarding Grants.gov registration and Workspace) Contact Center Telephone: 800-518-4726 Email: [email protected]
Scientific/Research Contact Carol Nguyen National Institute on Aging (NIA) Email: [email protected]
Peer Review Contact Ramesh Vemuri, Ph.D. National Institute on Aging (NIA) Telephone: 301-402-7700 Email: [email protected]
Financial/Grants Management Contact Laura Pone National Institute on Aging (NIA) Telephone: 301-451-9956 Email: [email protected]
Below are eligibility information FAQs.
A. This award is intended for mid-career investigators who have established records of independent, peer-reviewed research grant funding. An academic rank of associate or full professor, or the equivalent in non-academic research settings, is required for eligibility for this award. Please see Section III of the NOFO for additional eligibility information.
A. An applicant must devote a minimum of three person months of effort (i.e., 25% of full-time professional effort) per year to their program of career development. This effort can be expended consistently throughout the academic or calendar year or be focused during summer or off-term months. Candidates may engage in other duties as part of the professional effort not covered by this award, as long as such duties do not interfere with or detract from the proposed career development program.
Below are mentor information FAQs.
A. Yes. Applicants may identify more than one mentor, as long as all proposed mentors have relevant expertise and available resources in one or more applicant knowledge-gap areas.
A. No. Award budgets are composed of applicant salary and research development costs of the award recipient (as outlined in Section II. Award Information). However, this interdisciplinary collaboration is intended to be mutually beneficial, by providing both the mentor and candidate with a broadened and more diverse research scope, as well as expanded networks for future collaboration and resource access. It is expected that this initiative will lead to new and/or augmented collaborative research programs competitive for NIA funding.
A. There are a variety of research resources that can assist in identifying potential mentors and/or be engaged as components of the applicant's career enhancement plan. Many of these resources are listed in Section I. Funding Opportunity Description of RFA-AG-25-028. We recommend that potential applicants consult NIA program staff early in the process of considering or planning an application, especially if help is needed in identifying a mentor or mentoring team. Inquiries should be sent to the NIA Scientific/Research Contact listed above.
A. No. Career enhancement activities may be conducted within the candidate’s institution but must not be in the department of the applicant’s current primary appointment. The proposed research project can be conducted in facilities or field sites affiliated with either the applicant’s institution (i.e., site of the candidate’s primary appointments) or the mentor’s institution. Additionally, the candidate and proposed mentor(s) should not have extensive previous research collaborations.
A Comparative Study of the Influence of Communication on the Adoption of Digital Agriculture in the United States and Brazil †
2. Materials and Methods
2.1. Study Region
2.2. Survey Instrument
2.3. Data Collection
2.4. Data Analysis
3. Results and Discussion
3.1. Technology Adoption, Decisions, and Benefits
3.2. Level of Influence from Mass Media, Social Media, and Interpersonal Meetings
3.3. Relationship between the Adoption of Technologies and Communication Channels
4. Conclusions
Author Contributions, Institutional Review Board Statement, Data Availability Statement, Acknowledgments, Conflicts of Interest
Use of Digital Technologies | Brazil (Mean) | United States (Mean)
---|---|---
Guidance/Autosteer | 3.56 *** | 4.23 ***
Yield monitors | 2.92 *** | 4.31 ***
Satellite/drone imagery | 2.99 | 2.94
Soil electrical conductivity mapping | 1.50 *** | 1.81 ***
Wired or wireless sensor networks | 2.10 ** | 2.36 **
Electronic records/mapping for traceability | 2.09 *** | 3.26 ***
Sprayer control systems | 1.98 *** | 3.93 ***
Automatic rate control telematics | 2.11 *** | 3.36 ***
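The asterisks in the table mark statistically significant Brazil–US differences, but this excerpt does not name the test the authors used. As a hedged sketch, the snippet below computes Welch's t-statistic, one common choice for comparing two independent group means; the sample data are invented.

```python
# Welch's t-statistic for two independent samples, standard library only.
# This is an illustrative choice of test, not necessarily the one used
# in the paper; the data below are invented.
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se

brazil = [3.4, 3.7, 3.2, 3.8, 3.5]
united_states = [4.1, 4.4, 4.0, 4.3, 4.2]
print(f"t = {welch_t(brazil, united_states):.2f}")
```

A large negative t here would indicate the second group's mean is substantially higher; a full analysis would also compute degrees of freedom and a p-value.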
Making Decisions | Brazil (Mean) | United States (Mean)
---|---|---
NPK fertilization and liming application | 3.64 *** | 3.93 ***
Overall hybrid/variety selection | 3.49 | 3.53
Overall crop planting rates | 3.44 | 3.45
Variable seeding rate prescriptions | 2.38 *** | 2.72 ***
Pesticide selection (herbicides, insecticides or fungicides) | 3.26 *** | 2.91 ***
Cropping sequence/rotation | 3.12 *** | 2.69 ***
Irrigation | 2.02 *** | 1.41 ***
Benefits | Brazil (Mean) | United States (Mean)
---|---|---
Increased crop productivity/yields | 3.70 ** | 3.92 **
Cost reductions | 3.63 | 3.78
Purchase of inputs | 3.38 | 3.40
Marketing choices | 3.31 *** | 2.96 ***
Time savings (paper filing to digital) | 3.51 *** | 3.17 ***
Labor efficiencies | 3.57 *** | 3.30 ***
Lower environmental impact | 3.34 *** | 2.99 ***
Autosteer (less fatigue/stress) | 3.54 *** | 4.18 ***
Communication Channel | Brazil (Mean) | United States (Mean)
---|---|---
*Mass media* | |
Newspaper | 1.75 *** | 2.11 ***
Magazine | 2.11 *** | 2.78 ***
Radio | 2.17 ** | 2.40 **
Television | 2.15 | 2.10
Website and blog | 3.38 | 3.41
Cable television | 2.41 *** | 1.55 ***
*Social media* | |
YouTube | 3.17 *** | 2.52 ***
| 3.65 | -
| 2.40 *** | 1.74 ***
| - | 1.89
| 2.03 *** | 1.47 ***
| 2.61 *** | 1.26 ***
Snapchat | - | 1.26
Messenger | 1.71 | -
*Interpersonal meetings* | |
Field days | 3.87 *** | 3.51 ***
Conferences, forums, seminars | 3.86 *** | 3.53 ***
Extension agents | 3.63 | 3.50
Retailers | 3.20 *** | 3.50 ***
Peer groups | 3.42 | 3.41
Conversations with neighbors | 3.62 ** | 3.40 **
Digital Technology | Brazil | United States
---|---|---
Guidance/Autosteer | 1st Conversation with neighbors (ρS 0.209) | 1st YouTube (ρS 0.208)
| 2nd Conferences, forums, seminars (ρS 0.120) | 2nd Twitter (ρS 0.159)
| 3rd Field days (ρS 0.096) | 3rd Website and blog (ρS 0.154)
Yield monitors | 1st LinkedIn (ρS 0.178) | 1st YouTube (ρS 0.181)
| 2nd Conversation with neighbors (ρS 0.170) | 2nd Peer groups (ρS 0.163)
| 3rd Cable television (ρS 0.145) | 3rd Website and blog (ρS 0.145)
Satellite/drone imagery | 1st LinkedIn (ρS 0.253) | 1st Website and blog (ρS 0.225)
| 2nd Conferences, forums, seminars (ρS 0.246) | 2nd Twitter (ρS 0.180)
| 3rd Instagram (ρS 0.226) | 3rd YouTube (ρS 0.165)
Soil electrical conductivity map | 1st LinkedIn (ρS 0.228) | 1st Cable television (ρS 0.199)
| 2nd Instagram (ρS 0.183) | 2nd YouTube (ρS 0.163)
| 3rd Messenger (ρS 0.182) | 3rd Peer groups (ρS 0.141)
Wired or wireless sensor networks | 1st LinkedIn (ρS 0.261) | 1st Instagram (ρS 0.271)
| 2nd Instagram (ρS 0.208) | 2nd YouTube (ρS 0.231)
| 3rd Conferences, forums, seminars (ρS 0.183) | 3rd Twitter (ρS 0.209)
Electronic records/mapping for traceability | 1st LinkedIn (ρS 0.224) | 1st Website and blog (ρS 0.252)
| 2nd Instagram (ρS 0.180) | 2nd YouTube (ρS 0.190)
| 3rd Conferences, forums, seminars (ρS 0.148) | 3rd Facebook (ρS 0.158)
Sprayer control systems | 1st LinkedIn (ρS 0.221) | 1st YouTube (ρS 0.165)
| 2nd Cable television (ρS 0.189) | 2nd Website and blog (ρS 0.164)
| 3rd WhatsApp (ρS 0.151) | 3rd Retailers and extension agents (ρS 0.133)
Automatic rate control telematics | 1st LinkedIn (ρS 0.246) | 1st YouTube (ρS 0.238)
| 2nd Instagram (ρS 0.186) | 2nd Website and blog (ρS 0.204)
| 3rd Peer groups (ρS 0.135) | 3rd Facebook (ρS 0.145)
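The ρS values above are Spearman rank correlations between adoption scores and channel-influence scores. Below is a minimal standard-library sketch of that computation, with invented data; in practice one would typically use `scipy.stats.spearmanr`.

```python
# Spearman's rho = Pearson correlation of the ranks, with tied values
# sharing their average rank. Standard library only; data are invented.
def ranks(values):
    """Average ranks (1-based), ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation of two equal-length sequences."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

adoption = [1, 2, 2, 3, 4, 5]     # e.g., reported adoption level
channel_use = [1, 1, 2, 3, 3, 5]  # e.g., channel influence score
print(f"rho = {spearman(adoption, channel_use):.3f}")
```

Note that this sketch divides by zero if either variable has no rank variance; a production implementation would guard against that case.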
Mass Media | Brazil | United States
---|---|---
Website and blog | 0 | 6
Cable television | 2 | 1
Total | 2 | 7

Social Media | Brazil | United States
---|---|---
YouTube | 0 | 8
| 7 | 0
| 5 | 1
| 0 | 3
| 0 | 2
| 1 | 0
Messenger | 1 | 0
Total | 14 | 14

Interpersonal Meetings | Brazil | United States
---|---|---
Conferences, forums, seminars | 4 | 0
Conversation with neighbors | 2 | 0
Peer groups | 1 | 2
Field days | 1 | 0
Retailers and extension agents | 0 | 1
Total | 8 | 3
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Colussi, J.; Sonka, S.; Schnitkey, G.D.; Morgan, E.L.; Padula, A.D. A Comparative Study of the Influence of Communication on the Adoption of Digital Agriculture in the United States and Brazil. Agriculture 2024 , 14 , 1027. https://doi.org/10.3390/agriculture14071027
Leadership dynamics in nursing: a comparative study of paternalistic approaches in China and Pakistan
Leadership in Health Services
ISSN : 1751-1879
Article publication date: 9 July 2024
This study aims to examine the impact of nurses' paternalistic leadership style on performance, with self-efficacy as an underlying mediating mechanism, in two high-power-distance societies, China and Pakistan, drawing on social exchange theory. Both healthcare sectors have seen several behavioral advancements in recent years, and behavioral elements such as leadership styles and personality traits have become increasingly important to improving outcomes. However, leadership styles, particularly paternalistic leadership, have received little attention in this field and need to be examined along with their mediating and moderating effects.
Data were collected from public and private sector hospitals in China and Pakistan using a six-week time-lag design. First, 356 Chinese and 411 Pakistani nurses were surveyed about their perceptions of power distance, self-efficacy and paternalistic leadership. Six weeks later, their managers provided dyadic responses rating the nurses' performance. AMOS 22 was used for confirmatory factor analysis and SPSS 22 for regression analysis.
According to the study's findings, nurses in both countries perform well when led by a paternalistic leader. Furthermore, self-efficacy explains the relationship between paternalistic leaders and nurses’ performance. The moderated-mediation result also supported the importance of power distance.
This study highlights the kind of nursing leadership which is beneficial in high-power-distance societies and leads to better performance. According to this research, paternalistic leadership improves nurses’ performance in both China and Pakistan. As a result, this study will be useful in high-power-distance societies, where hospital administrators can ensure that paternalism is implemented in leadership, thereby improving nurse performance.
Safdar, S. , Faiz, S. and Muabark, N. (2024), "Leadership dynamics in nursing: a comparative study of paternalistic approaches in China and Pakistan", Leadership in Health Services , Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/LHS-03-2024-0028
Copyright © 2024, Emerald Publishing Limited
A comparative study is a method that analyzes phenomena and then puts them together to find their points of differentiation and similarity (MokhtarianPour, 2016). A comparative perspective ...
Structure of comparative research questions. There are five steps required to construct a comparative research question: (1) choose your starting phrase; (2) identify and name the dependent variable; (3) identify the groups you are interested in; (4) identify the appropriate adjoining text; and (5) write out the comparative research question. Each of these steps is discussed in turn:
The first question asks for a ready-made solution, and is not focused or researchable. The second question is a clearer comparative question, but note that it may not be practically feasible. For a smaller research project or thesis, it could be narrowed down further to focus on the effectiveness of drunk driving laws in just one or two countries.
These kinds of research questions help you learn more about the type of relationship between two study variables. Because they aim to distinctly define the connection between two variables, relationship-based research questions are also known as correlational research questions. Examples of Comparative Research Questions
Comparative research in communication and media studies is conventionally understood as the contrast among different macro-level units, such as world regions, countries, sub-national regions, social milieus, language areas and cultural thickenings, at one point or more points in time.
Research questions can be categorized into different types, depending on the type of research to be undertaken. ... Comparative questions are helpful when studying groups with dependent variables where one variable is compared with another. ... International journal of qualitative studies in education, 22(4), 431-447.
Types of research questions. Now that we've defined what a research question is, let's look at the different types of research questions that you might come across. Broadly speaking, there are (at least) four different types of research questions - descriptive, comparative, relational, and explanatory. Descriptive questions ask what is happening. In other words, they seek to describe a ...
Comparative analysis is indispensable because it helps businesses focus on meaningful data that support doing things a particular way or, conversely, fostering growth by changing tactics. Comparative research helps rule out which theories and arguments are worth pursuing versus letting go of, always by the data rather than a hunch or intuition.
What makes a study comparative is not the particular techniques employed but the theoretical orientation and the sources of data. All the tools of the social scientist, including historical analysis, fieldwork, surveys, and aggregate data analysis, can be used to achieve the goals of comparative research. So, there is plenty of room for the ...
The quantitative research design that we select subsequently determines whether we look for relationships, associations, trends or interactions. To learn how to structure (i.e., write out) each of these three types of quantitative research question (i.e., descriptive, comparative, relationship-based research questions), see the article: How to ...
To write a good compare-and-contrast paper, you must take your raw data—the similarities and differences you've observed —and make them cohere into a meaningful argument. Here are the five elements required. Frame of Reference. This is the context within which you place the two things you plan to compare and contrast; it is the umbrella ...
Research goals. Comparative communication research is a combination of substance (specific objects of investigation studied in different macro-level contexts) and method (identification of differences and similarities following established rules and using equivalent concepts).
The presence of multiple research questions in a study can complicate the design, statistical analysis, and feasibility. It's advisable to focus on a single primary research question for the study. The primary question, clearly stated at the end of a grant proposal's introduction, usually specifies the study population, intervention, and ...
A case study is a qualitative research approach that involves carrying out a detailed investigation into a research subject(s) or variable(s). In the course of a case study, the researcher gathers a range of data from multiple sources of information via different data collection methods, and over a period of time. ... Comparative Research ...
In a descriptive study, you may have one or more purely descriptive research questions. Whether your study is comparative or ex post facto, you will definitely have one or more comparative research questions pertaining to between-group differences, along with a null and alternative hypothesis pair for each comparative research question.
The steps involved in the process of developing research questions and study objectives for conducting observational comparative effectiveness research (CER) are described in this chapter. It is important to begin with identifying decisions under consideration, determining who the decisionmakers and stakeholders in the specific area of research under study are, and understanding the context in ...
There are three basic types of questions that research projects can address: Descriptive. When a study is designed primarily to describe what is going on or what exists. Public opinion polls that seek only to describe the proportion of people who hold various opinions are primarily descriptive in nature. For instance, if we want to know what ...
Comparative analysis is a method that is widely used in social science. It is a method of comparing two or more items with an idea of uncovering and discovering new ideas about them. It often compares and contrasts social structures and processes around the world to grasp general patterns. Comparative analysis tries to understand the study and ...
To calculate the minimum sample size for a comparative study on electrolytes in preterm and full-term infants, you can use a formula that takes into consideration the effect size, level of ...
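The snippet above refers to a minimum-sample-size formula based on effect size and significance level. One standard version, for comparing the means of two groups (a sketch under textbook assumptions of normality and equal variances; the sigma and delta values in the example are hypothetical, not taken from the snippet), is n = 2((z_{1-α/2} + z_{1-β})σ/δ)² per group:

```python
import math
from statistics import NormalDist

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Minimum sample size per group to detect a difference in means of
    `delta` between two groups with common standard deviation `sigma`,
    using a two-sided test at significance level `alpha`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical example: detect a 0.3-unit difference in an electrolyte
# measure, assuming a common SD of 0.5, at alpha = 0.05 and 80% power.
print(n_per_group(sigma=0.5, delta=0.3))  # 44 per group
```

Rounding up with `math.ceil` is conventional so the achieved power is at least the nominal value; for small samples a t-based correction would raise n slightly.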
How to Conduct Comparative Research. Conducting comparative research requires a few basic steps. Here is a simple explanation of how to do it: 1. Define the goal of comparative research. Before you begin, you should be clear about what you want to achieve with comparative research. Clearly define the goal and the research questions you want to ...
Additionally, since quantitative research requires a specific research question, this method can help you quickly come up with a particular comparative research question. The goal of comparative research is drawing a solution out of the similarities and differences between the focused variables. Through non-experimental or qualitative ...
The project team designed a common questionnaire including 63 items and distributed it to the whole sample during school time, from September to October 2005. Based on the results of this quantitative phase, 240 young people (24 in each country) were selected according to their different levels of internet usage, age, and gender, for individual ...
Such a descending order aligns with our earlier comparative studies between the research questions written by Chinese expert and novice writers (Liu & Gong, 2023) and echoes Dillon's (1984) view that there is an implicit hierarchy of research questions. Lower-order questions generate more fundamental knowledge than higher-order ones; thus, they ...
NIA provided frequently asked questions related to RFA-AG-25-028: Mentored Career Enhancement Awards to Build Cross-Disciplinary Knowledge and Skills for Comparative Studies of Human and Nonhuman Primate Species with Differing Life Spans (K18).
Digital agriculture has been developing rapidly over the past decade. However, studies have shown that limited ability to use these tools and a shortage of knowledge contribute to current farmer unease about digital technology. In response, this study investigated the influence of communication channels—mass media, social media, and interpersonal meetings—on farmers' adoption ...
Background: Artificial Intelligence (AI) has potential to transform healthcare including the field of infectious diseases diagnostics. This study assesses the capability of three large language models (LLMs), GPT 4, Llama 3, and Gemini 1.5 to generate differential diagnoses, comparing their outputs against those of medical experts to evaluate AI's potential in augmenting clinical decision-making.