National Institutes of Health (NIH)

NIH Clinical Research Trials and You: Guiding Principles for Ethical Research

Pursuing Potential Research Participants Protections


“When people are invited to participate in research, there is a strong belief that it should be their choice based on their understanding of what the study is about, and what the risks and benefits of the study are,” said Dr. Christine Grady, chief of the NIH Clinical Center Department of Bioethics, to Clinical Center Radio in a podcast.

Clinical research advances the understanding of science and promotes human health. However, it is important to remember the individuals who volunteer to participate in research. There are precautions researchers can take – in the planning, implementation, and follow-up of studies – to protect research participants. Ethical guidelines are established for clinical research to protect patient volunteers and to preserve the integrity of the science.

NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research:

  • Social and clinical value
  • Scientific validity
  • Fair subject selection
  • Favorable risk–benefit ratio
  • Independent review
  • Informed consent
  • Respect for potential and enrolled subjects

Social and clinical value

Every research study is designed to answer a specific question. The answer should be important enough to justify asking people to accept some risk or inconvenience for others. In other words, answers to the research question should contribute to scientific understanding of health or improve our ways of preventing, treating, or caring for people with a given disease to justify exposing participants to the risk and burden of research.

Scientific validity

A study should be designed in a way that will get an understandable answer to the important research question. This includes considering whether the question asked is answerable, whether the research methods are valid and feasible, and whether the study is designed with accepted principles, clear methods, and reliable practices. Invalid research is unethical because it is a waste of resources and exposes people to risk for no purpose.

Fair subject selection

The primary basis for recruiting participants should be the scientific goals of the study — not vulnerability, privilege, or other unrelated factors. Participants who accept the risks of research should be in a position to enjoy its benefits. Specific groups of participants (for example, women or children) should not be excluded from research opportunities without a good scientific reason or a particular susceptibility to risk.

Favorable risk–benefit ratio

Uncertainty about the degree of risks and benefits associated with a clinical research study is inherent. Research risks may be trivial or serious, transient or long-term. Risks can be physical, psychological, economic, or social. Everything should be done to minimize the risks and inconvenience to research participants, to maximize the potential benefits, and to determine that the potential benefits are proportionate to, or outweigh, the risks.

Independent review

To minimize potential conflicts of interest and make sure a study is ethically acceptable before it starts, an independent review panel should review the proposal and ask important questions, including: Are those conducting the trial sufficiently free of bias? Is the study doing all it can to protect research participants? Has the trial been ethically designed and is the risk–benefit ratio favorable? The panel also monitors a study while it is ongoing.

Informed consent

Potential participants should make their own decision about whether they want to participate or continue participating in research. This is done through a process of informed consent in which individuals (1) are accurately informed of the purpose, methods, risks, benefits, and alternatives to the research, (2) understand this information and how it relates to their own clinical situation or interests, and (3) make a voluntary decision about whether to participate.

Respect for potential and enrolled participants

Individuals should be treated with respect from the time they are approached for possible participation — even if they refuse enrollment in a study — throughout their participation and after their participation ends. This includes:

  • respecting their privacy and keeping their private information confidential
  • respecting their right to change their mind, to decide that the research does not match their interests, and to withdraw without penalty
  • informing them of new information that might emerge in the course of research, which might change their assessment of the risks and benefits of participating
  • monitoring their welfare and, if they experience adverse reactions, unexpected effects, or changes in clinical status, ensuring appropriate treatment and, when necessary, removal from the study
  • informing them about what was learned from the research

More information on these seven guiding principles and on bioethics in general is available from the NIH Clinical Center Department of Bioethics.

This page last reviewed on March 16, 2016


National Institute of Environmental Health Sciences

What Is Ethics in Research & Why Is It Important?

by David B. Resnik, J.D., Ph.D.

December 23, 2020

The ideas and opinions expressed in this essay are the author’s own and do not necessarily represent those of the NIH, NIEHS, or US government.


When most people think of ethics (or morals), they think of rules for distinguishing between right and wrong, such as the Golden Rule ("Do unto others as you would have them do unto you"), a code of professional conduct like the Hippocratic Oath ("First of all, do no harm"), a religious creed like the Ten Commandments ("Thou shalt not kill..."), or wise aphorisms like the sayings of Confucius. This is the most common way of defining "ethics": norms for conduct that distinguish between acceptable and unacceptable behavior.

Most people learn ethical norms at home, at school, in church, or in other social settings. Although most people acquire their sense of right and wrong during childhood, moral development occurs throughout life and human beings pass through different stages of growth as they mature. Ethical norms are so ubiquitous that one might be tempted to regard them as simple common sense. On the other hand, if morality were nothing more than common sense, then why are there so many ethical disputes and issues in our society?


One plausible explanation of these disagreements is that all people recognize some common ethical norms but interpret, apply, and balance them in different ways in light of their own values and life experiences. For example, two people could agree that murder is wrong but disagree about the morality of abortion because they have different understandings of what it means to be a human being.

Most societies also have legal rules that govern behavior, but ethical norms tend to be broader and more informal than laws. Although most societies use laws to enforce widely accepted moral standards, and ethical and legal rules use similar concepts, ethics and law are not the same. An action may be legal but unethical or illegal but ethical. We can also use ethical concepts and principles to criticize, evaluate, propose, or interpret laws. Indeed, in the last century, many social reformers have urged citizens to disobey laws they regarded as immoral or unjust. Peaceful civil disobedience is an ethical way of protesting laws or expressing political viewpoints.

Another way of defining "ethics" focuses on the disciplines that study standards of conduct, such as philosophy, theology, law, psychology, or sociology. For example, a "medical ethicist" is someone who studies ethical standards in medicine. One may also define ethics as a method, procedure, or perspective for deciding how to act and for analyzing complex problems and issues. For instance, in considering a complex issue like global warming, one may take an economic, ecological, political, or ethical perspective on the problem. While an economist might examine the cost and benefits of various policies related to global warming, an environmental ethicist could examine the ethical values and principles at stake.


Many different disciplines, institutions, and professions have standards for behavior that suit their particular aims and goals. These standards also help members of the discipline to coordinate their actions or activities and to establish the public's trust in the discipline. For instance, ethical standards govern conduct in medicine, law, engineering, and business. Ethical norms also serve the aims or goals of research and apply to people who conduct scientific research or other scholarly or creative activities. There is even a specialized discipline, research ethics, which studies these norms. See Glossary of Commonly Used Terms in Research Ethics and Research Ethics Timeline.

There are several reasons why it is important to adhere to ethical norms in research. First, norms promote the aims of research, such as knowledge, truth, and avoidance of error. For example, prohibitions against fabricating, falsifying, or misrepresenting research data promote the truth and minimize error.


Second, since research often involves a great deal of cooperation and coordination among many different people in different disciplines and institutions, ethical standards promote the values that are essential to collaborative work, such as trust, accountability, mutual respect, and fairness. For example, many ethical norms in research, such as guidelines for authorship, copyright and patenting policies, data sharing policies, and confidentiality rules in peer review, are designed to protect intellectual property interests while encouraging collaboration. Most researchers want to receive credit for their contributions and do not want to have their ideas stolen or disclosed prematurely.

Third, many of the ethical norms help to ensure that researchers can be held accountable to the public. For instance, federal policies on research misconduct, conflicts of interest, human subjects protections, and animal care and use are necessary in order to make sure that researchers who are funded by public money can be held accountable to the public.

Fourth, ethical norms in research also help to build public support for research. People are more likely to fund a research project if they can trust the quality and integrity of research.

Finally, many of the norms of research promote a variety of other important moral and social values, such as social responsibility, human rights, animal welfare, compliance with the law, and public health and safety. Ethical lapses in research can significantly harm human and animal subjects, students, and the public. For example, a researcher who fabricates data in a clinical trial may harm or even kill patients, and a researcher who fails to abide by regulations and guidelines relating to radiation or biological safety may jeopardize his health and safety or the health and safety of staff and students.

Codes and Policies for Research Ethics

Given the importance of ethics for the conduct of research, it should come as no surprise that many different professional associations, government agencies, and universities have adopted specific codes, rules, and policies relating to research ethics. Many government agencies have ethics rules for funded researchers.

  • National Institutes of Health (NIH)
  • National Science Foundation (NSF)
  • Food and Drug Administration (FDA)
  • Environmental Protection Agency (EPA)
  • US Department of Agriculture (USDA)
  • Singapore Statement on Research Integrity
  • American Chemical Society, The Chemist Professional’s Code of Conduct
  • Code of Ethics (American Society for Clinical Laboratory Science)
  • American Psychological Association, Ethical Principles of Psychologists and Code of Conduct
  • Statement on Professional Ethics (American Association of University Professors)
  • Nuremberg Code
  • World Medical Association's Declaration of Helsinki

Ethical Principles

The following is a rough and general summary of some ethical principles that various codes address*:

Honesty

Strive for honesty in all scientific communications. Honestly report data, results, methods and procedures, and publication status. Do not fabricate, falsify, or misrepresent data. Do not deceive colleagues, research sponsors, or the public.


Objectivity

Strive to avoid bias in experimental design, data analysis, data interpretation, peer review, personnel decisions, grant writing, expert testimony, and other aspects of research where objectivity is expected or required. Avoid or minimize bias or self-deception. Disclose personal or financial interests that may affect research.

Integrity

Keep your promises and agreements; act with sincerity; strive for consistency of thought and action.


Carefulness

Avoid careless errors and negligence; carefully and critically examine your own work and the work of your peers. Keep good records of research activities, such as data collection, research design, and correspondence with agencies or journals.

Openness

Share data, results, ideas, tools, resources. Be open to criticism and new ideas.


Transparency

Disclose methods, materials, assumptions, analyses, and other information needed to evaluate your research.


Accountability

Take responsibility for your part in research and be prepared to give an account (i.e. an explanation or justification) of what you did on a research project and why.


Intellectual Property

Honor patents, copyrights, and other forms of intellectual property. Do not use unpublished data, methods, or results without permission. Give proper acknowledgement or credit for all contributions to research. Never plagiarize.


Confidentiality

Protect confidential communications, such as papers or grants submitted for publication, personnel records, trade or military secrets, and patient records.


Responsible Publication

Publish in order to advance research and scholarship, not to advance just your own career. Avoid wasteful and duplicative publication.


Responsible Mentoring

Help to educate, mentor, and advise students. Promote their welfare and allow them to make their own decisions.


Respect for Colleagues

Respect your colleagues and treat them fairly.


Social Responsibility

Strive to promote social good and prevent or mitigate social harms through research, public education, and advocacy.


Non-Discrimination

Avoid discrimination against colleagues or students on the basis of sex, race, ethnicity, or other factors not related to scientific competence and integrity.

Competence

Maintain and improve your own professional competence and expertise through lifelong education and learning; take steps to promote competence in science as a whole.

Legality

Know and obey relevant laws and institutional and governmental policies.


Animal Care

Show proper respect and care for animals when using them in research. Do not conduct unnecessary or poorly designed animal experiments.


Human Subjects Protection

When conducting research on human subjects, minimize harms and risks and maximize benefits; respect human dignity, privacy, and autonomy; take special precautions with vulnerable populations; and strive to distribute the benefits and burdens of research fairly.

* Adapted from Shamoo A and Resnik D. 2015. Responsible Conduct of Research, 3rd ed. (New York: Oxford University Press).

Ethical Decision Making in Research

Although codes, policies, and principles are very important and useful, like any set of rules, they do not cover every situation, they often conflict, and they require interpretation. It is therefore important for researchers to learn how to interpret, assess, and apply various research rules and how to make decisions and act ethically in various situations. The vast majority of decisions involve the straightforward application of ethical rules. For example, consider the following case:

The research protocol for a study of a drug on hypertension requires the administration of the drug at different doses to 50 laboratory mice, with chemical and behavioral tests to determine toxic effects. Tom has almost finished the experiment for Dr. Q. He has only 5 mice left to test. However, he really wants to finish his work in time to go to Florida on spring break with his friends, who are leaving tonight. He has injected the drug in all 50 mice but has not completed all of the tests. He therefore decides to extrapolate from the 45 completed results to produce the 5 additional results.

Many different research ethics policies would hold that Tom has acted unethically by fabricating data. If this study were sponsored by a federal agency, such as the NIH, his actions would constitute a form of research misconduct, which the government defines as "fabrication, falsification, or plagiarism" (or FFP). Actions that nearly all researchers classify as unethical are viewed as misconduct. It is important to remember, however, that misconduct occurs only when researchers intend to deceive: honest errors related to sloppiness, poor record keeping, miscalculations, bias, self-deception, and even negligence do not constitute misconduct. Also, reasonable disagreements about research methods, procedures, and interpretations do not constitute research misconduct. Consider the following case:

Dr. T has just discovered a mathematical error in his paper that has been accepted for publication in a journal. The error does not affect the overall results of his research, but it is potentially misleading. The journal has just gone to press, so it is too late to catch the error before it appears in print. In order to avoid embarrassment, Dr. T decides to ignore the error.

Dr. T's error is not misconduct, nor is his decision to take no action to correct the error. Most researchers, as well as many different policies and codes, would say that Dr. T should tell the journal (and any coauthors) about the error and consider publishing a correction or erratum. Failing to publish a correction would be unethical because it would violate norms relating to honesty and objectivity in research.

There are many other activities that the government does not define as "misconduct" but which are still regarded by most researchers as unethical. These are sometimes referred to as "other deviations" from acceptable research practices and include:

  • Publishing the same paper in two different journals without telling the editors
  • Submitting the same paper to different journals without telling the editors
  • Not informing a collaborator of your intent to file a patent in order to make sure that you are the sole inventor
  • Including a colleague as an author on a paper in return for a favor even though the colleague did not make a serious contribution to the paper
  • Discussing with your colleagues confidential data from a paper that you are reviewing for a journal
  • Using data, ideas, or methods you learn about while reviewing a grant or a paper without permission
  • Trimming outliers from a data set without discussing your reasons in the paper
  • Using an inappropriate statistical technique in order to enhance the significance of your research
  • Bypassing the peer review process and announcing your results through a press conference without giving peers adequate information to review your work
  • Conducting a review of the literature that fails to acknowledge the contributions of other people in the field or relevant prior work
  • Stretching the truth on a grant application in order to convince reviewers that your project will make a significant contribution to the field
  • Stretching the truth on a job application or curriculum vita
  • Giving the same research project to two graduate students in order to see who can do it the fastest
  • Overworking, neglecting, or exploiting graduate or post-doctoral students
  • Failing to keep good research records
  • Failing to maintain research data for a reasonable period of time
  • Making derogatory comments and personal attacks in your review of an author's submission
  • Promising a student a better grade for sexual favors
  • Using a racist epithet in the laboratory
  • Making significant deviations from the research protocol approved by your institution's Animal Care and Use Committee or Institutional Review Board for Human Subjects Research without telling the committee or the board
  • Not reporting an adverse event in a human research experiment
  • Wasting animals in research
  • Exposing students and staff to biological risks in violation of your institution's biosafety rules
  • Sabotaging someone's work
  • Stealing supplies, books, or data
  • Rigging an experiment so you know how it will turn out
  • Making unauthorized copies of data, papers, or computer programs
  • Owning over $10,000 in stock in a company that sponsors your research and not disclosing this financial interest
  • Deliberately overestimating the clinical significance of a new drug in order to obtain economic benefits

These actions would be regarded as unethical by most scientists and some might even be illegal in some cases. Most of these would also violate different professional ethics codes or institutional policies. However, they do not fall into the narrow category of actions that the government classifies as research misconduct. Indeed, there has been considerable debate about the definition of "research misconduct" and many researchers and policy makers are not satisfied with the government's narrow definition that focuses on FFP. However, given the huge list of potential offenses that might fall into the category "other serious deviations," and the practical problems with defining and policing these other deviations, it is understandable why government officials have chosen to limit their focus.

Finally, situations frequently arise in research in which different people disagree about the proper course of action and there is no broad consensus about what should be done. In these situations, there may be good arguments on both sides of the issue and different ethical principles may conflict. These situations create difficult decisions for researchers, known as ethical or moral dilemmas. Consider the following case:

Dr. Wexford is the principal investigator of a large, epidemiological study on the health of 10,000 agricultural workers. She has an impressive dataset that includes information on demographics, environmental exposures, diet, genetics, and various disease outcomes such as cancer, Parkinson’s disease (PD), and ALS. She has just published a paper on the relationship between pesticide exposure and PD in a prestigious journal. She is planning to publish many other papers from her dataset. She receives a request from another research team that wants access to her complete dataset. They are interested in examining the relationship between pesticide exposures and skin cancer. Dr. Wexford was planning to conduct a study on this topic.

Dr. Wexford faces a difficult choice. On the one hand, the ethical norm of openness obliges her to share data with the other research team. Her funding agency may also have rules that obligate her to share data. On the other hand, if she shares data with the other team, they may publish results that she was planning to publish, thus depriving her (and her team) of recognition and priority. It seems that there are good arguments on both sides of this issue and Dr. Wexford needs to take some time to think about what she should do. One possible option is to share data, provided that the investigators sign a data use agreement. The agreement could define allowable uses of the data, publication plans, authorship, etc. Another option would be to offer to collaborate with the researchers.

The following are some steps that researchers, such as Dr. Wexford, can take to deal with ethical dilemmas in research:

What is the problem or issue?

It is always important to get a clear statement of the problem. In this case, the issue is whether to share information with the other research team.

What is the relevant information?

Many bad decisions are made as a result of poor information. To know what to do, Dr. Wexford needs to have more information concerning such matters as university or funding agency or journal policies that may apply to this situation, the team's intellectual property interests, the possibility of negotiating some kind of agreement with the other team, whether the other team also has some information it is willing to share, the impact of the potential publications, etc.

What are the different options?

People may fail to see different options due to a limited imagination, bias, ignorance, or fear. In this case, there may be other choices besides 'share' or 'don't share,' such as 'negotiate an agreement' or 'offer to collaborate with the researchers.'

How do ethical codes or policies as well as legal rules apply to these different options?

The university or funding agency may have policies on data management that apply to this case. Broader ethical rules, such as openness and respect for credit and intellectual property, may also apply to this case. Laws relating to intellectual property may be relevant.

Are there any people who can offer ethical advice?

It may be useful to seek advice from a colleague, a senior researcher, your department chair, an ethics or compliance officer, or anyone else you can trust. In this case, Dr. Wexford might want to talk to her supervisor and research team before making a decision.

After considering these questions, a person facing an ethical dilemma may decide to ask more questions, gather more information, explore different options, or consider other ethical rules. However, at some point he or she will have to make a decision and then take action. Ideally, a person who makes a decision in an ethical dilemma should be able to justify his or her decision to himself or herself, as well as colleagues, administrators, and other people who might be affected by the decision. He or she should be able to articulate reasons for his or her conduct and should consider the following questions in order to explain how he or she arrived at his or her decision:

  • Which choice will probably have the best overall consequences for science and society?
  • Which choice could stand up to further publicity and scrutiny?
  • Which choice could you not live with?
  • Think of the wisest person you know. What would he or she do in this situation?
  • Which choice would be the most just, fair, or responsible?

After considering all of these questions, one still might find it difficult to decide what to do. If this is the case, then it may be appropriate to consider other ways of making the decision, such as going with a gut feeling or intuition, seeking guidance through prayer or meditation, or even flipping a coin. Endorsing these methods in this context need not imply that ethical decisions are irrational, however. The main point is that human reasoning plays a pivotal role in ethical decision-making but there are limits to its ability to solve all ethical dilemmas in a finite amount of time.

Promoting Ethical Conduct in Science


Most academic institutions in the US require undergraduate, graduate, or postgraduate students to have some education in the responsible conduct of research (RCR). The NIH and NSF have both mandated training in research ethics for students and trainees. Many academic institutions outside of the US have also developed educational curricula in research ethics.

Those of you who are taking or have taken courses in research ethics may be wondering why you are required to have education in research ethics. You may believe that you are highly ethical and know the difference between right and wrong. You would never fabricate or falsify data or plagiarize. Indeed, you also may believe that most of your colleagues are highly ethical and that there is no ethics problem in research.

If you feel this way, relax. No one is accusing you of acting unethically. Indeed, the evidence produced so far shows that misconduct is a very rare occurrence in research, although there is considerable variation among various estimates. The rate of misconduct has been estimated to be as low as 0.01% of researchers per year (based on confirmed cases of misconduct in federally funded research) to as high as 1% of researchers per year (based on self-reports of misconduct on anonymous surveys). See Shamoo and Resnik (2015), cited above.

Clearly, it would be useful to have more data on this topic, but so far there is no evidence that science has become ethically corrupt, despite some highly publicized scandals. Even if misconduct is only a rare occurrence, it can still have a tremendous impact on science and society because it can compromise the integrity of research, erode the public’s trust in science, and waste time and resources. Will education in research ethics help reduce the rate of misconduct in science? It is too early to tell. The answer to this question depends, in part, on how one understands the causes of misconduct. There are two main theories about why researchers commit misconduct. According to the "bad apple" theory, most scientists are highly ethical. Only researchers who are morally corrupt, economically desperate, or psychologically disturbed commit misconduct. Moreover, only a fool would commit misconduct because science's peer review system and self-correcting mechanisms will eventually catch those who try to cheat the system. In any case, a course in research ethics will have little impact on "bad apples," one might argue.

According to the "stressful" or "imperfect" environment theory, misconduct occurs because various institutional pressures, incentives, and constraints encourage people to commit misconduct, such as pressures to publish or obtain grants or contracts, career ambitions, the pursuit of profit or fame, poor supervision of students and trainees, and poor oversight of researchers (see Shamoo and Resnik 2015). Moreover, defenders of the stressful environment theory point out that science's peer review system is far from perfect and that it is relatively easy to cheat the system. Erroneous or fraudulent research often enters the public record without being detected for years. Misconduct probably results from environmental and individual causes, i.e. when people who are morally weak, ignorant, or insensitive are placed in stressful or imperfect environments. In any case, a course in research ethics can be useful in helping to prevent deviations from norms even if it does not prevent misconduct.

Education in research ethics can help people get a better understanding of ethical standards, policies, and issues and improve ethical judgment and decision making. Many of the deviations that occur in research may occur because researchers simply do not know or have never thought seriously about some of the ethical norms of research. For example, some unethical authorship practices probably reflect traditions and practices that have not been questioned seriously until recently. If the director of a lab is named as an author on every paper that comes from his lab, even if he does not make a significant contribution, what could be wrong with that? That's just the way it's done, one might argue. Another example where there may be some ignorance or mistaken traditions is conflicts of interest in research.
A researcher may think that a "normal" or "traditional" financial relationship, such as accepting stock or a consulting fee from a drug company that sponsors her research, raises no serious ethical issues. Or perhaps a university administrator sees no ethical problem in taking a large gift with strings attached from a pharmaceutical company. Maybe a physician thinks that it is perfectly appropriate to receive a $300 finder’s fee for referring patients into a clinical trial.

If "deviations" from ethical conduct occur in research as a result of ignorance or a failure to reflect critically on problematic traditions, then a course in research ethics may help reduce the rate of serious deviations by improving the researcher's understanding of ethics and by sensitizing him or her to the issues.

Finally, education in research ethics should be able to help researchers grapple with the ethical dilemmas they are likely to encounter by introducing them to important concepts, tools, principles, and methods that can be useful in resolving these dilemmas. Scientists must deal with a number of different controversial topics, such as human embryonic stem cell research, cloning, genetic engineering, and research involving animal or human subjects, which require ethical reflection and deliberation.

Understanding Research Ethics

  • First Online: 22 April 2022

  • Sarah Cuschieri

As a researcher, whatever your career stage, you need to understand and practice good research ethics. Moral and ethical principles are requisite in research to ensure that no deception or harm to participants, the scientific community, or society occurs. Failure to follow such principles leads to research misconduct, in which case the researcher faces repercussions ranging from withdrawal of an article from publication to potential job loss. This chapter describes the various types of research misconduct that you should be aware of, i.e., data fabrication and falsification, plagiarism, research bias, breaches of data integrity, and researcher and funder conflicts of interest. A sound comprehension of research ethics will take you a long way in your career.


Author information

Authors and affiliations.

Department of Anatomy, Faculty of Medicine and Surgery, University of Malta, Msida, Malta

Sarah Cuschieri


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cuschieri, S. (2022). Understanding Research Ethics. In: A Roadmap to Successful Scientific Publishing. Springer, Cham. https://doi.org/10.1007/978-3-030-99295-8_2





Ensuring ethical standards and procedures for research with human beings

Research ethics govern the standards of conduct for scientific researchers. It is important to adhere to ethical principles in order to protect the dignity, rights and welfare of research participants. As such, all research involving human beings should be reviewed by an ethics committee to ensure that the appropriate ethical standards are being upheld. Discussion of the ethical principles of beneficence, justice and autonomy are central to ethical review.

WHO works with Member States and partners to promote ethical standards and appropriate systems of review for any course of research involving human subjects. Within WHO, the Research Ethics Review Committee (ERC) ensures that WHO only supports research of the highest ethical standards. The ERC reviews all research projects involving human participants supported either financially or technically by WHO. The ERC is guided in its work by the World Medical Association Declaration of Helsinki (1964), last updated in 2013, as well as the International Ethical Guidelines for Biomedical Research Involving Human Subjects (CIOMS 2016).



Related links

  • International ethical guidelines for biomedical research involving human subjects, Council for International Organizations of Medical Sciences (PDF)
  • International ethical guidelines for epidemiological studies, Council for International Organizations of Medical Sciences (PDF)
  • World Medical Association: Declaration of Helsinki
  • European Group on Ethics
  • Directive 2001/20/EC of the European Parliament and of the Council (PDF)
  • Council of Europe (Oviedo Convention - Protocol on biomedical research)
  • Nuffield Council: The ethics of research related to healthcare in developing countries


Cover Story

Five principles for research ethics

Cover your bases with these ethical strategies

By DEBORAH SMITH

Monitor Staff

January 2003, Vol 34, No. 1

Print version: page 56


Not that long ago, academicians were often cautious about airing the ethical dilemmas they faced in their research and academic work, but that environment is changing today. Psychologists in academe are more likely to seek out the advice of their colleagues on issues ranging from supervising graduate students to how to handle sensitive research data, says George Mason University psychologist June Tangney, PhD.

"There has been a real change in the last 10 years in people talking more frequently and more openly about ethical dilemmas of all sorts," she explains.

Indeed, researchers face an array of ethical requirements: They must meet professional, institutional and federal standards for conducting research with human participants, often supervise students they also teach and have to sort out authorship issues, just to name a few.

Here are five recommendations APA's Science Directorate gives to help researchers steer clear of ethical quandaries:

1. Discuss intellectual property frankly

Academe's competitive "publish-or-perish" mindset can be a recipe for trouble when it comes to who gets credit for authorship. The best way to avoid disagreements about who should get credit and in what order is to talk about these issues at the beginning of a working relationship, even though many people often feel uncomfortable about such topics.

"It's almost like talking about money," explains Tangney. "People don't want to appear to be greedy or presumptuous."

APA's Ethics Code offers some guidance: It specifies that "faculty advisors discuss publication credit with students as early as feasible and throughout the research and publication process as appropriate." When researchers and students put such understandings in writing, they have a helpful tool to continually discuss and evaluate contributions as the research progresses.

However, even the best plans can result in disputes, which often occur because people look at the same situation differently. "While authorship should reflect the contribution," says APA Ethics Office Director Stephen Behnke, JD, PhD, "we know from social science research that people often overvalue their contributions to a project. We frequently see that in authorship-type situations. In many instances, both parties genuinely believe they're right." APA's Ethics Code stipulates that psychologists take credit only for work they have actually performed or to which they have substantially contributed and that publication credit should accurately reflect the relative contributions: "Mere possession of an institutional position, such as department chair, does not justify authorship credit," says the code. "Minor contributions to the research or to the writing for publications are acknowledged appropriately, such as in footnotes or in an introductory statement."

The same rules apply to students. If they contribute substantively to the conceptualization, design, execution, analysis or interpretation of the research reported, they should be listed as authors. Contributions that are primarily technical don't warrant authorship. In the same vein, advisers should not expect ex-officio authorship on their students' work.

Matthew McGue, PhD, of the University of Minnesota, says his psychology department has instituted a procedure to avoid murky authorship issues. "We actually have a formal process here where students make proposals for anything they do on the project," he explains. The process allows students and faculty to more easily talk about research responsibility, distribution and authorship.

Psychologists should also be cognizant of situations where they have access to confidential ideas or research, such as reviewing journal manuscripts or research grants, or hearing new ideas during a presentation or informal conversation. While it's unlikely reviewers can purge all of the information in an interesting manuscript from their thinking, it's still unethical to take those ideas without giving credit to the originator.

"If you are a grant reviewer or a journal manuscript reviewer [who] sees someone's research [that] hasn't been published yet, you owe that person a duty of confidentiality and anonymity," says Gerald P. Koocher, PhD, editor of the journal Ethics and Behavior and co-author of "Ethics in Psychology: Professional Standards and Cases" (Oxford University Press, 1998).

Researchers also need to meet their ethical obligations once their research is published: If authors learn of errors that change the interpretation of research findings, they are ethically obligated to promptly correct the errors in a correction, retraction, erratum or by other means.

To be able to answer questions about study authenticity and allow others to reanalyze the results, authors should archive primary data and accompanying records for at least five years, advises McGue. "Store all your data. Don't destroy it," he says. "Because if someone charges that you did something wrong, you can go back."

"It seems simple, but this can be a tricky area," says Susan Knapp, APA's deputy publisher. "The APA Publication Manual Section 8.05 has some general advice on what to retain and suggestions about things to consider in sharing data."

The APA Ethics Code requires psychologists to release their data to others who want to verify their conclusions, provided that participants' confidentiality can be protected and as long as legal rights concerning proprietary data don't preclude their release. However, the code also notes that psychologists who request data in these circumstances can only use the shared data for reanalysis; for any other use, they must obtain a prior written agreement.

2. Be conscious of multiple roles

APA's Ethics Code says psychologists should avoid relationships that could reasonably impair their professional performance or could exploit or harm others. But it also notes that many kinds of multiple relationships aren't unethical--as long as they're not reasonably expected to have adverse effects.

That notwithstanding, psychologists should think carefully before entering into multiple relationships with any person or group, such as recruiting students or clients as participants in research studies or investigating the effectiveness of a product of a company whose stock they own.

For example, when recruiting students from your Psychology 101 course to participate in an experiment, be sure to make clear that participation is voluntary. If participation is a course requirement, be sure to note that in the class syllabus, and ensure that participation has educative value by, for instance, providing a thorough debriefing to enhance students' understanding of the study. The 2002 Ethics Code also mandates in Standard 8.04b that students be given equitable alternatives to participating in research.

Perhaps one of the most common multiple roles for researchers is being both a mentor and lab supervisor to students they also teach in class. Psychologists need to be especially cautious that they don't abuse the power differential between themselves and students, say experts. They shouldn't, for example, use their clout as professors to coerce students into taking on additional research duties.

By outlining the nature and structure of the supervisory relationship before supervision or mentoring begins, both parties can avoid misunderstandings, says George Mason University's Tangney. It's helpful to create a written agreement that includes both parties' responsibilities as well as authorship considerations, intensity of the supervision and other key aspects of the job.

"While that's the ideal situation, in practice we do a lot less of that than we ought to," she notes. "Part of it is not having foresight up front of how a project or research study is going to unfold."

That's why experts also recommend that supervisors set up timely and specific methods to give students feedback and keep a record of the supervision, including meeting times, issues discussed and duties assigned.

If psychologists do find that they are in potentially harmful multiple relationships, they are ethically mandated to take steps to resolve them in the best interest of the person or group while complying with the Ethics Code.

3. Follow informed-consent rules

When done properly, the consent process ensures that individuals are voluntarily participating in the research with full knowledge of relevant risks and benefits.

"The federal standard is that the person must have all of the information that might reasonably influence their willingness to participate in a form that they can understand and comprehend," says Koocher, dean of Simmons College's School for Health Studies.

APA's Ethics Code mandates that psychologists who conduct research should inform participants about:

The purpose of the research, expected duration and procedures.

Participants' rights to decline to participate and to withdraw from the research once it has started, as well as the anticipated consequences of doing so.

Reasonably foreseeable factors that may influence their willingness to participate, such as potential risks, discomfort or adverse effects.

Any prospective research benefits.

Limits of confidentiality, such as data coding, disposal, sharing and archiving, and when confidentiality must be broken.

Incentives for participation.

Who participants can contact with questions.

Experts also suggest covering the likelihood, magnitude and duration of harm or benefit of participation, emphasizing that their involvement is voluntary and discussing treatment alternatives, if relevant to the research.

Keep in mind that the Ethics Code includes specific mandates for researchers who conduct experimental treatment research. Specifically, they must inform individuals about the experimental nature of the treatment, services that will or will not be available to the control groups, how participants will be assigned to treatments and control groups, available treatment alternatives and compensation or monetary costs of participation.

If research participants or clients are not competent to evaluate the risks and benefits of participation themselves--for example, minors or people with cognitive disabilities--then the person who's giving permission must have access to that same information, says Koocher.

Remember that a signed consent form doesn't mean the informing process can be glossed over, say ethics experts. In fact, the APA Ethics Code says psychologists can skip informed consent in two instances only: When permitted by law or federal or institutional regulations, or when the research would not reasonably be expected to distress or harm participants and involves one of the following:

The study of normal educational practices, curricula or classroom management methods conducted in educational settings.

Anonymous questionnaires, naturalistic observations or archival research for which disclosure of responses would not place participants at risk of criminal or civil liability or damage their financial standing, employability or reputation, and for which confidentiality is protected.

The study of factors related to job or organization effectiveness conducted in organizational settings for which there is no risk to participants' employability, and confidentiality is protected.

If psychologists are precluded from obtaining full consent at the beginning--for example, if the protocol includes deception, recording spontaneous behavior or the use of a confederate--they should be sure to offer a full debriefing after data collection and provide people with an opportunity to reiterate their consent, advise experts.

The code also says psychologists should make reasonable efforts to avoid offering "excessive or inappropriate financial or other inducements for research participation when such inducements are likely to coerce participation."

4. Respect confidentiality and privacy

Upholding individuals' rights to confidentiality and privacy is a central tenet of every psychologist's work. However, many privacy issues are idiosyncratic to the research population, writes Susan Folkman, PhD, in "Ethics in Research with Human Participants" (APA, 2000). For instance, researchers need to devise ways to ask whether participants are willing to talk about sensitive topics without putting them in awkward situations, say experts. That could mean they provide a set of increasingly detailed interview questions so that participants can stop if they feel uncomfortable.

And because research participants have the freedom to choose how much information about themselves they will reveal and under what circumstances, psychologists should be careful when recruiting participants for a study, says Sangeeta Panicker, PhD, director of the APA Science Directorate's Research Ethics Office. For example, it's inappropriate to obtain contact information of members of a support group to solicit their participation in research. However, you could give your colleague who facilitates the group a letter to distribute that explains your research study and provides a way for individuals to contact you, if they're interested.

Other steps researchers should take include:

Discuss the limits of confidentiality. Give participants information about how their data will be used, what will be done with case materials, photos and audio and video recordings, and secure their consent.

Know federal and state law. Know the ins and outs of state and federal law that might apply to your research. For instance, the Goals 2000: Education Act of 1994 prohibits asking children about religion, sex or family life without parental permission.

Another example is that, while most states only require licensed psychologists to comply with mandatory reporting laws, some laws also require researchers to report abuse and neglect. That's why it's important for researchers to plan for situations in which they may learn of such reportable offenses. Generally, research psychologists can consult with a clinician or their institution's legal department to decide the best course of action.

Take practical security measures. Be sure confidential records are stored in a secure area with limited access, and consider stripping them of identifying information, if feasible. Also, be aware of situations where confidentiality could inadvertently be breached, such as having confidential conversations in a room that's not soundproof or putting participants' names on bills paid by accounting departments.

Think about data sharing before research begins. If researchers plan to share their data with others, they should note that in the consent process, specifying how they will be shared and whether data will be anonymous. For example, researchers could have difficulty sharing sensitive data they've collected in a study of adults with serious mental illnesses because they failed to ask participants for permission to share the data. Or developmental data collected on videotape may be a valuable resource for sharing, but unless a researcher asked permission back then to share videotapes, it would be unethical to do so. When sharing, psychologists should use established techniques when possible to protect confidentiality, such as coding data to hide identities. "But be aware that it may be almost impossible to entirely cloak identity, especially if your data include video or audio recordings or can be linked to larger databases," says Merry Bullock, PhD, associate executive director in APA's Science Directorate.
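As a concrete illustration of "coding data to hide identities," here is a minimal Python sketch of one common approach: replacing names with pseudonymous codes derived from a secret salted hash, so records can still be linked across files without exposing who they belong to. All names, field names, and the 12-character code length are invented for illustration; real projects should follow the de-identification procedures approved by their IRB and institution.

```python
import hashlib
import secrets

def make_pseudonymizer(salt=None):
    """Return a function that maps an identifier to a stable pseudonym.

    The salt must be kept secret and stored separately from the shared
    data; otherwise identities could be recovered by hashing guessed names.
    """
    salt = salt if salt is not None else secrets.token_bytes(16)

    def pseudonymize(identifier):
        digest = hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()
        return digest[:12]  # short, stable code for an id column

    return pseudonymize

# Replace names with codes before sharing; the same name always maps to
# the same code, so the two records below stay linked.
pseudonymize = make_pseudonymizer()
records = [{"name": "Jane Doe", "score": 42}, {"name": "Jane Doe", "score": 37}]
shared = [{"participant_id": pseudonymize(r["name"]), "score": r["score"]}
          for r in records]

assert shared[0]["participant_id"] == shared[1]["participant_id"]
assert "name" not in shared[0]
```

Note that coding alone is rarely enough when the remaining fields are themselves identifying, which is why the text warns that video, audio, or linkable data may be impossible to fully cloak.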

Understand the limits of the Internet. Since Web technology is constantly evolving, psychologists need to be technologically savvy to conduct research online and cautious when exchanging confidential information electronically. If you're not an Internet whiz, get the help of someone who is. Otherwise, it may be possible for others to tap into data that you thought was properly protected.

5. Tap into ethics resources

One of the best ways researchers can avoid and resolve ethical dilemmas is to know both what their ethical obligations are and what resources are available to them.

"Researchers can help themselves make ethical issues salient by reminding themselves of the basic underpinnings of research and professional ethics," says Bullock. Those basics include:

The Belmont Report. Released by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in 1979, the report provided the ethical framework for ensuing human participant research regulations and still serves as the basis for human participant protection legislation (see Further Reading).

APA's Ethics Code , which offers general principles and specific guidance for research activities.

Moreover, despite the sometimes tense relationship researchers can have with their institutional review boards (IRBs), these groups can often help researchers think about how to address potential dilemmas before projects begin, says Panicker. But psychologists must first give their IRBs the information they need to properly understand a research proposal.

"Be sure to provide the IRB with detailed and comprehensive information about the study, such as the consent process, how participants will be recruited and how confidential information will be protected," says Bullock. "The more information you give your IRB, the better educated its members will become about behavioral research, and the easier it will be for them to facilitate your research."

As cliché as it may be, says Panicker, thinking positively about your interactions with an IRB can help smooth the process for both researchers and the IRBs reviewing their work.

Further reading

American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57 (12).

Sales, B.D., & Folkman, S. (Eds.). (2000). Ethics in research with human participants . Washington, DC: American Psychological Association.

APA's Research Ethics Office in the Science Directorate; e-mail; Web site: APA Science.

The National Institutes of Health (NIH) offers educational materials on human subjects .

NIH Bioethics Resources Web site .

The Department of Health and Human Services' (DHHS) Office of Research Integrity Web site .

DHHS Office of Human Research Protections Web site .

The 1979 Belmont Report on protecting human subjects .

Association for the Accreditation of Human Research Protection Programs Web site: www.aahrpp.org .

Related Articles

  • Ethics in research with animals

Letters to the Editor

  • Send us a letter

ETHICS AND VALIDITY AS CORE ISSUES IN QUALITATIVE RESEARCH: A REVIEW PAPER

  • January 2020
  • Pakistan Journal of Educational Research 3(2)

Zarina Waheed at Sardar Bahadur Khan Women's University Quetta




Reliability vs. Validity in Research | Difference, Types and Examples

Published on July 3, 2019 by Fiona Middleton . Revised on June 22, 2023.

Reliability and validity are concepts used to evaluate the quality of research. They indicate how well a method, technique, or test measures something. Reliability is about the consistency of a measure, and validity is about the accuracy of a measure.

It’s important to consider reliability and validity when you are creating your research design , planning your methods, and writing up your results, especially in quantitative research . Failing to do so can lead to several types of research bias and seriously affect your work.

Reliability vs validity
Reliability Validity
What does it tell you? The extent to which the results can be reproduced when the research is repeated under the same conditions. The extent to which the results really measure what they are supposed to measure.
How is it assessed? By checking the consistency of results across time, across different observers, and across parts of the test itself. By checking how well the results correspond to established theories and other measures of the same concept.
How do they relate? A reliable measurement is not always valid: the results might be reproducible, but they're not necessarily correct. A valid measurement is generally reliable: if a test produces accurate results, they should be reproducible.

Understanding reliability vs validity

Reliability and validity are closely related, but they mean different things. A measurement can be reliable without being valid. However, if a measurement is valid, it is usually also reliable.

What is reliability?

Reliability refers to how consistently a method measures something. If the same result can be consistently achieved by using the same methods under the same circumstances, the measurement is considered reliable.

What is validity?

Validity refers to how accurately a method measures what it is intended to measure. If research has high validity, that means it produces results that correspond to real properties, characteristics, and variations in the physical or social world.

High reliability is one indicator that a measurement is valid. If a method is not reliable, it probably isn’t valid.

For example, suppose you measure the temperature of a liquid sample several times under identical conditions. If the thermometer shows different temperatures each time, even though you have carefully controlled conditions to ensure the sample's temperature stays the same, the thermometer is probably malfunctioning, and therefore its measurements are not valid.

However, reliability on its own is not enough to ensure validity. Even if a test is reliable, it may not accurately reflect the real situation.
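The thermometer example can be made concrete with a short simulation: a miscalibrated instrument whose repeated readings are tightly clustered (reliable) but systematically offset from the true value (not valid). The true temperature, offset, and noise level below are invented for illustration.

```python
import random
import statistics

random.seed(0)
true_temp = 37.0  # the sample's actual temperature, held constant

# A miscalibrated thermometer: tiny random noise (very consistent)
# but a systematic +2 degree offset (systematically wrong).
readings = [true_temp + 2.0 + random.gauss(0, 0.05) for _ in range(20)]

spread = statistics.stdev(readings)           # reliability: spread of repeats
bias = statistics.mean(readings) - true_temp  # validity: distance from truth

print(f"spread={spread:.3f}, bias={bias:.2f}")
assert spread < 0.2   # readings agree with each other -> reliable
assert abs(bias) > 1  # readings are far from the truth -> not valid
```

The assertions capture the point of this section: low spread alone says nothing about whether the measurement is centered on the real value.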

Validity is harder to assess than reliability, but it is even more important. To obtain useful results, the methods you use to collect data must be valid: the research must be measuring what it claims to measure. This ensures that your discussion of the data and the conclusions you draw are also valid.

How are reliability and validity assessed?

Reliability can be estimated by comparing different versions of the same measurement. Validity is harder to assess, but it can be estimated by comparing the results to other relevant data or theory. Methods of estimating reliability and validity are usually split up into different types.

Types of reliability

Different types of reliability can be estimated through various statistical methods.

Test-retest reliability assesses the consistency of a measure across time: do you get the same results when you repeat the measurement? Example: A group of participants complete a questionnaire designed to measure personality traits. If they repeat the questionnaire days, weeks or months apart and give the same answers, this indicates high test-retest reliability.

Interrater reliability assesses the consistency of a measure across raters: do you get the same results when different people conduct the same measurement? Example: Based on an assessment criteria checklist, five examiners submit substantially different results for the same student project. This indicates that the assessment checklist has low interrater reliability (for example, because the criteria are too subjective).

Internal consistency assesses the consistency of the measurement itself: do you get the same results from different parts of a test that are designed to measure the same thing? Example: You design a questionnaire to measure self-esteem. If you randomly split the results into two halves, there should be a strong correlation between the two sets of results. If the two results are very different, this indicates low internal consistency.

Types of validity

The validity of a measurement can be estimated based on three main types of evidence. Each type can be evaluated through expert judgement or statistical methods.

Construct validity assesses the adherence of a measure to existing theory and knowledge of the concept being measured. Example: A self-esteem questionnaire could be assessed by measuring other traits known or assumed to be related to the concept of self-esteem (such as social skills and optimism). Strong correlation between the scores for self-esteem and associated traits would indicate high construct validity.

Content validity assesses the extent to which the measurement covers all aspects of the concept being measured. Example: A test that aims to measure a class of students’ level of Spanish contains reading, writing and speaking components, but no listening component. Experts agree that listening comprehension is an essential aspect of language ability, so the test lacks content validity for measuring the overall level of ability in Spanish.

Criterion validity assesses the extent to which the result of a measure corresponds to other valid measures of the same concept. Example: A survey is conducted to measure the political opinions of voters in a region. If the results accurately predict the later outcome of an election in that region, this indicates that the survey has high criterion validity.

To assess the validity of a cause-and-effect relationship, you also need to consider internal validity (whether the design of the experiment rules out alternative explanations) and external validity (the generalizability of the results).

The reliability and validity of your results depend on creating a strong research design, choosing appropriate methods and samples, and conducting the research carefully and consistently.

Ensuring validity

If you use scores or ratings to measure variations in something (such as psychological traits, levels of ability or physical properties), it’s important that your results reflect the real variations as accurately as possible. Validity should be considered in the very earliest stages of your research, when you decide how you will collect your data.

  • Choose appropriate methods of measurement

Ensure that your method and measurement technique are high quality and targeted to measure exactly what you want to know. They should be thoroughly researched and based on existing knowledge.

For example, to collect data on a personality trait, you could use a standardized questionnaire that is considered reliable and valid. If you develop your own questionnaire, it should be based on established theory or findings of previous studies, and the questions should be carefully and precisely worded.

  • Use appropriate sampling methods to select your subjects

To produce valid and generalizable results, clearly define the population you are researching (e.g., people from a specific age range, geographical location, or profession). Ensure that you have enough participants and that they are representative of the population. Failing to do so can lead to sampling bias and selection bias.

Ensuring reliability

Reliability should be considered throughout the data collection process. When you use a tool or technique to collect data, it’s important that the results are precise, stable, and reproducible.

  • Apply your methods consistently

Plan your method carefully to make sure you carry out the same steps in the same way for each measurement. This is especially important if multiple researchers are involved.

For example, if you are conducting interviews or observations, clearly define how specific behaviors or responses will be counted, and make sure questions are phrased the same way each time. Failing to do so can lead to errors such as omitted variable bias or information bias.

  • Standardize the conditions of your research

When you collect your data, keep the circumstances as consistent as possible to reduce the influence of external factors that might create variation in the results.

For example, in an experimental setup, make sure all participants are given the same information and tested under the same conditions, preferably in a properly randomized setting. Failing to do so can lead to a placebo effect, Hawthorne effect, or other demand characteristics. If participants can guess the aims or objectives of a study, they may attempt to act in more socially desirable ways.

It’s appropriate to discuss reliability and validity in various sections of your thesis, dissertation, or research paper. Showing that you have taken them into account in planning your research and interpreting the results makes your work more credible and trustworthy.

Reliability and validity in a thesis
Section Discuss
What have other researchers done to devise and improve methods that are reliable and valid?
How did you plan your research to ensure reliability and validity of the measures used? This includes the chosen sample set and size, sample preparation, external conditions and measuring techniques.
If you calculate reliability and validity, state these values alongside your main results.
This is the moment to talk about how reliable and valid your results actually were. Were they consistent, and did they reflect true values? If not, why not?
If reliability and validity were a big problem for your findings, it might be helpful to mention this here.


Cite this Scribbr article


Middleton, F. (2023, June 22). Reliability vs. Validity in Research | Difference, Types and Examples. Scribbr. Retrieved September 5, 2024, from https://www.scribbr.com/methodology/reliability-vs-validity/



5.2 Reliability and Validity of Measurement

Learning Objectives

  • Define reliability, including the different types and how they are assessed.
  • Define validity, including the different types and how they are assessed.
  • Describe the kinds of evidence that would be relevant to assessing the reliability and validity of a particular measure.

Again, measurement involves assigning scores to individuals so that they represent some characteristic of the individuals. But how do researchers know that the scores actually represent the characteristic, especially when it is a construct like intelligence, self-esteem, depression, or working memory capacity? The answer is that they conduct research using the measure to confirm that the scores make sense based on their understanding of the construct being measured. This is an extremely important point. Psychologists do not simply assume that their measures work. Instead, they collect data to demonstrate that they work. If their research does not demonstrate that a measure works, they stop using it.

As an informal example, imagine that you have been dieting for a month. Your clothes seem to be fitting more loosely, and several friends have asked if you have lost weight. If at this point your bathroom scale indicated that you had lost 10 pounds, this would make sense and you would continue to use the scale. But if it indicated that you had gained 10 pounds, you would rightly conclude that it was broken and either fix it or get rid of it. In evaluating a measurement method, psychologists consider two general dimensions: reliability and validity.

Reliability

Reliability refers to the consistency of a measure. Psychologists consider three types of consistency: over time (test-retest reliability), across items (internal consistency), and across different researchers (interrater reliability).

Test-Retest Reliability

When researchers measure a construct that they assume to be consistent across time, then the scores they obtain should also be consistent across time. Test-retest reliability is the extent to which this is actually the case. For example, intelligence is generally thought to be consistent across time. A person who is highly intelligent today will be highly intelligent next week. This means that any good measure of intelligence should produce roughly the same scores for this individual next week as it does today. Clearly, a measure that produces highly inconsistent scores over time cannot be a very good measure of a construct that is supposed to be consistent.

Assessing test-retest reliability requires using the measure on a group of people at one time, using it again on the same group of people at a later time, and then looking at the test-retest correlation between the two sets of scores. This is typically done by graphing the data in a scatterplot and computing Pearson’s r. Figure 5.3 shows the correlation between two sets of scores of several college students on the Rosenberg Self-Esteem Scale, given two times a week apart. Pearson’s r for these data is +.95. In general, a test-retest correlation of +.80 or greater is considered to indicate good reliability.
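That computation is straightforward to carry out. A minimal sketch in Python, using made-up scores for ten students (the data are purely illustrative, not from the Rosenberg scale):

```python
import numpy as np

# Hypothetical self-esteem scores for the same ten students, one week apart.
week1 = np.array([22, 18, 25, 30, 15, 27, 20, 24, 28, 17])
week2 = np.array([21, 19, 26, 29, 16, 26, 21, 25, 27, 18])

# Pearson's r between the two administrations; values of +.80 or greater
# are conventionally taken to indicate good test-retest reliability.
r = np.corrcoef(week1, week2)[0, 1]
print(f"test-retest r = {r:.2f}")
```

Because each person's second score differs from the first by at most a point, the correlation here comes out very high.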

Figure 5.3 Test-Retest Correlation Between Two Sets of Scores of Several College Students on the Rosenberg Self-Esteem Scale, Given Two Times a Week Apart


Again, high test-retest correlations make sense when the construct being measured is assumed to be consistent over time, which is the case for intelligence, self-esteem, and the Big Five personality dimensions. But other constructs are not assumed to be stable over time. The very nature of mood, for example, is that it changes. So a measure of mood that produced a low test-retest correlation over a period of a month would not be a cause for concern.

Internal Consistency

A second kind of reliability is internal consistency , which is the consistency of people’s responses across the items on a multiple-item measure. In general, all the items on such measures are supposed to reflect the same underlying construct, so people’s scores on those items should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a person of worth should tend to agree that they have a number of good qualities. If people’s responses to the different items are not correlated with each other, then it would no longer make sense to claim that they are all measuring the same underlying construct. This is as true for behavioral and physiological measures as for self-report measures. For example, people might make a series of bets in a simulated game of roulette as a measure of their level of risk seeking. This measure would be internally consistent to the extent that individual participants’ bets were consistently high or low across trials.

Like test-retest reliability, internal consistency can only be assessed by collecting and analyzing data. One approach is to look at a split-half correlation . This involves splitting the items into two sets, such as the first and second halves of the items or the even- and odd-numbered items. Then a score is computed for each set of items, and the relationship between the two sets of scores is examined. For example, Figure 5.4 shows the split-half correlation between several college students’ scores on the even-numbered items and their scores on the odd-numbered items of the Rosenberg Self-Esteem Scale. Pearson’s r for these data is +.88. A split-half correlation of +.80 or greater is generally considered good internal consistency.
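The even/odd split described above can be sketched as follows, assuming responses are stored as a respondents-by-items array (the function name and the data are made up for illustration):

```python
import numpy as np

def split_half_correlation(scores):
    """Correlate summed odd-item totals with summed even-item totals."""
    scores = np.asarray(scores, dtype=float)
    odd_total = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
    even_total = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...
    return np.corrcoef(odd_total, even_total)[0, 1]

# Hypothetical responses of five people to a four-item scale (1-4 ratings).
responses = [
    [4, 4, 3, 4],
    [2, 1, 2, 2],
    [3, 3, 3, 2],
    [1, 2, 1, 1],
    [4, 3, 4, 4],
]
print(f"split-half r = {split_half_correlation(responses):.2f}")
```

Because each person rates all four items similarly, the two half-scores track each other closely and the correlation is high.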

Figure 5.4 Split-Half Correlation Between Several College Students’ Scores on the Even-Numbered Items and Their Scores on the Odd-Numbered Items of the Rosenberg Self-Esteem Scale


Perhaps the most common measure of internal consistency used by researchers in psychology is a statistic called Cronbach’s α (the Greek letter alpha). Conceptually, α is the mean of all possible split-half correlations for a set of items. For example, there are 252 ways to split a set of 10 items into two sets of five. Cronbach’s α would be the mean of the 252 split-half correlations. Note that this is not how α is actually computed, but it is a correct way of interpreting the meaning of this statistic. Again, a value of +.80 or greater is generally taken to indicate good internal consistency.
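In practice, α is computed not by averaging split-half correlations but with a standard formula based on item variances and the variance of respondents’ total scores. A sketch in Python (the formula is the standard one; the data are made up for illustration):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of each person's total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of five people to a four-item scale.
responses = [
    [4, 4, 3, 4],
    [2, 1, 2, 2],
    [3, 3, 3, 2],
    [1, 2, 1, 1],
    [4, 3, 4, 4],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

As a sanity check, a scale whose items are perfectly correlated yields α = 1, while uncorrelated items drive α toward zero.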

Interrater Reliability

Many behavioral measures involve significant judgment on the part of an observer or a rater. Interrater reliability is the extent to which different observers are consistent in their judgments. For example, if you were interested in measuring college students’ social skills, you could make video recordings of them as they interacted with another student whom they are meeting for the first time. Then you could have two or more observers watch the videos and rate each student’s level of social skills. To the extent that each participant does in fact have some level of social skills that can be detected by an attentive observer, different observers’ ratings should be highly correlated with each other. If they were not, then those ratings could not be an accurate representation of participants’ social skills. Interrater reliability is often assessed using Cronbach’s α when the judgments are quantitative or an analogous statistic called Cohen’s κ (the Greek letter kappa) when they are categorical.
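Cohen’s κ adjusts the raw proportion of agreement for the agreement that two raters would reach by chance alone. A minimal sketch for two raters, with made-up category labels and data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Chance agreement: probability both raters independently pick the same category.
    chance = sum(counts1[c] * counts2.get(c, 0) for c in counts1) / n ** 2
    return (observed - chance) / (1 - chance)

# Two observers categorize the same ten behaviors as "aggressive" or "assertive".
rater_a = ["agg", "agg", "ast", "ast", "agg", "ast", "ast", "agg", "ast", "ast"]
rater_b = ["agg", "agg", "ast", "agg", "agg", "ast", "ast", "agg", "ast", "ast"]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```

Here the raters agree on 9 of 10 judgments, but because half that agreement would be expected by chance, κ lands below the raw agreement rate. κ = 1 indicates perfect agreement and κ = 0 indicates agreement no better than chance.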

Validity

Validity is the extent to which the scores from a measure represent the variable they are intended to. But how do researchers make this judgment? We have already considered one factor that they take into account—reliability. When a measure has good test-retest reliability and internal consistency, researchers should be more confident that the scores represent what they are supposed to. There has to be more to it, however, because a measure can be extremely reliable but have no validity whatsoever. As an absurd example, imagine someone who believes that people’s index finger length reflects their self-esteem and therefore tries to measure self-esteem by holding a ruler up to people’s index fingers. Although this measure would have extremely good test-retest reliability, it would have absolutely no validity. The fact that one person’s index finger is a centimeter longer than another’s would indicate nothing about which one had higher self-esteem.

Textbook presentations of validity usually divide it into several distinct “types.” But a good way to interpret these types is that they are other kinds of evidence—in addition to reliability—that should be taken into account when judging the validity of a measure. Here we consider four basic kinds: face validity, content validity, criterion validity, and discriminant validity.

Face Validity

Face validity is the extent to which a measurement method appears “on its face” to measure the construct of interest. Most people would expect a self-esteem questionnaire to include items about whether they see themselves as a person of worth and whether they think they have good qualities. So a questionnaire that included these kinds of items would have good face validity. The finger-length method of measuring self-esteem, on the other hand, seems to have nothing to do with self-esteem and therefore has poor face validity. Although face validity can be assessed quantitatively—for example, by having a large sample of people rate a measure in terms of whether it appears to measure what it is intended to—it is usually assessed informally.

Face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed to. One reason is that it is based on people’s intuitions about human behavior, which are frequently wrong. It is also the case that many established measures in psychology work quite well despite lacking face validity. The Minnesota Multiphasic Personality Inventory (MMPI) measures many personality characteristics and disorders by having people decide whether each of 567 different statements applies to them—where many of the statements do not have any obvious relationship to the construct that they measure. Another example is the Implicit Association Test, which measures prejudice in a way that is nonintuitive to most people (see Note 5.31 “How Prejudiced Are You?”).

How Prejudiced Are You?

The Implicit Association Test (IAT) is used to measure people’s attitudes toward various social groups. The IAT is a behavioral measure designed to reveal negative attitudes that people might not admit to on a self-report measure. It focuses on how quickly people are able to categorize words and images representing two contrasting groups (e.g., gay and straight) along with other positive and negative stimuli (e.g., the words “wonderful” or “nasty”). The IAT has been used in dozens of published research studies, and there is strong evidence for both its reliability and its validity (Nosek, Greenwald, & Banaji, 2006). You can learn more about the IAT—and take several of them for yourself—at the following website: https://implicit.harvard.edu/implicit .

Content Validity

Content validity is the extent to which a measure “covers” the construct of interest. For example, if a researcher conceptually defines test anxiety as involving both sympathetic nervous system activation (leading to nervous feelings) and negative thoughts, then his measure of test anxiety should include items about both nervous feelings and negative thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that he or she thinks positive thoughts about exercising, feels good about exercising, and actually exercises. So to have good content validity, a measure of people’s attitudes toward exercise would have to reflect all three of these aspects. Like face validity, content validity is not usually assessed quantitatively. Instead, it is assessed by carefully checking the measurement method against the conceptual definition of the construct.

Criterion Validity

Criterion validity is the extent to which people’s scores on a measure are correlated with other variables (known as criteria ) that one would expect them to be correlated with. For example, people’s scores on a new measure of test anxiety should be negatively correlated with their performance on an important school exam. If it were found that people’s scores were in fact negatively correlated with their exam performance, then this would be a piece of evidence that these scores really represent people’s test anxiety. But if it were found that people scored equally well on the exam regardless of their test anxiety scores, then this would cast doubt on the validity of the measure.

A criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them. For example, one would expect test anxiety scores to be negatively correlated with exam performance and course grades and positively correlated with general anxiety and with blood pressure during an exam. Or imagine that a researcher develops a new measure of physical risk taking. People’s scores on this measure should be correlated with their participation in “extreme” activities such as snowboarding and rock climbing, the number of speeding tickets they have received, and even the number of broken bones they have had over the years. Criteria can also include other measures of the same construct. For example, one would expect new measures of test anxiety or physical risk taking to be positively correlated with existing measures of the same constructs. So the use of converging operations is one way to examine criterion validity.

Assessing criterion validity requires collecting data using the measure. Researchers John Cacioppo and Richard Petty did this when they created their self-report Need for Cognition Scale to measure how much people value and engage in thinking (Cacioppo & Petty, 1982). In a series of studies, they showed that college faculty scored higher than assembly-line workers, that people’s scores were positively correlated with their scores on a standardized academic achievement test, and that their scores were negatively correlated with their scores on a measure of dogmatism (which represents a tendency toward obedience). In the years since it was created, the Need for Cognition Scale has been used in literally hundreds of studies and has been shown to be correlated with a wide variety of other variables, including the effectiveness of an advertisement, interest in politics, and juror decisions (Petty, Briñol, Loersch, & McCaslin, 2009).

Discriminant Validity

Discriminant validity is the extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct. For example, self-esteem is a general attitude toward the self that is fairly stable over time. It is not the same as mood, which is how good or bad one happens to be feeling right now. So people’s scores on a new measure of self-esteem should not be very highly correlated with their moods. If the new measure of self-esteem were highly correlated with a measure of mood, it could be argued that the new measure is not really measuring self-esteem; it is measuring mood instead.

When they created the Need for Cognition Scale, Cacioppo and Petty also provided evidence of discriminant validity by showing that people’s scores were not correlated with certain other variables. For example, they found only a weak correlation between people’s need for cognition and a measure of their cognitive style—the extent to which they tend to think analytically by breaking ideas into smaller parts or holistically in terms of “the big picture.” They also found no correlation between people’s need for cognition and measures of their test anxiety and their tendency to respond in socially desirable ways. All these low correlations provide evidence that the measure is reflecting a conceptually distinct construct.

Key Takeaways

  • Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work. If they cannot show that they work, they stop using them.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (interrater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
  • The reliability and validity of a measure is not established by any single study but by the pattern of results across multiple studies. The assessment of reliability and validity is an ongoing process.
Exercises

  • Practice: Ask several friends to complete the Rosenberg Self-Esteem Scale. Then assess its internal consistency by making a scatterplot to show the split-half correlation (even- vs. odd-numbered items). Compute Pearson’s r too if you know how.
  • Discussion: Think back to the last college exam you took and think of the exam as a psychological measure. What construct do you think it was intended to measure? Comment on its face and content validity. What data could you collect to assess its reliability, criterion validity, and discriminant validity?
  • Practice: Take an Implicit Association Test and then list as many ways to assess its criterion validity as you can think of.

Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42 , 116–131.

Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2006). The Implicit Association Test at age 7: A methodological and conceptual review. In J. A. Bargh (Ed.), Social psychology and the unconscious: The automaticity of higher mental processes (pp. 265–292). London, England: Psychology Press.

Petty, R. E, Briñol, P., Loersch, C., & McCaslin, M. J. (2009). The need for cognition. In M. R. Leary & R. H. Hoyle (Eds.), Handbook of individual differences in social behavior (pp. 318–329). New York, NY: Guilford Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Ann Med Surg (Lond). 2024 May; 86(5). PMC11060189

Ethics in scientific research: a lens into its importance, history, and future

Associated data.

Not applicable.

Introduction

Ethics is a guiding principle that shapes the conduct of researchers. It influences both the process of discovery and the implications and applications of scientific findings1. Ethical considerations in research include, but are not limited to, the management of data, the responsible use of resources, respect for human rights, the treatment of human and animal subjects, social responsibility, honesty, integrity, and the dissemination of research findings1. At its core, ethics in scientific research aims to ensure that the pursuit of knowledge does not come at the expense of societal or individual well-being. It fosters an environment where scientific inquiry can thrive responsibly1.

The need to understand and uphold ethics in scientific research is pertinent in today’s scientific community. First, the rapid advancement of technology and science raises ethical questions in fields like biotechnology, biomedical science, genetics, and artificial intelligence: these advancements raise questions about privacy, consent, and the potential long-term impacts on society and the environment2. Furthermore, the rise in public scrutiny of scientific practices, fueled by a more informed and connected populace, demands greater transparency and ethical accountability from researchers and institutions.

This commentary seeks to bring to light the need and benefits associated with ethical adherence. The central theme of this paper highlights how upholding ethics in scientific research is a cornerstone for progress. It buttresses the fact that ethics in scientific research is vital for maintaining the trust of the public, ensuring the safety of participants, and legitimizing scientific findings.

Historical perspective

Ethics in research is significantly shaped by past experiences where a lack of ethical consideration led to negative consequences. One of the most striking examples of ethical misconduct is the Tuskegee Syphilis Study3, conducted between 1932 and 1972 by the U.S. Public Health Service. In this study, African American men in Alabama were used as subjects to study the natural progression of untreated syphilis. They were not informed of their condition and were denied effective treatment, even after penicillin became available as a cure in the 1940s3.

From an ethical lens today, this is a gross violation of informed consent and an exploitation of a vulnerable population. The public outcry following the revelation of the study’s details led to the establishment of the National Commission for the Protection of Human Subjects of Biomedical and Behavioural Research4. This commission eventually produced the Belmont Report in 19794, setting forth principles such as respect for persons, beneficence, and justice, which now underpin ethical research practices4.

Another example that significantly impacted ethical regulations was the thalidomide tragedy of the late 1950s and early 1960s5. Thalidomide was marketed in Europe as a safe sedative for pregnant women to combat morning sickness. It resulted in the birth of approximately ten thousand children with severe deformities due to its teratogenic effects5, which were not sufficiently researched prior to the drug’s release. This incident underscored the critical need for comprehensive clinical testing and highlighted the ethical imperative of understanding and communicating potential risks, particularly for vulnerable groups such as pregnant women. In response, drug testing regulations became more rigorous, and the importance of informed consent, especially in clinical trials, was emphasized.

The Stanford Prison Experiment of 1971, led by psychologist Philip Zimbardo, is another prime example of ethical oversight leading to harmful consequences6. The experiment, which aimed to study the psychological effects of perceived power, resulted in emotional trauma for participants. Underestimating the potential for psychological harm, with no adequate systems in place to safeguard human participants, was a breach of ethics in psychological studies6. This case highlighted the necessity for ethical guidelines that prioritize the mental and emotional welfare of participants, especially in psychological research. It led to stricter review processes and the establishment of guidelines to prevent psychological harm in research studies. It influenced the American Psychological Association and other bodies to refine their ethical guidelines, ensuring the protection of participants’ mental and emotional well-being.

Impact on current ethical standards

These historical, ethical oversights have been instrumental in shaping the current landscape of ethical standards in scientific research. The Tuskegee Syphilis Study led to the Belmont Report in 1979, which laid out key ethical principles such as respect for persons, beneficence, and justice. It also prompted the establishment of Institutional Review Boards (IRBs) to oversee research involving human subjects. The thalidomide tragedy catalyzed stricter drug testing regulations and informed consent requirements for clinical trials. The Stanford Prison Experiment influenced the American Psychological Association to refine its ethical guidelines, placing greater emphasis on the welfare and rights of participants.

These historical episodes of ethical oversights have been pivotal in forging the comprehensive ethical frameworks that govern scientific research today. They serve as stark reminders of the potential consequences of ethical neglect and the perpetual need to prioritize the welfare and rights of participants in any research endeavor.

One may ponder the reasons behind the Tuskegee Syphilis Study, in which African American men with syphilis were deliberately left untreated. What led scientists to prioritize research outcomes over human well-being? At the time, racial prejudices, a poor understanding of ethical principles in human research, and a lack of regulatory oversight allowed such studies to proceed. Similarly, the administration of thalidomide to pregnant women, initially intended as an antiemetic to alleviate morning sickness, resulted in unforeseen and catastrophic birth defects. This tragedy highlights a critical lapse in the pre-marketing evaluation of drug safety.

Furthermore, the Stanford Prison Experiment, designed to study the psychological effects of perceived power, spiraled into an ethical nightmare as participants suffered emotional trauma. This raises the question of how these researchers initially justified their methods. From today's ethical lens, these studies were a complete breach of ethical conduct, and I wonder whether any standards guided early scientific research at all.

Current ethical standards and guidelines in research

Informed consent

This mandates that participants are fully informed about the nature of the research, including its objectives, procedures, potential risks, and benefits 7 , 8 . They must be given the opportunity to ask questions and must voluntarily agree to participate without coercion 7 , 8 . This ensures respect for individual autonomy and decision-making.

Confidentiality and privacy

Confidentiality is pivotal in research involving human subjects. Participants’ personal information must be protected from unauthorized access or disclosure 7 , 8 . Researchers are obliged to take measures to preserve the anonymity and privacy of participants, which fosters trust and encourages participation in research 7 , 8 .

Non-maleficence and beneficence

These principles revolve around the obligation to avoid harm (non-maleficence) and to maximize possible benefits while minimizing potential harm (beneficence) 7 , 8 . Researchers must ensure that their studies do not pose undue risks to participants and that any potential risks are outweighed by the benefits.

Justice

Justice in research ethics refers to the fair selection and treatment of research participants 8 . It ensures that the benefits and burdens of research are distributed equitably among different groups in society, preventing the exploitation of vulnerable populations 8 .

The role of Institutional Review Boards (IRBs)

Institutional Review Boards play critical roles in upholding ethical standards in research. An IRB is a committee established by an institution conducting research to review, approve, and monitor research involving human subjects 7 , 8 . Their primary role is to ensure that the rights and welfare of participants are protected.

Review and approval

Before a study commences, the IRB reviews the research proposal to ensure it adheres to ethical guidelines. This includes evaluating the risks and benefits, the process of obtaining informed consent, and measures for maintaining confidentiality 7 , 8 .

Monitoring and compliance

IRBs also monitor ongoing research projects to ensure compliance with ethical standards. They may require periodic reports and can conduct audits to ensure ongoing adherence to ethical principles 7 , 8 .

Handling ethical violations

In cases where ethical standards are breached, the IRB has the authority to impose sanctions, which can range from requiring modifications to the study to halting the research project entirely 7 , 8 .

Other agencies and boards enforcing standards

Beyond IRBs, there are other regulatory bodies and agencies at the national and international levels that enforce ethical standards in research. These include:

The Office for Human Research Protections (OHRP) in the United States, which oversees compliance with the Federal Policy for the Protection of Human Subjects.

The World Health Organization (WHO), which provides international ethical guidelines for biomedical research.

The International Committee of Medical Journal Editors (ICMJE), which sets ethical standards for the publication of biomedical research.

These organizations, along with IRBs, form a comprehensive network that ensures the ethical conduct of scientific research, safeguarding its integrity by drawing on the reflections and lessons learnt from the past.

Benefits of ethical research

Credible and reliable outcomes

Why is credibility so crucial in research, and how do ethical practices contribute to it?

Ethical practices such as rigorous peer review, transparent methodology, and adherence to established protocols ensure that research findings are reliable and valid 9 . When studies are conducted ethically, they are less likely to be marred by biases, fabrications, or errors that could compromise credibility. For instance, ethical standards demand accurate data reporting and full disclosure of any potential conflicts of interest 9 , which directly contribute to the integrity and trustworthiness of research findings.

How do ethical practices lead to socially beneficial outcomes?

Ethical research practices often align with broader societal values and needs, leading to outcomes that are not only scientifically significant but also socially beneficial. By respecting principles like justice and beneficence, researchers ensure that their work with human subjects contributes positively to society 7 , 8 . For example, ethical guidelines in medical research emphasize the need to balance scientific advancement with patient welfare, ensuring that new treatments are both effective and safe. This balance is crucial in addressing pressing societal health concerns while safeguarding individual rights and well-being.

Trust between the public and the scientific community

The relationship between the public and the scientific community is heavily reliant on trust, which is fostered through consistent ethical conduct in research. When the public perceives that researchers are committed to ethical standards, it reinforces their confidence in the scientific process and its outcomes. Ethical research practices demonstrate a respect for societal norms and values, reinforcing the perception that science serves the public good.

Case studies

Case study 1: The development and approval of COVID-19 vaccines

The development and approval of COVID-19 vaccines within a short time is a testament to how adherence to ethical research practices can achieve credible and beneficial outcomes. Strict adherence to ethical guidelines, even in the face of a global emergency, ensured that the vaccines were developed swiftly. However, safety standards were compromised to some extent, as animal trials were not completed before human trials began. The development process was not fully transparent to the public, and this fuelled anti-vaccination sentiment in some regions. Ethical compliance, including rigorous testing and transparent reporting, can expedite scientific innovation while maintaining public trust.

Case study 2: The CRISPR babies

What ethical concerns were raised by the creation of the CRISPR babies, and what were the consequences?

The creation of the first genetically edited babies using CRISPR technology in China raised significant ethical concerns 10 . The lack of transparency, inadequate consent process, and potential risks to the children constituted ethical misconduct in genetic engineering research. The case resulted in widespread condemnation from the scientific community and the public, and prompted international regulatory frameworks and guidelines for genetic editing research 10 .

Recommendation and conclusion

Continuous education and training

The scientific community should prioritize ongoing education and training in ethics for researchers at all levels, ensuring awareness and understanding of ethical standards and their importance.

Enhanced dialogue and collaboration

Encourage multidisciplinary collaborations and dialogues between scientists, ethicists, policymakers, and the public to address emerging ethical challenges and develop adaptive guidelines.

Fostering a culture of ethical responsibility

Institutions and researchers should cultivate an environment where ethical considerations are integral to the research process, encouraging transparency, accountability, and social responsibility.

Global standards and cooperation

Work toward establishing and harmonizing international ethical standards and regulatory frameworks, particularly in areas like genetic engineering and AI, where the implications of research are global.

Ethics approval

Ethics approval was not required for this editorial.

Informed consent was not required for this editorial.

Sources of funding

No funding was received for this research.

Author contribution

G.D.M. wrote this paper.

Conflicts of interest disclosure

The authors declare no conflicts of interest.

Research registration unique identifying number (UIN)

Goshen David Miteu.

Data availability statement

Provenance and peer review

Not commissioned, externally peer-reviewed.

Sponsorships or competing interests that may be relevant to content are disclosed at the end of this article.

Published online 21 March 2024


Reliability vs Validity in Research | Differences, Types & Examples

Published on 3 May 2022 by Fiona Middleton. Revised on 10 October 2022.

Reliability and validity are concepts used to evaluate the quality of research. They indicate how well a method , technique, or test measures something. Reliability is about the consistency of a measure, and validity is about the accuracy of a measure.

It’s important to consider reliability and validity when you are creating your research design , planning your methods, and writing up your results, especially in quantitative research .

Reliability vs validity

What does it tell you?
Reliability: the extent to which the results can be reproduced when the research is repeated under the same conditions.
Validity: the extent to which the results really measure what they are supposed to measure.

How is it assessed?
Reliability: by checking the consistency of results across time, across different observers, and across parts of the test itself.
Validity: by checking how well the results correspond to established theories and other measures of the same concept.

How do they relate?
A reliable measurement is not always valid: the results might be reproducible, but they are not necessarily correct. A valid measurement is generally reliable: if a test produces accurate results, they should be reproducible.

Table of contents

  • Understanding reliability vs validity
  • How are reliability and validity assessed?
  • How to ensure validity and reliability in your research
  • Where to write about reliability and validity in a thesis

Reliability and validity are closely related, but they mean different things. A measurement can be reliable without being valid. However, if a measurement is valid, it is usually also reliable.

What is reliability?

Reliability refers to how consistently a method measures something. If the same result can be consistently achieved by using the same methods under the same circumstances, the measurement is considered reliable.

What is validity?

Validity refers to how accurately a method measures what it is intended to measure. If research has high validity, that means it produces results that correspond to real properties, characteristics, and variations in the physical or social world.

High reliability is one indicator that a measurement is valid. If a method is not reliable, it probably isn’t valid.

However, reliability on its own is not enough to ensure validity. Even if a test is reliable, it may not accurately reflect the real situation.

Validity is harder to assess than reliability, but it is even more important. To obtain useful results, the methods you use to collect your data must be valid: the research must be measuring what it claims to measure. This ensures that your discussion of the data and the conclusions you draw are also valid.


Reliability can be estimated by comparing different versions of the same measurement. Validity is harder to assess, but it can be estimated by comparing the results to other relevant data or theory. Methods of estimating reliability and validity are usually split up into different types.

Types of reliability

Different types of reliability can be estimated through various statistical methods.

Test-retest reliability
What does it assess? The consistency of a measure across time: do you get the same results when you repeat the measurement?
Example: A group of participants complete a questionnaire designed to measure personality traits. If they repeat the questionnaire days, weeks, or months apart and give the same answers, this indicates high test-retest reliability.

Inter-rater reliability
What does it assess? The consistency of a measure across observers: do you get the same results when different people conduct the same measurement?
Example: Based on an assessment criteria checklist, five examiners submit substantially different results for the same student project. This indicates that the assessment checklist has low inter-rater reliability (for example, because the criteria are too subjective).

Internal consistency
What does it assess? The consistency of the measurement itself: do you get the same results from different parts of a test that are designed to measure the same thing?
Example: You design a questionnaire to measure self-esteem. If you randomly split the results into two halves, there should be a strong correlation between the two sets of results. If the two results are very different, this indicates low internal consistency.
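Each type of reliability described above reduces to a concrete agreement or correlation statistic. The following is a minimal sketch with made-up scores (not data from any real study): test-retest reliability as a Pearson correlation between two administrations, inter-rater agreement via Cohen's kappa, and split-half internal consistency stepped up with the Spearman-Brown formula.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical judgements."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    expected = sum((r1.count(c) / n) * (r2.count(c) / n) for c in set(r1) | set(r2))
    return (observed - expected) / (1 - expected)

# Test-retest: the same six participants take the questionnaire twice.
time1 = [12, 15, 9, 20, 17, 11]
time2 = [13, 14, 10, 19, 18, 11]
test_retest = pearson(time1, time2)   # close to 1 means stable over time

# Inter-rater: two examiners grade the same six projects.
rater1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
rater2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
kappa = cohen_kappa(rater1, rater2)   # 1 = perfect agreement, 0 = chance level

# Internal consistency: per-participant totals on odd vs. even items,
# stepped up to full-test length with the Spearman-Brown prophecy formula.
odd_half = [6, 8, 4, 10, 9, 5]
even_half = [6, 7, 5, 10, 8, 6]
r_half = pearson(odd_half, even_half)
split_half = 2 * r_half / (1 + r_half)
```

In practice you would use an established statistics package rather than hand-rolled formulas; the point is only that each reliability type corresponds to a specific, computable statistic.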

Types of validity

The validity of a measurement can be estimated based on three main types of evidence. Each type can be evaluated through expert judgement or statistical methods.

Construct validity
What does it assess? The adherence of a measure to existing theory and knowledge of the concept being measured.
Example: A self-esteem questionnaire could be assessed by measuring other traits known or assumed to be related to the concept of self-esteem (such as social skills and optimism). Strong correlation between the scores for self-esteem and associated traits would indicate high construct validity.

Content validity
What does it assess? The extent to which the measurement covers all aspects of the concept being measured.
Example: A test that aims to measure a class of students' level of Spanish contains reading, writing, and speaking components, but no listening component. Experts agree that listening comprehension is an essential aspect of language ability, so the test lacks content validity for measuring the overall level of ability in Spanish.

Criterion validity
What does it assess? The extent to which the result of a measure corresponds to that of other valid measures of the same concept.
Example: A survey is conducted to measure the political opinions of voters in a region. If the results accurately predict the later outcome of an election in that region, this indicates that the survey has high criterion validity.
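Criterion validity, for instance, can be estimated by correlating the measure against an external criterion. A minimal sketch with invented numbers (hypothetical regional polling figures, not real election data):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Hypothetical data: a party's support in a pre-election survey (%) per region,
# and the vote share (%) the party actually obtained in the later election.
survey_support = [42.0, 55.0, 38.0, 61.0, 47.0]
actual_share = [44.0, 53.0, 40.0, 59.0, 45.0]

# A high correlation means the survey predicts the external criterion well,
# i.e. the survey has high criterion validity.
criterion_validity = pearson(survey_support, actual_share)
```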

To assess the validity of a cause-and-effect relationship, you also need to consider internal validity (the design of the experiment ) and external validity (the generalisability of the results).

The reliability and validity of your results depend on creating a strong research design , choosing appropriate methods and samples, and conducting the research carefully and consistently.

Ensuring validity

If you use scores or ratings to measure variations in something (such as psychological traits, levels of ability, or physical properties), it’s important that your results reflect the real variations as accurately as possible. Validity should be considered in the very earliest stages of your research, when you decide how you will collect your data .

  • Choose appropriate methods of measurement

Ensure that your method and measurement technique are of high quality and targeted to measure exactly what you want to know. They should be thoroughly researched and based on existing knowledge.

For example, to collect data on a personality trait, you could use a standardised questionnaire that is considered reliable and valid. If you develop your own questionnaire, it should be based on established theory or the findings of previous studies, and the questions should be carefully and precisely worded.

  • Use appropriate sampling methods to select your subjects

To produce valid generalisable results, clearly define the population you are researching (e.g., people from a specific age range, geographical location, or profession). Ensure that you have enough participants and that they are representative of the population.

Ensuring reliability

Reliability should be considered throughout the data collection process. When you use a tool or technique to collect data, it’s important that the results are precise, stable, and reproducible.

  • Apply your methods consistently

Plan your method carefully to make sure you carry out the same steps in the same way for each measurement. This is especially important if multiple researchers are involved.

For example, if you are conducting interviews or observations, clearly define how specific behaviours or responses will be counted, and make sure questions are phrased the same way each time.

  • Standardise the conditions of your research

When you collect your data, keep the circumstances as consistent as possible to reduce the influence of external factors that might create variation in the results.

For example, in an experimental setup, make sure all participants are given the same information and tested under the same conditions.

It’s appropriate to discuss reliability and validity in various sections of your thesis, dissertation, or research paper. Showing that you have taken them into account in planning your research and interpreting the results makes your work more credible and trustworthy.

Reliability and validity in a thesis

Literature review: what have other researchers done to devise and improve methods that are reliable and valid?
Methodology: how did you plan your research to ensure the reliability and validity of the measures used? This includes the chosen sample set and size, sample preparation, external conditions, and measuring techniques.
Results: if you calculate reliability and validity, state these values alongside your main results.
Discussion: this is the moment to talk about how reliable and valid your results actually were. Were they consistent, and did they reflect true values? If not, why not?
Conclusion: if reliability and validity were a big problem for your findings, it might be helpful to mention this here.


Middleton, F. (2022, October 10). Reliability vs Validity in Research | Differences, Types & Examples. Scribbr. Retrieved 3 September 2024, from https://www.scribbr.co.uk/research-methods/reliability-or-validity/


  • Open access
  • Published: 31 August 2024

Opportunities and challenges of a dynamic consent-based application: personalized options for personal health data sharing and utilization

  • Ah Ra Lee 1 ,
  • Dongjun Koo 1 , 2 ,
  • Il Kon Kim 3 ,
  • Eunjoo Lee 4 ,
  • Sooyoung Yoo 1 &
  • Ho-Young Lee 1 , 5  

BMC Medical Ethics, volume 25, article number 92 (2024)


The principles of dynamic consent are based on the idea of safeguarding the autonomy of individuals by providing them with personalized options to choose from regarding the sharing and utilization of personal health data. To facilitate the widespread introduction of dynamic consent concepts in practice, individuals must perceive these procedures as useful and easy to use. This study examines the user experience of a dynamic consent-based application, in particular focusing on personalized options, and explores whether this approach may be useful in terms of ensuring the autonomy of data subjects in personal health data usage.

This study investigated the user experience of MyHealthHub, a dynamic consent-based application, among adults aged 18 years or older living in South Korea. Eight tasks exploring the primary aspects of dynamic consent principles (including providing consent, monitoring consent history, and managing personalized options) were provided to participants. Feedback on the experiences of testing MyHealthHub was gathered via multiple-choice and open-ended questionnaire items.

A total of 30 participants provided dynamic consent through the MyHealthHub application. Most participants successfully completed all the provided tasks without assistance and regarded the personalized options favourably. Concerns about the security and reliability of the digital-based consent system were raised, in contrast to positive responses elicited in other aspects, such as perceived usefulness and ease of use.

Conclusions

Dynamic consent is an ethically advantageous approach for the sharing and utilization of personal health data. Personalized options have the potential to serve as pragmatic safeguards for the autonomy of individuals in the sharing and utilization of personal health data. Incorporating the principles of dynamic consent into real-world scenarios requires that remaining issues be addressed, such as the need for robust authentication mechanisms that bolster privacy and security. This would enhance the trustworthiness of dynamic consent-based applications while preserving their ethical advantages.

Peer Review reports

The advances in big data necessitate a complicated balance between protecting the privacy of individuals whose data are being used and leveraging the societal benefits provided by state-of-the-art data-driven technologies [ 1 ]. Personal health data are a valuable resource that significantly impacts biomedical research and digital health ecosystems [ 2 ]. The integration of sophisticated technologies with the widespread use of personal health data has resulted in groundbreaking work within the realm of medicine and tangible applications in the health care sector [ 3 ]. However, the combination of technology and personal health data has led to concerns associated with data privacy and security, as well as ethical implications in terms of consent and potential exploitation [ 4 , 5 ]. Therefore, to encourage innovation and enhance healthcare outcomes through the use of data, the perspectives of both data subjects and consumers, whose interests sometimes conflict, must be thoroughly considered.

Data sovereignty is indispensable within a data-driven economy [ 6 ]. This concept emphasizes the need for data subjects to have control over the use of their shared data. The absence of such sovereignty could hinder the advancement of the data-driven economy by decreasing the desire for data sharing and utilization [ 7 ]. To ensure the full potential of data utilization, discussions regarding sovereignty have evolved in the digital era. The protection of individual rights and the promotion of trust in data sharing environments are both mandatory in certain regulatory frameworks, such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) [ 8 , 9 ]. For instance, the fundamental tenet of the European Union data protection law is that individuals have authority over the sharing of their personal health data [ 10 ]. Provisions granting access, erasure, and transfer rights for personal data in specific circumstances in the GDPR help facilitate its fundamental aim of protecting data subjects. This current shift towards giving individuals autonomy over their data highlights the significance of data sovereignty in contemporary discussions in digital health ecosystems.

While obtaining consent from individuals before using their personal health data is generally crucial in clinical research, it may not always be feasible in every situation. Appropriate safeguards and ethical considerations should be implemented to protect individuals’ privacy in such cases. The fundamental basis of consent is respecting individual autonomy [ 11 ]. The Declaration of Helsinki and the Belmont Report aim to prevent exploitative and manipulative practices in clinical and medical research, and both highlight the importance of autonomy [ 12 , 13 ]. Safeguarding autonomy involves more than just preventing manipulation; it also entails offering guidance and support for making autonomous decisions. These ideas have been incorporated into practice as informed consent, which includes providing comprehensive and precise information to empower individuals to make voluntary decisions [ 14 , 15 ]. The All of Us research program in the United States provides individuals with adequate information to make well-informed decisions concerning their participation [ 16 , 17 ]. This program ensures that potential participants are motivated to join based on their personal interests and the inherent value of their involvement by providing comprehensive details regarding program operations. This approach ensures that individuals make informed decisions according to their preferences. The Guidelines for Tailoring the Informed Consent Process in Clinical Studies (i-CONSENT guidelines) also emphasize the significance of implementing comprehensive and individualized consent procedures [ 18 ]. These guidelines advocate for ongoing, two-way communication, initiated at the outset of participant engagement and sustained throughout the duration of the study.

Dynamic consent, an innovative principle that emphasizes the protection of the data sovereignty of individuals, has attracted considerable interest [ 19 ]. Many academic studies have examined the potential benefits of dynamic consent, specifically regarding its ethical advantages in comparison to conventional consent methods [ 20 , 21 , 22 , 23 , 24 ]. Due to its functionality within digital interfaces that enable uninterrupted communication between data subjects and consumers, irrespective of temporal and spatial constraints, dynamic consent is regarded as the most appropriate approach for acquiring consent in digital health ecosystems [ 25 ]. Furthermore, dynamic consent provides a variety of personalized options for individuals to enhance their autonomy and self-determination with respect to the sharing and utilization of personal health data. Establishing resilient mechanisms through which individuals can exert authority over their personal health data while maintaining continuous communication is essential in the pursuit of genuine informed consent in digital settings. Nevertheless, further considerations of personalized options are still required [ 26 , 27 ]. To assess the efficacy, usability, and ability to uphold individual autonomy of personalized options that are supported by dynamic consent principles, additional investigation is needed.

Therefore, in this study, user experiences based on dynamic consent principles are examined, specifically focusing on sovereignty over health data usage in various settings with personalized options. The evaluation was conducted using MyHealthHub, a digital consent application developed in this study based on dynamic consent principles. The primary objectives of this study are (1) to explore the viewpoints of individuals on the sharing and utilization of personal health data and (2) to assess user acceptance of MyHealthHub as a means for managing data sovereignty in a tailored manner while respecting individual autonomy. This study specifically focuses on individual patients, who are the principal subjects of personal health data. To analyze user acceptance, this study employed the Technology Acceptance Model (TAM), which has been widely used to understand user acceptance of information technology [ 28 ]. This study contributes to the exploration of processes to ensure data sovereignty with dynamic consent in the health care sector by examining user experiences associated with the MyHealthHub application, which facilitates the sharing and utilization of personal health data with personalized options in a tailored manner.

Study design

This study utilized a mixed-methods design, incorporating both a system usability test and questionnaires. MyHealthHub, a digital consent application designed in adherence with dynamic consent principles, was specifically developed for this study to facilitate the usability test. The study participants were provided with access to the MyHealthHub application, which facilitates experiences in a personalized data sharing process using virtual health data. The questionnaire included one open-ended item to elicit a wide range of perspectives from the participants, as well as multiple-choice items. The entire procedure was completed consecutively in a single session and adhered to the required ethical protocols, with informed consent obtained under the ethical clearance of the Institutional Review Board of Kyungpook National University (KNU) (KNU IRB No. KNU-2021-0158).

Participant recruitment

Participants for this study were recruited through email invitations. Potential participants were defined as individuals who have interests or experiences in digital health services and were likely to utilize digital consent applications to generate personal health data during their daily lives and to share and utilize their data. Participants were eligible if they were 18 years of age or older, resided in South Korea, had internet access on personal devices, and were proficient in using websites for various activities, such as online shopping and internet banking. The potential participants were provided with basic information materials regarding the study through email. Email addresses of potential participants were obtained through the Smart Health Standards Forum, an organization supporting smart health standards and industry development. They were encouraged to voluntarily reach out to our research team to arrange an appointment if they were interested in participating. All participants provided informed written consent and received a gift voucher as compensation upon completion.

System usability test

MyHealthHub is a digital consent application designed based on dynamic consent principles. This application offers participants an all-encompassing experience of personalized data sharing and consent management. The prototype version of the application was available in the Korean language. The MyHealthHub application included functionalities for managing consent, monitoring data sharing history, and configuring personalized options regarding data usage (Fig.  1 ). Personalized options include specifying the scope of shared data according to the specific institutions and health data involved, conditions for automatic consent, designated representatives if necessary, and preferred communication methods or periods for receiving relevant updates on their data usage. These options were flexible and could be adjusted according to each individual’s preferences.
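For illustration only, the personalized options listed above could be represented as a per-user preferences record. This is a hypothetical sketch, not MyHealthHub's actual data model; every field name below is an invented assumption.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ConsentPreferences:
    """Hypothetical sketch of the personalized options described above.

    Field names are illustrative assumptions, not MyHealthHub's schema.
    """
    shared_institutions: List[str] = field(default_factory=list)      # who may access the data
    shared_data_types: List[str] = field(default_factory=list)        # which categories of health data
    auto_consent_conditions: List[str] = field(default_factory=list)  # when consent is granted automatically
    representative: Optional[str] = None                              # designated proxy decision-maker, if any
    notification_channel: str = "app"                                 # preferred way to receive usage updates
    notification_period: str = "monthly"                              # how often to receive updates

# Each individual adjusts the record to match their own preferences.
prefs = ConsentPreferences(
    shared_institutions=["University Hospital A"],
    shared_data_types=["lab results", "medication history"],
    auto_consent_conditions=["IRB-approved academic research"],
    notification_channel="email",
)
```

The design point is that such a record is mutable over time: a data subject can revisit and adjust any field, which is what distinguishes dynamic consent from a one-off signed form.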

Fig. 1 Screenshots of the English version of the MyHealthHub application
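As an illustrative sketch only, the personalized options described above could be represented as a simple preference record. The field names and values below are our own assumptions for illustration, not the actual MyHealthHub data model:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of a dynamic-consent preference record;
# field names are illustrative, not MyHealthHub's actual schema.
@dataclass
class ConsentPreferences:
    shared_institutions: List[str]        # e.g. ["medical_institution", "research_institute"]
    shared_data_types: List[str]          # e.g. ["health_checkup", "physical_activity"]
    auto_consent_conditions: List[str]    # conditions under which consent is granted automatically
    representative: Optional[str] = None  # designated representative, if any
    notification_channel: str = "email"   # preferred communication method
    notification_period_days: int = 30    # how often to receive usage updates

prefs = ConsentPreferences(
    shared_institutions=["medical_institution"],
    shared_data_types=["health_checkup"],
    auto_consent_conditions=["research_use_by_medical_institution"],
)
# Dynamic consent allows preferences to be revised at any time:
prefs.shared_data_types.append("physical_activity")
```

The key design point is that the record is mutable over the participant's lifetime, rather than a one-time snapshot captured at registration.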

The participants were provided with individual accounts and instructed to log in to the MyHealthHub application. For the system usability test, each account came with temporary log-in credentials and was populated with a variety of fictitious personal health data. So that participants could experience and understand the foundational attributes of dynamic consent, they were required to complete eight tasks via the MyHealthHub application. These tasks were determined through a combination of literature findings and insights from a scoping review on dynamic consent, as detailed in our previous study [29]. The tasks included providing consent, monitoring data usage history, and configuring personalized options, reflecting prior knowledge on dynamic consent and allowing a comprehensive evaluation of the dynamic consent process. To assess realistic user interactions with MyHealthHub, participants were not instructed to scrutinize every item of content in the application while carrying out the designated tasks. Participants were free to select the scope and variety of institutions with which they wanted to share their data, based on their individual preferences. After completing the tasks, participants evaluated their experience with a questionnaire designed to capture feedback on the usability and functionality of the application.

Questionnaire

After the usability test, participants were asked to complete a questionnaire. The questionnaire consisted of 30 items, a combination of multiple-choice and open-ended inquiries (Table 1). The multiple-choice questions investigated perceptions of the sharing of personal health data and assessed user acceptance of the MyHealthHub application. The questionnaire items were formulated by integrating findings from the literature with the primary constructs of the TAM, specifically perceived ease of use, perceived usefulness, and intention to use. While later developments of the model, such as the Unified Theory of Acceptance and Use of Technology, are acknowledged, this study employed the original TAM for its simplicity and well-established use in similar contexts, which aligns with the specific focus of this study [30]. Finally, participants were given an open-ended item to further explore their experiences and perspectives concerning the MyHealthHub application. Participants were encouraged to provide feedback on a range of application-related topics, such as interface design, content, usability, potential improvements, information quality, authentication and authorization procedures, and any other pertinent observations from the usability test. The questionnaire was originally written in Korean; the English-translated version is available in Additional file 1.

Data analysis

To validate the responses gathered from the multiple-choice questionnaire items, statistical methods were utilized. The internal consistency, convergent validity, and discriminant validity of each category were assessed in this process [31, 32, 33]. Descriptive statistics were then employed to analyze the quantitative data derived from the multiple-choice responses. The qualitative data acquired through the open-ended item were analyzed thematically [34]. Two of the authors first reviewed, coded, and categorized the gathered data to construct the initial themes. All the authors then reviewed and discussed the themes to enhance their coherence and reasonability, and any discrepancies among the authors were discussed and resolved. The data analysis was performed using SmartPLS 3.0 and Excel.
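As a minimal sketch of one internal-consistency check of the kind used in such analyses, the following computes Cronbach's alpha. The response matrix is made-up dummy data for illustration, not the study's actual questionnaire responses:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of per-item response lists (one inner list per
    questionnaire item, one entry per respondent).
    """
    k = len(items)
    item_variances = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's summed score
    return (k / (k - 1)) * (1 - item_variances / pvariance(totals))

# Dummy data: three 7-point Likert items answered by five respondents
responses = [
    [6, 5, 7, 6, 5],
    [6, 6, 7, 5, 5],
    [5, 5, 6, 6, 4],
]
alpha = cronbach_alpha(responses)  # values above ~0.7 are conventionally acceptable
```

Tools such as SmartPLS report this alongside composite reliability and the Fornell-Larcker discriminant-validity criterion; the hand-rolled version here is only meant to show what the internal-consistency statistic measures.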

Perceptions of the sharing and utilization of personal health data

A total of thirty participants accessed the MyHealthHub application for an average of thirty minutes to provide dynamic consent. The participants successfully accomplished the eight assigned tasks without requiring additional assistance or support. The demographic characteristics of the participants are available in Additional file 2.

Table 2 presents the participants' perspectives on the sharing of personal health data. Twenty-four of the thirty participants agreed that the exchange of personal data is crucial to the advancement of the health care industry. Regarding the timing of consent requests, twelve participants responded that consent should be requested each time data are to be shared, whereas three preferred providing consent only once, such as during the initial registration for a specific service such as the MyHealthHub application. The remaining fifteen participants preferred different frequencies of consent requests, contingent on the purposes and subjects of data usage.

Table 3 presents the participants' preferences concerning the sharing and utilization of their personal health data. Participants' willingness to share their data varied by the type of institution and data. The average number of participants willing to share basic health checkup data was 12.17, the highest of any data type; in contrast, the average for mental health data was the lowest, at 9.83. Regarding institution types, an average of 26.33 participants considered medical institutions favorable targets for data sharing, while private companies received the lowest preference, with an average of only 2.00 participants.

User acceptance of the MyHealthHub application

Table 4 presents the descriptive statistics of participant responses regarding the level of user acceptance of the MyHealthHub application. The responses were validated for internal consistency, convergent validity, and discriminant validity within each category (Additional file 3). Average scores of 6.10 and 5.62 out of 7.00 were obtained for self-evaluated health literacy and health-related interests, respectively. An average of 5.67 was obtained for system usability, whereas a lower average, 4.67, was obtained for system reliability. The average score for the overall intention to use was 5.26, with 5.20 for perceived usefulness and 5.46 for perceived ease of use.

Thematic analysis results

Following an analysis of the responses to the open-ended questionnaire item, three themes were identified: the usability of the MyHealthHub application, the usefulness of the MyHealthHub application, and apprehensions regarding digital environments (Table 5 ).

The usability of the MyHealthHub application

In addition to the quantitative results presented in Table 4 , the overall evaluation results for usability were favorable, as evidenced by the fact that every participant independently completed the assigned tasks. Moreover, participants shared some opinions about enhancing the usability of the MyHealthHub application.

Some participants believed that mobile interfaces would offer greater benefits than web-based environments. Although the prototype distributed to the participants was compatible with desktop and mobile devices, it did not have a responsive interface catering to different device types. Some participants noted that the tables used to present records of consent requests or data usage history were excessively wide on mobile devices, requiring them to scroll to view the entire piece of information. They wanted a version of the MyHealthHub application with a user interface optimized for mobile devices, improving accessibility and enabling use from any location without relying on a desktop computer.

In addition to mobile optimization, participants commented on the intuitiveness of the interface. The majority of participants were satisfied with the level of information the MyHealthHub application provided to support their decision-making about data usage. However, some participants who perceived themselves as lacking relevant knowledge reported challenges in understanding content that included medical terminology. They wanted further user-interface enhancements, such as straightforward icons or descriptions, to help them make decisions based on a full understanding of the data types to be shared and the specific purposes for which institutions would use the data.

The usefulness of the MyHealthHub application

The participants were satisfied with the ability to tailor the level of data sharing according to the type of institutions and data. In addition, they highlighted the potential advantages of the MyHealthHub application in health management, monitoring chronic diseases, and insurance payment processing.

Personalized options were the most appealing aspect for participants. This feature also aligns with dynamic consent principles, which safeguard the autonomy and self-determination of individuals. Some participants who had initially preferred providing consent only once during registration found the option to set conditions for automatic consent quite attractive. Some participants suggested that more granular options would be beneficial when selecting institution types; for example, one participant preferred choosing a specific insurance company rather than the institution type, wanting to share data with insurance company A but not insurance company B. On the other hand, a few participants felt that administering the personalized options was occasionally cumbersome, dampening their motivation to engage in the process of data utilization.

The majority of participants acknowledged the benefits of exercising control over their data through the MyHealthHub application. They valued the convenience of monitoring their consent and data usage history, in addition to the ability to tailor the extent of shared data to their preferences. Conversely, a subset of participants reported little motivation to use the MyHealthHub application: being young and currently in excellent health, they felt no need to manage personal health data, in contrast to financial management services. Some of them suggested that rewards or incentives could motivate individuals to share their personal health data, thereby fostering interest in health data management.

Apprehensions regarding digital environments

While the participants were satisfied with the personalized options, there were some concerns regarding security. Participants highlighted the importance of establishing security protocols to prevent disastrous data breaches in digital environments, with a particular focus on health data, which may include sensitive personal information.

Participants gave mixed responses regarding the identification procedure. They accessed the MyHealthHub application through the login ID and password assigned for the usability test in this study. Some participants wanted effortless login via the single sign-on (SSO) approach in practical situations. These participants were aware of SSO, an authentication approach that enables individuals to access different services with a single set of login credentials [35]. They perceived SSO as a dependable and practical way to access multiple applications, owing to its widespread adoption across services. Conversely, certain participants said they might hesitate to use the MyHealthHub application in the future, citing a need for increased security measures. One participant suggested implementing enhanced security technologies at a level comparable to the authentication used in financial applications, such as two-factor authentication [36].
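The financial-grade second factor participants mentioned is commonly realized with time-based one-time passwords (TOTP, RFC 6238). The sketch below is illustrative only; the shared secret is a made-up example and nothing here describes MyHealthHub's actual authentication stack:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 time-based one-time password generator."""
    counter = struct.pack(">Q", timestamp // step)            # 8-byte big-endian time counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest() # HMAC-SHA1 over the counter
    offset = digest[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"example-shared-secret"       # hypothetical; real secrets are provisioned per user
code = totp(secret, int(time.time()))   # 6-digit code, valid for one 30-second window
```

A consent platform would verify this code server-side as a second factor after the password, so that a leaked password alone does not grant access to health data.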

Another noteworthy opinion concerned the critical role of communication in fostering trust with system end-users. The majority of participants indicated that the functionalities of the MyHealthHub application are advantageous for safeguarding individual autonomy and ensuring data sovereignty over health data. However, they also emphasized the need for more comprehensive information about data management procedures to enhance transparency. One participant underscored the importance of secure and permanent deletion of shared data once a contractual period has expired. Another contended that conveying both the technical and emotional aspects of secure data storage and management is critical to fostering trust and assurance among system users. For instance, individuals may want to know how their data are transmitted securely to the designated institution and how the shared data are protected within the institution.

This study investigated the potential of a digital consent system that adheres to dynamic consent principles for safeguarding the autonomy and data sovereignty of individuals regarding their personal health data. Dynamic consent is an innovative approach to facilitating digital health ecosystems that helps balance the use of personal health data while simultaneously safeguarding individual autonomy [ 19 ]. Previous scholarly investigations have explored dynamic consent, such as its conceptual evolution, user acceptance, and technological advancements that facilitate its practical implementation [ 29 ]. Although previous research has recognized the ethical benefits of dynamic consent in comparison to conventional consent models, a need to assess user acceptance of systems based on dynamic consent for its practical use has been consistently expressed. Notably, very few publications have linked the TAM to dynamic consent, highlighting the originality of this study in understanding user acceptance within the context of personal health data management.

The results of this study provide valuable insights into participants’ preferences and perceptions regarding the sharing and utilization of personal health data through dynamic consent mechanisms. The study demonstrated a strong acceptance of the MyHealthHub application, with participants successfully completing tasks and expressing a preference for personalized consent options tailored to the type of data and institutions involved. Notably, participants showed a higher willingness to share data with medical institutions compared to private companies, and there was a clear preference for dynamic consent methods that allow for continuous and adaptable consent management. Despite the favorable reception, some participants indicated that the abundance of options could be cumbersome, suggesting the need for further refinement of user interfaces and the incorporation of more intuitive design elements. Additionally, participants highlighted concerns about security and the importance of transparent data management practices, underscoring the necessity for robust security measures and clear communication to build trust.

In particular, there has been little effort to investigate whether dynamic consent genuinely upholds individual autonomy through the sharing and utilization of personal health data. This critical question is central to the ethical considerations of digital health technologies and the protection of individual rights and privacy [ 37 ]. It is imperative to assess the efficacy of dynamic consent in preserving these principles amid the complex interplay of technology, healthcare delivery, and individual rights [ 38 , 39 ]. Individuals should be able to modify and update their consent, including actions such as protocol shifts, alterations, and withdrawal [ 40 ]. Furthermore, addressing concerns about the temporal aspect and control over the pace of interaction is essential for maintaining individual autonomy.

The personalization and flexibility of consent are enhanced by dynamic consent principles, which permit individuals to modify their consent preferences as circumstances change. This study used a digital consent application, MyHealthHub, that operates on dynamic consent principles. MyHealthHub enables continuous interaction with participants, promoting self-determination in accordance with their consent preferences. Judging by their performance on the usability test, the study participants comprehended and accepted the dynamic consent model. Participants were able to make data-sharing decisions according to their individual preferences, considering the information at their disposal regarding data usage, including the target institution, purpose, and duration of data sharing. Additionally, the questionnaire responses revealed that the perceived usefulness and ease of use of the MyHealthHub application led to positive intentions to use it.

The findings from this study indicate the usefulness of dynamic consent by demonstrating that individuals’ preferences regarding consent are substantially affected by a range of factors, including the kind of data to be shared, the type of institution involved, and the context in which the data is shared. These findings align with the observations made in prior studies regarding individuals’ perspectives on the utilization of their health data [ 41 , 42 , 43 , 44 ]. One study has indicated that individuals may exhibit a preference for providing limited data to for-profit enterprises [ 45 ]. Similarly, significant disparities in consent preferences were observed based on the type of institution in this study. The participants exhibited a greater propensity to provide consent for the sharing of their data with medical institutions or research institutes than with private enterprises. The type of health data also influenced the participants’ inclination to share their data. There was a heightened reluctance to share mental health-related data with specific institutions compared to basic health check-ups and physical health-related data.

Another notable observation from this study is that participants displayed a mixed reaction to the personalized options. The majority expressed satisfaction with the ability to independently determine the scope and extent of their data sharing, allowing customization. However, some participants noted that the abundance of options may deter individuals from engaging in the data sharing and utilization process; they preferred automatic consent, as it eliminates the need for frequent decision-making or consent provision. The expanded role of individuals in the dynamic consent approach, relative to conventional consent mechanisms, may be perceived as burdensome due to the multitude of options available for selection [46]. This concern has been identified as a significant barrier to the widespread adoption of the dynamic consent model in prior scholarly investigations. However, it has been argued that such opinions stem from a misinterpretation of dynamic consent: the concept of autonomy, as outlined in the dynamic consent principles, pertains to the ability to adapt approaches to various circumstances, including allowing individuals to choose the level of involvement they wish to have in their data-sharing processes. For example, passive individuals have the option to use broad informed consent as a more inclusive approach within the framework of the dynamic consent model.

There are several limitations to this study. Participants were recruited via convenience sampling, by distributing invitations to individuals who were easily accessible and met the study criteria, such as members or subscribers of the Smart Health Standards Forum. The majority of the study participants expressed interest in utilizing digital health services and personal health data. Since the dynamic consent mechanism affects society as a whole, it is crucial to solicit the general public's opinion. Moreover, despite the satisfactory validity of the questionnaire responses, which suggests their potential for future research, the sample size in this study was relatively modest. Recruitment challenges made it difficult to include many members of the general public, as well as older individuals, so the characteristics of this sample may not be representative of the general population in South Korea. Such selection bias may lead to overly optimistic conclusions about participants' interest and engagement in utilizing the application.

Additionally, this study did not evaluate uninterrupted communication, a critical component of dynamic consent. The average duration of the participants’ experience was only 30 minutes, which is insufficient for a thorough evaluation. It is imperative to evaluate whether consent is altered over an extended period and whether participants prefer to continue utilizing the system. This limitation should be recognized, as it affects the comprehension of the continuous interaction necessary for dynamic consent systems. Furthermore, the experience was constructed using fictitious data rather than the actual data of the participants, which could potentially influence their responses and engagement. These aspects should be the focus of future research in order to conduct a more comprehensive assessment of dynamic consent systems.

Given the potential for data sharing to expand globally, it is necessary to address the specific contents of dynamic consent items. Previous studies have defined these items using the Data Use Ontology (DUO) developed by the Global Alliance for Genomics and Health (GA4GH) [47, 48, 49]. Additionally, international standards such as the Fast Healthcare Interoperability Resources (FHIR), developed by Health Level Seven (HL7), offer structured standards for representing consent directives in healthcare, emphasizing the importance of interoperability and consistency [50]. The Basic Patient Privacy Consents (BPPC) profile by Integrating the Healthcare Enterprise (IHE) also provides a mechanism for managing patient privacy consents, further supporting the need for standardized approaches [51]. Standardizing and clearly defining the items within dynamic consent, including the types of data shared, the purposes of data use, and the entities involved, is crucial for establishing trust among users and ensuring transparency. However, this study focused primarily on the user experience and did not explore the detailed standardization and definition of dynamic consent items, which constitutes a limitation. Addressing these aspects would enhance the scalability and interoperability of dynamic consent mechanisms on a broader scale.
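To make the interoperability point concrete, a dynamic consent directive could be expressed as an HL7 FHIR Consent resource. The sketch below uses real FHIR R4 element names, but all values (the patient reference, institution, period, and purpose code) are hypothetical examples, not data from this study:

```python
# Hypothetical FHIR R4 Consent resource: permit sharing with one medical
# institution, for health research only, within a bounded period.
consent = {
    "resourceType": "Consent",
    "status": "active",
    "scope": {
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/consentscope",
            "code": "patient-privacy",
        }]
    },
    "patient": {"reference": "Patient/example"},   # placeholder reference
    "provision": {
        "type": "permit",                          # sharing is allowed...
        "period": {"start": "2024-01-01", "end": "2024-12-31"},
        "actor": [{
            "role": {"text": "medical institution"},
            "reference": {"display": "Hospital A"},  # illustrative institution
        }],
        "purpose": [{
            "system": "http://terminology.hl7.org/CodeSystem/v3-ActReason",
            "code": "HRESCH",                      # ...for health research only
        }],
    },
}
```

Because the provision (actors, purposes, period) is explicit and machine-readable, updating or withdrawing consent amounts to posting a revised resource, which is what makes standardized representations attractive for dynamic consent at scale.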

Nevertheless, a notable aspect of this study was the simulation of real-world scenarios for using the personalized options of the MyHealthHub application. Although the personalized option is an aspect of dynamic consent principles that safeguards individual autonomy in sharing and utilizing personal health data, it has received comparatively less attention than other features, such as withdrawal of consent, contactless communication, and unlimited communication. Participants in this study were not required to carefully read or view any particular page or information; rather, they used the application as usual and completed the assigned tasks by themselves. This reflects our effort to obtain a more realistic depiction of the circumstances in which individuals are expected to operate the application in the future. The fact that not all participants preferred selecting personalized options each time, with some instead specifying conditions for automatic consent, suggests that a variety of realistic perspectives was reflected in this study, at least to some degree. Further research juxtaposing the perspectives of active and passive individuals should provide a more holistic understanding of the effects of dynamic consent protocols on participation rates, as well as whether such protocols yield more favorable outcomes while preserving individual autonomy.

In a data-driven economy, personal health data facilitate progress in digital health ecosystems beyond their potential value as an asset. In digital health environments, dynamic consent is a promising strategy for protecting the autonomy and data sovereignty of individuals regarding their personal health data. The findings of this study indicate that by utilizing dynamic consent principles in the implementation of a digital consent application, individuals can be adequately informed regarding the manner in which their data are shared and used, thereby empowering them to make well-informed decisions. Participants highly valued the ability of digital interfaces to modify individual preferences in response to changing circumstances; this feature should be expanded to its fullest potential. Nevertheless, digital consent has certain challenges, such as apprehensions about the identification process and a lack of establishing trustworthy relationships with individuals. Therefore, while embracing the personalized and flexible advantages of the dynamic consent model, it is imperative to continuously contemplate technological and legal measures to ensure individual rights and privacy in the ever-evolving digital landscape.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

HIPAA: Health Insurance Portability and Accountability Act

GDPR: General Data Protection Regulation

TAM: Technology Acceptance Model

SSO: Single sign-on

DUO: Data Use Ontology

GA4GH: Global Alliance for Genomics and Health

FHIR: Fast Healthcare Interoperability Resources

HL7: Health Level Seven

BPPC: Basic Patient Privacy Consents

IHE: Integrating the Healthcare Enterprise

Graef I, Petročnik T, Tombal T. Conceptualizing Autonomy in an Era of Collective Data Processing: From Theory to Practice. Digit Soc. 2023;2(2):19.

Mirchev M, Mircheva I, Kerekovska A. The academic viewpoint on patient data ownership in the context of big data: scoping review. J Med Internet Res. 2020;22(8):e22214.

Alonso SG, de la Torre Díez I, Zapiraín BG. Predictive, personalized, preventive and participatory (4P) medicine applied to telemedicine and eHealth in the literature. J Med Syst. 2019;43(5):140.

Abouelmehdi K, Beni-Hssane A, Khaloufi H, Saadi M. Big data security and privacy in healthcare: A Review. Procedia Comput Sci. 2017;113:73–80.

Piasecki J, Cheah PY. Ownership of individual-level health data, data sharing, and data governance. BMC Med Ethics. 2022;23(1):104.

Tang C, Plasek JM, Zhu Y, Huang Y. Data sovereigns for the world economy. Humanit Soc Sci Commun. 2020;7(1):1–4.

Opriel S, Fraunhofer I, Skubowius GE, Fraunhofer I, Lamberjohann M. How usage control fosters willingness to share sensitive data in inter-organizational processes of supply chains. In: International Scientific Symposium on Logistics. vol. 91. Bremen: Bundesvereinigung Logistik (BVL) e.V.; 2021.

European Parliament and Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation). 2016.

Cohen IG, Mello MM. HIPAA and protecting health information in the 21st century. Jama. 2018;320(3):231–2.

Graef I, van der Sloot B. Collective data harms at the crossroads of data protection and competition law: Moving beyond individual empowerment. Eur Bus Law Rev. 2022;33(4).

Pugh J. Informed consent, autonomy, and beliefs. In: Autonomy, rationality, and contemporary bioethics [Internet]. Oxford(UK): Oxford University Press; 2020.

Goodyear MD, Krleza-Jeric K, Lemmens T. The Declaration of Helsinki. BMJ. 2007;335:624. https://doi.org/10.1136/bmj.39339.610000.BE .

Sims JM. A brief review of the Belmont report. Dimens Crit Care Nurs. 2010;29(4):173–4.

Koonrungsesomboon N, Laothavorn J, Karbwang J. Understanding of essential elements required in informed consent form among researchers and institutional review board members. Trop Med Health. 2015;43(2):117–22.

Yusof MYPM, Teo CH, Ng CJ. Electronic informed consent criteria for research ethics review: a scoping review. BMC Med Ethics. 2022;23(1):117.

All of Us Research Program Investigators. The “All of Us” research program. N Engl J Med. 2019;381(7):668–76.

Doerr M, Moore S, Barone V, Sutherland S, Bot BM, Suver C, et al. Assessment of the All of Us research program’s informed consent process. AJOB Empir Bioeth. 2021;12(2):72–83.

i CONSENT Consortium, et al. Guidelines for tailoring the informed consent process in clinical studies. Spain FISABIO Generalitat Valencia. 2021;10:1–63.

Kaye J, Whitley EA, Lund D, Morrison M, Teare H, Melham K. Dynamic consent: a patient interface for twenty-first century research networks. Eur J Hum Genet. 2015;23(2):141–6.

Wee R, Henaghan M, Winship I. Ethics: Dynamic consent in the digital age of biology: online initiatives and regulatory considerations. J Prim Health Care. 2013;5(4):341–7.

Steinsbekk KS, Kåre Myskja B, Solberg B. Broad consent versus dynamic consent in biobank research: is passive participation an ethical problem? Eur J Hum Genet. 2013;21(9):897–902.

Budin-Ljøsne I, Teare HJ, Kaye J, Beck S, Bentzen HB, Caenazzo L, et al. Dynamic consent: a potential solution to some of the challenges of modern biomedical research. BMC Med Ethics. 2017;18:1–10.

Wallace SE, Miola J. Adding dynamic consent to a longitudinal cohort study: a qualitative study of EXCEED participant perspectives. BMC Med Ethics. 2021;22:1–10.

Mascalzoni D, Melotti R, Pattaro C, Pramstaller PP, Gögele M, De Grandi A, et al. Ten years of dynamic consent in the CHRIS study: informed consent as a dynamic process. Eur J Hum Genet. 2022;30(12):1391–7.

Appenzeller A, Rode E, Krempel E, Beyerer J. Enabling data sovereignty for patients through digital consent enforcement. In: Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA `20); 2020 Jun 30-Jul 3; Corfu, Greece. New York: Association for Computing Machinery; 2020. ISBN: 978-1-4503-7773-7. https://doi.org/10.1145/3389189 .

Prictor M, Teare HJ, Bell J, Taylor M, Kaye J. Consent for data processing under the General Data Protection Regulation: Could ‘dynamic consent’ be a useful tool for researchers? J Data Prot Priv. 2019;3(1):93–112.

Villalobos-Quesada M. Participative consent: Beyond broad and dynamic consent for health big data resources. Law Hum Genome Rev. 2019;(Extraold.I2019):485–510.  https://bioderecho.eu/sumarios/ .

Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 1989;13(3):319–40.

Lee AR, Koo D, Kim IK, Lee E, Kim HH, Yoo S, et al. Identifying facilitators of and barriers to the adoption of dynamic consent in digital health ecosystems: a scoping review. BMC Med Ethics. 2023;24(1):107.

Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: Toward a unified view. MIS Quart. 2003;27(3):425–78.

Tentama F, Anindita WD. Employability scale: Construct validity and reliability. Int J Sci Technol Res. 2020;9(4):3166–70.

Ahmad S, Zulkurnain N, Khairushalimi F. Assessing the validity and reliability of a measurement model in Structural Equation Modeling (SEM). Brit J Math Comput Sci. 2016;15(3):1–8.

Ab Hamid MR, Sami W, Sidek MM. Discriminant validity assessment: use of Fornell & Larcker criterion versus HTMT criterion. In: Journal of Physics: Conference Series. Vol. 890. Bristol: IOP Publishing; 2017. p. 012163.

Kiger ME, Varpio L. Thematic analysis of qualitative data: AMEE Guide No. 131. Med Teach. 2020;42(8):846–854.

De Clercq J. Single sign-on architectures. In: International Conference on Infrastructure Security. Berlin: Springer; 2002. p. 40–58.

Wong ZSY, Rigby M. Identifying and addressing digital health risks associated with emergency pandemic response: Problem identification, scoping review, and directions toward evidence-based evaluation. Int J Med Inform. 2022;157:104639.

Vedder A, Spajić D. Moral autonomy of patients and legal barriers to a possible duty of health related data sharing. Ethics Inf Technol. 2023;25(1):23.

Verreydt S, Yskout K, Joosen W. Security and privacy requirements for electronic consent: a systematic literature review. ACM Trans Comput Healthc. 2021;2(2):1–24.

Saksena N, Matthan R, Bhan A, Balsari S. Rebooting consent in the digital age: a governance framework for health data exchange. BMJ Glob Health. 2021;6(Suppl 5):e005057.

Appenzeller A, Hornung M, Kadow T, Krempel E, Beyerer J. Sovereign digital consent through privacy impact quantification and dynamic consent. Technologies. 2022;10(1):35.

Mont MC, Sharma V, Pearson S. EnCoRe: dynamic consent, policy enforcement and accountable information sharing within and across organisations. HP Laboratories Technical Report. 2012.

De Sutter E, Zaçe D, Boccia S, Di Pietro ML, Geerts D, Borry P, et al. Implementation of electronic informed consent in biomedical research and stakeholders’ perspectives: systematic review. J Med Internet Res. 2020;22(10):e19129.

Tosoni S, Voruganti I, Lajkosz K, Habal F, Murphy P, Wong RK, et al. The use of personal health information outside the circle of care: consent preferences of patients from an academic health care institution. BMC Med Ethics. 2021;22:1–14.

Kalkman S, van Delden J, Banerjee A, Tyl B, Mostert M, van Thiel G. Patients’ and public views and attitudes towards the sharing of health data for research: a narrative review of the empirical evidence. J Med Ethics. 2022;48(1):3–13.

Cumyn A, Barton A, Dault R, Safa N, Cloutier AM, Ethier JF. Meta-consent for the secondary use of health data within a learning health system: a qualitative study of the public’s perspective. BMC Med Ethics. 2021;22(1):81.

Ploug T, Holm S. Meta consent: a flexible and autonomous way of obtaining informed consent for secondary research. BMJ. 2015;350:1–4.

Haas MA, Teare H, Prictor M, Ceregra G, Vidgen ME, Bunker D, et al. ‘CTRL’: an online, Dynamic Consent and participant engagement platform working towards solving the complexities of consent in genomic research. Eur J Hum Genet. 2021;29(4):687–98.

Haas MA, Madelli EO, Brown R, Prictor M, Boughtwood T. Evaluation of CTRL: a web application for dynamic consent and engagement with individuals involved in a cardiovascular genetic disorders cohort. Eur J Hum Genet. 2024;32(1):61–8.

Dyke SO, Philippakis AA, Rambla De Argila J, Paltoo DN, Luetkemeier ES, Knoppers BM, et al. Consent codes: upholding standard data use conditions. PLoS Genet. 2016;12(1):e1005772.

HL7 International. Resource Consent. 2023. [cited 2024 April 16]. https://www.hl7.org/fhir/consent.html .

IHE International. Basic Patient Privacy Consents (BPPC). 2023. [cited 2024 April 16]. https://profiles.ihe.net/ITI/TF/Volume1/ch-19.html .


Acknowledgements

This research was supported by a Government-wide R&D Fund project for infectious disease research (GFID), Republic of Korea (grant number: HG22C0024, KH124685).


Author information

Authors and Affiliations

Office of eHealth Research and Business, Seoul National University Bundang Hospital, 172, Dolma-ro, Seongnam-si, 13605, Gyeonggi-do, Republic of Korea

Ah Ra Lee, Dongjun Koo, Sooyoung Yoo & Ho-Young Lee

Interdisciplinary Program in Bioengineering, Seoul National University, 1, Gwanak-ro, Seoul, 08826, Seoul, Republic of Korea

Dongjun Koo

School of Computer Science & Engineering, College of IT Engineering, Kyungpook National University, 80, Daehak-ro, Daegu, 41566, Daegu, Republic of Korea

College of Nursing, Research Institute of Nursing Science, Kyungpook National University, 680, Gukchaebosang-ro, Daegu, 41944, Daegu, Republic of Korea

Department of Nuclear Medicine, Seoul National University Bundang Hospital, 172, Dolma-ro, Seongnam-si, 13605, Gyeonggi-do, Republic of Korea

Ho-Young Lee


Contributions

AL and IK conceptualized the design of the study. AL and DK performed the system implementation, data collection, analysis, and interpretation of the findings. AL wrote the original draft. EL, SY, IK, and HL provided comments. IK and HL reviewed the final version. All authors reviewed and approved the final version of the manuscript.

Corresponding author

Correspondence to Ho-Young Lee.

Ethics declarations

Ethics approval and consent to participate

The Institutional Review Board of Kyungpook National University (KNU) approved this research (KNU IRB No. KNU-2021-0158). Before the study began, the objectives and scope of the research were explained to all participants and written informed consent was obtained. Participants were informed of their right to withdraw from the study at any time, and their participation was entirely voluntary. No personally identifiable information was gathered during data collection, and no uniquely identifiable information was included in the presentation of the findings. All methods were conducted in accordance with the relevant guidelines and regulations.

Consent for publication

All participants provided informed consent to publish their contributing data.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Questionnaire.

Additional file 2: Demographic characteristics of the study participants.

Additional file 3: Validation results for the questionnaire items.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Lee, A.R., Koo, D., Kim, I.K. et al. Opportunities and challenges of a dynamic consent-based application: personalized options for personal health data sharing and utilization. BMC Med Ethics 25, 92 (2024). https://doi.org/10.1186/s12910-024-01091-3


Received : 16 April 2024

Accepted : 21 August 2024

Published : 31 August 2024

DOI : https://doi.org/10.1186/s12910-024-01091-3


  • Dynamic consent
  • Data sovereignty
  • Personalized
  • Personal health data

BMC Medical Ethics

ISSN: 1472-6939


Do Corporate Ethics Enhance Financial Analysts’ Behavior and Performance?


1. Introduction

2. Literature Review and Hypotheses Development

2.1. Theories of Corporate Ethics

2.2. Corporate Ethics and Transparency

2.3. Corporate Ethics and Financial Analysts’ Behavior

3.1. Empirical Model

3.2. Variables

3.2.1. Dependent Variables

3.2.2. Independent Variable

3.2.3. Control Variables

4.1. Sample and Data

4.2. Descriptive Statistics

4.2.1. Descriptive Statistics of the Sample

4.2.2. Descriptive Statistics of Variables

4.3. Correlation Matrix

4.4. Multivariate Analyses

4.5. Additional Tests

5. Discussion and Conclusions

Author Contributions

Data Availability Statement

Acknowledgments

Conflicts of Interest

Notes

1. ( ), which investigates the impact of ESG performance on the dispersion of financial analyst forecasts, both conceptually and empirically, as detailed in the literature review and methods sections below. In a recent literature synthesis about earnings quality, ethics, and CSR, ( ) propose a conceptual framework that underlines ethics as a key antecedent of CSR/ESG and financial reporting quality, in line with prior research. In addition, our proxy for “corporate ethics” is unique and distinctly differs from commonly used ethics measures, as well as ESG and CSR ratings. In this study, we use Sustainalytics’ business ethics incidents legacy score, which depicts a company’s actual involvement in ethical misconduct and controversies.
  • Adhikari, Binay K. 2016. Causal effect of analyst following on corporate social responsibility. Journal of Corporate Finance 41: 201–16. [ Google Scholar ] [ CrossRef ]
  • Aftab, Junaib, Nabila Abid, Huma Sarwar, and Monica Veneziani. 2022. Environmental ethics, green innovation, and sustainable performance: Exploring the role of environmental leadership and environmental strategy. Journal of Cleaner Production 378: 134639. [ Google Scholar ] [ CrossRef ]
  • Alazzani, Abdulsamad, Wan Nordin Wan-Hussin, Michael Jones, and Ahmed Al-Hadi. 2021. ESG reporting and analysts’ recommendations in GCC: The Moderation role of royal family directors. Journal of Risk and Financial Management 14: 72. [ Google Scholar ] [ CrossRef ]
  • Armour, John, Daniel Awrey, Paul Lyndon Davies, Luca Enriques, Jeffrey N. Gordon, Colin Mayer, and Jennifer Payne. 2016. Principles of Financial Regulation , Oxford: Oxford University Press.
  • Armstrong, Christopher S., John E. Core, and Wayne R. Guay. 2014. Do Independent Directors Cause Improvements in Firm Transparency. Journal of Financial Economics 113: 383–403. [ Google Scholar ] [ CrossRef ]
  • Balakrishnan, Karthik, Jennfier Blouin, and Wayne Guay. 2019. Tax aggressiveness and corporate transparency. The Accounting Review 94: 45–69. [ Google Scholar ] [ CrossRef ]
  • Baskaran, Shathees, Nalini Nedunselian, Chun Howe Ng, Nomahaza Mahadi, and Siti Zaleha Abdul Rasid. 2020. Earnings management: A strategic adaptation or deliberate manipulation? Journal of Financial Crime 27: 369–86. [ Google Scholar ] [ CrossRef ]
  • Beccalli, Elena, Peter Miller, and Ted O’leary. 2015. How analysts process information: Technical and financial disclosures in the microprocessor industry. European Accounting Review 24: 519–49. [ Google Scholar ] [ CrossRef ]
  • Beyer, Anne, Daniel Cohen, Thomas Z. Lis, and Beverly R. Walther. 2010. The Financial Reporting Environment: Review of the Recent Literature. Journal of Accounting and Economics 50: 296–343. [ Google Scholar ] [ CrossRef ]
  • Bhat, Gauri, Ole-Kristian Hope, and Tony Kang. 2006. Does corporate governance transparency affect the accuracy of analyst forecasts? Accounting & Finance 46: 715–32. [ Google Scholar ]
  • Bhushan, Ravi. 1989. Firm characteristics and analyst following. Journal of Accounting and Economics 11: 255–74. [ Google Scholar ] [ CrossRef ]
  • Biggerstaff, Lee, David C. Cicero, and Andy Puckett. 2015. Suspect CEOs, unethical culture, and corporate misbehavior. Journal of Financial Economics 117: 98–121. [ Google Scholar ] [ CrossRef ]
  • Bonini, Stefano, Laura Zanetti, Roberto Bianchini, and Antonio Salvi. 2010. Target price accuracy in equity research. Journal of Business Finance & Accounting 37: 1177–217. [ Google Scholar ]
  • Bouteska, Ahmed, and Mehdi Mili. 2022. Does corporate governance affect financial analysts’ stock recommendations, target prices accuracy and earnings forecast characteristics? An empirical investigation of US companies. Empirical Economics 63: 2125–71. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Bradley, Daniel, Connie X. Mao, and Chi Zhang. 2021. Does analyst coverage affect workplace safety? Management Science 68: 3175–973. [ Google Scholar ] [ CrossRef ]
  • Bradshaw, Mark T., Brandon Lock, Xue Wang, and Dexin Zhou. 2021. Soft information in the financial press and analyst revisions. The Accounting Review 96: 107–32. [ Google Scholar ] [ CrossRef ]
  • Brauer, Matthias, and Margarethe Wiersema. 2018. Analyzing analyst research: A review of past coverage and recommendations for future research. Journal of Management 44: 218–48. [ Google Scholar ] [ CrossRef ]
  • Bushman, Robert M., Joseph D. Piotroski, and Abbie J. Smith. 2004. What determines corporate transparency? Journal of Accounting Research 42: 207–52. [ Google Scholar ] [ CrossRef ]
  • Byard, Donal, Ying Li, and Joseph Weintrop. 2006. Corporate Governance and the Quality of Financial Analysts’ Information. Journal of Accounting and Public Policy 25: 609–25. [ Google Scholar ] [ CrossRef ]
  • Cao, Sheng, Shuang Xue, and Hongjun Zhu. 2022. Analysts’ knowledge structure and performance. Accounting & Finance 62: 4727–53. [ Google Scholar ]
  • Carroll, Archie B. 2000. Ethical challenges for business in the new millennium: Corporate social responsibility and models of management morality. Business Ethics Quarterly 10: 33–42. [ Google Scholar ] [ CrossRef ]
  • Chen, Tai-Yuan, Sudipto Dasgupta, and Yangxin Yu. 2014. Transparency and Financing Choices of Family Firms. Journal of Financial and Quantitative Analysis 49: 381–408. [ Google Scholar ] [ CrossRef ]
  • Cheung, Kwok Yip, and Chung Yee Lai. 2023. The impacts of business ethics and diversity on ESG disclosure: Evidence from Hong Kong. Journal of Corporate Accounting & Finance 34: 208–21. [ Google Scholar ]
  • Chiang, Hsiang-tsai, and Feng Chia. 2005. Analyst’s financial forecast accuracy and corporate transparency. Paper presented at Allied Academies International Conference, Academy of Accounting and Financial Studies, Proceedings, Memphis, TN, USA, April 13–16, vol. 10, pp. 9–14. [ Google Scholar ]
  • Chih, Hsiang-Lin, Chung-Hua Shen, and Feng-Ching Kang. 2008. Corporate Social Responsibility, Investor Protection, and Earnings Management: Some International Evidence. Journal of Business Ethics 79: 179–98. [ Google Scholar ] [ CrossRef ]
  • Choi, Tae Hee, and Jinhan Pae. 2011. Business Ethics and Financial Reporting Quality: Evidence from Korea. Journal of Business Ethics 103: 403–27. [ Google Scholar ] [ CrossRef ]
  • Chua, Frances, and Asheq Rahman. 2011. Institutional Pressures and Ethical Reckoning by Business Corporation. Journal of Business Ethics 98: 307–29. [ Google Scholar ] [ CrossRef ]
  • Clarke, Jonathan, Stephen P. Ferris, Narayanan Jayaraman, and Jinsoo Lee. 2006. Are analyst recommendations biased? Evidence from corporate bankruptcies. Journal of Financial and Quantitative Analysis 41: 169–96. [ Google Scholar ] [ CrossRef ]
  • Cramer, Jacqueline. 2003. Learning about Corporate Social Responsibility: The Dutch Experience, National Initiative for Sustainable Development (NIDO) . Amsterdam: IOS Press. [ Google Scholar ]
  • das Neves, João César, and Antonino Vaccaro. 2013. Corporate Transparency: A Perspective from Thomas Aquinas’ Summa Theologiae. Journal of Business Ethics 113: 639–48. [ Google Scholar ] [ CrossRef ]
  • Dhaliwal, Dan, Suresh Radhakrishnan, Albert Tsang, and Yong George Yang. 2012. Nonfinancial Disclosure and Analyst Forecast Accuracy: International Evidence on Corporate Social Responsibility Disclosure. The Accounting Review 87: 723–59. [ Google Scholar ] [ CrossRef ]
  • Duong, Hong Kim, Giorgio Gotti, Michael Stein, and Anthony Chen. 2022. Code of ethics quality and audit fees. Journal of Accounting and Public Policy 41: 107001. [ Google Scholar ] [ CrossRef ]
  • Dyck, Alexander, Adair Morse, and Luigi Zingales. 2010. Who blows the whistle on corporate fraud? Journal of Finance 65: 2213–53. [ Google Scholar ] [ CrossRef ]
  • Elayan, Fayez A., Jingyu Li, Zhefeng Frank Liu, Thomas O. Meyer, and Sandra Felton. 2016. Changes in the Covalence Ethical Quote, Financial Performance and Financial Report Quality. Journal of Business Ethics 134: 369–95. [ Google Scholar ] [ CrossRef ]
  • European Commission. 2001. Promoting a European Framework for Corporate Social Responsibility. Brussels. Available online: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2001:0366:FIN:EN:PDF%20 (accessed on 23 August 2024).
  • Fang, Lily H., and Ayako Yasuda. 2014. Are stars’ opinions worth more? The relation between analyst reputation and recommendation values. Journal of Financial Services Research 46: 235–69. [ Google Scholar ] [ CrossRef ]
  • Felo, Andrew Jean. 2000. Ethic Programs, Board Oversight, and Perceived Disclosure Credibility: Was the Treadway Commission Correct about Ethics and Financial Reporting? Research on Accounting Ethics 7: 157–77. [ Google Scholar ]
  • Felo, Andrew Jean. 2007. Board Oversight of Corporate Ethics Programs and Disclosure Transparency. Accounting and the Public Interest 7: 1–26. [ Google Scholar ] [ CrossRef ]
  • Firth, Michael, Kailong Wang, and Sonia M. L. Wong. 2015. Corporate Transparency and the Impact of Investor Sentiment on Stock Prices. Management Science 61: 1630–47. [ Google Scholar ] [ CrossRef ]
  • Fischer, Josie. 2004. Social responsibility and ethics: Clarifying the concepts. Journal of Business Ethics 52: 381–90. [ Google Scholar ] [ CrossRef ]
  • Fukukawa, Kyoko, John M. T. Balmer, and Edmund R. Gray. 2007. Mapping the interface between corporate identity, ethics and corporate social responsibility. Journal of Business Ethics 76: 1–5. [ Google Scholar ] [ CrossRef ]
  • Fung, Archon, Mary Graham, and David Weil. 2007. Full Disclosure: The Perils and Promise of Transparency . New York: Cambridge University Press. [ Google Scholar ]
  • Furlotti, Katia, and Tatiana Mazza. 2022. Corporate social responsibility versus business ethics: Analysis of employee-related policies. Social Responsibility Journal 20: 20–37. [ Google Scholar ] [ CrossRef ]
  • Gino, Francesca, and Lamar Pierce. 2009. The abundance effect: Unethical behavior in the presence of wealth. Organizational Behavior and Human Decision Processes 109: 142–55. [ Google Scholar ] [ CrossRef ]
  • Graafland, Johan. 2022. Ethics and Economics: An Introduction to Free Markets, Equality and Happiness . Abingdon: Taylor & Francis, p. 270. [ Google Scholar ]
  • Gu, Lifeng, and Dirk Hackbarth. 2013. Governance and Equity Prices: Does Transparency Matter? Review of Finance 17: 1989–2033. [ Google Scholar ] [ CrossRef ]
  • Gul, Ferdinand A., Marion Hutchinson, and Karen M. Y. Lai. 2013. Gender-Diverse Boards and Properties of Analyst Earnings Forecasts. Accounting Horizons 27: 511–38. [ Google Scholar ] [ CrossRef ]
  • He, Jie Jack, and Xuan Tian. 2013. The dark side of analyst coverage: The case of innovation. Journal of Financial Economics 109: 856–78. [ Google Scholar ] [ CrossRef ]
  • Healy, Paul M., Amy P. Hutton, and Krishna G. Palepu. 1999. Stock Performance and Intermediation Changes Surrounding Sustained Increases in Disclosure. Contemporary Accounting Research 16: 485–520. [ Google Scholar ] [ CrossRef ]
  • Herrmann, Don, and Wayne B. Thomas. 2005. Rounding of analyst forecasts. The Accounting Review 80: 805–23. [ Google Scholar ] [ CrossRef ]
  • Hess, David. 2007. Social Reporting and New Governance Regulation: The Prospects of Achieving Corporate Accountability through Transparency. Business Ethics Quarterly 17: 453–76. [ Google Scholar ] [ CrossRef ]
  • Hong, Yongtao, and Margaret L. Andersen. 2011. The Relationship between Corporate Social Responsibility and Earnings Management: An Exploratory Study. Journal of Business Ethics 104: 461–71. [ Google Scholar ] [ CrossRef ]
  • Houqe, Muhammad Nurul, Tony van Zijl, Keitha Dunstan, and A. Waresul Karim. 2015. Corporate Ethics and Auditor Choice—International Evidence. Research in Accounting Regulation 27: 57–65. [ Google Scholar ] [ CrossRef ]
  • Huang, Allen H., An-Ping Lin, and Amy Y. Zang. 2022. Cross-industry information sharing among colleagues and analyst research. Journal of Accounting and Economics 74: 101496. [ Google Scholar ] [ CrossRef ]
  • Hui, Kai Wai, and Steven R. Matsunaga. 2015. Are CEOs and CFOs rewarded for disclosure quality? The Accounting Review 90: 1013–47. [ Google Scholar ] [ CrossRef ]
  • Hussain, Nazim, Isabel-María García-Sánchez, Sana Akbar Khan, Zaheer Khan, and Jennifer Martínez-Ferrero. 2023. Connecting the dots: Do financial analysts help corporate boards improve corporate social responsibility? British Journal of Management 34: 363–89. [ Google Scholar ] [ CrossRef ]
  • Hussain, Simon, and Chen Su. 2024. The impact of analyst, CEO, and firm reputations on investors in the UK and Japanese stock markets. British Journal of Management 35: 259–80. [ Google Scholar ] [ CrossRef ]
  • Imam, Shahed, and Crawford Spence. 2016. Context, not predictions: A field study of financial analysts. Accounting, Auditing & Accountability Journal 29: 226–47. [ Google Scholar ]
  • Jegadeesh, Narasimhan, Joonghyuk Kim, Susan D. Krische, and Charles Lee. 2004. Analyzing the analysts: When do recommendations add value? Journal of Finance 59: 1083–124. [ Google Scholar ] [ CrossRef ]
  • Jensen, Michael C., and William H. Meckling. 1976. Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure. Journal of Financial Economics 3: 305–60. [ Google Scholar ] [ CrossRef ]
  • Jiang, Danling, Alok Kumar, and Kelvin K. F. Law. 2016. Political contributions and analyst behavior. Review of Accounting Studies 21: 37–88. [ Google Scholar ]
  • Jones, Jennifer. 1991. Earnings Management during Import Relief Investigation. Journal of Accounting Research 29: 193–228. [ Google Scholar ] [ CrossRef ]
  • Kaptein, Muel. 2010. The Ethics of Organizations: A Longitudinal Study of the U.S. Working Population. Journal of Business Ethics 92: 601–18. [ Google Scholar ] [ CrossRef ]
  • Kerl, Alexander G. 2011. Target price accuracy. Business Research 4: 74–96. [ Google Scholar ] [ CrossRef ]
  • Kim, Yongtae, Myung Seok Park, and Benson Wier. 2012. Is Earnings Quality Associated with Corporate Social Responsibility? The Accounting Review 87: 761–96. [ Google Scholar ] [ CrossRef ]
  • Krishnan, Gopal V., and Linda M. Parsons. 2008. Getting to the bottom line: An exploration of gender and earnings quality. Journal of Business Ethics 78: 65–76. [ Google Scholar ] [ CrossRef ]
  • Labelle, Real, Rim Makni Gargouri, and Claude Francoeur. 2010. Ethics, Diversity Management and Financial Reporting Quality. Journal of Business Ethics 93: 335–53. [ Google Scholar ] [ CrossRef ]
  • Lang, Mark, and Mark Maffett. 2011. Transparency and Liquidity Uncertainty in Crisis Periods. Journal of Accounting and Economics 52: 101–25. [ Google Scholar ] [ CrossRef ]
  • Lang, Mark, Karl V. Lins, and Darius P. Miller. 2003. ADRs, analysts, and accuracy: Does cross listing in the United States improve a firm’s information environment and increase market value? Journal of Accounting Research 41: 317–45. [ Google Scholar ] [ CrossRef ]
  • Lang, Mark, Karl V. Lins, and Mark Maffett. 2012. Transparency, Liquidity, and Valuation: International Evidence on When Transparency Matters Most. Journal of Accounting Research 50: 729–74. [ Google Scholar ] [ CrossRef ]
  • Lang, Mark, and Russell Jean Lundholm. 1996. Corporate Disclosures Policy and Analyst Behavior. The Accounting Review 71: 467–92. [ Google Scholar ]
  • Liang, Dawei, Yukun Pan, Qianqian Du, and Ling Zhu. 2022. The information content of analysts’ textual reports and stock returns: Evidence from China. Finance Research Letters 46: 102817. [ Google Scholar ] [ CrossRef ]
  • Linthicum, Cheryl, Austin L. Reitenga, and Juan Manuel Sanchez. 2010. Social responsibility and corporate reputation: The case of the Arthur Andersen Enron audit failure. Journal of Accounting and Public Policy 29: 160–76. [ Google Scholar ] [ CrossRef ]
  • Loe, Terry W., Linda Ferrell, and Phylis Mansfield. 2000. A Review of Empirical Studies Assessing Ethical Decision Making in Business. Journal of Business Ethics 25: 185–204. [ Google Scholar ] [ CrossRef ]
  • Luo, Kun, and Sirui Wu. 2022. Corporate sustainability and analysts’ earnings forecast accuracy: Evidence from environmental, social and governance ratings. Corporate Social Responsibility and Environmental Management 29: 1465–81. [ Google Scholar ] [ CrossRef ]
  • Luo, Xueming, Heli Wang, Sascha Raithel, and Qinqin Zheng. 2015. Corporate social performance, analyst stock recommendations, and firm future returns. Strategic Management Journal 36: 123–36. [ Google Scholar ] [ CrossRef ]
  • Mao, Zhihong, Siyang Wang, and Yu-En Lin. 2024. ESG, ESG rating divergence and earnings management: Evidence from China. Corporate Social Responsibility and Environmental Management 31: 3328–47. [ Google Scholar ] [ CrossRef ]
  • McShane, Steven L., and Mary Ann Von Glinow. 2021. Organizational Behavior . New York: McGraw-Hill Higher Education. [ Google Scholar ]
  • Nugroho, Deinera, Yi Hsu, Christian Hartauer, and Andreas Hartauer. 2024. Investigating the Interconnection between Environmental, Social, and Governance (ESG), and Corporate Social Responsibility (CSR) Strategies: An Examination of the Influence on Consumer Behavior. Sustainability 16: 614. [ Google Scholar ] [ CrossRef ]
  • Palazzo, Guido. 2007. Organizational Integrity—Understanding the Dimensions of Ethical and Unethical Behavior in Corporations. In Corporate Governance and Corporate Ethics . Edited by Walther Ch Zimmerli, Markus Holzinger and Klaus Richter. Berlin and Heidelberg: Springer, pp. 113–28. [ Google Scholar ]
  • Parris, Denise Linda, Jennifer L. Dapko, Richard Wade Arnold, and Danny Arnold. 2016. Exploring transparency: A new framework for responsible business management. Management Decision 54: 222–47. [ Google Scholar ] [ CrossRef ]
  • Peasnell, Kenneth, Sayjda Talib, and Steven Young. 2011. The fragile returns to investor relations: Evidence from a period of declining market confidence. Accounting and Business Research 41: 69–90. [ Google Scholar ] [ CrossRef ]
  • Qian, Cuili, Louise Y. Lu, and Yangxin Yu. 2019. Financial analyst coverage and corporate social performance: Evidence from natural experiments. Strategic Management Journal 40: 2271–86. [ Google Scholar ] [ CrossRef ]
  • Ramnath, Sundaresh, Steve Rock, and Philip Shane. 2008. The financial analyst forecasting literature: A taxonomy with suggestions for further research. International Journal of Forecasting 24: 34–75. [ Google Scholar ] [ CrossRef ]
Panel A. Frequency of observations per year

| Year | Number | Percentage |
| --- | --- | --- |
| 2010 | 688 | 13.04 |
| 2011 | 743 | 14.08 |
| 2012 | 780 | 14.78 |
| 2013 | 780 | 14.78 |
| 2014 | 780 | 14.78 |
| 2015 | 767 | 14.54 |
| 2016 | 738 | 13.99 |
| Total | 5276 | 100.00 |
Panel B. Frequency of observations by industry

| Industry | Frequency | Percentage |
| --- | --- | --- |
| Agriculture and Forestry (01–09) | 14 | 0.27 |
| Mining (10–14) | 321 | 6.08 |
| Construction (15–17) | 83 | 1.57 |
| Manufacturing (20–39) | 1880 | 35.63 |
| Telecommunication (48) | 121 | 2.29 |
| Wholesale (50–51) | 145 | 2.75 |
| Retail (52–59) | 403 | 7.64 |
| Financial (60–67) | 1089 | 20.64 |
| Services (70–88) | 702 | 13.31 |
| Others | 518 | 9.82 |
| Total | 5276 | 100.00 |
| Variable | Mean | SD | Min | Median | Max |
| --- | --- | --- | --- | --- | --- |
| PRECISION | −0.082 | 0.225 | −1.714 | −0.02 | 0 |
| CONSENSUS | −0.049 | 0.128 | −1 | −0.014 | 0 |
| ANAL_FOL | 15.475 | 8.204 | 1 | 15 | 38 |
| ETHIC | 95.694 | 11.734 | 0 | 100 | 100 |
| TA (Total Assets) | 39,214.72 | 163,000 | 227.79 | 27,752.61 | 42,573,126 |
| SIZE (log(TA)) | 9.155 | 1.402 | 6.503 | 8.956 | 13.502 |
| ROA | 0.053 | 0.065 | −0.195 | 0.047 | 0.241 |
| MTB | 3.348 | 5.107 | −18.505 | 2.439 | 31.961 |
| LEV | 0.619 | 0.217 | 0.126 | 0.612 | 1.296 |
| FHORIZON | 50.042 | 11.692 | 8 | 48 | 75 |
| GOVERNANCE | 64.356 | 9.048 | 37 | 65 | 95 |
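Each row of the descriptive-statistics table is the standard five-number profile of a panel variable. As a minimal sketch of how such a row is produced with the standard library — the variable name and values below are illustrative, not the study's sample:

```python
from statistics import mean, stdev, median

def describe(values):
    """Return the summary statistics reported in the descriptive table:
    mean, standard deviation, minimum, median, and maximum."""
    return {
        "mean": mean(values),
        "sd": stdev(values),   # sample standard deviation
        "min": min(values),
        "median": median(values),
        "max": max(values),
    }

# Toy ROA-like series (hypothetical values for illustration only)
roa = [0.04, 0.05, -0.02, 0.07, 0.12]
stats = describe(roa)
```

In practice each column of the firm-year panel would be passed through the same function to build the full table.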
| Variables | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) PRECISION | 1.000 | | | | | | | | | | |
| (2) CONSENSUS | 0.636 | 1.000 | | | | | | | | | |
| (3) ANAL_FOL | 0.182 | 0.115 | 1.000 | | | | | | | | |
| (4) ETHIC | 0.007 | 0.018 | −0.208 | 1.000 | | | | | | | |
| (5) SIZE | 0.043 | 0.029 | 0.342 | −0.345 | 1.000 | | | | | | |
| (6) ROA | 0.297 | 0.335 | 0.149 | 0.026 | −0.189 | 1.000 | | | | | |
| (7) LEV | −0.049 | −0.051 | −0.066 | −0.079 | 0.359 | −0.274 | 1.000 | | | | |
| (8) MTB | 0.080 | 0.070 | 0.129 | 0.005 | −0.092 | 0.202 | −0.044 | 1.000 | | | |
| (9) FHORIZON | −0.163 | −0.151 | −0.294 | 0.114 | −0.299 | −0.161 | −0.072 | −0.085 | 1.000 | | |
| (10) GOVERNANCE | 0.065 | 0.066 | 0.053 | 0.156 | −0.097 | 0.079 | −0.040 | 0.068 | 0.092 | 1.000 | |
| (11) LOSS | −0.329 | −0.388 | −0.044 | −0.011 | −0.095 | −0.578 | 0.073 | −0.043 | 0.173 | 0.012 | 1.000 |
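Each off-diagonal entry in the matrix above is a pairwise Pearson correlation. A minimal pure-Python sketch of that computation (the full matrix would simply loop this function over all variable pairs):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near ±1 indicate a strong linear association; the moderate magnitudes in the table (largest off-diagonal ≈ 0.64) are one informal sign that multicollinearity is not severe, consistent with the low VIFs reported below.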
| | (1) ANAL_FOL | (2) PRECISION | (3) CONSENSUS | (4) TRANSIND |
| --- | --- | --- | --- | --- |
| ETHIC | 0.0009 (0.060) * | 0.0008 (0.006) *** | 0.0004 (0.026) ** | 0.0026 (0.011) ** |
| SIZE | 0.1855 (0.000) *** | 0.0087 (0.008) *** | 0.0055 (0.001) *** | 0.0593 (0.000) *** |
| GOVERNANCE | −0.0006 (0.517) | 0.0002 (0.658) | 0.0004 (0.074) * | 0.0021 (0.139) |
| MTB | 0.0077 (0.000) *** | 0.0007 (0.069) * | 0.0000 (0.954) | 0.0024 (0.070) * |
| LEV | −0.2767 (0.000) *** | −0.0159 (0.386) | −0.0069 (0.497) | −0.1044 (0.092) * |
| FHORIZON | −0.0072 (0.000) *** | −0.0015 (0.000) *** | −0.0005 (0.010) ** | −0.0047 (0.000) *** |
| LOSS | 0.1419 (0.000) *** | −0.1758 (0.000) *** | −0.1172 (0.000) *** | −0.7485 (0.000) *** |
| ROA | 1.2020 (0.000) *** | 0.4128 (0.000) *** | 0.2816 (0.000) *** | 1.8606 (0.000) *** |
| _cons | 1.4129 (0.000) *** | −0.1152 (0.000) *** | −0.1126 (0.027) ** | −0.4801 (0.007) *** |
| Year fixed effects | Yes | Yes | Yes | Yes |
| Industry fixed effects | Yes | Yes | Yes | Yes |
| P | 0.000 *** | 0.000 *** | 0.000 *** | 0.000 *** |
| R² | | 0.230 | 0.190 | 0.261 |
| N | 5246 | 5155 | 5243 | 5153 |
| VIF | 1.669 | 1.669 | 1.669 | 1.669 |

p-values in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01.
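The year and industry fixed effects in the specifications above can be implemented as one-hot dummy variables in a pooled OLS. A minimal numpy sketch under that assumption — a single regressor and one grouping variable, hypothetical data; the paper's actual estimator and variable set are richer:

```python
import numpy as np

def ols_with_dummies(y, x, groups):
    """Pooled OLS of y on x plus one-hot dummies for `groups`
    (e.g., year or industry), mimicking fixed effects via dummies."""
    levels = sorted(set(groups))
    # Drop the first level to avoid the dummy-variable trap
    # (an intercept column is already included).
    dummies = np.array([[1.0 if g == lv else 0.0 for lv in levels[1:]]
                        for g in groups])
    X = np.column_stack([np.ones(len(y)), np.asarray(x, float), dummies])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return beta  # [intercept, slope on x, dummy coefficients...]

# Hypothetical panel: group "b" shifts the outcome up by 1, slope is 2
beta = ols_with_dummies(y=[0, 2, 4, 1, 3, 5],
                        x=[0, 1, 2, 0, 1, 2],
                        groups=["a", "a", "a", "b", "b", "b"])
```

The dummy coefficients absorb level differences across years or industries, so the slope on the regressor of interest (ETHIC in the table) is identified from within-group variation around those levels.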
| | (1) ANAL_FOL | (2) PRECISION | (3) CONSENSUS | (4) TRANSIND |
| --- | --- | --- | --- | --- |
| L.ETHIC | 0.0010 (0.035) ** | 0.0012 (0.003) *** | 0.0004 (0.045) ** | 0.0037 (0.005) *** |
| L.SIZE | 0.1857 (0.000) *** | 0.0083 (0.032) ** | 0.0044 (0.023) ** | 0.0554 (0.000) *** |
| L.GOVERNANCE | −0.0011 (0.227) | 0.0005 (0.292) | 0.0004 (0.102) | 0.0023 (0.141) |
| L.ROA | 1.2069 (0.000) *** | 0.3379 (0.000) *** | 0.2145 (0.000) *** | 1.4886 (0.000) *** |
| L.MTB | 0.0087 (0.000) *** | 0.0010 (0.035) ** | 0.0000 (0.940) | 0.0029 (0.073) * |
| L.LEV | −0.2849 (0.000) *** | −0.0039 (0.847) | 0.0037 (0.734) | −0.0450 (0.513) |
| L.FHORIZON | −0.0066 (0.000) *** | −0.0020 (0.000) *** | −0.0010 (0.000) *** | −0.0077 (0.000) *** |
| L.LOSS | 0.0975 (0.003) *** | −0.1248 (0.000) *** | −0.0810 (0.000) *** | −0.5242 (0.000) *** |
| _cons | 1.5919 (0.000) *** | −0.1209 (0.048) ** | −0.0816 (0.007) *** | −0.3686 (0.064) * |
| Year fixed effects | Yes | Yes | Yes | Yes |
| Industry fixed effects | Yes | Yes | Yes | Yes |
| P | 0.000 *** | 0.000 *** | 0.000 *** | 0.000 *** |
| R² | | 0.159 | 0.180 | 0.210 |
| N | 4475 | 4469 | 4411 | 4405 |
| VIF | 1.65 | 1.65 | 1.65 | 1.65 |

p-values in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01.
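The L. prefix in this table denotes one-year-lagged regressors, a common way to mitigate reverse causality. A minimal pure-Python sketch of building such lags in a firm-year panel — the (firm, year, value) tuple layout is a hypothetical representation, not the study's data format:

```python
def lag_within_firm(panel):
    """Given rows of (firm, year, value), return a dict mapping
    (firm, year) -> prior-year value (the L.-prefixed regressor).
    Lags are only created for consecutive years within the same firm."""
    by_firm = {}
    for firm, year, value in sorted(panel, key=lambda r: (r[0], r[1])):
        by_firm.setdefault(firm, []).append((year, value))
    lagged = {}
    for firm, series in by_firm.items():
        for (y_prev, v_prev), (y_cur, _) in zip(series, series[1:]):
            if y_cur == y_prev + 1:  # skip gaps in the firm's time series
                lagged[(firm, y_cur)] = v_prev
    return lagged
```

Because the first observation of each firm has no lag, the sample shrinks (N drops from about 5200 in the contemporaneous models to about 4400 here), which matches the N row of the table.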
| | (1) ANAL_FOL | (2) CONSENSUS | (3) PRECISION | (4) TRANSIND |
| --- | --- | --- | --- | --- |
| ETHICres | 0.158 (0.447) | | | |
| ETHIC | 0.015 (0.026) * | 0.0004 (0.024) ** | 0.0008 (0.005) *** | 0.0026 (0.010) *** |
| GOVERNANCE | 0.042 (0.001) *** | 0.0004 (0.072) * | 0.0002 (0.657) | 0.0021 (0.136) |
| SIZE | 1.982 (0.000) *** | 0.0055 (0.001) *** | 0.0087 (0.008) *** | 0.0593 (0.000) *** |
| ROA | −2.12 (0.063) * | 0.2686 (0.000) *** | 0.4128 (0.000) *** | 1.8607 (0.000) *** |
| MTB | 0.029 (0.014) *** | 0.0000 (0.954) | 0.0007 (0.066) * | 0.0024 (0.068) * |
| LEV | −1.229 (0.057) *** | −0.0069 (0.493) | −0.0159 (0.382) | −0.1044 (0.089) * |
| FHORIZON | −0.073 (0.000) *** | −0.0005 (0.009) *** | −0.0015 (0.000) *** | −0.0047 (0.000) *** |
| LOSS | 0.1420 (0.000) *** | −0.1172 (0.000) *** | −0.1758 (0.000) *** | −0.7485 (0.000) *** |
| _cons | 1.4088 (0.000) *** | −0.1151 (0.000) *** | −0.1126 (0.025) ** | −0.4793 (0.006) *** |
| P | 0.000 *** | 0.000 *** | 0.000 *** | 0.000 *** |
| R² | 0.03 | 0.230 | 0.189 | 0.261 |
| N | 5245 | 5155 | 5243 | 5153 |

p-values in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01.
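The ETHICres term in column (1) is consistent with a residual-inclusion (control-function) check for endogeneity: regress the suspected endogenous variable on the exogenous controls in a first stage, then add the residual to the main model; an insignificant residual coefficient (p = 0.447 here) supports treating ETHIC as exogenous. A minimal numpy sketch of the first stage, under that assumed interpretation — data and variable roles below are hypothetical:

```python
import numpy as np

def first_stage_residuals(endog, exog):
    """First-stage OLS of a suspected endogenous regressor on the
    exogenous controls; the residual (ETHICres in the table) is then
    included in the second-stage model as a control-function term."""
    X = np.column_stack([np.ones(len(endog)), np.asarray(exog, float)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(endog, float), rcond=None)
    return np.asarray(endog, float) - X @ beta
```

If the residual's coefficient in the second stage is statistically indistinguishable from zero, the unexplained part of the regressor carries no information about the outcome beyond the controls, which is the evidence of exogeneity the table reports.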

Share and Cite

Ben Hassine, S.; Francoeur, C. Do Corporate Ethics Enhance Financial Analysts' Behavior and Performance? J. Risk Financial Manag. 2024, 17, 396. https://doi.org/10.3390/jrfm17090396
