Informed Consent in Research – Types, Templates and Examples

Informed Consent in Research

Informed consent is a process of communication between a researcher and a potential participant in which the researcher provides adequate information about the study, its risks and benefits, and the participant voluntarily agrees to participate. It is a cornerstone of ethical research involving human subjects and is intended to protect the rights and welfare of participants.

Types of Informed Consent in Research

There are different types of informed consent in research, which may vary depending on the nature of the study, the type of participants, and the context. Some of the common types include:

Written Consent

This is the most common type of informed consent, where participants are provided with a written document that explains the study and its requirements. The document typically includes information about the purpose of the study, procedures involved, risks and benefits, confidentiality, and participant rights. Participants are asked to sign the document as an indication of their willingness to participate.

Oral Consent

In some cases, oral consent may be used when a written document is not practical or feasible. Oral consent involves explaining the study and its requirements to participants verbally and obtaining their consent. This method may be used for studies with illiterate or visually impaired participants or when conducting research remotely.

Implied Consent

Implied consent is used in studies where participants’ actions are taken as an indication of their willingness to participate. For example, a participant may be considered to have given implied consent if they show up for a scheduled appointment for the study.

Opt-out Consent

This method is used when participants are given the opportunity to decline participation in a study. Participants are provided with information about the study and are given the option to opt-out if they do not wish to participate. This method is commonly used in population-based studies or surveys.

Assent

Assent is used in studies involving minors or participants who are unable to provide informed consent due to cognitive impairment or disability. Assent involves obtaining the agreement of the participant to participate in the study, along with the consent of a legally authorized representative.

Informed Consent Format in Research

Here’s a basic format for informed consent that can be customized for specific research studies:

  • Introduction : Begin by introducing yourself and the purpose of the study. Clearly state that participation is voluntary and that participants can withdraw at any time without penalty.
  • Study Overview : Provide a brief overview of the study, including its purpose, methods, and expected outcomes.
  • Procedures : Describe the procedures involved in the study in clear, concise language. Include information about the types of data that will be collected, how they will be collected, and how long the study will take.
  • Risks and Benefits : Outline the potential risks and benefits of participating in the study. Be honest and upfront about any discomfort, inconvenience, or potential harm that may be involved, as well as any potential benefits.
  • Confidentiality and Privacy : Explain how participant data will be collected, stored, and used, and what measures will be taken to ensure confidentiality and privacy.
  • Voluntary Participation: Emphasize that participation is voluntary and that participants can withdraw at any time without penalty. Explain how to withdraw from the study and who to contact if participants have questions or concerns.
  • Compensation and Incentives: If applicable, explain any compensation or incentives that will be offered to participants for their participation.
  • Contact Information: Provide contact information for the researcher or a representative from the research team who can answer questions and address concerns.
  • Signature : Ask participants to sign and date the consent form to indicate their voluntary agreement to participate in the study.

Informed Consent Templates in Research

Here is an example of an informed consent template that can be used in research studies:

Introduction

You are being invited to participate in a research study. Before you decide whether or not to participate, it is important for you to understand why the research is being done, what your participation will involve, and what risks and benefits may be associated with your participation.

Purpose of the Study

The purpose of this study is [insert purpose of study].

Procedures

If you agree to participate, you will be asked to [insert procedures involved in the study].

Risks and Benefits

There are several potential risks and benefits associated with participation in this study. Some of the risks include [insert potential risks of participation]. Some of the benefits include [insert potential benefits of participation].

Confidentiality

Your participation in this study will be kept confidential to the extent allowed by law. All data collected during the study will be stored in a secure location and only accessed by authorized personnel. Your name and other identifying information will not be included in any reports or publications resulting from this study.

Voluntary Participation

Your participation in this study is completely voluntary. You have the right to withdraw from the study at any time without penalty. If you choose not to participate or if you withdraw from the study, there will be no negative consequences.

Contact Information

If you have any questions or concerns about the study, you can contact the investigator(s) at [insert contact information]. If you have questions about your rights as a research participant, you may contact [insert name of institutional review board and contact information].

Statement of Consent

By signing below, you acknowledge that you have read and understood the information provided in this consent form and that you freely and voluntarily consent to participate in this study.

Participant Signature: _____________________________________ Date: _____________

Investigator Signature: ____________________________________ Date: _____________

Examples of Informed Consent in Research

Here’s an example of informed consent in research:

Title: The Effects of Yoga on Stress and Anxiety Levels in College Students

Introduction :

We are conducting a research study to investigate the effects of yoga on stress and anxiety levels in college students. We are inviting you to participate in this study.

Procedures:

If you agree to participate, you will be asked to attend four yoga classes per week for six weeks. Before and after the six-week period, you will be asked to complete surveys about your stress and anxiety levels. Additionally, we will measure your heart rate variability at the beginning and end of the six-week period.

Risks and Benefits:

There are no known risks associated with participating in this study. However, the benefits of practicing yoga may include decreased stress and anxiety levels, increased flexibility and strength, and improved overall well-being.

Confidentiality:

All information collected during this study will be kept strictly confidential. Your name will not be used in any reports or publications resulting from this study.

Voluntary Participation:

Participation in this study is completely voluntary. You are free to withdraw from the study at any time without penalty.

Contact Information:

If you have any questions or concerns about this study, you may contact the principal investigator at (phone number/email address).

By signing this form, I acknowledge that I have read and understood the above information and agree to participate in this study.

Participant Signature: ___________________________

Date: ___________________________

Researcher Signature: ___________________________

Importance of Informed Consent in Research

Here are some reasons why informed consent is important in research:

  • Protection of participants’ rights : Informed consent ensures that participants understand the nature and purpose of the research, the risks and benefits of participating, and their rights as participants. It empowers them to make an informed decision about whether to participate or not.
  • Ethical responsibility : Researchers have an ethical responsibility to respect the autonomy of participants and to protect them from harm. Informed consent is a crucial way to uphold these principles.
  • Legality : Informed consent is a legal requirement in most countries. It is necessary to protect researchers from legal liability and to ensure that research is conducted in accordance with ethical standards.
  • Trust : Informed consent helps build trust between researchers and participants. When participants understand the research process and their role in it, they are more likely to trust the researchers and the study.
  • Quality of research : Informed consent ensures that participants are fully informed about the research and its purpose, which can lead to more accurate and reliable data. This, in turn, can improve the quality of research outcomes.

Purpose of Informed Consent in Research

Informed consent is a critical component of research ethics, and it serves several important purposes, including:

  • Respect for autonomy: Informed consent respects an individual’s right to make decisions about their own health and well-being. It recognizes that individuals have the right to choose whether or not to participate in research, based on their own values, beliefs, and preferences.
  • Protection of participants : Informed consent helps protect research participants from potential harm or risks that may arise from their involvement in a study. By providing participants with information about the study, its risks and benefits, and their rights, they are able to make an informed decision about whether to participate.
  • Transparency: Informed consent promotes transparency in the research process. It ensures that participants are fully informed about the research, including its purpose, methods, and potential outcomes, which helps to build trust between researchers and participants.
  • Legal and ethical requirements: Informed consent is a legal and ethical requirement in most research studies. It ensures that researchers obtain voluntary and informed agreement from participants to participate in the study, which helps to protect the rights and welfare of research participants.

Advantages of Informed Consent in Research

The advantages of informed consent in research are numerous, and some of the most significant benefits include:

  • Protecting participants’ autonomy: Informed consent allows participants to exercise their right to self-determination and make decisions about whether to participate in a study or not. It also ensures that participants are fully informed about the risks, benefits, and implications of participating in the study.
  • Promoting transparency and trust: Informed consent helps build trust between researchers and participants by providing clear and accurate information about the study’s purpose, procedures, and potential outcomes. This transparency promotes open communication and a positive research experience for all parties involved.
  • Reducing the risk of harm: Informed consent ensures that participants are fully aware of any potential risks or side effects associated with the study. This knowledge enables them to make informed decisions about their participation and reduces the likelihood of harm or negative consequences.
  • Ensuring ethical standards are met : Informed consent is a fundamental ethical requirement for conducting research involving human participants. By obtaining informed consent, researchers demonstrate their commitment to upholding ethical principles and standards in their research practices.
  • Facilitating future research : Informed consent enables researchers to collect high-quality data that can be used for future research purposes. It also allows participants to make an informed decision about whether they are willing to participate in future studies.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Law Audience Journal (e-ISSN: 2581-6705)


The Concept of Free Consent With Special Reference to a Landmark Case of Section 15 of The Indian Contract Act, 1872



Authored By: Mr. Sarthak Das (B.L.S. LL.B.), Government Law College, Mumbai.


“Section 15 of the Indian Contract Act, 1872 deals with one of the vitiating factors of free consent to an agreement, that is coercion. This paper deals with one of the landmark cases in relation to the particular section. The case in question is Chikkam Ammiraju And Ors. vs Chikkam Seshamma And Anr . The paper discusses the provisions of the Contract Act relating to Free Consent and the factors vitiating it. The facts of the case, the judgement and the rationale are also analysed. A critical analysis of the judgement is done at the end of the paper”.

I. INTRODUCTION:

Chikkam Ammiraju And Ors. vs Chikkam Seshamma And Anr [1] is a case that deals primarily with Section 15 of the Indian Contract Act, 1872, and revolves around coercion as defined under that section.

A contract is defined as an agreement enforceable by law. [2] Chapter II, Section 10 of the Indian Contract Act sets out the requirements for an agreement to be a contract. As per the provision:

“What agreements are contracts. —All agreements are contracts if they are made by the free consent of parties competent to contract, for a lawful consideration and with a lawful object, and are not hereby expressly declared to be void.”

As can be seen from the above provision, there are five essential elements for an agreement to be a legally binding contract. They are:

  • Free Consent of the Parties
  • Competency of the Parties to the Contract
  • Lawful Consideration
  • Lawful Object
  • Not expressly declared to be void by the law

In this research paper, our concern is with the first essential element mentioned above, that is, the free consent of the parties to a contract. The legal provisions regarding consent and free consent are set out in Sections 13 and 14 of the Indian Contract Act respectively.

Section 13 defines consent as follows: two or more persons are said to consent when they agree to the same thing in the same sense. This can be summarised through the maxim “consensus ad idem”, i.e., a meeting of the minds.

Section 14 deals with ‘free consent’. As has already been noted, the free consent of the parties is a sine qua non for a contract to be valid in the eyes of the law. As per the Indian Contract Act, 1872, free consent is defined as follows:

“Free consent” defined. —Consent is said to be free when it is not caused by—

(1) coercion, as defined in section 15, or

(2) undue influence, as defined in section 16, or

(3) fraud, as defined in section 17, or

(4) misrepresentation, as defined in section 18, or

(5) mistake, subject to the provisions of sections 20, 21 and 22.

Consent is said to be so caused when it would not have been given but for the existence of such coercion, undue influence, fraud, misrepresentation or mistake. As can be seen from the above definition, free consent is defined negatively in the Contract Act: the absence of coercion, undue influence, fraud, misrepresentation and mistake is what makes consent free. These are the factors that vitiate the consent of the parties to a contract.

Flowing from this, if the consent of one of the parties is not free, i.e., it has been caused by one or the other of the above-stated factors, the contract is not a valid one. When consent to an agreement is caused by coercion, undue influence, fraud or misrepresentation, the agreement is a contract voidable at the option of the party whose consent was so caused. [3]

For example, if a person is induced to enter into an agreement by fraud, he may, upon discovering the fact, either uphold the contract or reject it. If the contract is not rejected, it becomes legally binding on both parties. If, however, the consent has been caused by mistake, the agreement is void. [4] [5] The first factor that must be absent for consent to be free is coercion, as defined under Section 15 of The Indian Contract Act, 1872.

Coercion is defined as follows:

“Coercion” is the committing, or threatening to commit, any act forbidden by the Indian Penal Code (45 of 1860) or the unlawful detaining, or threatening to detain, any property, to the prejudice of any person whatever, with the intention of causing any person to enter into an agreement. [6]

From the definition of “coercion”, we can identify its essential elements. Coercion is said to exist when the consent of one of the parties to the agreement has been obtained by either

  • Committing or threatening to commit any act forbidden by the Indian Penal Code, 1860 or
  • By unlawfully detaining or threatening to detain any property

Moreover, such an act should be to the prejudice of any person whatsoever.

II. FACTS OF THE CASE:

Chikkam Ammiraju and five others were the defendants; the first defendant was the younger brother of the husband of the first plaintiff, Chikkam Seshamma. The first plaintiff and the second plaintiff alleged that the property in the possession of the defendants originally belonged to the father of the first plaintiff, who was deceased. The widow of the plaintiff’s father (the plaintiff’s mother) had alienated the property to the defendants without any justification. As a result, the plaintiff sued to set aside the alienation and obtained a decree that it was not binding on the reversion. [7]

In furtherance of this, the husband of the first plaintiff and father of the second plaintiff was pressured by his younger brother and father (the defendants). As a result, the husband of the first plaintiff threatened to commit suicide if his wife and son did not execute a deed of release of property in favour of his brother. In consequence of this threat, the first plaintiff was made to transfer the said properties through a deed to the brother of her husband. It was this transfer that the plaintiffs then sought to have set aside.

II.I KEY ISSUES INVOLVED:

The fundamental questions that were before the court, when the appeal was filed by the plaintiff were;

  • Whether a threat to commit suicide would constitute an act of coercion or undue influence, or both, under Sections 15 and 16 of the Indian Contract Act?
  • Whether suicide was an act “forbidden by the Indian Penal Code” or not?
  • Whether the act of threatening to commit suicide could have prejudiced the plaintiff?

II.II CONTENTIONS RAISED:

The defendants contended inter alia that no such threat of committing suicide was given. However, the lower court found that the threat was used and held that it amounted to coercion under Section 15 of the Indian Contract Act, and gave a decree in favour of the plaintiffs. The defendants then filed a second appeal in the High Court. Sadasiva Ayyar, J., agreeing with the lower courts, dismissed the appeal. Moore, J., held that the threat held out did not in law amount to coercion or undue influence and allowed the appeal. The result was that the second appeal was dismissed under Section 98, Civil Procedure Code. The defendants then filed this appeal under Clause 15 of the Letters Patent from the judgment of Sadasiva Ayyar, J. [8]

Patanjali Sastri, for the appellants, argued that suicide is not an offence punishable under the Indian Penal Code, so Section 15 of the Indian Contract Act should not apply in this case. Moreover, counsel argued that ‘prejudice’ in Section 15 means some detriment to property and not a sentimental grievance, as in this case.

G. Yenkatarcumayya, counsel for the respondent, argued that Section 15 speaks of ‘an act forbidden by the Indian Penal Code’ and not of an offence punishable under the Indian Penal Code. He further argued that suicide is an act forbidden by the Indian Penal Code, 1860, as the attempt or abetment of suicide is punishable by law. Moreover, he countered his learned opponent’s argument by stating that prejudice is not limited to ‘prejudice against property alone’. He argued that “section 15 provides for an exception where freedom of consent is absent and any circumstance which influences the mind of a party to a contract and destroys the freedom of volition would constitute coercion.” [9] Flowing from this, he asserted that the loss of a husband to a wife or of a father to a son, both of which would have occurred had the threat of suicide been carried out, would amount to sufficient coercion in the eyes of the law.

II.III JUDGEMENT:

The appeal filed under Clause 15 of the Letters Patent was decided by a three-judge bench of the Madras High Court. The bench consisted of Chief Justice Wallis, Justice Oldfield and Justice Seshagiri Aiyar. Wallis CJ and Aiyar J formed the majority opinion in this case, with Oldfield J being the sole dissenting judge. Wallis CJ disagreed emphatically with the assertion that a threat to commit suicide cannot be considered an act ‘forbidden by the Indian Penal Code’. He argued that “At common law suicide was a form of homicide. ‘Homicide properly so called,’ says Hawkins (Pleas of the Crown, Book I, Chapter 9), ‘is either against a man’s own life or that of another.’”

He further stated that Section 299 of the Indian Penal Code, which deals with the crime of culpable homicide, is “sufficiently wide to cover deliberate suicide”; however, he accepted that punishment for culpable homicide can only be given to a living offender, a scenario which is not possible in the case of suicide. He mentioned that abetment of suicide and attempting to commit suicide are punishable under Section 306 and Section 309 of the Indian Penal Code respectively.

He concluded his reasoning by stating, “I find it impossible to hold that an act which it is made punishable to abet or attempt is not forbidden by the Indian Penal Code, especially as the absence of any section punishing the act itself is due to the fact that the suicide is in the nature of things beyond the jurisdiction of the Court.” Seshagiri Aiyar, J. agreed with the opinion of the Chief Justice. He added to the majority opinion by noting that the only reason the law cannot punish a person who commits suicide is that the law cannot reach him, whereas a person who abets or attempts the same act is duly punished.

However, it must be noted that even Seshagiri Aiyar J. acknowledged that “there is no provision in the Indian Penal Code which forbids in terms commission of suicide.” He stated that the term “forbidden by the Indian Penal Code” is wider in meaning than “punishable by the law”, and therefore concluded that a threat to commit suicide can be construed as an act forbidden by the Indian Penal Code.

Both judges agreed that the prospect of the husband dying and leaving behind a widow and a fatherless child, if the deed depriving the wife and child of their rightful property was not furnished, amounted to sufficient coercion in the eyes of the law.

Justice Oldfield was the sole dissenting judge, forming the minority opinion. He argued that since the act of suicide is not explicitly forbidden by the law, the only way to treat it as forbidden is by implication. He opted for a strict interpretation of the statutes, arguing that if the word ‘attempt’ is used in the legal sense, a threat to commit suicide is completely different from an attempt to commit suicide. He stated that “an attempt in the legal sense can be recognised as such only after the criminal’s intention has been frustrated, not when it is expressed; that is, when the threat is made.” [10]

He then turned to Section 16, which deals with undue influence, and found it to have no relevance to the case, as Section 16 clearly requires that “one of the parties is in a position to dominate the will of the other” [11]. Since the husband was the one making the ‘threat’ and he was not one of the parties to the contract, the question of undue influence does not arise.

In relation to the above, it is important to point out that the same issue arose in the first appeal against the judgement of the lower court. At that time, Justice Moore cited the landmark case of Ranganayakamma vs. Alwar Setti [12]. In that case, the relatives of a thirteen-year-old boy did not allow the dead body of the widow’s husband to be removed from the house unless she legally adopted the boy.

Justice Moore wrote in his judgement, “In the Madras case however there would have been no difficulty in finding that the widow’s consent was obtained by “undue influence,” within the meaning of Section 16 of the Contract Act. As regards the question whether the release deed was brought about by “undue influence,” it may be that Swami was in a position to dominate his wife’s will but he was not a party to the contract and Section 16(2)(b) of the Contract Act consequently does not apply.” This simply means that, since Swami was not a party to the contract, no question of undue influence arose in this case, as opposed to Ranganayakamma vs. Alwar Setti. [13]

III. LAW COMMISSION REPORT:

The Law Commission of India, in its 13th Report [14], suggested amendments to Section 15 of the Indian Contract Act. This suggestion was made with the aim of overcoming the lacuna in the language of the law. Section 15 of the Act mentions only those acts which are “forbidden by the Indian Penal Code”; this means that acts forbidden by other penal provisions in force at the time are not covered under Section 15. The recommendations of the Law Commission were as follows:

“The proper function of the Indian Penal Code is to create offence and not merely forbid. A penal code forbids only what it declares punishable. There are laws other than the Indian Penal Code performing the same function. We suggest that the words “Any act forbidden by the Indian Penal Code” should be deleted and a wider expression be substituted therefore so that penal laws other than the Indian Penal Code may also be included. The explanation should also be amended to the same effect.”

IV. ANALYSIS:

In order to analyse the case, we must start again with what the provisions of the law state with regard to coercion. Coercion is defined as follows: “‘Coercion’ is the committing, or threatening to commit, any act forbidden by the Indian Penal Code (45 of 1860) or the unlawful detaining, or threatening to detain, any property, to the prejudice of any person whatever, with the intention of causing any person to enter into an agreement.” [15] We are concerned with the first part of the section, which expressly states that for an act to be considered coercive, the wrongdoer must either commit or threaten to commit an act forbidden by the Indian Penal Code, 1860. The language of the provision is very clear and must therefore be interpreted in that sense. The Law Commission Report can be drawn upon to develop this argument further.

The 13th Report clearly states that “The proper function of the Indian Penal Code is to create offence and not merely forbid. A penal code forbids only what it declares punishable.” It is clear from the Commission’s assertion too that an act for which there is no express punishment in the Indian Penal Code (as is the case with committing suicide) cannot be said to be forbidden by the Indian Penal Code.

Flowing from this argument, if the act is not forbidden, no plea of coercion arises as a consequence. The moot question therefore arises: how can the court treat as forbidden something that is not expressly punishable under the law?

Pollock and Mulla are also of the view that in this particular case the minority view is correct, opting for a strict interpretation of the provisions rather than interpreting them by implication, as the majority judges had done. They too recommend an amendment to the language of Section 15 to cover such threats.

Moreover, in the case of Palaniappa Mudaliar vs. Kandaswamy Mudaliar [16], it was held that, “In order to constitute coercion, the threat must be unlawful and it must be shown that it was effected with the intention of coercing the other party to enter into an agreement.” Thus, even though the husband’s threat might be regarded as coercive in effect, it does not fulfil the first condition of that statement, i.e., ‘the threat must be unlawful’. As there is no provision under the Indian Penal Code expressly declaring a threat to commit suicide, or committing suicide, unlawful, the act could not be said to be coercive under Section 15 of the Contract Act.

The view of the dissenting judge appears to be more appropriate in this particular case. The learned judge very lucidly opposes the argument that, merely because abetting or attempting suicide is punishable by law, a threat to commit suicide must also be treated as forbidden by implication. He wrote in his dissenting judgement:

“It does not follow that the failure to employ the other direct prohibition, or to make provision for the case of suicide in the Contract Act was due to inadvertence and that the omission should be supplied by inference. For it is possible that provision was omitted deliberately, because cases for its application would be rare and their truth difficult to establish, the party alleged to be coerced having usually easier means of preventing the accomplishment of the threat than by entering into the agreements sought to be avoided.”

As can be understood from the above, the majority judgement justified the absence of any provision explicitly forbidding a threat to commit suicide by relying on implication alone. The absence of the provision was explained away by assigning meaning to the ‘aim of the legislature’. Meaning should not have been attributed to the aim of the legislature, as that deviates from the express provisions stated in the law.

Therefore, the minority judgement seems more appropriate, solely for the reason that if a simple reading of Section 15 is done, without assigning any implied meaning to the provisions, it is clear that the act of ‘threatening to commit suicide’ would not be considered as coercion under the Indian Contract Act.

Cite this article as:

Mr. Sarthak Das, The Concept of Free Consent With Special Reference to a Landmark Case of Section 15 of The Indian Contract Act, 1872, Vol.3 & Issue 3, Law Audience Journal (e-ISSN: 2581-6705), Pages 55 to 64 (11th January 2022), available at https://www.lawaudience.com/the-concept-of-free-consent-with-special-reference-to-a-landmark-case-of-section-15-of-the-indian-contract-act-1872/.

Footnotes & References:

[1] Chikkam Ammiraju And Ors. vs Chikkam Seshamma And Anr (1917) 32 MLJ 494.

[2] The Indian Contract Act, 1872, § 2(h).

[3] The Indian Contract Act, 1872, § 19.

[4] The Indian Contract Act, 1872, § 14.

[5] Dr R.K Bangia, CONTRACT-I 140 (7th Edn, 2017).

[6] The Indian Contract Act, 1872, § 15.

[7] Indian Law Reports: Madras (1918) Volume 41 .

[8] Chikkam Ammiraju And Ors. vs Chikkam Seshamma And Anr (1917) 32 MLJ 494.

[9] Chikkam Ammiraju And Ors. vs Chikkam Seshamma And Anr (1917) 32 MLJ 494.

[10] Chikkam Ammiraju And Ors. vs Chikkam Seshamma And Anr (1917) 32 MLJ 494.

[11] The Indian Contract Act, 1872, § 16.

[12] Ranganayakamma v. Alwar Setti (1889) 13 ILR 214.

[13]  Ranganayakamma v. Alwar Setti (1889) 13 ILR 214.

[14]   Law Commission of India, 13th Report on Contract Act, 1872 (1958).

[15] The Indian Contract Act, 1872, § 15.

[16] Palaniappa Mudaliar v Kandaswamy Mudaliar (1971) 1 Mys LJ 258 (India).


Informed Consent—We Can and Should Do Better

  • Author affiliation: Wake Forest Baptist Comprehensive Cancer Center, Section on Hematology and Oncology, Department of Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina
  • Related Original Investigation: Emanuel EJ, Boyle CW. Assessment of Length and Readability of Informed Consent Documents for COVID-19 Vaccine Trials. JAMA Network Open.

Informed consent is fundamental to the ethical and legal doctrines respecting research participants’ voluntary participation in clinical research, enshrined in such documents as the 1947 Nuremberg Code; reaffirmed in the 1964 Declaration of Helsinki, revised in 1975, and the 1978 Belmont Report; and codified in the United States in the 1981 Common Rule, revised in 2018 and implemented in 2019. 1

Informed consent generally is understood to represent a process, with the informed consent document having a central role. The characteristics of a well-designed consent form are well known: the document must contain information, some statutorily defined, necessary to allow a participant to make an informed decision; be written at a reading level appropriate for its audience; and be of a length that enables complete and thorough reading. Yet, the content and structure of this document has been the subject of discussion for at least 3 decades, with a consistent consensus throughout this time that these documents are too difficult to read, too complex, and too long and, as a result, frequently fail to facilitate truly informed consent by study participants. While much of the blame for the failure to provide sufficiently detailed, readable, and brief consent forms has been laid at the feet of sponsors and investigators, the reality is that, while it is possible to incorporate 2 of these 3 elements into a consent form, it is all but impossible to incorporate all 3, ie, concise, sufficiently detailed yet easily readable, for anything but the simplest of clinical trials.

The study by Emanuel and Boyle 2 reviews, in the context of these issues, the consent forms for the COVID-19 vaccine phase III randomized clinical trials conducted by 4 major pharmaceutical companies, which resulted in US regulatory approvals for 3 of the 4 vaccines. It highlights the deficiencies of the COVID-19 vaccine trial consent forms in these areas, proposes revised consent form language to improve readability, understanding, and length, and underscores how the medical community has not responded adequately to the decades-long valid criticisms concerning informed consent forms. The revisions proposed by Emanuel and Boyle 2 to the relatively straightforward COVID-19 vaccine trials’ consent forms yielded a document that was still substantially longer than ideal, with an overall higher-grade reading level than optimal, underscoring the fundamental inability to successfully incorporate all 3 of the desirable qualities for a consent form into a single document.

Consent forms should be written at a level understandable to the average prospective participant. Many authorities, including the National Cancer Institute, 3 relying on the 2015 Institute of Medicine report “Informed Consent and Health Literacy,” recommend an eighth-grade reading level or lower for informed consent forms, but this may be too generous a standard. The average American reads at the seventh to eighth grade level, with half of US adults unable to read a book written at the eighth grade level. The most recent study of literacy among US adults, the Survey of Adult Skills conducted through the Program for the International Assessment of Adult Competencies (PIAAC), supports this, indicating that more than half of US adults would struggle to fully comprehend current consent forms; among individuals who describe themselves as being in fair or poor health (those most likely to participate in clinical trials with greater than minimal risk), 31% have PIAAC Level 1 literacy skills (ie, a basic sight vocabulary and the ability to read short texts on familiar topics to locate a single piece of information) or lower. 4 Therefore, it is reasonable to conclude, as Emanuel and Boyle 2 and many others have, that a sixth grade reading level is more appropriate, noting that even this level would not address the substantial proportion of the population with literacy levels below it. 5

Consent forms also should be of a length that can be easily read by the average study participant. Evidence exists that the longer a document is, the less likely people are to read it fully. In the educational context, people are unlikely to read an entire document containing more than 1000 words (ie, approximately 4 pages), and it has been proposed that consent forms should be limited to no more than 1250 words. 6 Yet consent form lengths have increased steadily over the past 4 decades, with few consent forms fewer than 10 pages in length and most substantially longer. The COVID-19 consent forms reviewed by Emanuel and Boyle 2 had a mean length of more than 8000 words (range, 7821 to 9340 words), and despite their best efforts, Emanuel and Boyle 2 were only able to reduce the length to just under 3000 words.
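
These two quantitative targets (a length cap of roughly 1000 to 1250 words and a sixth-grade reading level) can be screened mechanically before a draft ever reaches reviewers. The short sketch below is not part of the editorial or the study it discusses; it is an illustrative Python check, using the published Flesch-Kincaid grade-level formula with a deliberately crude vowel-run syllable heuristic, so its output should be treated as a rough screen rather than a validated readability measurement.

    import re

    def count_syllables(word: str) -> int:
        # Crude heuristic: count runs of vowels; every word gets at least one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability_screen(text: str, max_words: int = 1250, max_grade: float = 6.0) -> dict:
        # Report word count and an approximate Flesch-Kincaid grade level for a draft
        # consent form, compared against the length and reading-level targets above.
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return {"words": 0, "flesch_kincaid_grade": None,
                    "within_length_target": True, "within_grade_target": True}
        # Treat '.', '!' and '?' as sentence terminators; assume at least one sentence.
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        syllables = sum(count_syllables(w) for w in words)
        grade = 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
        return {
            "words": len(words),
            "flesch_kincaid_grade": round(grade, 1),
            "within_length_target": len(words) <= max_words,
            "within_grade_target": grade <= max_grade,
        }

    if __name__ == "__main__":
        draft = ("You are being invited to take part in a research study. "
                 "Taking part is your choice. You may stop at any time without penalty.")
        print(readability_screen(draft))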

Finally, there is the issue of the actual content of the consent form. The list of mandatory items alone runs to more than 270 words in the Revised Common Rule, highlighting the challenge of writing a consent form that is complete and understandable in fewer than 1000, or even 1250, words. Compounding this is the perception that many sponsors and institutions want to use consent forms primarily as legal instruments to protect against civil litigation, undermining both the primary function of the document and its accessibility to study participants.

The study by Emanuel and Boyle 2 should be recognized as a wake-up call to sponsors, investigators, institutional review boards, and regulators to reevaluate how consent forms are drafted, reviewed, and used, along with a reappraisal of the entire consenting process. After decades of largely fruitless effort, an acknowledgment of the seemingly insurmountable challenge of drafting sufficiently detailed but easily readable and not overly lengthy documents would allow the reimagining of the entire consenting process. Considerations could include placing even greater emphasis on the discussion component of the consent process while deemphasizing the role of the consent form, a greater use of multimedia and other technology, more formal scripting of consenting discussions, mandatory documentation of confirmation of adequate comprehension by study participants, and even regulatory reform, among other improvements. Such an appraisal and revision to the process would be neither simple nor without cost, but if history is any guide, failure to act is likely to lead to having the exact same conversation a decade from now.

Published: April 28, 2021. doi:10.1001/jamanetworkopen.2021.10848

Open Access: This is an open access article distributed under the terms of the CC-BY License . © 2021 Grant SC. JAMA Network Open .

Corresponding Author: Stefan C. Grant, MD, JD, MBA, Wake Forest Baptist Comprehensive Cancer Center, Section on Hematology and Oncology, Department of Medicine, Wake Forest School of Medicine, Medical Center Blvd, Winston Salem, NC 27157 ( [email protected] ).

Conflict of Interest Disclosures: Dr Grant reported having equity in and serving as General Counsel of TheraBionic.

Grant SC. Informed Consent—We Can and Should Do Better. JAMA Netw Open. 2021;4(4):e2110848. doi:10.1001/jamanetworkopen.2021.10848


E-Justice India

Free Consent as an Essential for a Valid Contract: An Analysis with Special Reference to Indian Cases

Author: Bhavya Dayal

Introduction

The Indian Contract Act of 1872 governs contracts in India. The Act provides that no agreement is enforceable as a contract unless both parties willingly agree to be legally bound by it. As a result, the concept of free consent becomes extremely important. Section 10 [1] of the Indian Contract Act expressly states that a contract must be made with the free consent of competent parties in order to be legally enforceable and binding.

The definition of consent in Indian contract law is given in Section 13 [2] , which states that “it is when two or more persons agree on the same thing and in the same sense.” As a result, the two people must agree on the same thing in the same way. Giving consent is not enough to make a contract enforceable. Consent must be freely and voluntarily given.

Section 14 [3] of the Indian Contract Act defines free consent as consent that is free of coercion, undue influence, fraud, misrepresentation, or mistake. Consent is said to be caused by such a factor when it would not have been given but for its existence. The goal of this concept is to ensure that the contracting parties’ decisions have been clear since the contract’s inception. As a result, consent obtained through coercion, undue influence, fraud, misrepresentation, or mistake has the potential to render the contract voidable or, in the case of certain mistakes, void.

RESEARCH QUESTIONS

Several questions arise when we study free consent in contracts:

  • What is free consent?
  • What is coercion?
  • What is the meaning of misrepresentation?
  • What are some of the important Indian cases on free consent?

RESEARCH METHODOLOGY 

The research methodology used is secondary research, an approach which involves gathering information from secondary sources such as academic papers, journals, and reports available for public use both online and offline. The majority of the data was collected from the internet, through multiple websites, blogs, and channels, with the sources duly cited at the end. After collection, the data was combined and collated in a comprehensive and understandable format to increase the overall effectiveness of this paper, and the research was compiled in an impartial manner.

H0 – There is no need for free consent for a contract to be valid.

H1 – Under Section 13 of the Indian Contract Act, consent exists “when two or more persons agree on the same thing and in the same sense”; without free consent, a contract is not valid.

The essence of these provisions is the Latin phrase consensus ad idem, which means that the parties to the contract must agree to the same thing. The purpose of a contract as a two-way bargain is defeated if either party’s consent is not free. A contract formed through free consent safeguards the validity of an agreement, providing the parties with a protective shield, and allows the parties to maintain their autonomy in defining their running policy or principle.

Concept of Consent

According to Section 13 of the Indian Contract Act, consent exists only when the parties to a contract agree on the same thing in the same sense. Consensus ad idem, or a meeting of the minds, must be at the heart of all legal contracts. This idea was developed further in the cases of Raffles vs. Wichelhaus (1864) and Smith vs. Hughes (1871).

In the case of Raffles v. Wichelhaus [4], two parties, ‘A’ and ‘B’, entered into a contract for the sale of 125 cotton bales to be shipped from Bombay on a ship named “Peerless.” There were two ships of the same name, and while party ‘A’ had one in mind, party ‘B’ was thinking of the other. The court ruled that the parties had not reached an agreement on the same thing. As a result, the contract was null and void.

In Smith vs. Hughes [5], the Queen’s Bench decided that even if a party did not express his or her assent explicitly, but acted in such a way that a reasonable man would believe he was assenting to the terms proposed by the promisee, he or she would be bound by the contract as if he had expressly entered into it.

Kinds of violations of Free Consent

According to the Indian Contract Act of 1872, free consent of contract parties is not obtained if it is tainted by:

Coercion

In accordance with the Indian Contract Act, 1872, coercion means: “Coercion is the committing, or threatening to commit, any act forbidden by the Indian Penal Code (45 of 1860) or the unlawful detaining, or threatening to detain, any property, to the prejudice of any person whatever, with the intention of causing any person to enter into an agreement.” [6]

A significant point is that it is immaterial whether the Indian Penal Code is in force at the place where the coercion is employed. The phrase “to the prejudice of any person whatever” is an important part of the provision, because it means that coercion can operate to the prejudice of anyone, not just a party to the contract. Notably, persons other than the contracting parties can also exercise coercion.

Even a person who is not a party to the contract can use coercion to obtain consent, as seen in the case of Ranganayakamma v. Alwar Setti [7], where a widow was coerced into adopting a boy and was not allowed to remove her husband’s body until the adoption was completed.

The burden lies on the party alleging coercion to demonstrate that its consent was obtained through coercion. When a party’s consent is obtained by coercion, the contract becomes voidable at the option of the aggrieved party.

In the case of Chikkam Ammiraju and Ors. v. Chikkam Seshamma and Anr [8] 1916 MWN 368, the Madras High Court ruled that threatening to commit suicide is also coercion, and the aggrieved party has the right to avoid the contract. In this case, the husband threatened his wife and son with suicide if they did not sign a sale deed in favour of his younger brother. They executed the deed but later pleaded coercion. Because suicide was held to be an act forbidden by the IPC, the husband’s threat was found to be unlawful, and the consent was held to have been obtained through coercion.

Undue Influence

According to Section 16 of the Indian Contract Act, 1872, an influence will be considered undue influence when:

  • One party to the contract is in a position of trust and controls the other party wrongfully.
  • Such a person uses his dominant position to gain an unfair advantage over the other.

There are two key elements of undue influence:

  • A relationship of trust, confidence, or authority between the parties.
  • Unfair persuasion, assessed through a careful examination of the terms of the contract. [9]

A contract is said to be induced by “undue influence” when the relations between the parties are such that one of them is in a position to dominate the will of the other and uses that position to gain an unfair advantage over the other. The existence of a position of authority, trust and confidence of one party over another, and the use of unfair persuasion by the party in authority, are the elements that constitute undue influence.

In the case of Lingo Bhimrao Naik v. Dattatrya Shripad Jamadagni [10], an adoptive mother was accused of exerting undue influence on her adopted son when he reached the age of majority in order to have him ratify gift deeds of non-watan property made in favour of her daughters, as well as of obstructing his ability to consult his natural father. The court ruled that the adoptive mother abused her position of authority to exert undue influence over her son in order to gain an unfair advantage in having the gift deeds ratified. Furthermore, because the adopted son was unaware of his legal rights, the case was adjourned.

Fraud

Fraud, according to Section 17 of the Indian Contract Act, is defined as any of the following acts committed by a contracting party, or with its connivance, or by its agent, in order to deceive or induce a party or its agent to enter into the contract:

  • The active concealment of a fact by a person having knowledge of it.
  • A promise made with no intention of keeping it.
  • Any other act that has the potential to deceive.
  • Any act or omission that the law considers fraudulent. [11]

Silence on facts likely to affect a person’s willingness to enter into a contract is not fraud unless the circumstances of the case are such that, having regard to them, the silent person is obligated to speak or unless his or her silence is equivalent to speech in and of itself.

In the case of Bimla Bai vs Shankarlal [12], a father referred to his illegitimate son as his “son” in order to secure his marriage. It was determined that the father knowingly concealed the son’s illegitimacy with the intent of deceiving the bride’s parents, which amounted to fraud.

Misrepresentation

Misrepresentation can be classified into three types, according to Section 18 of the Indian Contract Act:

  • When a false statement of fact is made but it is believed to be true.
  • When the person making the false statement violates duty and gains an unfair advantage, even if this was not the party’s intention.
  • When one contracting party acts in an innocent manner, causing the other party to make a mistake(s) regarding the contract’s contents.

The three types of misrepresentation have one thing in common: the misstatement is unintentional. The burden of proof is on the party alleging misrepresentation in order to avoid the contract; that party must demonstrate that its consent was obtained by misrepresentation. When consent is obtained by misrepresentation, the contract is voidable at the option of the aggrieved party.

In the case of Bhagwani Bai v. LIC [13], the court held that the non-disclosure of lapsed policies could not have influenced the defendant corporation’s decision to enter into a new policy. The court held for the plaintiff, finding that there was no misrepresentation or undue advantage, and ordered the defendant to pay the amount with interest at the rate of 6%.

Mistake

Section 20 of the Indian Contract Act states that if both parties to an agreement are mistaken about a fact essential to the agreement, the agreement is null and void. A mistake does not invalidate consent; rather, it misleads the parties, so that the consent can no longer be considered free.

Mistake of Law

When legal provisions are misunderstood by contracting parties, this is referred to as a Mistake of Law. Now, the party may be perplexed as to whether the law of the home country or the law of a foreign country applies. When a contracting party claims ignorance of the laws of the home country, the contract cannot be avoided because such an excuse is not considered valid. However, if the source of the confusion is foreign law, the contracting party may be excused from the contract due to ignorance of such laws.

Mistake of Facts

When the subject of the misapprehension is the contract’s clauses or terms, it is referred to as a factual error. The misunderstanding could be on the part of one or both parties.

  • Bilateral Mistake: When a fact is the source of misunderstanding for both contracting parties, the agreement is said to be null and void.
  • Unilateral Mistake: When a fact is the source of misunderstanding for one of the contracting parties, the agreement remains valid. Only when a party makes a mistake about the parties to the agreement or the nature of the transaction does the agreement become null and void.

The plaintiff in Ayekpam Angahl Singh and Others vs. Union of India and Others [14] was the highest bidder at a fishery auction, where the rights were auctioned for three years at an annual rent of Rs. 40,000. The plaintiff claimed that he had assumed the stated rent covered all three years and that he had therefore bid under a mistake. Because the mistake was unilateral, the contract could not be avoided.

The plaintiff in Dularia Devi v. Janardan Singh [15] was an illiterate woman who wished to transfer her properties to her daughter. The defendants obtained her thumb impressions on two documents, both of which she believed were in her daughter’s favour, but the second document was in the defendants’ favour, although they had only been asked to execute the deed. She later filed a suit to cancel the sale deed, and it was held that, because the woman was unaware of the nature of the second document, it was void.

  • The burden of proof lies with the party that pleads coercion.
  • If the plaintiff wishes to bring an action to set aside a contract on the ground of undue influence, two issues must be kept in mind.
  • In the large majority of cases, fraud cannot be proved by concrete and observable evidence.
  • The burden of proof is on the defendant to show that the misrepresentation was not made fraudulently.

SUGGESTIONS

  • Not many people are aware of the difference between consent and free consent.
  • More research papers must be written covering this concept.
  • People must be informed about the difference between a mistake of fact and a mistake of law.
  • Lastly, and most importantly, everyone must be made aware of their rights when their right to free consent is infringed.

Consent is an essential component of any decision-making process and serves as the foundation of contract formation. However, obtaining free consent has become increasingly difficult in recent years, so methods for determining whether consent was freely given are required. Parties accused of coercion, undue influence, or similar conduct tend to rely on the defences available to them. The various ways in which consent can be influenced have been discussed: in cases of coercion, undue influence, fraud, and misrepresentation, the contract is generally voidable at the option of the aggrieved party. In the event of a mistake, however, the parties can avoid the contract only if there is a bilateral mistake regarding facts essential to the agreement or if the mistake concerns foreign law.

[1] Indian Contract Act, 1872

[4] (1964) 2 H&C 906

[5] (1871) LR 6 QB 597

[6] Section 15, India Code, https://www.indiacode.nic.in/show-data?actid=AC_CEN_3_20_00035_187209_1523268996428&sectionId=38618&sectionno=15&orderno=15 (last visited Jul 10, 2021)

[7] (1890) ILR 13 Mad 214

[8] 1916 MWN 368

[9] Section 16, India Code, https://www.indiacode.nic.in/show-data?actid=AC_CEN_3_20_00035_187209_1523268996428&sectionId=38619&sectionno=16&orderno=16 (last visited Jul 10, 2021)

[10] AIR 1938 Bom 97

[11] Section 17, India Code, https://www.indiacode.nic.in/show-data?actid=AC_CEN_3_20_00035_187209_1523268996428&sectionId=38620&sectionno=17&orderno=17 (last visited Jul 10, 2021)

[12] AIR 1959 MP 8

[13] AIR 1984 MP 126

[14] 1980 AIR 1447, 1980 SCR (3) 485

[15] AIR 1990 SC 1173


Informed Consent Guidelines & Templates

U-M HRPP Informed Consent Information

See the HRPP Operations Manual, Part 3, Section III, 6.e.

The human subjects in your project must participate willingly, having been adequately informed about the research.

  • If the human subjects are part of a vulnerable population (e.g., prisoners, cognitively impaired individuals, or children), special protections are required.
  • If the human subjects are children, in most cases you must first obtain the permission of parents in addition to the consent of the children.

Contact the IRB Office for more information.

See the Waiver Guidelines for information about, and policies regarding, waivers of informed consent or of informed consent documentation.

See the updated  Basic Informed Consent Elements document  for a list of 2018 Common Rule basic and additional elements. 

Informed Consent Process

Informed consent is the process of telling potential research participants about the key elements of a research study and what their participation will involve.  The informed consent process is one of the central components of the ethical conduct of research with human subjects.  The consent process typically includes providing a written consent document containing the required information (i.e., elements of informed consent) and the presentation of that information to prospective participants.  

In most cases, investigators are expected to obtain a signature from the participant on a written informed consent document (i.e., to document the consent to participate) unless the IRB has waived the consent requirement or documentation (signature) requirement .

  • Projects which collect biospecimens for genetic analysis must obtain documented (signed) informed consent.
  • It is an ethical best practice to include an informed consent process for most exempt research. IRB-HSBS reviews, as applicable, the IRB application for exempt research, but not the informed consent document itself. A suggested consent template for exempt research can be found below under the References and Resources section, and a companion protocol template for exempt research is available among the related IRB-HSBS resources.


Informed Consent Documents

An  informed consent document  is typically used to provide subjects with the information they need to make a decision to volunteer for a research study.  Federal regulations ( 45 CFR 46.116 ) provide the framework for the type of information (i.e., the "elements") that must be included as part of the consent process.  New with the revised 2018 Common Rule is the requirement that the consent document begin with a "concise and focused" presentation of key information  that will help potential participants understand why they might or might not want to be a part of a research study.  

Key Information Elements

The five elements identified in the preamble to the revised Final Rule as suggested key information are: voluntary participation in research, a summary of the research, risks, benefits, and alternatives.

Note: Element number 5 (alternative procedures) applies primarily to clinical research.

General Information & Tips for Preparing a Consent Document

Reading Level

Informed consent documents should be written in plain language at a level appropriate to the subject population, generally at an 8th grade reading level. A best practice is to have a colleague or friend read the informed consent document for comprehension before submission with the IRB application. Always:

  • Tailor the document to the subject population.
  • Avoid technical jargon or overly complex terms.
  • Use straightforward language that is understandable.

For guidance on using plain language, examples, and more, visit: http://www.plainlanguage.gov/
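Reading level can also be checked mechanically before submission. Below is a minimal sketch, assuming the third-party textstat package and an invented draft excerpt; any readability checker would serve equally well:

```python
# pip install textstat
import textstat

# Hypothetical excerpt from a draft consent document.
draft = (
    "You are invited to take part in a research study about sleep habits. "
    "Taking part is your choice. You may stop at any time without penalty."
)

grade = textstat.flesch_kincaid_grade(draft)  # approximate U.S. school grade level
ease = textstat.flesch_reading_ease(draft)    # 0-100 scale, higher is easier to read

print(f"Flesch-Kincaid grade level: {grade:.1f}")
print(f"Flesch reading ease: {ease:.1f}")

if grade > 8:
    print("Consider simplifying: the draft reads above an 8th grade level.")
```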

Writing Tips

The informed consent document should succinctly describe the research as it has been presented in the IRB application.

  • Use the second (you) or third person (he/she) to present the study details.  Avoid use of the first person (I).  
  • Include a statement of agreement at the conclusion of the informed consent document. 
  • The consent document must be consistent with what is described in the IRB application.

Document Formatting for Uploading into eResearch

  • Remove "track changes" or inserted comments from the consent documentation prior to uploading the document into the IRB application (Section 10-1) for review.
  • Use a consistent, clearly identified file naming convention for multiple consent/assent documents.

Informed Consent Templates

IRB-HSBS strongly recommends that investigators use one of the informed consent templates developed to include the required consent elements (per  45 CFR 46.116 ), as well as other required regulatory and institutional language.  The templates listed below include the new consent elements outlined in the 2018 Common Rule.

References and Resources

Informed Consent Guidance

PDF. Lists the basic and additional elements required to be included, as appropriate to the research, in the informed consent documentation, along with the citation number [e.g., _0116(b)(1)] within the revised Common Rule. New elements associated with the 2018 Common Rule are indicated in bold text.

Informed Consent Templates (2018 Common Rule)

Strongly recommended for studies that involve the collection of biospecimens and/or genetic or genomic analysis, particularly  federally sponsored clinical trials that are required to post a consent document on a public website.  Last updated:  04/10/2024.

(Word) Blank template with 2018 revised Common Rule key information and other required informed consent elements represented as section headers; includes instructions and recommended language.  It is strongly advised that you modify this template to draft a project-specific informed consent document for your study for IRB review and approval.  Last updated: 04/10/2024

Other Templates

Informed Consent documents are not reviewed by the IRB for Exempt projects.  However, researchers are ethically bound to conduct a consent process with subjects.  This template is suggested for use with Exempt projects. Last updated 4/17/24

(Word) General outline to create and post a flyer seeking participation in a human subjects study.  Includes instructions.

(Word) Two sample letters for site approval cooperation between U-M and other institutions, organizations, etc.  Letters of cooperation must be on U-M letterhead and signed by an appropriate official.  These letters are uploaded into the Performance Site section of the eResearch IRB application.

For use by U-M Dearborn faculty, staff, and students conducting non-exempt human subjects research using subject pools. Last updated 4/10/24

For use by U-M Dearborn faculty, staff, and students conducting exempt human subjects research using subject pools

Researchers who will conduct data collection that is subject to the General Data Protection Regulation (GDPR) must use this template in tandem with a general consent for participation template/document.

  • Brief protocol for exempt research including data management and security questionnaire

Child Assent and Parental Permission

  • Child assent ages 3-6
  • Child assent 7-11
  • Parent permission
  • Child assent 12-14

IRB-Health Sciences and Behavioral Sciences (IRB-HSBS)

Phone: (734) 936-0933 Fax: (734) 936-1852 [email protected]


Privacy Behaviour: A Model for Online Informed Consent

Original paper, open access. Published: 14 July 2022. Volume 186, pages 237–255 (2023).


Gary Burkhardt, Frederic Boy, Daniele Doneddu and Nick Hajli


An online world exists in which businesses have become burdened with managerial and legal duties regarding the seeking of informed consent and the protection of privacy and personal data, while growing public cynicism regarding personal data collection threatens the healthy development of marketing and e-commerce. This research seeks to address such cynicism by assisting organisations to devise ethical consent management processes that consider an individual’s attitudes, their subjective norms and their perceived sense of control during the elicitation of consent. It does so by developing an original conceptual model for online informed consent, argued through logical reasoning, and supported by an illustrative example, which brings together the autonomous authorisation (AA) model of informed consent and the theory of planned behaviour (TPB). Accordingly, it constructs a model for online informed consent, rooted in the ethic of autonomy, which employs behavioural theory to facilitate a mode of consent elicitation that prioritises users’ interests and supports ethical information management and marketing practices. The model also introduces a novel concept, the informed attitude , which must be present for informed consent to be valid. It also reveals that, under certain tolerated conditions, it is possible for informed consent to be provided unwillingly and to remain valid: this has significant ethical, information management and marketing implications.


Introduction

The overarching aim of this paper is to support ethical personal information management and marketing practices by developing a conceptual model for online informed consent decision-making, situated in normative ethics theory, based upon the unification of the autonomous authorisation (AA) model of informed consent and the theory of planned behaviour (TPB), which prioritises users’ interests. It seeks to equip organisations with a behaviourally focussed theoretical framework that describes the mechanism of online consent provision, as experienced by the web user. This user-focussed frame of reference has the potential to substantially address many of the privacy concerns and the scepticism of the online citizen.

Informed consent to personal data processing has its roots in and derives its core meaning from the domains of medicine and research (Beauchamp, 2011 ). That core meaning concerns the primacy of human autonomy: people have the right to make decisions for themselves (Kirby, 1983 ; Tymchuk, 1997 ). Autonomy is a foundational ethical principle in Kantian deontology, which is often described as moral autonomy—the capacity of rational persons to impose upon themselves moral laws, free from external influences. However, it has also been argued to play a key role in consequentialist ethical theory via its association with well-being—and resulting utilitarian value. Consent and autonomy are also central aspects of the contractualist tradition in which individual interests are pursued for effective mutual advantage via autonomous agreement to achievable self-imposed obligations and constraints (Heugens et al., 2006a , 2006b ). Autonomy, in business-consumer interactions, essentially requires organisations to respect the right of consumers to make rational decisions. This means furnishing consumers with all relevant information, and to not subject them to manipulation or coercion, which are core principles of the AA model of informed consent.

Consent also possesses “moral force”: it can transform a wrong into a right and it has the ethical power to recast the normative expectations that exist between individuals (Hurd, 1996 ). Kant’s conception of autonomy is ineliminably linked with morality being a form of self-governance, as opposed to earlier interpretations of morality as obedience to the state, church or others professing to be wiser than us, and it heralded the emergence of the Western liberal view of society (Campbell, 2017 ). More recently, the concept has been transferred to the digital self in the online world.

The digital self is founded upon personal information stored about an individual in online databases. Concerns have been expressed for some time about the manner in which this information is collected and aggregated (Bashir et al., 2015 ; Borgesius, 2015 ; Cate & Mayer-Schönberger, 2013 ; Solove, 2013 ) and the power that it is handing to multi-national corporations (Lanier & Weyl, 2018 ). Revelations such as the 2018 Facebook-Cambridge Analytica data scandal where it was reported that Facebook user data was used to influence voters’ choices at the US ballot box (Cadwalladr & Graham-Harrison, 2018 ) and a multitude of reports of data security breaches (Hajli et al., 2017 ), have led to a general diminishment of trust in technology companies (O'Flaherty, 2018 ). A recent global survey (of Millennials) indicated a pervasive, deep disillusionment in governments and corporations, with widespread scepticism of business’ motives and dissatisfaction with the way in which personal data are being used (Deloitte, 2019 ).

Linked to this are growing privacy concerns which are becoming more significant in the big data era and are requiring increased organisational focus (Hong et al., 2021 ; McAfee et al., 2012 ). These concerns frequently inhibit the adoption and exploitation of big data analytics (Alharthi et al., 2017 ; Pantano et al., 2021 ), which has consequences for innovation (Mikalef et al, 2019 ) and, ultimately, long-term business sustainability. Indeed, organisations that take advantage of the business benefits that big data promises, but fail to appropriately reconcile these concerns, risk repercussions that could cause serious detriment to their reputation, capabilities and overall competitive advantage (Hajli et al., 2021 ). To address these challenges, the concept of privacy by design (PbD) is gaining traction (Romanou, 2018 ), and is a key concern for the general data protection regulation (GDPR) (Andrew & Baker, 2021 ). PbD involves embedding privacy principles into the design, operation and management of information systems, in which the individual’s free and specific consent is required for the collection, use or disclosure of their personal data (Cavoukian, 2009 ).

In the UK and the EU, disclosed personal data may be processed under several legal bases, one of which is consent [GDPR Article 6(1), Data Protection Act 2018]. Consent requests often form part of a "secondary exchange" in relation to the "primary exchange" of purchase of or subscription to mainstream goods or services that have become part of the everyday digital economy (Culnan & Bies, 2003 ; Obar, 2020 ). A system exists at present in which many businesses satisfy their legal duties regarding the seeking of consent for personal data processing using “clickwraps”. Clickwraps are digital prompts that enable web users to effortlessly signify their consent by checking a box, clicking a button or employing some other similar means (Obar, 2020 ). They also provide a means to opt out of all or some of the data collection activities, but this is commonly a more inconvenient process, with more buttons to click (Obar, 2020 ) and with access to service often being denied if consent is not provided (Schermer et al., 2014 ; Tsohou & Kosta, 2017 ). Indeed, if a user wishes to check the privacy policy associated with a consent request, they usually find a lengthy document, often written in legalistic language that they do not understand (Acquisti et al., 2015 ; Mai, 2016 ; Schermer et al., 2014 ; Wright & Xie, 2019 ). Moreover, in the context of data processing, due to the complexity of data-sharing arrangements, the individual is no longer able to make rational, conscious or autonomous decisions (Marwick & Hargittai, 2019 ; Obar, 2020 ; Schermer et al., 2014 ). Cumulatively, this leads to consent desensitisation or fatigue in which people do not make active, informed choices, become disinterested, or feel powerless when confronted with the consent request (Obar, 2020 ; Tene & Polonetsky, 2012 ).

Understanding this dynamic is key to implementing practices that prioritise users’ interests in the domain of consent elicitation. However, despite informed consent being intrinsic to much of the personal data collection that takes place online, the nature of online consent remains ill-understood (Solove, 2013 ). Models have been proposed which seek to explain online personal information disclosure behaviours across a variety of situations (see e.g. Dinev & Hart, 2006 ; Li et al., 2011 ; Malhotra et al., 2004 ; Smith et al., 1996 ; Van Slyke et al., 2006 ; Xu et al., 2013 ), and studies have investigated an increasing number of antecedents and outcomes for privacy concerns in an ever-growing number of contexts (Yun et al., 2019 ). This has led to calls for a consolidation of privacy-related constructs that can cater for disparate contexts to allow the development of a robust underlying theoretical model that explains consent behaviours (Yun et al., 2019 ). However, no such models have been produced to date and a gap certainly exists in the sense that no attempt has been made to build a macro-level model for online informed consent that is situated in behavioural modelling theory. This paper proposes a model that connects behavioural theory to informed consent theory, to promote better understanding of the behavioural mechanisms that are congruent with the principle of informed consent in the context of personal data collection and subsequent processing. A corollary of the model allows situations or behaviours that are not concordant with an individual’s informed consent for personal data processing to be discerned, for example, to identify unethical data collection practices in the marketing domain.

In the present research, the TPB is the chosen theoretical behavioural lens. While the scope for its improvement has been acknowledged (Ajzen, 1991 ) and it has received criticism in some quarters (see Bagozzi, 1992 , 2007 ; Benbasat & Barki, 2007 ), it remains extensively used in contemporary studies to analyse a profusion of behaviours in numerous contexts (including online contexts) either in its original form, in an extended form, or in combination with other theories or models (see e.g. Apau & Koranteng, 2019 ; Ho et al., 2017 ; Li et al., 2019 ; Sharif & Naghavi, 2020 ). Among several advantages that the TPB has over competing behavioural models is its parsimony, and a set of explanatory variables that have distinct conceptual foci (Crespo & del Bosque, 2008 ). Moreover, it has commonly been used to underpin investigations concerning online personal information disclosure (Smith et al., 2011 ).

A number of models exist for informed consent, one of which considers informed consent as a form of autonomous authorisation (AA) (Faden & Beauchamp, 1986 ). Although the AA model originally derives from the medico-legal domain, Faden and Beauchamp ( 1986 ) did not preclude its use in other contexts. Moreover, this mode of autonomous consent provision is particularly applicable to the present research because it forms the basis for providing consent for personal data processing under the GDPR (Carolan, 2016 ; Schermer et al., 2014 ). Moreover, as is demonstrated in the section “ Relationships between AA and TPB constructs ”, the variables within the TPB model align with those of the AA model of informed consent. Accordingly, combining these models offers the potential to address the gap regarding macro-modelling of online informed consent through a behavioural lens. In addition to addressing this gap, this paper also serves an inter-disciplinary function by extending the use of the AA model from a tradition within the medico-legal ethical domain into a new domain of business ethics and, in particular, the ethics of online subscription to goods and services and associated marketing activities.

Literature Review and Theoretical Framework

Various theoretical constructions have been proposed for informed consent. On one level, there are those constructions that seek to characterise the provision of consent in terms of the extent to which the consent of an individual may be purposed or re-purposed. These models include broad consent, blanket consent (Ploug & Holm, 2015a ), presumed consent (Hofmann, 2009 ), express consent (Win, 2005 ) and implied consent (Hofmann, 2009 ). On an entirely different level, there are models that seek to address the ontology of consent. These models, because they seek to address the intrinsic nature of consent, are of particular interest in the present paper. They include the disclosure model (Faden & Beauchamp, 1986 ; Friedman et al., 2000 ; Marta, 1996 ; Sim & Wright, 2000 ), the effective consent model (Faden & Beauchamp, 1986 ), the AA model (Faden & Beauchamp, 1986 ) and the fair transaction model (Miller & Wertheimer, 2011 ). This paper is especially concerned with the AA model, because of its direct relationship with data protection legislation, as discussed in the section “ Introduction ”. A brief description of this and the other ontological models is provided in this section to place the AA model in context.

The disclosure model is the traditional medico-legal model, and it identifies five constituents of consent: disclosure, comprehension, voluntariness, competence and agreement (Faden & Beauchamp, 1986 ; Friedman et al., 2000 ; Marta, 1996 ; Sim & Wright, 2000 ). Disclosure refers to the adequacy of the information provided to the participant. Comprehension concerns the participant’s understanding of the information provided. Competence concerns the participant’s ability to make a rational decision, and includes psychological as well as social and legal criteria (e.g. age thresholds) (Faden & Beauchamp, 1986 ). Voluntariness relates to the absence of control regarding the decision. The final element, agreement, is sometimes omitted as an element and, in other analyses, it is given a different label, being variously referred to as consent, decision, collaboration or agreement (Faden & Beauchamp, 1986 ; Friedman et al., 2000 ; Marta, 1996 ; Sim & Wright, 2000 ).

The effective consent model, proposed by Faden and Beauchamp ( 1986 ), closely resembles the disclosure model, in which a framework of organisational and institutional rules, policies and procedures shape the seeking of consent. It does not rely upon the autonomy of the person. Rather, it is concerned with legally and institutionally effective systems of processes and regulations that govern the seeking of consent and regulate the behaviour of the consent seeker (Faden & Beauchamp, 1986 ).

In the fair transaction model (Miller & Wertheimer, 2011 ), disclosure, comprehension, competence, voluntariness and absence of deception are key aspects but, unlike the disclosure model, they are context sensitive. What comprises fairness is dependent upon the risk–benefit profile; greater efforts are required to promote and verify comprehension as the negative consequences to individuals increase.

The AA model of consent, proposed by Faden and Beauchamp ( 1986 ), is purely logical in concept and free from normative conditions which may be applied for practical or policy reasons. The model submits that informed consent is synonymous with autonomous authorisation, i.e. that autonomy and authorisation are its constituent elements. They define autonomy as consisting of substantial understanding, non-control and intentionality.

According to Faden and Beauchamp ( 1986 ), substantial understanding requires “apprehension of all the material or important descriptions—but not all the relevant (and certainly not all possible ) descriptions”. They describe how the importance of a description may largely be determined by “the extent to which the description is material to the person’s decision to authorize” (p. 302), which they say is entirely subjective. An intentional action, according to Faden and Beauchamp ( 1986 ), is one “willed in accordance with a plan” (p. 243), but it also includes tolerated acts. Tolerated acts are those that may be unwanted or undesirable. Non-control refers to there being no external controls on the action: an external controlling influence would negate autonomy.

Autonomy is a principle that is key to deontological theories and has been argued to also play a pivotal role in consequentialist ethical theories. It is a fundamental principle of ethics in Kantian deontology that lies at the heart of his fundamental principle of morality—the Categorical Imperative, which states that you should act only according to that maxim that you would wish all other rational people to follow as if it were a universal law (Kant, 1785 ). Kant’s formulation of autonomy is based upon the principle that a person is obliged to follow the Categorical Imperative because of their use of reason, rather than any external influence (White, 2004 ). This proposition requires people to recognise the right of others to also act autonomously. In the commercial context, this translates to businesses furnishing consumers with material information relevant to their decision and respecting their right to be free from external control or influence.

To some extent, this resonates with the stakeholder theory approach to marketing ethics. As a normative theory, the stakeholder theory contends that managers have a fiduciary relationship with all stakeholders and when the interests of stakeholders conflict, the optimal balance must be achieved (Hasnas, 1998 ). In its empirical form, it effectively asserts that a business's financial success requires all stakeholders' interests to be given proper consideration and that policies should be adopted to effect the best balance among them (Hasnas, 1998 ). Cohen ( 1995 ) suggests that consent is intrinsically related to the concept of stakeholdership—that what an individual or a group of individuals would consent to is an important aspect of stakeholder interest and that notions of stakeholdership would benefit from the perspective of consent theory—for which agent autonomy is a central principle.

However, in stakeholder theory, the interests of the individual can be subordinated to the interests of the wider collective group of stakeholders in order to achieve the optimal balance; so, while an individual's autonomy may be respected, their interests may not actually be served within the stakeholder paradigm (Ambler & Wilson, 1995 ; Hasnas, 1998 ). At a more fundamental level, the prioritisation of competing stakeholder claims also encumbers normative stakeholder theorisation (Van Oosterhout et al., 2006 ).

The approach of contractualism is very different. Contractualism does not aggregate interests, but rather centres on the interests of individuals and captures “the separateness of persons” (Rawls, 1971 ). Parties to an agreement must have (a) interests that largely align, (b) the ability to abide by the terms of the agreement and (c) have sufficient autonomy to adhere to self-imposed obligations and constraints (Heugens et al., 2006a , 2006b ).

The principle of autonomy is not generally associated with consequentialism because it allows for aggression against individuals to aid others (Cummiskey, 1990 ). However, Mill’s ( 1859 ) view of autonomy is actually rooted in consequentialist ethical theory, claiming that it is an essential element of well-being and, therefore, has utilitarian value. In the commercial context, a sense of autonomy has been argued as being vital to a consumer’s well-being, with consumers experiencing utility from the attribution of positive outcomes to the self when they feel in control of their behaviours or choices (André et al., 2018 ).

Making informed choices is key to Faden and Beauchamp’s ( 1986 ) AA model but they refused to generalise the model beyond the medical and research settings and even acknowledged that it might not be possible to apply the AA model to some environments. However, more importantly, they did not preclude its use in other contexts, and subsequent literature has since recognised the AA model as valid within the personal data processing context (Schermer et al., 2014 ). Within this context, automation is, for example, facilitating micro-targeted marketing practices, based upon behavioural observations, which, on the one hand, facilitate easier consumer choices and enhance well-being but, on the other hand, could undermine their sense of autonomy and undermine well-being (André et al., 2018 ). The model presented in this paper helps to dissect the factors at play in such decision-making processes.

Table 1 summarises the consent models described in this section.

The protection of personal data falls to the general data protection regulation (GDPR) in the EU and, in the UK, its incorporation into post-Brexit UK law as the UK GDPR. Schermer et al. ( 2014 ) point to the GDPR as strongly influenced by the AA model but they argue that AA does not consider the realities of the human decision-making process concerning personal data processing. Therefore, a model that also considers human behaviour would be particularly advantageous.

As far as online consent behaviour is concerned, in the EU and the UK, the regulatory framework dictates that signification of consent is required “by a statement or by a clear affirmative action” [GDPR, Article 4(11)], typically by ticking a box or clicking “I agree”. In this sense, it has a discernible behavioural component. Several behavioural models exist to explain human behaviour in various contexts, of which the main ones of relevance to consent behaviours are shown in Table 2 .

Of the theories described in Table 2 , the TPB is the theory of choice to advance the development of a model for online consent to personal data processing. Some justification is provided in Table 2 , but the relative merits of the competing models are not the focus of this paper; for fuller details pertaining to each of the models, the reader may consult the referenced articles included within the table.

The TPB is a psychological theory that connects beliefs with behaviour. It states that attitudes towards the behaviour, subjective norms and perceived control over the behaviour are predictors of behavioural intentions. In turn, these behavioural intentions, in conjunction with perceived behavioural control (PBC), are predictors of the behaviour (Ajzen, 1991 ). The TPB is designed to predict and explain human behaviour in specific contexts. Figure  1 illustrates this relationship.

Figure 1: Theory of planned behaviour [adapted from Ajzen ( 1991 )]
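To make the structure in Figure 1 concrete, the toy sketch below scores intention as a weighted combination of attitude, subjective norm and PBC, and predicts behaviour from intention plus PBC. The scales, weights and threshold are illustrative placeholders, not values prescribed by Ajzen (1991):

```python
from dataclasses import dataclass

@dataclass
class TPBState:
    attitude: float          # evaluation of the behaviour, e.g. -1 (unfavourable) to +1 (favourable)
    subjective_norm: float   # perceived social pressure, -1 to +1
    pbc: float               # perceived behavioural control, 0 (none) to 1 (full)

def intention(state: TPBState, w_att=0.4, w_norm=0.3, w_pbc=0.3) -> float:
    """Behavioural intention as a weighted sum of the three antecedents (illustrative weights)."""
    return w_att * state.attitude + w_norm * state.subjective_norm + w_pbc * state.pbc

def behaviour_likely(state: TPBState, threshold=0.3) -> bool:
    """Behaviour is predicted when intention is high enough and some perceived control exists."""
    return intention(state) >= threshold and state.pbc > 0

# Example: a user weakly in favour of accepting a consent request, mild social pressure, high control.
user = TPBState(attitude=0.2, subjective_norm=0.1, pbc=0.9)
print(intention(user), behaviour_likely(user))
```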

Upon initial inspection, as explained next, there is a high degree of commonality between the constructs of the AA model of consent (understanding, non-control, intention and authorisation) and those of the TPB (attitude, PBC, subjective norm, intention and behaviour). In brief, an attitude is formed from one’s understanding of something in much the same way as one’s control over something also derives from one’s understanding of it. In this regard, both the attitude construct and the PBC construct in the TPB are related to the understanding construct in the AA model. Subjective norms relate to the perceived social pressure to engage or not to engage in a behaviour (Ajzen, 1991 ) and such pressure ostensibly relates to how much individual control a person feels they have in relation to that behaviour. In this manner, the subjective norm construct in the TPB is related to the non-control construct in the AA model. An act of authorisation is, essentially, a behavioural act, so the TPB construct of behaviour and the AA construct of authorisation, have a clear association. Furthermore, the intention construct is common to both models. These associations are summarised in Table 3 and are discussed more fully in the section “ Relationships between AA and TPB constructs ”.

Methodology

This research uses a qualitative method (logical reasoning) supported by an illustrative example. First, it employs logical reasoning to develop an original conceptual model for online informed consent decision-making based upon the unification of the AA model of informed consent and the TPB: each of the constructs of each of the theories is dissected and conditions are highlighted under which a TPB construct aligns with an AA construct. Second, an illustrative example, consisting of an analysis of web users’ behaviour regarding tracking is employed to demonstrate various aspects of the model. The illustrative example is based upon extant research.

The use of an illustrative example to demonstrate the empirical relevance of a theoretical model was advanced by Eckstein ( 1975 ) in his seminal paper which considered how case studies could be used to facilitate research in the social sciences domain. More recently, Yin ( 1994 ) has similarly argued that the illustration of certain topics within a research domain, by way of case study or example, can greatly help to understand real-life phenomena in depth.

The use of an illustrative example is beneficial for three primary reasons: (i) it allows for closer inspection of constructs and empirical illustration of causal relationships by studying complex phenomena within their contexts (George & Bennett, 2005 ; Siggelkow, 2007 ); (ii) it can attain high levels of conceptual validity by providing a mechanism to refine concepts (George & Bennett, 2005 ); (iii) it allows for the study of phenomena that would otherwise be difficult to quantify or study outside of their natural setting (Bonoma, 1985 ).

More specifically, in the present research, using tracking cookies to illustrate specific aspects of the proposed model has the advantage of being able to leverage considerable prior research in the domain of web user behaviour in relation to cookie notices.

Relationships Between AA and TPB Constructs

This paper is primarily concerned with eliciting the relationships between the AA and the TPB constructs to facilitate a mode of consent elicitation that supports ethical information management and marketing practices. It uses the definitions of the constructs detailed within each of the models, as articulated by each of their architects, Faden and Beauchamp ( 1986 ) and Ajzen ( 1991 ) respectively. It is not concerned with scholarly debates surrounding the nature of these constructs. This section draws heavily upon the description of the TPB as per Ajzen ( 1991 ) and the interpretation of consent as per the AA model of Faden and Beauchamp ( 1986 ).

The Relationship Between Understanding and Attitude, and Between Understanding and PBC

This section demonstrates that the understanding construct embedded within the AA model of informed consent (Faden & Beauchamp, 1986 ) and the attitude construct in the TPB (Ajzen, 1991 ) are concomitant, to the extent that one overlaps, either wholly or partially, with the other. A similar demonstration is presented for the relationship between understanding and PBC.

The AA model of consent defines the understanding that is required for the validity of informed consent. This is wholly independent of what an individual actually understands. Therefore, required understanding and actual understanding may not be congruent.

Understanding is a highly nuanced phenomenon with numerous interpretations. The AA model, however, is concerned with two particular categories of understanding in relation to the consent process. These are (i) the requirement that an individual “understands that” they are authorising and (ii) that they “understand what” they are authorising (Faden & Beauchamp, 1986 ). Both are explored in this section.

The Relationship Between Understanding that they are Authorising and PBC Over the Authorisation Process

The PBC construct in the TPB owes much to Bandura’s work on self-efficacy (Ajzen, 2002 ). Perceived self-efficacy is concerned with “people’s beliefs about their capabilities to exercise control over their own level of functioning and over events that affect their lives” (Bandura, 1991 , p. 257). The difference between the two is that PBC is concerned with control over the behaviour, whereas perceived self-efficacy is concerned with control over outcomes (Ajzen, 2002 ). In the context of providing authorisation for the sharing of personal data, this distinction is important: an individual may have control over the authorisation behaviour (i.e. they can choose to share their personal data or not), but their control over the authorisation outcome (i.e. precisely how their personal data is used) is an entirely separate consideration.

The PBC construct in the TPB comprises control beliefs and the strengths of those beliefs in respect of their capacity to influence the behaviour in question. These beliefs combine to create a “perceived ease or difficulty of performing the behaviour”, which serves as the definition of PBC (Ajzen, 1991 ). The construct is given by Eq. (1):

$$\mathrm{PBC} \propto \sum_{i=1}^{n} c_i \, p_i \qquad (1)$$

where n is the number of salient control beliefs, c_i is the strength of each salient control belief, and p_i is the perceived power of the control belief c_i to facilitate or inhibit the behaviour.
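Read literally, Eq. (1) is a weighted sum over salient control beliefs. The short sketch below computes it for a handful of invented beliefs about accepting a consent request, including a negatively weighted belief of the kind discussed later in this section:

```python
# Each salient control belief: (c_i, p_i)
# c_i: strength of the control belief; p_i: perceived power of that belief to
# facilitate (+) or inhibit (-) the authorisation behaviour. Values are illustrative.
control_beliefs = [
    (0.9,  0.8),   # "I can see and click the consent button" - strongly facilitating
    (0.6,  0.5),   # "I know where the opt-out settings are" - moderately facilitating
    (0.7, -0.4),   # "The site will not work unless I accept" - inhibiting ("negative control")
]

pbc = sum(c_i * p_i for c_i, p_i in control_beliefs)  # Eq. (1): sum of c_i * p_i
print(f"PBC index: {pbc:.2f}")  # > 0 suggests, on balance, perceived control over authorising
```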

In the course of authorising, an individual uses any “right, power, or control” that they possess in order to bestow another with the right to act (Faden & Beauchamp, 1986 ). This Faden and Beauchamp describe as the “permission giving” and “transfer-of-control” function of authorisation.

To understand is to perceive. Therefore, an individual understanding that they are authorising, is synonymous with them perceiving that they are authorising, i.e. they perceive that they are giving permission and transferring control. If they perceive that they are transferring control, then it logically follows that they must also perceive that said control lies within their means in the first place. They may have one or more beliefs concerning their control over the authorisation process, and each such control belief ( c i ) they may hold to a greater or lesser extent ( p i ). The totality of these beliefs constitutes their understanding that they are authorising and it can be represented by the PBC construct in the TPB, if authorisation is considered as a form of behaviour.

Whether perceived control is transferred voluntarily is an entirely separate consideration. Voluntariness is a distinct construct that is related to the non-control construct in the AA model and the subjective norm construct in the TPB model, as discussed in the section “ Non-control (and its relationship with subjective norms) ”.

Whether an individual has the power or control to authorise is a matter of fact, not belief or understanding. It is a binary construct (i.e. they either have control or not), and it adds a third dimension (the first two being what is required to be understood and what an individual actually understands) to the consent process. Ajzen ( 1991 ) recognises this as actual control over the behaviour in question.

An individual’s beliefs regarding their control over the authorisation process may correspond with the fact of their control, or it may not. If it does not, it is a false belief. Faden and Beauchamp ( 1986 ) adopt a “justified belief” standard to evaluate the quality of an individual’s understanding: is the person holding the belief justified in believing that it is true?

The understanding of an action and the performance of it, based solely upon a demonstrably false belief, is less than fully autonomous (Faden & Beauchamp, 1986 ). Where an individual holds several beliefs, some justified and some demonstrably false, substantial understanding may still be possible, depending upon the extent to which the false beliefs affect their understanding.

There are two types of false beliefs concerning control over the authorisation process. Firstly, when an individual believes that they have control over the authorisation process when, in fact, they do not. Secondly, a false belief can also arise when an individual believes that they do not have control over the authorisation process when, in fact, they do. In both cases, the individual misunderstands their capabilities in terms of the authorisation. In the latter case, in terms of PBC constructs, this could be considered an individual holding a “negative control” belief (i.e. p i has a negative value).

In their discussion of false beliefs, Faden and Beauchamp ( 1986 ) declare, “some false beliefs are more important…than others, and these must be given more weight” (p. 253). This assertion accords with how beliefs are framed in the TPB: in the TPB, false control beliefs c i and c j are not necessarily equal: they have different weights or powers, p i and p j .

Furthermore, an individual may hold “mixed beliefs”, partly believing that they have control and partly believing that they do not. This ambivalence, in terms of PBC constructs, may be viewed as an individual holding one or more control beliefs with a positive p i and one or more control beliefs with a negative p i . The strength of each control belief, and whether it is positive or negative, determines, on balance, whether they believe they have control or not. Table 4 summarises this discourse.

Requiring an understanding that one is authorising and actually understanding that one is authorising are not synonymous. Given that a requirement of informed consent is the individual understanding that they are authorising, then they are required to hold a set of control beliefs that are compatible with that understanding. If they do, then their PBC over authorisation equates with their understanding that they are authorising. Conversely, in the case where an individual has the power or control to authorise a particular course of action, but they hold no belief that they do, then their PBC over authorisation does not match with the required understanding that they can authorise, and they cannot provide valid consent.

Further, an individual may not have the power or control to authorise a particular course of action, and hence they cannot provide valid consent, but they believe that they do. Here, they hold a demonstrably false belief. The belief of their control over the authorisation process does not correspond with the fact of their control, so any authorisation would be less than fully autonomous, and informed consent is impossible. Here, their PBC over authorisation equates with a false understanding that they can authorise.

If they do not have the power or control to authorise a particular course of action and this is what they believe, then, clearly, they do not possess an understanding that they can authorise, so informed consent is not possible. In this scenario, their PBC over authorisation corresponds with the understanding that they cannot authorise (see Table 5 ).

It is only when the required understanding, the actual understanding and the factual power/control over authorisation are all positive that informed consent is possible (see Fig.  2 ).

Figure 2: Conceptual representation of the relationship between the required understanding of power/control to authorise, the actual understanding of power/control to authorise, the factual control over authorisation, and where informed consent is possible
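The combinations summarised in Table 5 and Figure 2 reduce to a simple conjunction. The sketch below states that rule directly; the parameter names are descriptive labels introduced here for illustration, not the authors' terminology:

```python
def informed_consent_possible(required_understanding_met: bool,
                              believes_they_can_authorise: bool,
                              actually_has_control: bool) -> bool:
    """Informed consent is possible only when the required understanding is satisfied,
    the individual believes they can authorise, and that belief is factually correct."""
    return required_understanding_met and believes_they_can_authorise and actually_has_control

# A false belief of control (believes True, fact False) rules informed consent out:
print(informed_consent_possible(True, True, False))  # False
print(informed_consent_possible(True, True, True))   # True
```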

Understanding what they are Authorising (and Its Relationship with Attitudes)

Attitude is defined as the “degree to which a person has a favourable or unfavourable evaluation or appraisal of the behaviour in question” (Ajzen, 1991 ) and it is expressed by Eq. (2). According to Ajzen ( 1991 ), beliefs concerning the consequences of a behaviour determine attitudes towards the behaviour. Beliefs about something are formed by associating it with attributes. In the case of attitude towards a behaviour, those attributes may be the outcome or the cost of performing the behaviour. The attitude construct in the TPB is constituted of salient belief strengths (assessed usually on a scale ranging from likely to unlikely), and a subjective outcome evaluation of each belief held about the object of the attitude (assessed usually on a scale ranging from good to bad).

$$A_B \propto \sum_{i=1}^{n} b_i \, e_i \qquad (2)$$

where n is the number of salient beliefs, b_i is the strength of each salient belief, and e_i is the subjective evaluation of the belief’s attribute.
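Eq. (2) can be computed in the same way as Eq. (1). The sketch below uses invented salient beliefs about accepting a tracking-cookie request:

```python
# Each salient belief about the behaviour: (b_i, e_i)
# b_i: strength of the belief that performing the behaviour leads to attribute i;
# e_i: subjective evaluation of that attribute (negative = bad, positive = good). Values are illustrative.
salient_beliefs = [
    (0.9,  0.6),   # "Accepting gets me to the content quickly" - likely, valued positively
    (0.8, -0.8),   # "Accepting means my browsing is tracked" - likely, valued negatively
    (0.3,  0.2),   # "Accepting gives me better-targeted offers" - less likely, mildly positive
]

attitude = sum(b_i * e_i for b_i, e_i in salient_beliefs)  # Eq. (2): sum of b_i * e_i
print(f"Attitude index: {attitude:.2f}")  # here close to neutral, tipping slightly unfavourable
```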

In their discussion of what it means to understand an action, Faden and Beauchamp ( 1986 ) observe that it colloquially corresponds to having “justified beliefs” about the consequences of what one is doing (Faden & Beauchamp, 1986 ). Regarding the concern with “beliefs”, already apparent, therefore, is some similarity between the attitude construct in the TPB and the understanding construct in the AA model.

The AA model of consent requires a substantial understanding of what one is authorising. Substantial understanding lies on the continuum between full understanding and full ignorance (Faden & Beauchamp, 1986 ). As discussed earlier, substantial understanding requires apprehension of all material or important descriptions.

A material description is one that would be viewed, by the individual concerned, as worthy of consideration in deciding whether to perform a proposed action (Faden & Beauchamp, 1986 ). If an individual regards a description as worthy of consideration regarding their decision to authorise, they must be able to form some belief concerning that description and accord some positive or negative attributes to that belief. The individual may possess several beliefs concerning the material description, and each belief they may hold to a greater or lesser extent. The totality of these beliefs represents their apprehension of the material descriptions. This corresponds with the summative salient belief index in the attitudinal construct in the TPB. According to the TPB, an attitude will be formed towards what is being authorised, based upon the material descriptions. For substantial understanding to be achieved, it is necessary to form an attitude towards what is being authorised based upon all material descriptions.

Substantial understanding also requires an “extra subjective component”. The extra subjective component essentially comprises some objective facts that must be understood. These are “the essential elements of the arrangement” (Faden & Beauchamp, 1986 ). The attitude towards what is being authorised will only be affected if the individual forms salient beliefs concerning the objective facts that must be understood. Salient beliefs will only be formed in respect of material descriptions. If these objective facts are material to the individual, then salient beliefs will be formed which will affect attitude formation. If the individual has an apprehension of all the material descriptions and the objective facts are material to the individual, then, for the purposes of the present paper, it will be called an “informed attitude”. If the objective facts are not material to the individual, then salient beliefs will not be formed regarding them, and they will not affect attitude formation: this will be called an “uninformed attitude”. An uninformed attitude is also formed if individuals form an attitude towards what they are authorising based upon some, but not all, material descriptions.

The apprehension of non-material or unimportant descriptions of what one is authorising has no bearing upon informed consent. Given that they are not material to the individual, the individual will not form salient beliefs regarding those descriptions. The attitude construct in the TPB is concerned with salient beliefs, so these non-material descriptions will not contribute to the formation of the attitude towards what they are authorising.

In summary, informed consent requires attitudes to be formed towards all material descriptions of what is being authorised. The material descriptions must include the objective facts required to be understood; attitudes towards non-material descriptions have no bearing upon informed consent. Figure  3 illustrates the composition of informed and uninformed attitudes.

Figure 3: Conceptual representation of the factors at play in the definition of (a) informed and (b) uninformed attitudes
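The distinction between informed and uninformed attitudes can likewise be stated as a small decision rule. The function below paraphrases the two conditions identified above; it is an illustrative restatement, not the authors' formalism:

```python
def attitude_is_informed(apprehends_all_material_descriptions: bool,
                         objective_facts_are_material_to_person: bool) -> bool:
    """An "informed attitude" requires apprehension of every material description AND that the
    objective facts which must be understood are themselves material to the individual."""
    return apprehends_all_material_descriptions and objective_facts_are_material_to_person

print(attitude_is_informed(True, True))    # informed attitude
print(attitude_is_informed(True, False))   # uninformed: objective facts not material to the person
print(attitude_is_informed(False, True))   # uninformed: some material descriptions not apprehended
```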

Note that it may be the case that a description that is material to the person’s decision to authorise is a false description which leads to a false (but salient) belief formation. The false belief will affect the attitude towards what is being authorised. Where an individual holds several beliefs, some justified and some demonstrably false, but salient, substantial understanding may still be possible, depending upon the extent to which the false beliefs affect their understanding. An informed attitude is still possible, but it will only be formed if substantial understanding is achieved and the objective facts that must be understood are material to the individual.

Non-control (and Its Relationship with Subjective Norms)

This section highlights the relationship that exists between the subjective norm component of the TPB and the non-control component of the AA model of informed consent.

For valid informed consent, there must be an absence of external control on the individual’s decision to authorise. Faden and Beauchamp ( 1986 ) call this criterion “non-control”. Control is applied through influencing the individual’s analysis of a situation. Faden and Beauchamp ( 1986 ) describe three types of external influence: persuasion, manipulation and coercion. In their description, persuasion is never controlling, while coercion is always controlling. Manipulation occupies the grey region between the two, and it may be persuasive or coercive, depending upon the degree to which the individual’s decision is affected by the influence. If the influence on action is substantially controlling, then the action cannot be autonomous. Conversely, if an influence on action is substantially non-controlling, then the action can be autonomous. Therefore, non-control reflects a spectrum of allowable influence upon an informed consent decision that does not invalidate it. Consequently, the relationship between subjective norm and non-control is demonstrated by illustrating that a subjective norm is a form of influence and that it can exist on the same spectrum of influence as that of non-control.

Norms have a powerful and consistent impact on behaviour (Cialdini et al., 1991 ). With reference to a particular social group, two types of norms exist: descriptive norms and injunctive norms. A descriptive norm is an individual’s perception of what most people do, and its motivational element is characterised by informational social influence (Cialdini et al., 1991 ). Informational social influence is defined as “an influence to accept information obtained from another as evidence about reality” (Deutsch & Gerard, 1955 ). An injunctive norm is an individual’s perception of what most people would approve or disapprove of, and its motivational element is characterised by normative social influence (Cialdini et al., 1991 ). Normative social influence is defined as the influence to conform to the positive expectations of another person, a group, or even one’s “self” (Deutsch & Gerard, 1955 ). The subjective norm component in the TPB is an injunctive norm (White et al., 2009 ). It is therefore concerned with normative social influence, so it may be substantially controlling or substantially non-controlling.

Figure  4 illustrates the relationship between coercive influences (which are controlling) and persuasive influences (which are not controlling). Manipulation is depicted as occupying the region between coercion and persuasion and it can either be substantially controlling or substantially non-controlling depending upon where it lies on the influence scale. Figure  4 also illustrates the influence spectrum along which normative social influences (NSI) and informational social influences (ISI) may operate. Only when the combination of both of these influences is not substantially controlling is informed consent possible. If their combination is substantially controlling, then informed consent is not possible.

Figure 4. Relationship between autonomy and norm formation, showing (a) the absence or (b) the presence of controlling influences, building upon Faden and Beauchamp (1986)

Relationship Between AA Intention and TPB Intention

The intention construct is common to the AA model of consent and the TPB. This section examines the interpretation of intention in each of the models and then proceeds to highlight the relationship that exists between both manifestations.

In the TPB, intentions are “indications of how hard people are willing to try, of how much of an effort they are planning to exert, in order to perform the behaviour” and they “capture the motivational factors that influence a behaviour” (Ajzen, 1991 , p. 181). The TPB deliberates very little over the nature of intention. However, it is clear that intention in the TPB is a motivational construct and is associated with a willingness to expend effort to try to enact the behaviour (Rhodes et al., 2006 ).

The model of intentional action used by Faden and Beauchamp (1986) is somewhat different from that used in the TPB. In their model, in addition to acts that are willed in accordance with a plan (as per the TPB formulation), they also include “tolerated acts”. Tolerated acts are not undertaken willingly: they may be undesirable or unwanted but follow from the willed acts (Faden & Beauchamp, 1986), possibly as a “side effect”. Therefore, according to the TPB, given the lack of willingness, tolerated acts could not be considered intentional. Hence, for the act of providing informed consent, intention as per the AA model of informed consent is a superset of the intention with which the TPB is concerned. Figure 5 illustrates this relationship.

Figure 5. Relationship between intention as per the AA model and intention as per the TPB

Relationship Between Authorisation and Behaviour

There is no generally accepted definition of the phenomenon that we call “behaviour”. For the purposes of the present analysis, the version that descriptive psychology offers is adopted. This states that behaviour is an attempt on the part of an individual to bring about some state of affairs—either to change that state of affairs or to maintain it (Bergner, 2011 ; Ossorio, 2006 ).

The act of authorising provides official permission for or formal approval to an action or an undertaking (Oxford English Dictionary, 2019 ). Authorisation brings about a state change. This state change concerns permission or approval: what once did not have permission or approval, following authorisation, gains permission or approval. It follows that authorisation is a form of behaviour.

Relationship Combination

Sections “ The relationship between understanding and attitude, and between understanding and PBC ”, “ Non-control (and its relationship with subjective norms) ”, “ Relationship between AA intention and TPB intention ” and “ Relationship between authorisation and behaviour ” related the constructs of the TPB to those of the AA model of informed consent. Figure  6 shows the resultant original conceptual model which overlays the AA constructs of understanding, non-control, intention and authorisation onto the TPB model.

Figure 6. Conceptual model of the relationship between TPB constructs and AA model of informed consent constructs [building upon Ajzen (1991)]

Illustrative Example: Tracking Cookies

Cookies are short text strings sent by web servers to the browser of an internet user. They initially emerged as a means for web users to revisit websites without re-identifying themselves and their preferences with each visit (Millett et al., 2001). However, they have evolved to be capable of collecting information about users’ browsing habits, information that can be distributed extensively and with relative ease to companies that can utilise it in marketing campaigns (Palmer, 2005; Stead & Gilbert, 2001). The GDPR requires that acceptance of these so-called tracking cookies is accompanied by an associated cookie policy or notice which explains how the user’s data is to be used (Bornschein et al., 2020). However, research has shown that many web users routinely accept these cookies with little or no reflection (Choi et al., 2018; Utz et al., 2019) on their consequences or on how they provide a mechanism through which their online practices may be monitored (Bauer et al., 2021; Lin & Loui, 1998). The unified model presented in this paper provides a means through which this form of web user behaviour can be better understood.
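
To make the mechanism concrete, the sketch below shows in simplified form how a server can set a persistent identifier cookie and read it back on later requests. It is a minimal illustration only: the endpoint, cookie name, and value are hypothetical, the Flask microframework is used purely for brevity, and the code is not drawn from any of the studies cited above.

```python
# Minimal sketch (hypothetical names): a server assigns a persistent
# identifier on the first visit and reads it back on every subsequent
# request, allowing browsing activity to be linked to the same user.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    visitor_id = request.cookies.get("visitor_id")  # sent back automatically by the browser
    if visitor_id is None:
        visitor_id = str(uuid.uuid4())               # first visit: mint a new identifier
    resp = make_response(f"Hello, visitor {visitor_id}")
    # Persist the identifier for a year; in practice this is the point at
    # which a GDPR-compliant site would first seek consent via a cookie notice.
    resp.set_cookie("visitor_id", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run()
```

The relevance to the present argument is that clicking “accept” on a notice authorises exactly this kind of persistent, machine-readable link between the user and their subsequent browsing; that state change is what the consent decision governs.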

This section briefly explores the case of tracking cookies and how they provide a means to illustrate and contextualise the relationships between the constructs of the TPB and the AA model of consent presented in the section “ Relationships between AA and TPB constructs ”.

The Relationship Between the Web User Understanding that they are Authorising and Their PBC

When a person browses a website for access to goods or services, permission to share their personal data via a cookie notice is frequently requested (Bornschein et al., 2020 ). In this regard, they are presented with an authorisation request. This is consistent with the predominant “notice and choice” paradigm of privacy self-management, in which individuals act as their own gatekeeper for access to their personal information (Hoofnagle & Urban, 2014 ; Milne & Rohm, 2000 ; Solove, 2013 ).

The person may not understand that they are being asked to authorise something (Plaut & Bartlett, 2012 ) and, even if they do, it may not be entirely clear to them precisely what they are authorising (Acquisti & Grossklags, 2005 ; Solove, 2013 ). However, when they do understand that their authorisation is being requested (via the cookie notice), it is associated with increased perceived power, or control, over the process (Bornschein et al., 2020 ). Conversely, if the person does not perceive that they have control over their own act of authorisation (i.e. that they must accept the cookie), then it is most unlikely that they consider that they are performing a legitimate act of authorisation.

The Relationship Between the Web User Understanding what is Being Authorised and Their Attitude Towards Authorisation: Informed and Uninformed Attitudes

The web user’s attitude towards authorising the processing of their personal data via a cookie notice would, in line with the TPB, be shaped by their beliefs concerning the consequences of clicking “I accept”. It is evident that many people do not make active, informed choices (Ploug & Holm, 2012 , 2015b ; Schermer et al., 2014 ), and are simply unaware of the consequences of their acceptance. In this case, their lack of understanding of the consequences translates directly to a lack of justified belief formation (Faden & Beauchamp, 1986 ). Cookie notices generally provide too many or too few options, fuelling the belief that the choices are not meaningful, resulting in a lack of engagement with the notice (Utz et al., 2019 ). A lack of interest in the notice would correspond to the web user deeming the notice as being unworthy of consideration because they regard it as immaterial [i.e. not a “material description”, as per Faden and Beauchamp ( 1986 )]. Therefore, any objective facts that must be understood for substantial understanding to be achieved would likely remain unknown to the web user, and an uninformed attitude would be established. Conversely, let us suppose that the web user takes an active interest in the cookie notice. In this case, their interest logically derives from the wish to reach a (substantial) understanding of any clauses that may be of concern to them (i.e. “material descriptions”), because they have particular attitudes towards the consequences of sharing their personal data. If the clauses of interest also correspond to the essential elements of the agreement, then an informed attitude would be formed.

The Relationship Between Non-control and Subjective Norms

It has been argued that companies abuse their power as cookie policy authors by using linguistic techniques to obfuscate reality and to confuse and deceive the user (Pollach, 2005). For example, cookies are often described as “small files” that are used as “standard practice”, suggesting that they are innocuous and no cause for concern (Pollach, 2005). If a cookie banner is designed to manipulate web user acceptance, then, following the logic of Faden and Beauchamp (1986), there is a substantially controlling influence on the user’s decision to accept a tracking cookie, and it cannot be legitimately asserted that the web user can act autonomously (Bauer et al., 2021). Moreover, in the case of the tracking cookie, non-acceptance may, for example, result in a web user’s exclusion from online social activity in which significant others partake. This engages the subjective norm construct within the TPB. Social injunctive norms are subjective norms which are concerned with perceived social pressures from significant others to perform a particular behaviour (White et al., 2009). While subjective norms can motivate action, research has highlighted their weakness as a predictor of behaviour (Ajzen, 1991). Notwithstanding this, perceived social pressure (the subjective norm), which is characterised by the normative social influence to conform to another’s, or even one’s own, expectations (Cialdini et al., 1991; Deutsch & Gerard, 1955), has been shown to be the most important TPB factor in predicting the intention to disclose personal information for incentives offered by commercial websites (Heirman et al., 2013). Such norms can exert a strong influence upon the individual’s decision to accept a cookie, to such an extent that they may be regarded as being substantially controlling.

Intention: Secondary Exchanges and Tolerated Acts

The web user’s intention to share personal data via the acceptance of a tracking cookie is often part of a secondary exchange in relation to the primary exchange of their access to mainstream goods or services (Obar, 2020 ). This secondary exchange, which is central to the understanding of a user’s privacy concerns, provides the information required to support a marketing relationship with the user (Culnan & Bies, 2003 ), and may be regarded as a tolerated act, as per Faden and Beauchamp’s ( 1986 ) AA model of consent. Clear evidence of the toleration effect was seen in a 2016 study which found that participants, when engaging with a social networking site, considered notices as an “unwanted impediment” to the real purpose of accessing the site (Obar & Oeldorf-Hirsch, 2020 ).

The present research addresses the identified gap: the absence of a macro-level behavioural model for online informed consent that consolidates privacy-related constructs across disparate contexts. It does so by developing a parsimonious, original conceptual model for online informed consent, rooted in normative ethics theory and argued through logical reasoning, which unites the AA model of informed consent and the TPB. This model facilitates the analysis of acts of online consent through a well-established behavioural theory. This is illustrated by way of an exemplar that applies the model to the case of tracking cookies and explains a variety of web user behaviours in the context of the relationships between AA constructs and TPB constructs.

While various theoretical perspectives on consent have been highlighted in this paper, contractualism, with its particular focus on the autonomy and informed consent of the individual, would appear to be an appropriate lens through which to view personal data-sharing arrangements. The key word here is “personal”. The “personal” is the bedrock of contractualism, whereas stakeholder theory subsumes personal interest to the collective and consequentialism abrogates personal interest if greatest utility can be found by other means.

This paper also contributes to ethical information management research and ethical marketing practices, at a theoretical and a practical level, by shedding light on the operation of consent elicitation in online interactions and transactions between web users and businesses and how such elicitation can align with architectures that properly respect the interests of users, for example, PbD implementations.

Theoretical Contributions

The current research develops theory in the domain of ethical personal information management. The primary theoretical contribution of this paper is the unification of the TPB with the AA model of informed consent, which were hitherto only considered distinctly. The proposed conceptual model augments privacy practices (e.g. PbD) by assisting organisations and researchers to understand the mechanism of a web user’s consent provision across a variety of contexts, thus facilitating ethical online consent processes that prioritise users’ interests.

The proposed model also benefits inter-disciplinary theoretical practice by extending the AA model of informed consent, which has a tradition in the sphere of medico-legal ethics, to a new sphere within business ethics. This has particular relevance to domains concerning the collection and processing of personal data for management or marketing purposes.

Another theoretical contribution of this research, which derives from the unified model, is to demonstrate that informed consent for personal data processing can be possible under circumstances of unwillingness, if it is a tolerated side effect of some overarching objective, providing it is given voluntarily and with substantial understanding of the data-sharing arrangement.

The notion of a tolerated act (as per the AA model of consent) accords well with online behaviour in relation to tracking cookies, where the sharing of personal data often occurs as an exchange which is usually secondary in relation to the primary act of accessing online goods and services. Regarding the intention to consent, the addition of tolerated acts is key to understanding informed consent in the context of personal data sharing, i.e. in personal data-sharing contexts, often the intention to share data is situated in the “tolerated” space. According to the TPB, given the lack of willingness, tolerated acts could not be considered as intentional. However, this work provides an argument demonstrating that it is theoretically possible for informed consent to be valid in circumstances where personal data is shared unwillingly. Given that consent is frequently provided under circumstances in which there are asymmetries in knowledge and power (Solove, 2013 ), this has considerable ethical implications as it appears to shift the balance of power further in favour of the consent-requester. However, this is only the case if toleration is considered as a binary construct. As toleration is often measured in degrees (Crick, 2014 ), it should then be considered whether there is a level of toleration that is too great and beyond which consent can be nullified. Moving the focus of online personal data sharing from willingness to toleration has clear and significant ethical implications, but ostensibly represents a more authentic recognition of the online consent dynamic in many situations. One such implication, which follows directly from the analysis herein, concerns Mill’s ( 1859 ) consequentialist perspective of autonomy: tolerated acts that are unwanted or undesirable diminish a person’s sense of autonomy and can have detrimental consequences for well-being. Conversely, acts that are willed are positively attributed and heighten a person’s sense of autonomy and well-being.

A further theoretical contribution concerns the informed attitude: this construct is a new theoretical conception that especially cements the link between informed consent and behaviour. It sheds light on what it means, from a behavioural perspective, to understand an agreement for which informed consent is being sought. This construct frames informed consent in terms of people having established attitudes towards (i) all aspects of the agreement that are important to them (subjective components) and (ii) the essential elements of the agreement (objective components). Informed consent is not possible if either of these criteria is not met.

The concept of the uninformed attitude is particularly helpful in explaining the disinterest and lack of understanding that is often exhibited by web users regarding cookie notices, and the associated criticism that online consent is rarely truly informed. Yet, operationalisation of the informed attitude construct presents challenges: descriptions that are important to one individual may not be important to another individual; they are subjective. However, if only objective facts that must be understood were required for consent to be valid, operationalisation would be simplified, but at the expense of violating an aspect of the AA model of consent (i.e. ignoring subjective elements of the agreement that are important to a particular individual); on balance, the practical benefits of such an implementation may outweigh the dilution of the ethical purity of a true AA model operationalisation.

Practical Contributions

With an increasingly regulated environment, effective management of consent processes for web user data collection is becoming increasingly important. While there is an abundance of guidance for organisations concerning data protection and management, there is a lack of guidance on what constitutes collection that considers the web user’s attitudes, their sense of control and subjective norms, i.e. collection that is user-centric. This largely stems from the absence of a framework for online informed consent that considers people’s behaviour, and it is having a detrimental effect on public trust in personal data processing activities.

By dissecting user behaviour in relation to online consent through the creation of a high-level behavioural model, as presented in this paper, it is possible to build ethical consent management processes for businesses that leverage the TPB’s rich backdrop of methods and materials to better understand various online consent behaviours. For example, if there is a lack of understanding of what is being authorised, behavioural interventions that target attitudes towards what requires authorisation may be considered. Or, if the voluntariness of consent is in question, an avenue of redress may be to examine any subjective norms that may be at play. A lack of understanding may also deny a person the ability to make a rational decision. As well as being a clear violation of informed consent as per the AA model, in Kantian terms it also violates the Categorical Imperative, because the person could not be considered a rational agent applying the principle of reason in an autonomous fashion. Respecting autonomy and keeping the interests of individuals uppermost, as well as having benefits for well-being (as discussed), aligns with principles of user-centricity, such as in the PbD framework, and can assist in addressing the public’s growing scepticism about personal data collection procedures.

Conclusions

A model for online informed consent has been developed that can be employed to augment ethical personal information collection and associated marketing practices by creating a user-centric model for consent transactions which unites behavioural theory, specifically the TPB, with the AA model of informed consent. This unified model creates a novel theoretical platform that explains the internal mechanisms of online consent behaviours and which, depending upon one’s theoretical standpoint, can be viewed through the normative ethical lens of either consequentialism or deontology. A qualitative method has been adopted in which the model is constructed through logical reasoning and then illustrated using the example of tracking cookies. It is shown that, under certain conditions, (i) the understanding construct in the AA model of consent aligns with two constructs in the TPB: attitude and perceived behavioural control . (ii) Non-control within the AA model aligns with the subjective norm in the TPB. (iii) Intention is common to both models, albeit with subtle but significant differences in meaning. Finally, (iv) the authorisation element of the AA model equates to the behaviour component in the TPB model.

The model also introduces a novel construct, the informed attitude , which must be present for informed consent to be valid. An informed attitude to an agreement is formed if an individual understands (a) all aspects of the agreement that are important to them and (b) the essential elements of the agreement.

Of particular ethical significance in the information management and marketing domains is the determination that it is theoretically possible for a web user to share personal data unwillingly , and for consent to remain informed, if it is a tolerated side effect of some greater intended purpose, provided it is given voluntarily and with substantial understanding. However, the unwillingness of consent provision for data-sharing activities sits uneasily with the narrative of individual autonomy and privacy practices, although it may be tempered if toleration is measured on a continuum on which there are levels of toleration beyond which consent could not be considered authentic.

Limitations and Further Work

A limitation of this study is related to the inclusion of descriptive norms in the model. This has the consequence that the model does not provide a definitive substantially controlling/non-controlling outcome in the case where informational social influence (ISI) is substantially controlling and normative social influence (NSI) is substantially non-controlling (or vice versa). Extending the model to overcome this limitation will require a construct that allows for the weighting of a combined NSI/ISI. This scenario can be explored in future research.

Further research which analyses the willingness of consent in a variety of online personal information disclosure contexts, and to what extent such disclosure is simply tolerated, would clarify the extent to which greater research efforts should be directed towards analysing online consent through a new ethical lens of toleration versus the traditional lens of willingness.

As substantial understanding is a core component of informed consent, another fruitful avenue of future research would be to investigate ways in which actual understanding can reach substantial understanding by managing other components of the unified model.

Operationalisation of the informed attitude construct would require further research efforts at the theoretical level. If understanding an agreement required only the understanding of the objective facts, the departure that this would represent from a pure AA model operationalisation may predicate a modified form of consent that lies between the effective model of consent and the AA model.

Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and human behavior in the age of information. Science, 347 (6221), 509–514. https://doi.org/10.1126/science.aaa1465

Acquisti, A., & Grossklags, J. (2005). Privacy and rationality in individual decision making. IEEE Security & Privacy, 3 (1), 26–33.

Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. Action control (pp. 11–39). Springer.

Ajzen, I. (1991). The theory of planned behaviour. Organizational Behavior and Human Decision Processes, 50 (2), 179–211. https://doi.org/10.1016/0749-5978(91)90020-t

Ajzen, I. (2002). Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior. Journal of Applied Social Psychology, 32 (4), 665–683.

Alharthi, A., Krotov, V., & Bowman, M. (2017). Addressing barriers to big data. Business Horizons, 60 (3), 285–292.

Ambler, T., & Wilson, A. (1995). Problems of stakeholder theory. Business Ethics: A European Review, 4 (1), 30–35.

André, Q., Carmon, Z., Wertenbroch, K., Crum, A., Frank, D., Goldstein, W., Huber, J., Van Boven, L., Weber, B., & Yang, H. (2018). Consumer choice and autonomy in the age of artificial intelligence and big data. Customer Needs and Solutions, 5 (1), 28–37.

Andrew, J., & Baker, M. (2021). The general data protection regulation in the age of surveillance capitalism. Journal of Business Ethics, 168 (3), 565–578.

Apau, R., & Koranteng, F. N. (2019). Impact of cybercrime and trust on the use of e-commerce technologies: An application of the theory of planned behavior. International Journal of Cyber Criminology, 13 (2), 228.

Bagozzi, R. P. (1992). The self-regulation of attitudes, intentions, and behavior. Social Psychology Quarterly, 55 , 178–204.

Bagozzi, R. P. (2007). The legacy of the technology acceptance model and a proposal for a paradigm shift. Journal of the Association for Information Systems, 8 (4), 244–254.

Bandura, A. (1986). Social foundation of thought and action: A social-cognitive view . Englewood Cliffs.

Bandura, A. (1991). Social cognitive theory of self-regulation. Organizational Behavior and Human Decision Processes, 50 (2), 248–287.

Bashir, M., Hayes, C., Lambert, A. D., & Kesan, J. P. (2015). Online privacy and informed consent: The dilemma of information asymmetry. Proceedings of the Association for Information Science and Technology, 52 (1), 1–10.

Bauer, J. M., Bergstrøm, R., & Foss-Madsen, R. (2021). Are you sure, you want a cookie? The effects of choice architecture on users’ decisions about sharing private online data. Computers in Human Behavior, 120 , 106729.

Beauchamp, T. L. (2011). Informed consent: Its history, meaning, and present challenges. Cambridge Quarterly of Healthcare Ethics, 20 (4), 515–523.

Benbasat, I., & Barki, H. (2007). Quo vadis TAM? Journal of the Association for Information Systems, 8 (4), 211–218.

Bergner, R. M. (2011). What is behavior? And so what? New Ideas in Psychology, 29 (2), 147–155.

Bonoma, T. V. (1985). Case research in marketing: Opportunities, problems, and a process. Journal of Marketing Research, 22 (2), 199–208.

Borgesius, F. Z. (2015). Informed consent: We can do better to defend privacy. IEEE Security & Privacy, 13 (2), 103–107.

Bornschein, R., Schmidt, L., & Maier, E. (2020). The Effect of consumers’ perceived power and risk in digital information privacy: The example of cookie notices. Journal of Public Policy & Marketing, 39 (2), 135–154.

Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Campbell, L. (2017). Kant, autonomy and bioethics. Ethics, Medicine and Public Health, 3 (3), 381–392.

Carolan, E. (2016). The continuing problems with online consent under the EU’s emerging data protection principles. Computer Law & Security Review, 32 (3), 462–473. https://doi.org/10.1016/j.clsr.2016.02.004

Cate, F. H., & Mayer-Schönberger, V. (2013). Notice and consent in a world of big data. International Data Privacy Law, 3 (2), 67–73. https://doi.org/10.1093/idpl/ipt005

Cavoukian, A. (2009). Privacy by design: The 7 foundational principles. Information and Privacy Commissioner of Ontario, Canada, 5 , 12.

Choi, H., Park, J., & Jung, Y. (2018). The role of privacy fatigue in online privacy behavior. Computers in Human Behavior, 81 , 42–51.

Cialdini, R. B., Kallgren, C. A., & Reno, R. R. (1991). A focus theory of normative conduct: A theoretical refinement and reevaluation of the role of norms in human behavior. Advances in experimental social psychology (Vol. 24, pp. 201–234). Elsevier.

Cohen, S. (1995). Stakeholders and consent. Business & Professional Ethics Journal, 14 , 3–16.

Crespo, Á. H., & del Bosque, I. R. (2008). The effect of innovativeness on the adoption of B2C e-commerce: A model based on the Theory of Planned Behaviour. Computers in Human Behavior, 24 (6), 2830–2847.

Crick, B. (1971). Toleration and tolerance in theory and practice. Government and Opposition, 6 (2), 143–171.

Culnan, M. J., & Bies, R. J. (2003). Consumer privacy: Balancing economic and justice considerations. Journal of Social Issues, 59 (2), 323–342.

Cummiskey, D. (1990). Kantian consequentialism. Ethics, 100 (3), 586–615.

Deloitte. (2019). The Deloitte Global Millennial Survey 2019: Societal discord and technological transformation create a "generation disrupted". Retrieved August 21, 2019, from https://www2.deloitte.com/global/en/pages/about-deloitte/articles/millennialsurvey.html

Deutsch, M., & Gerard, H. B. (1955). A study of normative and informational social influences upon individual judgment. The Journal of Abnormal and Social Psychology, 51 (3), 629.

Dinev, T., & Hart, P. (2006). An extended privacy calculus model for e-commerce transactions. Information Systems Research, 17 (1), 61–80.

Eckstein, H. (1975). Case study and theory in political science. In F. I. Greenstein & N. W. Polsby (Eds.), Strategies of inquiry, handbook of political science (Vol. 7, pp. 94–137). Addison-Wesley.

Eysenck, H. J. (1963). Biological basis of personality. Nature, 199 (4898), 1031–1034.

Faden, R. R., & Beauchamp, T. L. (1986). A history and theory of informed consent . Oxford University Press.

Friedman, B., Felten, E., & Millett, L. I. (2000). Informed consent online: A conceptual model and design principles. In: University of Washington Computer Science & Engineering Technical Report 00–12–2.

George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences . MIT Press.

Goldberg, L. R. (1990). An alternative “description of personality”: The big-five factor structure. Journal of Personality and Social Psychology, 59 (6), 1216.

Hajli, N., Shirazi, F., Tajvidi, M., & Huda, N. (2021). Towards an understanding of privacy management architecture in big data: An experimental research. British Journal of Management, 32 (2), 548–565.

Hajli, N., Sims, J., Zadeh, A. H., & Richard, M.-O. (2017). A social commerce investigation of the role of trust in a social networking site on purchase intentions. Journal of Business Research, 71 , 133–141.

Hasnas, J. (1998). The normative theories of business ethics: A guide for the perplexed. Business Ethics Quarterly, 8 (1), 19–42.

Heirman, W., Walrave, M., & Ponnet, K. (2013). Predicting adolescents’ disclosure of personal information in exchange for commercial incentives: An application of an extended theory of planned behavior. Cyberpsychology Behavior and Social Networking, 16 (2), 81–87. https://doi.org/10.1089/cyber.2012.0041

Heugens, P. P., van Oosterhout, J. H., & Kaptein, M. (2006a). Foundations and applications for contractualist business ethics. Journal of Business Ethics, 68 (3), 211–228.

Heugens, P. P., Kaptein, M., & van Oosterhout, J. H. (2006b). The ethics of the node versus the ethics of the dyad? Reconciling virtue ethics and contractualism. Organization Studies, 27 (3), 391–411.

Ho, S. S., Lwin, M. O., Yee, A. Z., & Lee, E. W. (2017). Understanding factors associated with Singaporean adolescents’ intention to adopt privacy protection behavior using an extended theory of planned behavior. Cyberpsychology, Behavior, and Social Networking, 20 (9), 572–579.

Hofmann, B. (2009). Broadening consent—and diluting ethics? Journal of Medical Ethics, 35 (2), 125–129.

Hong, W., Chan, F. K., & Thong, J. Y. (2021). Drivers and inhibitors of internet privacy concern: A multidimensional development theory perspective. Journal of Business Ethics, 168 (3), 539–564.

Hoofnagle, C. J., & Urban, J. M. (2014). Alan Westin’s privacy homo economicus. Wake Forest Law Review, 49 , 261.

Hurd, H. M. (1996). The moral magic of consent. Legal Theory, 2 (02), 121–146.

Kant, I. (1785). Groundwork for the metaphysics of morals . Oxford University Press.

Kirby, M. (1983). Informed consent: What does it mean? Journal of Medical Ethics, 9 (2), 69–75.

Lanier, J., & Weyl, E. G. (2018). A blueprint for a better digital society . Harvard Business Review.

Li, H., Sarathy, R., & Xu, H. (2011). The role of affect and cognition on online consumers’ decision to disclose personal information to unfamiliar online vendors. Decision Support Systems, 51 (3), 434–445.

Li, Y., Huang, Z., Wu, Y. J., & Wang, Z. (2019). Exploring how personality affects privacy control behavior on social networking sites. Frontiers in Psychology, 10 , 1771.

Lin, D., & Loui, M. C. (1998). Taking the byte out of cookies: Privacy, consent, and the Web. ACM Policy, 28 , 39–51.

Mai, J.-E. (2016). Big data privacy: The datafication of personal information. Information Society, 32 (3), 192–199. https://doi.org/10.1080/01972243.2016.1153010

Malhotra, N. K., Kim, S. S., & Agarwal, J. (2004). Internet users’ information privacy concerns (IUIPC): The construct, the scale, and a causal model. Information Systems Research, 15 (4), 336–355. https://doi.org/10.1287/isre.1040.0032

Marta, J. (1996). A linguistic model of informed consent. Journal of Medicine and Philosophy, 21 (1), 41–60.

Marwick, A., & Hargittai, E. (2019). Nothing to hide, nothing to lose? Incentives and disincentives to sharing information with institutions online. Information, Communication & Society, 22 (12), 1697–1713.

McAfee, A., Brynjolfsson, E., Davenport, T. H., Patil, D., & Barton, D. (2012). Big data: The management revolution. Harvard Business Review, 90 (10), 60–68.

McCrae, R. R., & Costa, P. T. (1987). Validation of the five-factor model of personality across instruments and observers. Journal of Personality and Social Psychology, 52 (1), 81.

Mikalef, P., Boura, M., Lekakos, G., & Krogstie, J. (2019). Big data analytics capabilities and innovation: The mediating role of dynamic capabilities and moderating effect of the environment. British Journal of Management, 30 (2), 272–298.

Mill, J. S. (1859). On liberty . Broadview Press.

Miller, F. G., & Wertheimer, A. (2011). The fair transaction model of informed consent: An alternative to autonomous authorization. Kennedy Institute of Ethics Journal, 21 (3), 201–218.

Millett, L. I., Friedman, B., & Felten, E. (2001). Cookies and web browser design: toward realizing informed consent online. In: Proceedings of the SIGCHI conference on Human factors in computing systems.

Milne, G. R., & Rohm, A. J. (2000). Consumer privacy and name removal across direct marketing channels: Exploring opt-in and opt-out alternatives. Journal of Public Policy & Marketing, 19 (2), 238–249.

Obar, J. A. (2020). Sunlight alone is not a disinfectant: Consent and the futility of opening big data black boxes (without assistance). Big Data & Society, 7 (1), 2053951720935615.

Obar, J. A., & Oeldorf-Hirsch, A. (2020). The biggest lie on the internet: Ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society, 23 (1), 128–147.

O'Flaherty, K. (2018). This is why people no longer trust Google and Facebook with their data. Forbes. Retrieved from https://www.forbes.com/sites/kateoflahertyuk/2018/10/10/this-is-why-people-no-longer-trust-google-and-facebook-with-their-data/#61f2b1f54b09

Ossorio, P. G. (2006). The behavior of persons . Psychology Press.

Oxford English Dictionary. (2019). Oxford English dictionary . Oxford University Press.

Palmer, D. E. (2005). Pop-ups, cookies, and spam: Toward a deeper analysis of the ethical significance of internet marketing practices. Journal of Business Ethics, 58 (1), 271–280.

Pantano, E., Dennis, C., & Alamanos, E. (2021). Retail managers’ preparedness to capture customers’ emotions: A new synergistic framework to exploit unstructured data with new analytics. British Journal of Management, 2021 , 1–21.

Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19 , 123–205.

Plaut, V. C., & Bartlett, R. P. (2012). Blind consent? A social psychological investigation of non-readership of click-through agreements. Law and Human Behavior, 36 (4), 293–311. https://doi.org/10.1037/h0093969

Ploug, T., & Holm, S. (2012). Informed consent and routinisation. Journal of Medical Ethics, 39 (4), 214–218.

Ploug, T., & Holm, S. (2015a). Meta consent: a flexible and autonomous way of obtaining informed consent for secondary research. BMJ: British Medical Journal . https://doi.org/10.1136/bmj.h2146

Ploug, T., & Holm, S. (2015b). Routinisation of informed consent in online health care systems. International Journal of Medical Informatics, 84 (4), 229–236.

Pollach, I. (2005). A typology of communicative strategies in online privacy policies: Ethics, power and informed consent. Journal of Business Ethics, 62 (3), 221–235.

Prochaska, J. O., & DiClemente, C. C. (1984). The transtheoretical approach: Crossing traditional boundaries of therapy . Dow Jones-Irwin.

Rawls, J. (1971). A theory of justice . Harvard University Press.

Rhodes, R. E., Blanchard, C. M., Matheson, D. H., & Coble, J. (2006). Disentangling motivation, intention, and planning in the physical activity domain. Psychology of Sport and Exercise, 7 (1), 15–27.

Rogers, R. W. (1975). A protection motivation theory of fear appeals and attitude change. The Journal of Psychology, 91 (1), 93–114.

Romanou, A. (2018). The necessity of the implementation of privacy by design in sectors where data protection concerns arise. Computer Law & Security Review, 34 (1), 99–110.

Schermer, B. W., Custers, B., & van der Hof, S. (2014). The crisis of consent: How stronger legal protection may lead to weaker consent in data protection. Ethics and Information Technology, 16 (2), 171–182. https://doi.org/10.1007/s10676-014-9343-8

Sharif, S. P., & Naghavi, N. (2020). Online financial trading among young adults: Integrating the theory of planned behavior, technology acceptance model, and theory of flow. International Journal of Human-Computer Interaction, 37 (10), 949–962.

Siggelkow, N. (2007). Persuasion with case studies. Academy of Management Journal, 50 (1), 20–24.

Sim, J., & Wright, C. (2000). Research in health care: Concepts, designs and methods . Nelson Thornes.

Smith, H. J., Dinev, T., & Xu, H. (2011). Information privacy research: An interdisciplinary review. MIS Quarterly, 35 (4), 989–1015.

Smith, H. J., Milberg, S. J., & Burke, S. J. (1996). Information privacy: Measuring individuals’ concerns about organizational practices. MIS Quarterly, 20 (2), 167–196.

Solove, D. J. (2013). Privacy self-management and the consent dilemma. Harvard Law Review, 126 (7), 1880–1903.

Stead, B. A., & Gilbert, J. (2001). Ethical issues in electronic commerce. Journal of Business Ethics, 34 (2), 75–85.

Tene, O., & Polonetsky, J. (2012). Big data for all: Privacy and user control in the age of analytics. Northwestern Journal of Technology and Intellectual Property, 11 , xxvii.

Tsohou, A., & Kosta, E. (2017). Enabling valid informed consent for location tracking through privacy awareness of users: A process theory. Computer Law & Security Review, 33 (4), 434–457. https://doi.org/10.1016/j.clsr.2017.03.027

Tymchuk, A. J. (1997). Informing for consent: Concepts and methods. Canadian Psychology-Psychologie Canadienne, 38 (2), 55–75. https://doi.org/10.1037/0708-5591.38.2.55

Utz, C., Degeling, M., Fahl, S., Schaub, F., & Holz, T. (2019). (Un)informed consent: Studying GDPR consent notices in the field. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security.

Van Oosterhout, J., Heugens, P. P., & Kaptein, M. (2006). The internal morality of contracting: Advancing the contractualist endeavor in business ethics. Academy of Management Review, 31 (3), 521–539.

Van Slyke, C., Shim, J., Johnson, R., & Jiang, J. J. (2006). Concern for information privacy and online consumer purchasing. Journal of the Association for Information Systems, 7 (6), 16.

White, M. D. (2004). Can homo economicus follow Kant’s categorical imperative? The Journal of Socio-Economics, 33 (1), 89–106.

White, K. M., Smith, J. R., Terry, D. J., Greenslade, J. H., & McKimmie, B. M. (2009). Social influence in the theory of planned behaviour: The role of descriptive, injunctive, and in-group norms. British Journal of Social Psychology, 48 (1), 135–158.

Win, K. T. (2005). A review of security of electronic health records. Health Information Management, 34 (1), 13–18.

Wright, S. A., & Xie, G. X. (2019). Perceived privacy violation: Exploring the malleability of privacy expectations. Journal of Business Ethics, 156 (1), 123–140.

Xu, F., Michael, K., & Chen, X. (2013). Factors affecting privacy disclosure on social network sites: An integrated model. Electronic Commerce Research, 13 (2), 151–168.

Yin, R. K. (2009). Case study research: Design and methods (4th ed.). Sage.

Yun, H. J., Lee, G., & Kim, D. J. (2019). A chronological review of empirical research on personal information privacy concerns: An analysis of contexts and research constructs. Information & Management, 56 (4), 570–601. https://doi.org/10.1016/j.im.2018.10.001

No external funding was received to assist with the preparation of this manuscript.

Author information

Authors and affiliations.

Swansea University, Bay Campus, Fabian Way, Crymlyn Burrows, Skewen, Swansea, SA1 8EN, UK

Gary Burkhardt, Frederic Boy, Daniele Doneddu & Nick Hajli

Corresponding author

Correspondence to Gary Burkhardt .

Ethics declarations

Conflict of interest.

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Burkhardt, G., Boy, F., Doneddu, D. et al. Privacy Behaviour: A Model for Online Informed Consent. J Bus Ethics 186 , 237–255 (2023). https://doi.org/10.1007/s10551-022-05202-1

Received : 06 September 2021

Accepted : 27 June 2022

Published : 14 July 2022

Issue Date : August 2023

DOI : https://doi.org/10.1007/s10551-022-05202-1


Keywords

  • Informed consent
  • Theory of planned behaviour
  • Personal data
  • Information management
  • Marketing ethics


Patient Perspectives and Preferences for Consent in the Digital Health Context: State-of-the-art Literature Review

Iman Kassam

1 Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, ON, Canada

2 Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada

Daria Ilkina

3 Canada Health Infoway, Toronto, ON, Canada

Jessica Kemp

Abigail Carter-Langford

Nelson Shen

Associated Data

Literature search strategy.

Overview of the included studies.

The increasing integration of digital health tools into care may result in a greater flow of personal health information (PHI) between patients and providers. Although privacy legislation governs how entities may collect, use, or share PHI, such legislation has not kept pace with digital health innovations, resulting in a lack of guidance on implementing meaningful consent. Understanding patient perspectives is critical to ensuring that implementations of meaningful consent meet their needs; however, research on consent in the digital health context is limited.

This state-of-the-art review aimed to understand the current state of research as it relates to patient perspectives on digital health consent. Its objectives were to explore what is known about the patient perspective and experience with digital health consent and provide recommendations on designing and implementing digital health consent based on the findings.

A structured literature search was developed and deployed in 4 electronic databases—MEDLINE, IEEE Xplore, Scopus, and Web of Science—for articles published after January 2010. The initial literature search was conducted in March 2021 and updated in March 2022. Articles were eligible for inclusion if they discussed electronic consent or consent, focused on the patient perspective or preference, and were related to digital health or digital PHI. Data were extracted using an extraction template and analyzed using qualitative content analysis.

In total, 75 articles were included for analysis. Most studies were published within the last 5 years (58/75, 77%) and conducted in a clinical care context (33/75, 44%) and in the United States (48/75, 64%). Most studies aimed to understand participants’ willingness to share PHI (25/75, 33%) and participants’ perceived usability and comprehension of an electronic consent notice (25/75, 33%). More than half (40/75, 53%) of the studies did not describe the type of consent model used. The broad open consent model was the most explored (11/75, 15%). Of the 75 studies, 68 (91%) found that participants were willing to provide consent; however, their consent behaviors and preferences were context-dependent. Common patient consent requirements included clear and digestible information detailing who can access PHI, for what purpose their PHI will be used, and how privacy will be ensured.

Conclusions

There is growing interest in understanding the patient perspective on digital health consent in the context of providing clinical care. There is evidence suggesting that many patients are willing to consent for various purposes, especially when there is greater transparency on how the PHI is used and oversight mechanisms are in place. Providing this transparency is critical for fostering trust in digital health tools and the innovative uses of data to optimize health and system outcomes.

Introduction

Digital health refers to the use of IT, services, and processes to support health care delivery [ 1 ]. These technologies include but are not limited to electronic health records (EHRs), consumer wearable devices, mobile apps, remote patient monitoring, artificial intelligence (AI), and virtual care [ 1 - 3 ]. Digital health tools can support improved patient engagement and empowerment and enhance care quality and delivery [ 4 , 5 ]. The success of these tools hinges on their ability to share patient data (herein referred to as personal health information [PHI]) to support clinical care [ 6 ], enable patient access to health records [ 7 ], and progress research and health system analytics (ie, secondary use) [ 8 ]. Opportunities for using digital health tools continue to grow as the potential for applications of AI, machine learning, and deep learning expands. Various types of AI are currently being used to support the development of learning health care systems [ 9 ]. With the growing volume of data produced through digital health tools, there is considerable interest in consolidating these data silos [ 10 - 12 ]. Numerous health care organizations aim to establish learning health systems to achieve these benefits; however, this work cannot be done without patients’ consent to share their PHI [ 13 ]. These systems require large amounts of PHI to develop algorithms that can guide improvements in patient safety, quality of care, and health outcomes [ 9 ].

Digital Health and Patient Privacy

The growing importance of and interest in integrating AI and other digital health tools into care raises questions on how to protect patient privacy. Although the public is supportive of investments in these technologies [ 14 , 15 ], there is also a corresponding concern about the potential for unethical or harmful uses of their PHI [ 16 - 18 ], especially as it relates to use by commercial entities. As health care data breaches and instances of misuse of PHI by commercial entities become more common [ 19 ], there is a need to support individuals in better understanding what they are consenting to (ie, becoming informed data citizens). It is necessary to reach a balance between protecting patient privacy and realizing the benefits of digital innovations to support meaningful consent and see success in this space. This balancing act is a product of polarizing perspectives grounded in the different values of various stakeholders in the health care community [ 20 ]. Privacy as contextual integrity [ 21 ] suggests that the appropriateness of information sharing should be based on the norms of specific social contexts, and trust in a product or service is predicated on the degree of consistency with expectations [ 21 ]. Contextual integrity also suggests that these norms should not only consider the societal perspectives and values but also account for the individual interests and preferences of the affected parties. To understand these norms, numerous studies have explored patient willingness to consent to share PHI [ 9 , 13 , 22 ]. Factors related to willingness include individuals’ perspectives on the privacy and security of their PHI, the relevance of sharing, and how sharing PHI would directly affect the quality of care [ 9 ].

Patient willingness to share PHI to support advancements in health care is improved when there is transparency, especially when the information regarding the collection, use, and disclosure of their PHI is clearly stated [ 22 ]. Moreover, this transparency is often seen as a mechanism for improving the trustworthiness of digital health tools [ 22 ]. There are ongoing efforts to improve consent processes by providing patients with improved transparency on PHI management, enabling them to make more informed and meaningful choices about PHI sharing [ 16 , 23 ]. Innovative consent processes through the use of electronic consent (eConsent) have become common practice by health care organizations and digital health service providers for obtaining consent. eConsent refers to the use of electronic information systems (ie, multimedia resources such as infographics, videos, and embedded links) to convey information commonly found in a paper-based consent form such that an individual’s consent is obtained electronically [ 24 , 25 ]. With the growing use of eConsent, new consent models have emerged, each providing individuals with varying control and autonomy over the collection, use, and sharing of their PHI. Broad consent models [ 26 ] (ie, consenting to the use of PHI for broad purposes) have commonly been used by digital health service providers; however, newer consent models such as dynamic consent [ 26 ] (ie, providing consent through a web-based platform, with the ability to set or change consent preferences) are being adopted to better support meaningful and informed consent practices. The Office of the Privacy Commissioner (OPC) of Canada has deemed meaningful consent a vital component of Canadian privacy legislation, citing the importance of meaningful consent and choice in establishing and sustaining public trust in digital health [ 27 , 28 ]. This review explores patient perspectives and preferences regarding digital health consent. Patient experiences with digital health consent will also be explored to generate insights into optimizing the effectiveness of digital health consent processes, making consent meaningful for patients.

Purpose and Objectives

State-of-the-art reviews are intended to summarize emerging trends and synthesize insights from the most current literature [ 29 ]. This state-of-the-art review was conducted to understand the current state of research concerning patient perspectives on digital health consent. Understanding the current state is essential as research on consent is primarily driven in the context of understanding its role in participant recruitment [ 30 ], biobanks [ 31 , 32 ], registries [ 33 , 34 ], and secondary use [ 35 , 36 ], but there has been limited attention paid to consent within a broader digital health context [ 17 ]. The objectives of this review were to characterize the state of evidence, explore what is known about the patient perspective and experience with digital health consent, and provide recommendations on designing and implementing digital health consent based on the findings.

Search Strategy

A structured literature search was developed and deployed in 4 electronic databases: MEDLINE, IEEE Xplore, Scopus, and Web of Science. The initial literature search was conducted in March 2021 and updated in March 2022. Primary peer-reviewed articles published after January 2010 were included in the search. The search strategy was first developed using Medical Subject Headings and keywords in MEDLINE and then translated to the other databases. Key search terms used in each database included information systems , electronic health/medical records , telemedicine , telehealth , mobile apps , mHealth , digital/virtual health , artificial intelligence , consent , informed consent , eConsent , consent model , consent framework , consent pathway , consent requirement , consent standards , eGovernment , and eServices . The search terms were combined using Boolean logic operators (eg, digital health OR eConsent AND consent OR framework ). Although the search terms used were quite broad in nature, we narrowed our search to specifically include papers that described digital health interventions or innovations and consent preferences, behaviors, or experiences of patient populations. The full search strategy is presented in Multimedia Appendix 1 .

Selection Criteria

The search results were uploaded to Covidence, a literature screening and data extraction tool. In total, 2 reviewers (NS and IK) piloted the extraction tool with the first 50 articles and independently screened the titles and abstracts for relevancy afterward. Any discrepancies or conflicts were discussed and resolved. A total of 3 reviewers (IK, DI, and HB) independently assessed the full text of all relevant articles. Articles were flagged for discussion if it was unclear whether they met the inclusion criteria. In total, 2 authors (IK and JK) repeated these steps when updating the search.

Articles were included if they met all the following criteria: (1) described an eConsent or consent process, design, or development; (2) focused on patients’ perspectives, preferences, acceptance, or behaviors; (3) were set in a digital health or health IT context; and (4) described the use of digital health or health IT or digital PHI for health care delivery, health research or analytics, or consumer use. Study eligibility was not limited by study design, thereby allowing for the inclusion of various study designs. Only studies published in English were eligible for inclusion. Studies that did not meet the inclusion criteria were excluded from the review.

Data Extraction and Analysis

A standardized data extraction form was developed on REDCap (Research Electronic Data Capture; Vanderbilt University), a secure software database for storing research data [ 37 ]. In total, 4 researchers (IK, DI, HB, and JK) independently extracted data from each article using a standardized data extraction template. Once extraction was complete, the full research team reviewed the compiled data. Three broad data collection categories were extracted from the articles: (1) study characteristics, (2) consent model, and (3) study results and main findings.

The extracted study characteristics included study title, year of publication, country of origin, sample size, sample source (eg, research, biobank or patient participant, and national or regional sample), study context or setting, study method, and study design and objectives. Data on consent models were extracted and categorized based on the seminal eConsent model framework by Coiera and Clarke [ 38 ]. The framework was then expanded to include contemporary models [ 26 , 39 - 42 ]. The types of consent models include broad open, broad controlled, broad tiered/menu/meta, dynamic, and general denial consent. The main findings of each study were extracted verbatim if they provided statistical analysis of patient perspectives and experiences with digital health consent.
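
To make the extraction template concrete, the sketch below models a single extracted record covering the three broad data collection categories; the field names and example values are hypothetical and do not reproduce the actual REDCap instrument.

```python
# Minimal sketch of one extraction record, assuming hypothetical field names;
# the review's actual template was implemented as a REDCap instrument.
from dataclasses import dataclass, field
from typing import Optional

# Consent model typology used for categorization (plus a fallback for studies
# that did not report a model).
CONSENT_MODELS = {
    "broad open", "broad controlled", "broad tiered/menu/meta",
    "dynamic", "general denial", "not reported",
}

@dataclass
class ExtractionRecord:
    # (1) Study characteristics
    title: str
    year: int
    country: str
    sample_size: Optional[int]
    sample_source: str      # eg, research, biobank, patient, national or regional
    setting: str            # eg, clinical care, research
    method: str             # eg, quantitative, qualitative, mixed methods
    # (2) Consent model, categorized against the expanded eConsent framework
    consent_model: str = "not reported"
    # (3) Study results and main findings (extracted verbatim where relevant)
    main_findings: list[str] = field(default_factory=list)

    def __post_init__(self):
        if self.consent_model not in CONSENT_MODELS:
            raise ValueError(f"Unknown consent model: {self.consent_model}")

# Hypothetical example record
record = ExtractionRecord(
    title="Example eConsent usability study", year=2021, country="United States",
    sample_size=120, sample_source="patient participants", setting="clinical care",
    method="quantitative", consent_model="dynamic",
)
```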

A qualitative content analysis [ 43 ] of the free-text data was conducted using Microsoft Excel (Microsoft Corp). The analysis used an inductive approach in which the research team created codes and categories over several coding sessions. Once the coding was complete, the lead and senior authors (IK and NS) reviewed and refined the codes. Frequencies and percentages were calculated using Microsoft Excel.
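
As a simple illustration of the frequency and percentage calculations described above (performed in Microsoft Excel in the review itself), the following sketch tallies hypothetical qualitative codes; the code labels are assumptions for demonstration only.

```python
# Minimal sketch: tallying qualitative codes and reporting n (%) per code.
# The codes below are hypothetical examples, not the review's actual codebook.
from collections import Counter

coded_segments = [
    "trust in recipient", "privacy concern", "trust in recipient",
    "altruism", "privacy concern", "desire for control",
]

counts = Counter(coded_segments)
total = sum(counts.values())

for code, n in counts.most_common():
    print(f"{code}: {n}/{total} ({100 * n / total:.0f}%)")
```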

A detailed overview of the included studies is presented in Multimedia Appendix 2 [ 44 - 116 ].

The search strategy yielded 3133 unique citations. Following the screening process, 2.39% (75/3133) of the articles were eligible for extraction. The flowchart of the search selection process can be found in Figure 1 .

Figure 1. Article selection flowchart. HIT: health IT.

Study Characteristics

Of the 75 studies included in this review, 58 (77%) were published within the last 5 years (between 2017 and 2022), and 48 (64%) were conducted in the United States. Most studies were conducted within a clinical care (33/75, 44%) or research context (29/75, 39%) where the study sample of focus was primarily research, biobank, and patient participants (54/75, 72%). Nearly half of the studies (36/75, 48%) used quantitative research methods, commonly using cross-sectional surveys (35/75, 47%) to collect data. The primary purpose of most studies was to understand participants’ willingness to share PHI (25/75, 33%) and participants’ perceived usability and comprehension of an eConsent platform (25/75, 33%). More than half of the studies (39/75, 52%) focused on participants’ hypothetical views on digital health consent. A total of 21% (16/75) of the studies described the design or development of an eConsent platform. In total, 27% (20/75) of the studies described the implementation, usability, or evaluation of an existing eConsent platform. Further details about the study characteristics are shown in Table 1 .

Table 1. Study characteristics (N=75).

a Country of origin: countries categorized as Other included Australia (2/75, 3%), Canada (2/75, 3%), South Korea (2/75, 3%), Switzerland (2/75, 3%), Colombia (1/75, 1%), Denmark (1/75, 1%), India (1/75, 1%), and Singapore (1/75, 1%).

b PHI: personal health information.

c eConsent: electronic consent.

d General population can be further divided into studies that focus on a national population (8/75, 11%) or on a regional-, provincial-, or state-level population (11/75, 15%).

e Sample subgroup analysis: “Yes” indicates that the study conducted a subgroup analysis to understand whether participant demographic characteristics (eg, race and ethnicity, education, age, digital and health literacy, income, and sex and gender) affected consent preferences and behaviors.

Research Question 1: What Are Patient Preferences on Consent Models for Digital Health?

Table 2 presents a typology of consent models, outlining the models used across the studies. More than half of the included studies (40/75, 53%) did not describe or report the consent model used. Broad open consent was the most explored model (11/75, 15%) [ 44 - 54 ], followed by dynamic consent (8/75, 11%) [ 55 - 62 ], broad controlled consent (4/75, 5%) [ 63 - 66 ], broad tiered/meta/menu consent (3/75, 4%) [ 67 - 69 ], and general denial consent (2/75, 3%) [ 70 , 71 ].

Table 2. Spectrum of consent models reported (N=75).

a PHI: personal health information.

In total, 9% (7/75) of the studies compared various consent models to understand whether a specific model increased participants’ comprehension of what they were consenting to or made them more willing to consent. Within these studies, patient consent model preferences varied; however, in most studies (5/7, 71%), participants preferred granular, informative, and transparent consent choices [ 72 - 76 ]. For example, Kim et al [ 76 ] found that 76.6% (955/1246) of the participants selected at least one PHI item that they would not want to share with a particular researcher. Participants also noted that, if consent choices were not offered, they were less likely to share their PHI. Kaufman et al [ 72 ] found that, when presented with various consent models, participants had similar preferences for general denial (1873/2601, 72%), broad tiered/menu/meta (1951/2601, 75%), and dynamic consent (1899/2601, 73%). A broad open consent model was the least preferred among participants, with 64% (1665/2601) stating that they would be willing to share their PHI under this model.

The remaining 29% (2/7) of the studies found that participants preferred broad open consent. Riordan et al [ 77 ] found that broader consent models may be appropriate under specific circumstances: 91% (2873/3157) of respondents expected to be explicitly asked for consent for their identifiable records to be accessed for health provision, research, or planning, whereas only about half (1547/3157, 49%) expected to be asked for consent if their records were deidentified. Brown et al [ 78 ] found that more participants (19/31, 61%) preferred a broad open consent model over a broad tiered consent model (10/31, 32%) when agreeing to the secondary use of their biospecimens. Participants least preferred “notice consent” (ie, general denial consent), stating that it was not informative and did not invoke a sense of control over their PHI-sharing preferences.

Research Question 2: What Is Known About the Patient Perspective and Experience With Digital Health Consent?

In total, 91% (68/75) of the studies described the patient perspective or experience with digital health consent. These studies were further categorized according to their primary purpose (as described in Table 1 ): (1) user comprehension of consent notices and consent information needs (31/68, 46%), (2) willingness to consent to participate (12/68, 18%), and (3) willingness to consent to share PHI (25/68, 37%).

User Comprehension of Consent Notices and Consent Information Needs

In nearly half (15/31, 48%) of the studies, comprehension was assessed by comparing how different consent media such as eConsent or paper-based consent affected participants’ understanding of the consent information. A total of 73% (11/15) of the comprehension studies found that user comprehension improved when an eConsent medium was used. Participants in a randomized controlled study reported a greater understanding of most aspects of the consent notice in an eConsent platform than in a paper-based consent form [ 79 ]. Another study found that participants who used video- or app-based eConsent had greater comprehension than those provided with paper-based consent [ 80 ]. A longitudinal study [ 53 ] comparing understanding of 3 eConsent mediums with varying degrees of customizable information (eg, options to visit hyperlinks for additional information) and messaging found that participant understanding was similar among the 3 versions at the 1-week follow-up; however, participants with the least customizable eConsent version had a significant decrease in understanding at the 6-month follow-up.

Improvements in user comprehension of consent notices were attributed to the overall user satisfaction with and usability of eConsent systems. These systems were described as easy to use, well organized, and more engaging [ 47 , 50 , 53 , 54 , 58 , 80 - 89 ]. Specifically, a video eConsent system was favored in 7% (1/15) of the studies as participants felt that they could move forward through the video at their own pace, improving their understanding of what they were consenting to [ 82 ]. Other studies (21/31, 68%) exploring the use of different eConsent media such as web-based portals [ 48 , 53 , 89 - 91 ], animated videos and visuals [ 54 , 79 , 80 , 88 , 92 , 93 ], tablet kiosks [ 47 , 81 ], graphic organizers and mind maps [ 94 ], and mobile apps [ 56 , 58 , 62 , 69 , 85 , 95 , 96 ] found that participants favored the customizable elements (eg, drop-down menus, buttons, links, and multimedia) of these eConsent formats as they were more engaging and improved the presentation of consent information.

Among 19% (6/31) of the studies, mixed results were identified when assessing the amount of information and information elements that individuals require when reviewing consent notices. Beskow et al [ 44 ] found that information needs to consent to a hypothetical biobank differed at the individual level. Although 61% (34/56) felt that the information presented in the 2-page simplified and concise consent form supported their decision-making, 39% (22/56) of the participants wanted more information in the consent form. Another study found that providing additional information within their consent form did not discourage participation in a digital trace data collection app as there were no significant differences in rates of consent between participants who clicked to view a description of an app function and those who did not [ 65 ]. A few studies (3/31, 9.7%) suggested providing participants considered to be “high information seekers” with the option or ability to drill down on information elements within a consent form (ie, supplementary information and click to expand) [ 44 , 49 , 65 ]. Information elements deemed most important to participants included the purpose of the proposed initiative, duration of the study, permitted uses and access, data handling and control, safeguards and security measures, and the risks and benefits of consenting [ 49 , 60 , 65 , 97 , 98 ].

Willingness to Consent to Participate

In total, 18% (12/68) of the studies described participants’ willingness to consent to take part in an initiative or intervention (eg, research, clinical care, consumer digital health innovation, and biobank). Of the 12 studies, 5 (42%) found that most participants were willing to consent to participate. However, willingness to consent was contingent on several factors. Most often, willingness to consent depended on who their PHI would be shared with, where many participants were less trusting of entities outside their circle of care. For instance, the more participants trusted their health care provider, the less control they required over their PHI [ 51 , 74 , 75 ]. Moreover, Deverka et al [ 74 ] gathered user requirements from stakeholders to support the design and management of a Medical Information Commons. The stakeholders expressed that granting for-profit entities access to participants’ information would reduce participation and trust in the Medical Information Commons. Sanderson et al [ 73 ] found that, although many participants trusted the health care system (8141/13,000, 63%) and medical researchers (7748/13,000, 60%), those reporting lower trust in the health care system and medical researchers were less willing to participate in the biobank. In interviews with users of a mobile health app, Zhou et al [ 99 ] found that, when participants were asked whom they would like to share their PHI with, 93.2% (109/117) indicated that they would share their PHI with their health care provider, 69.2% (81/117) would share it with family members, and 32.5% (38/117) would share it with friends. Another study identified a greater reluctance among participants to share PHI with pharmaceutical company researchers (1353/2601, 52%) and government researchers (1144/2601, 44%) [ 72 ]. Overall, the importance of building and fostering participant trust in digital health initiatives was described, calling for greater transparency in consent practices [ 75 , 100 ].

Participant concerns related to the privacy of their PHI and the perceived sensitivity of their PHI also hindered their willingness to participate. In a focus group study [ 51 ], participants raised concerns about the privacy and confidentiality of their biospecimens and EHR data. These concerns were primarily related to how their PHI was stored, protected, and deidentified. In a survey conducted by Sanderson et al [ 73 ], 90% (11,397/13,000) of participants agreed that health information privacy was important to them, and 64% (8135/13,000) agreed that they were worried about the privacy of their health information. Furthermore, individual privacy concerns were associated with a greater need for control over biospecimens among women participating in a breast cancer biobank study [ 75 ]. Finally, Cavazos-Rehg et al [ 101 ] explored how parental consent requirements would affect adolescents’ willingness to participate in a mental health app. Although 35% (106/303) of adolescents indicated that they would be willing to allow researchers to contact their parents for consent, 30% (91/303) would not allow researchers to contact their parents. This was primarily attributed to the importance of retaining privacy and autonomy.

Participation in the aforementioned initiatives was dependent on a variety of sociodemographic factors. A total of 42% (5/12) of the studies examined the effect of sociodemographic characteristics on willingness to consent to participate, finding that age [ 72 , 99 ], income [ 99 ], education [ 72 , 73 ], race [ 73 , 75 , 78 ], and religious beliefs [ 73 ] affected consent decisions. Participants who identified as racialized persons were less likely to consent when compared with participants identifying as White [ 73 , 75 ]. Consent decisions often depended on the granularity of consent such that those identifying as racialized persons preferred granular consent options (ie, control over PHI collection, use, and sharing) instead of broad consent options [ 78 ]. Furthermore, Sanderson et al [ 73 ] found that those who had less education and were religious were less willing to consent. In comparison, those who were more highly educated [ 72 ] and younger [ 72 , 99 ] were more willing to consent. Zhou et al [ 99 ] also found that participants who made <US $10,000 annually exhibited the least concern about the security and privacy of their PHI. In contrast, participants who made >US $75,000 annually expressed the strongest concerns and desire for security and privacy of their PHI.

Willingness to Consent to Share PHI

A total of 37% (25/68) of the studies explored participants’ willingness to consent to sharing their PHI for clinical care, research, biobanks, precision medicine initiatives, and consumer innovations. Generally, study participants were willing to share their PHI under certain conditions. For example, participants expressed greater comfort and willingness to share their PHI with health care providers, academic researchers, and not-for-profit organizations [ 46 , 55 , 57 , 64 , 71 , 102 - 109 ]. Participants were reluctant to share their PHI with for-profit organizations, pharmaceutical companies, government organizations, and researchers [ 46 , 55 , 57 , 64 , 71 , 103 , 105 - 108 , 110 ]. A focus group study exploring prospective genome research and repositories found that participants generally endorsed the value of sharing their PHI, especially with academic health researchers and nonprofit organizations; however, participants expressed apprehension toward sharing with for-profit entities because of the belief that for-profit entities would use their PHI to generate financial returns [ 105 ]. This finding was echoed in an American survey study in which individuals who tracked their PHI were willing to share it for health research and were more trusting of academic researchers than of for-profit entities [ 106 ].

A lack of information and transparency surrounding PHI-handling practices hindered participants’ willingness to provide consent. Unwillingness was most often attributed to a lack of information on the anonymization, aggregation, or deidentification of PHI [ 71 , 77 , 105 , 107 , 111 , 112 ]; the privacy policies and auditing practices of entities [ 105 , 106 , 112 , 113 ]; and consent or data-sharing options [ 55 , 57 , 71 , 76 , 108 , 114 ]. Participants’ willingness to share their PHI depended not only on whom they were sharing it with but also on how their PHI was to be used and for what purpose. Belfrage et al [ 109 ] found that most survey participants would not allow their EHR data to be used for quality assurance, research, or clinical education. In contrast, those who were more trusting of the health care system were more willing to permit these uses. Another study found that 79% (100/126) of participants would be willing to share their EHR data for research purposes, and 73% (92/126) indicated that knowing who would be accessing their EHR data would make them more comfortable in sharing them [ 57 ]. Grande et al [ 110 ] found that the specific use of an individual’s PHI influenced their willingness to share it for secondary purposes more than the user of the PHI and the sensitivity of the PHI. A focus group study also found that participants were more likely to share their mobile phone location data with health agencies if provided with information on how their data would be used, stored, and protected [ 111 ]. Overall, providing participants with further information about who can access their PHI and how entities can use it invoked a greater sense of trust in sharing PHI [ 102 , 103 , 111 , 113 ].

Several antecedents that either supported or hindered consent decisions were identified in the studies. Specifically, past health care and privacy experiences and health care perceptions influenced willingness to consent. Weidman et al [ 55 ] found that participants who had previously undergone a genetic test were more willing to share their PHI than those who had not. Moreover, participants with high levels of distrust in the health care system and those without a usual source of care were less supportive of secondary uses of their electronic PHI [ 110 ]. A Swedish health system user survey also found that participants with a self-reported health status of “good or very good” had higher trust in the health care system than those with a “bad or very bad” self-reported health status [ 109 ]. A focus group study by Murphy et al [ 111 ] found that, after headlines about the Cambridge Analytica scandal emerged, participants acknowledged a greater responsibility toward protecting their PHI; however, many participants did not act to safeguard their PHI further. A recurring theme was the inevitability of PHI being accidentally released or breached, raising concerns about whether privacy and security can be guaranteed by the entities that collect their PHI [ 105 ].

Across the studies, health care perceptions and consent decisions were often driven by altruistic beliefs [ 55 , 67 , 71 , 103 , 105 , 107 , 115 ]. For instance, participants in an observational study who wore mobile imaging and pervasive sensing and tracking devices did not consider privacy a primary concern [ 67 ]. Although 35% (29/82) reported having extremely private preferences or expectations for privacy, their participation in the study was motivated by the positive contributions this research could have toward health sciences, outweighing a temporary loss of privacy [ 67 ]. Rivas Velarde et al [ 103 ] found that focus group participants’ decisions to share their PHI for research were driven by the contributions the research could make toward the greater public good. Altruism was further demonstrated by Spencer et al [ 55 ] and Rowan et al [ 115 ], who found that most study participants understood the importance of sharing their PHI for research purposes, for the benefit of medical progress, and for the benefit of society.

It was also found that expectations regarding consent varied with sociodemographic factors and digital literacy. A UK study found that racialized participants with less education and lower digital literacy were more likely to prefer to be asked for explicit consent before their deidentified health records were accessed [ 77 ]. Consent preferences also differed by age group, where younger participants were less likely to consider informed consent important [ 116 ]. A survey of veterans enrolled in a US-based health care organization found that respondents who identified as White, male, and less educated were more likely to endorse information sharing without the need for consent [ 52 ]. Participants aged >60 years and those deemed to have an adequate health literacy level were more willing to share more items in their EHR than younger participants or those who did not report having an adequate health literacy level [ 76 ].

Principal Findings

Consent in digital health is contextually driven such that it is often dependent on who is using or accessing one’s PHI, how their PHI will be used, and for what purpose. The findings of this review underscore the context dependency of consent as there were mixed results on patient perspectives on consent models and willingness to share their PHI. For instance, broad consent models may be acceptable in specific study contexts. In contrast, consent models that provided patients with more control were favored in others (ie, broad tiered, menu, or meta and dynamic consent). Similarly, patient willingness to share their PHI, consent behaviors, perceptions, and preferences varied by study. Given this variance, enabling individuals to make informed choices based on their contexts is critical. At the most rudimentary level, individuals require specific and easily comprehensible information on who their PHI is being shared with, for what purpose their PHI will be used, and how the privacy and security of their PHI will be ensured [ 10 , 117 ]. Providing individuals with adequate information to make an informed choice fosters transparency—the moral and ethical obligation to enable meaningful consent [ 72 , 97 , 117 , 118 ]. The insights gathered and summarized in this review highlight the need to recognize individual behaviors and preferences when designing and implementing the consent processes of digital health initiatives and the importance of building and sustaining trust and transparency.

Consent Behaviors, Preferences, and Perceptions

Trust is central to individual consent behaviors, preferences, and perceptions. Willingness to consent often depended on the entity collecting PHI, where most individuals were comfortable sharing PHI with their health care providers, health care organizations, and academic researchers. Comfort in sharing PHI declined with recipients outside the individual’s circle of care, particularly with commercial or for-profit entities. There is a growing body of evidence highlighting individuals’ significant discomfort in sharing their PHI with commercial and for-profit entities, primarily because of a lack of trust in these entities [ 10 , 13 , 119 - 121 ]. This discomfort has been predominantly attributed to privacy concerns, loss of control and autonomy over one’s PHI, and the potential for misuse of one’s PHI (ie, for monetary gains) [ 13 , 19 , 121 , 122 ]. Individuals also expressed an evident desire for greater control over their PHI-sharing preferences, a need largely attributed to past privacy experiences, health care experiences, and general health care perceptions. Specifically, this review found that those with positive experiences within the health care system and those with access to a trusted usual source of care were more willing to share their PHI [ 110 , 112 ].

In contrast, those with poor health care experiences and awareness of commercial entities misusing PHI were less willing to share their PHI [ 105 , 111 , 112 ]. Unsurprisingly, other studies have found mixed results on how past health care and privacy experiences affect intentions to share [ 10 , 19 , 123 ]. This supports the notion that privacy concerns are contextually driven such that individual experiences, environmental factors, and personal dispositions influence consent behaviors and attitudes [ 124 ]. Thus, to build trust in sharing PHI outside the circle of care, understanding the influence of these past experiences on individual privacy concerns warrants further consideration and research [ 19 , 123 ].

Although many individuals were concerned about the potential risks of consenting and the confidentiality of their PHI, in several studies, the perceived benefits outweighed the potential risks [ 55 , 67 , 71 , 103 , 105 , 107 , 115 ]. There was a general willingness to share PHI in support of research efforts and improvements to health care outcomes so long as the benefits of sharing were clear [ 19 ]. A sense of social responsibility and altruistic beliefs about improving care and treatment outcomes for oneself and society prevailed. A common explanation for this behavior is the privacy calculus, where an individual’s information-sharing decisions are based on a weighing of future consequences related to the benefits and risks of sharing [ 125 ]. For instance, Kaufman et al [ 126 ] found that general concerns about protecting one’s privacy were not substantially related to participants’ willingness to participate in a biobank. This discordance or privacy paradox is echoed in a systematic review of patient privacy perspectives on health information exchange [ 124 ], which concluded that studies are increasingly finding that individuals often rationalize the risks of sharing information by considering the potential benefits (ie, privacy calculus).

Consent Management and Information Needs

The need to modernize consent processes for the digital age is widely recognized as legislation has not kept pace with the rapidly evolving digital health environment [ 10 , 27 , 117 ]. The implementation of consent processes that adequately reflect and incorporate end users’ needs has been slow and insufficient. To modernize consent processes, dynamic consent models have been commonly adopted to allow individuals to update or alter their consent preferences when needed [ 127 ]. Dynamic consent has been described as a means to improve individual autonomy by enhancing choice, comprehension, and engagement in the consent process [ 26 , 41 ]. Although dynamic consent may enhance informed choice, studies in this review more commonly explored or implemented broad consent models. The rationale for implementing broad consent models has been attributed to the ease of implementation, the minimal impediment to the progression of research, and the lowered risk of “consent fatigue” [ 26 , 39 ]. From the patient perspective, this review found that consent models offering patients enhanced control and options over their PHI were preferred over broad consent models [ 35 , 72 - 76 ].

Interestingly, this review also found that the type of consent model may have little relevance to participants’ decisions to consent [ 73 , 75 ]. Instead, consent decisions depended on whether individuals felt well informed and trusted with whom they shared their PHI. For instance, Soni et al [ 108 ] found that, although mental health information was deemed sensitive among participants, they were still willing to share it depending on who the provider was (eg, behavioral vs nonbehavioral health care provider). This adds to the notion that preference for data sharing is not solely tied to the type of data being shared but instead is intrinsically associated with past experiences (ie, stigma and discrimination) and trust in the data recipient [ 108 ]. This finding is especially salient as the mental health privacy discourse is heightened by special legislation and expectations grounded in the subjective sensitivity of the PHI—often contributing to the disconnect between the historical paternalistic approach to protecting patient privacy and their nuanced data-sharing preferences [ 128 ]. Improving transparency around data sharing and the impacts of sharing may empower these individuals to better contextualize their past experiences, thereby supporting greater autonomy in data-sharing decisions and trust in the data recipients.

In terms of transparency, there was mixed evidence on the best practices for presenting information on consent forms. This evidence is characterized by a dichotomy: some assert that more detailed information better supports consent decisions [ 129 ], whereas others contend that brevity supports meaningful consent [ 130 - 132 ]. However, this review also found suggestive evidence that the quantity of information presented in a consent notice matters less than the quality of the information presented (ie, clear, transparent, and informative) [ 75 , 100 , 102 , 103 , 111 , 113 ]. As society becomes increasingly digital, consent design considerations should extend beyond document length and instead prioritize innovations in the presentation of consent information [ 117 ]. As illustrated in this review, customizable electronic formats (eg, links, drop-down menus, and multimedia) can facilitate more informed consent decisions [ 30 , 36 ]. Consistent with the meaningful consent guidelines of the OPC, this customizable approach would allow individuals to control how much information they wish to process, thereby tailoring the consent form to support an informed decision [ 28 ]. Future research should focus on understanding individual consent requirements when designing and developing eConsent platforms, which would aid in implementing more meaningful functionalities.

Contributions, Future Research, and Limitations

This review provides insights into patient consent preferences in the digital health context. Given the rapid adoption and integration of digital health technologies in clinical care settings, it is unsurprising that many of the included studies were published within the last 5 years (2017 to 2022; 58/75, 77%). The summative findings of this review present the current state of patient consent preferences and emerging consent practices in the digital health context. Currently, most studies focus on collecting and using electronic PHI and EHR data for biobanks and research initiatives. Few studies (6/75, 8%) focused on understanding patient preferences, behaviors, and perspectives on consent in AI (ie, precision medicine, machine learning, and deep learning). As AI becomes more pervasive in clinical predictions and diagnosis, treatment recommendations and decision support, and consumer health innovations, additional research is needed to explore individuals’ consent preferences and experiences in these contexts.

Consistent with state-of-the-art reviews, this study highlighted gaps in digital health consent research. Although the included studies explored patient or public consent preferences, many (40/75, 53%) failed to clearly outline the type of consent model used. Reporting the consent model is essential as it provides greater context to the study findings, especially concerning individual perceptions. As with privacy research [ 124 , 133 ], more than half (39/75, 52%) of the studies relied on hypothetical scenarios to understand participants’ consent decisions. Although these studies generate insights that can inform how to approach consent for digital health initiatives, they capture intentions rather than behaviors and are therefore subject to the privacy paradox [ 10 , 124 ]. Moreover, this review found relatively few qualitative and mixed methods studies. Future formative or needs assessment studies may benefit from qualitative and mixed methods approaches to better understand individual preferences and behaviors [ 124 ].

Given that health equity has become increasingly important in health care research [ 134 ], it was surprising to see the lack of publications exploring consent with a health equity lens. The studies that did address equity analyzed how various sociodemographic factors affected participants’ consent behaviors and preferences, finding that race, age, income, and education individually contributed to consent and data-sharing preferences. However, these findings do not provide enough depth to understand the collective, intersectional factors that influence these decisions [ 10 ]. By viewing these factors individually, the underlying and multifaceted impacts of trust and antecedents on consent behaviors and decisions are not adequately acknowledged [ 133 ]. Future research must extend beyond the notion that individual characteristics such as race and age are in themselves associated with consent decisions and examine the influences of past and current experiences (eg, distrust in the health care system and negative health care and privacy experiences) on consent behaviors [ 71 , 135 , 136 ]. In doing so, we may uncover valuable insights into building trust and empowerment within these groups.

Finally, the findings of this review highlight important considerations for designing consent in the digital health era. The design of meaningful consent processes must be rooted in co-design approaches, transparent practices, and integrated knowledge translation. As echoed by the OPC Meaningful Consent Guidelines [ 28 ], stakeholder and consumer perspectives must be included in the design of consent, where co-design with the general public, policy decision makers, digital health service providers, and graphic designers is needed. This will ensure that diverse needs and requirements are met throughout the design, development, and implementation of consent processes. Moreover, the transparency of consent notices was a recurring theme in this literature review, where willingness to consent was contingent on how informed individuals felt. Transparency can be facilitated by ensuring that information presented within consent notices is customizable, comprehensible, and accessible [ 117 ]. Furthermore, in progressing the call for meaningful consent in digital health, there is a strong need to ensure that knowledge dissemination and translation efforts are prioritized, especially with regard to sharing best practices and lessons learned from the use of eConsent platforms and consent models for digital health innovations.

There are some limitations to consider, most of which are related to the state-of-the-art review methodology [ 29 , 137 ]. State-of-the-art reviews intend to provide a snapshot of the most current literature on a given subject. Given the time-bound nature of this review methodology, it is possible that we did not include all the available literature. To mitigate this, we deployed our search strategy in several multidisciplinary academic search engines to obtain a comprehensive review of the literature. Furthermore, a quality assessment of the included articles was not conducted as it was beyond the scope of this review methodology. The search strategy and screening methodology should also be considered when interpreting the results. For instance, in this literature review, we searched only in MEDLINE-indexed journals as opposed to searching in PubMed-indexed journals, which may have excluded potentially relevant articles from our search. Moreover, by limiting the search to primary research studies published in the past decade, we may have missed studies relevant for inclusion in this review. However, fewer than one-quarter (17/75, 23%) of the studies included in this review were published before 2016. Considering the rapid advancements in digital health within the past decade, studies published before 2010 may not have provided much insight. Finally, 64% (48/75) of the studies included in this review were conducted in the United States, which may limit the generalizability of our findings given the differing cultural, social, legislative, and political environments in other countries. In addition, given the finding that consent preferences among individuals are context dependent, there is a need for further research focusing on patient and public digital health consent preferences in other regional and national contexts.

Consent is an increasingly important issue in the rapidly evolving digital health ecosystem. Implementing meaningful consent may be a complex endeavor as consent preferences and behaviors will vary based on context; however, this review found that most patients are willing to consent to share their PHI given the right circumstances. If the desired outcome is to use individuals’ PHI to develop, sustain, and enhance digital health innovations, individuals must be provided with transparent information about the purpose of the collection and use of their PHI and the potential benefits, whether direct or indirect, of consenting to share their PHI. In addition to transparency, information must be customizable, allowing readers to tailor the granularity of detail to their individual needs. By enabling meaningful and informed consent, organizations can foster greater trust in their digital health solutions. Furthermore, to understand how to facilitate meaningful and informed consent in various contexts, patients and the public must be engaged in the design, development, and implementation of consent processes and notices for digital health initiatives. By doing so, consent practices in the digital health context will not simply act as a proxy for choice but will also be able to fulfill the notion of contextual integrity such that they account for individual interests and preferences in specific social contexts.

Acknowledgments

This work was supported by Canada Health Infoway, an independent, not-for-profit organization funded by the federal government. NS was supported by the Canadian Institutes of Health Research Health System Impact Fellowship. This program was led by the Canadian Institutes of Health Research Institute of Health Services and Policy Research in partnership with the Centre for Addiction and Mental Health.

Abbreviations

AI: artificial intelligence; eConsent: electronic consent; EHR: electronic health record; OPC: Office of the Privacy Commissioner of Canada; PHI: personal health information.

Multimedia Appendix 1: full search strategy. Multimedia Appendix 2: detailed overview of the included studies.

Conflicts of Interest: None declared.

  • Open access
  • Published: 18 April 2024

Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research

  • James Shaw 1 , 13 ,
  • Joseph Ali 2 , 3 ,
  • Caesar A. Atuire 4 , 5 ,
  • Phaik Yeong Cheah 6 ,
  • Armando Guio Español 7 ,
  • Judy Wawira Gichoya 8 ,
  • Adrienne Hunt 9 ,
  • Daudi Jjingo 10 ,
  • Katherine Littler 9 ,
  • Daniela Paolotti 11 &
  • Effy Vayena 12  

BMC Medical Ethics volume 25, Article number: 46 (2024)

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, research ethics committee members and other actors to engage with challenges and opportunities specifically related to research ethics. In 2022 the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations, 16 governance presentations, and a series of small group and large group discussions. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. In this paper, we highlight central insights arising from GFBR 2022.

We describe the significance of four thematic insights arising from the forum: (1) Appropriateness of building AI, (2) Transferability of AI systems, (3) Accountability for AI decision-making and outcomes, and (4) Individual consent. We then describe eight recommendations for governance leaders to enhance the ethical governance of AI in global health research, addressing issues such as AI impact assessments, environmental values, and fair partnerships.

Conclusions

The 2022 Global Forum on Bioethics in Research illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Introduction

The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice [ 1 , 2 , 3 ]. Beyond the growing number of AI applications being implemented in health care, capabilities of AI models such as Large Language Models (LLMs) expand the potential reach and significance of AI technologies across health-related fields [ 4 , 5 ]. Discussion about effective, ethical governance of AI technologies has spanned a range of governance approaches, including government regulation, organizational decision-making, professional self-regulation, and research ethics review [ 6 , 7 , 8 ]. In this paper, we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health research, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. Although applications of AI for research, health care, and public health are diverse and advancing rapidly, the insights generated at the forum remain highly relevant from a global health perspective. After summarizing important context for work in this domain, we highlight categories of ethical issues emphasized at the forum for attention from a research ethics perspective internationally. We then outline strategies proposed for research, innovation, and governance to support more ethical AI for global health.

In this paper, we adopt the definition of AI systems provided by the Organization for Economic Cooperation and Development (OECD) as our starting point. Their definition states that an AI system is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” [ 9 ]. The conceptualization of an algorithm as helping to constitute an AI system, along with hardware, other elements of software, and a particular context of use, illustrates the wide variety of ways in which AI can be applied. We have found it useful to differentiate applications of AI in research as those classified as “AI systems for discovery” and “AI systems for intervention”. An AI system for discovery is one that is intended to generate new knowledge, for example in drug discovery or public health research in which researchers are seeking potential targets for intervention, innovation, or further research. An AI system for intervention is one that directly contributes to enacting an intervention in a particular context, for example informing decision-making at the point of care or assisting with accuracy in a surgical procedure.

The mandate of the GFBR is to take a broad view of what constitutes research and its regulation in global health, with special attention to bioethics in Low- and Middle- Income Countries. AI as a group of technologies demands such a broad view. AI development for health occurs in a variety of environments, including universities and academic health sciences centers where research ethics review remains an important element of the governance of science and innovation internationally [ 10 , 11 ]. In these settings, research ethics committees (RECs; also known by different names such as Institutional Review Boards or IRBs) make decisions about the ethical appropriateness of projects proposed by researchers and other institutional members, ultimately determining whether a given project is allowed to proceed on ethical grounds [ 12 ].

However, research involving AI for health also takes place in large corporations and smaller scale start-ups, which in some jurisdictions fall outside the scope of research ethics regulation. In the domain of AI, the question of what constitutes research also becomes blurred. For example, is the development of an algorithm itself considered a part of the research process? Or only when that algorithm is tested under the formal constraints of a systematic research methodology? In this paper we take an inclusive view, in which AI development is included in the definition of research activity and within scope for our inquiry, regardless of the setting in which it takes place. This broad perspective characterizes the approach to “research ethics” we take in this paper, extending beyond the work of RECs to include the ethical analysis of the wide range of activities that constitute research as the generation of new knowledge and intervention in the world.

Ethical governance of AI in global health

The ethical governance of AI for global health has been widely discussed in recent years. The World Health Organization (WHO) released its guidelines on ethics and governance of AI for health in 2021, endorsing a set of six ethical principles and exploring the relevance of those principles through a variety of use cases. The WHO guidelines also provided an overview of AI governance, defining governance as covering “a range of steering and rule-making functions of governments and other decision-makers, including international health agencies, for the achievement of national health policy objectives conducive to universal health coverage.” (p. 81) The report usefully provided a series of recommendations related to governance of seven domains pertaining to AI for health: data, benefit sharing, the private sector, the public sector, regulation, policy observatories/model legislation, and global governance. The report acknowledges that much work is yet to be done to advance international cooperation on AI governance, especially related to prioritizing voices from Low- and Middle-Income Countries (LMICs) in global dialogue.

One important point emphasized in the WHO report that reinforces the broader literature on global governance of AI is the distribution of responsibility across a wide range of actors in the AI ecosystem. This is especially important to highlight when focused on research for global health, which is specifically about work that transcends national borders. Alami et al. (2020) discussed the unique risks raised by AI research in global health, ranging from the unavailability of data in many LMICs required to train locally relevant AI models to the capacity of health systems to absorb new AI technologies that demand the use of resources from elsewhere in the system. These observations illustrate the need to identify the unique issues posed by AI research for global health specifically, and the strategies that can be employed by all those implicated in AI governance to promote ethically responsible use of AI in global health research.

RECs and the regulation of research involving AI

RECs represent an important element of the governance of AI for global health research, and thus warrant further commentary as background to our paper. Despite the importance of RECs, foundational questions have been raised about their capabilities to accurately understand and address ethical issues raised by studies involving AI. Rahimzadeh et al. (2023) outlined how RECs in the United States are under-prepared to align with recent federal policy requiring that RECs review data sharing and management plans with attention to the unique ethical issues raised in AI research for health [ 13 ]. Similar research in South Africa identified variability in understanding of existing regulations and ethical issues associated with health-related big data sharing and management among research ethics committee members [ 14 , 15 ]. The effort to address harms accruing to groups or communities as opposed to individuals whose data are included in AI research has also been identified as a unique challenge for RECs [ 16 , 17 ]. Doerr and Meeder (2022) suggested that current regulatory frameworks for research ethics might actually prevent RECs from adequately addressing such issues, as they are deemed out of scope of REC review [ 16 ]. Furthermore, research in the United Kingdom and Canada has suggested that researchers using AI methods for health tend to distinguish between ethical issues and social impact of their research, adopting an overly narrow view of what constitutes ethical issues in their work [ 18 ].

The challenges for RECs in adequately addressing ethical issues in AI research for health care and public health exceed a straightforward survey of ethical considerations. As Ferretti et al. (2021) contend, some capabilities of RECs adequately cover certain issues in AI-based health research, such as the common occurrence of conflicts of interest where researchers who accept funds from commercial technology providers are implicitly incentivized to produce results that align with commercial interests [ 12 ]. However, some features of REC review require reform to adequately meet ethical needs. Ferretti et al. outlined weaknesses of RECs that are longstanding and those that are novel to AI-related projects, proposing a series of directions for development that are regulatory, procedural, and complementary to REC functionality. The work required on a global scale to update the REC function in response to the demands of research involving AI is substantial.

These issues take greater urgency in the context of global health [ 19 ]. Teixeira da Silva (2022) described the global practice of “ethics dumping”, where researchers from high income countries bring ethically contentious practices to RECs in low-income countries as a strategy to gain approval and move projects forward [ 20 ]. Although not yet systematically documented in AI research for health, risk of ethics dumping in AI research is high. Evidence is already emerging of practices of “health data colonialism”, in which AI researchers and developers from large organizations in high-income countries acquire data to build algorithms in LMICs to avoid stricter regulations [ 21 ]. This specific practice is part of a larger collection of practices that characterize health data colonialism, involving the broader exploitation of data and the populations they represent primarily for commercial gain [ 21 , 22 ]. As an additional complication, AI algorithms trained on data from high-income contexts are unlikely to apply in straightforward ways to LMIC settings [ 21 , 23 ]. In the context of global health, there is widespread acknowledgement about the need to not only enhance the knowledge base of REC members about AI-based methods internationally, but to acknowledge the broader shifts required to encourage their capabilities to more fully address these and other ethical issues associated with AI research for health [ 8 ].

Although RECs are an important part of the story of the ethical governance of AI for global health research, they are not the only part. The responsibilities of supra-national entities such as the World Health Organization, national governments, organizational leaders, commercial AI technology providers, health care professionals, and other groups continue to be worked out internationally. In this context of ongoing work, examining issues that demand attention and strategies to address them remains an urgent and valuable task.

The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, REC members and other actors to engage with challenges and opportunities specifically related to research ethics. Each year the GFBR meeting includes a series of case studies and keynotes presented in plenary format to an audience of approximately 100 people who have applied and been competitively selected to attend, along with small-group breakout discussions to advance thinking on related issues. The specific topic of the forum changes each year, with past topics including ethical issues in research with people living with mental health conditions (2021), genome editing (2019), and biobanking/data sharing (2018). The forum is intended to remain grounded in the practical challenges of engaging in research ethics, with special interest in low resource settings from a global health perspective. A post-meeting fellowship scheme is open to all LMIC participants, providing a unique opportunity to apply for funding to further explore and address the ethical challenges that are identified during the meeting.

In 2022, the focus of the GFBR was “Ethics of AI in Global Health Research”. The forum consisted of 6 case study presentations (both short and long form) reporting on specific initiatives related to research ethics and AI for health, and 16 governance presentations (both short and long form) reporting on actual approaches to governing AI in different country settings. A keynote presentation from Professor Effy Vayena addressed the topic of the broader context for AI ethics in a rapidly evolving field. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. The 2-day forum addressed a wide range of themes. The conference report provides a detailed overview of each of the specific topics addressed while a policy paper outlines the cross-cutting themes (both documents are available at the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ ). As opposed to providing a detailed summary in this paper, we aim to briefly highlight central issues raised, solutions proposed, and the challenges facing the research ethics community in the years to come.

In this way, our primary aim in this paper is to present a synthesis of the challenges and opportunities raised at the GFBR meeting and in the planning process, followed by our reflections as a group of authors on their significance for governance leaders in the coming years. We acknowledge that the views represented at the meeting and in our results are a partial representation of the universe of views on this topic; however, the GFBR leadership invested a great deal of resources in convening a deeply diverse and thoughtful group of researchers and practitioners working on themes of bioethics related to AI for global health including those based in LMICs. We contend that it remains rare to convene such a strong group for an extended time and believe that many of the challenges and opportunities raised demand attention for more ethical futures of AI for health. Nonetheless, our results are primarily descriptive and are thus not explicitly grounded in a normative argument. We make effort in the Discussion section to contextualize our results by describing their significance and connecting them to broader efforts to reform global health research and practice.

Uniquely important ethical issues for AI in global health research

Presentations and group dialogue over the course of the forum raised several issues for consideration, and here we describe four overarching themes for the ethical governance of AI in global health research. Brief descriptions of each issue can be found in Table  1 . Reports referred to throughout the paper are available at the GFBR website provided above.

The first overarching thematic issue relates to the appropriateness of building AI technologies in response to health-related challenges in the first place. Case study presentations referred to initiatives where AI technologies were highly appropriate, such as in ear shape biometric identification to more accurately link electronic health care records to individual patients in Zambia (Alinani Simukanga). Although important ethical issues were raised with respect to privacy, trust, and community engagement in this initiative, the AI-based solution was appropriately matched to the challenge of accurately linking electronic records to specific patient identities. In contrast, forum participants raised questions about the appropriateness of an initiative using AI to improve the quality of handwashing practices in an acute care hospital in India (Niyoshi Shah), which led to the algorithm being gamed. Overall, participants acknowledged the dangers of techno-solutionism, in which AI researchers and developers treat AI technologies as the most obvious solutions to problems that in actuality demand much more complex strategies to address [ 24 ]. However, forum participants agreed that RECs in different contexts have differing degrees of power to raise issues of the appropriateness of an AI-based intervention.

The second overarching thematic issue relates to whether and how AI-based systems transfer from one national health context to another. One central issue raised by a number of case study presentations related to the challenges of validating an algorithm with data collected in a local environment. For example, one case study presentation described a project that would involve the collection of personally identifiable data for sensitive group identities, such as tribe, clan, or religion, in the jurisdictions involved (South Africa, Nigeria, Tanzania, Uganda and the US; Gakii Masunga). Doing so would enable the team to ensure that those groups were adequately represented in the dataset, so that the resulting algorithm was not biased against specific community groups when deployed in that context. However, some members of these communities might desire to be represented in the dataset, whereas others might not, illustrating the need to balance autonomy and inclusivity. It was also widely recognized that collecting these data is an immense challenge, particularly when historically oppressive practices have led to a low-trust environment for international organizations and the technologies they produce. It is important to note that in some countries, such as South Africa and Rwanda, it is illegal to collect information such as race and tribal identities, re-emphasizing the importance of cultural awareness and of avoiding “one size fits all” solutions.
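
To make the representation concern above concrete, the sketch below shows one way a research team might audit a trained classifier across self-identified community groups before deployment. This is purely an illustrative example added for this summary, not a method described at the forum; the record fields, the use of recall as the metric, and the 0.05 disparity threshold are assumptions made for the sketch.

```python
# Illustrative sketch only (not from the forum or the case study): checking
# whether a trained classifier performs comparably across self-identified
# community groups. The field names "group", "label", "prediction" and the
# 0.05 gap threshold are assumptions made for this example.
from collections import defaultdict

def per_group_recall(records):
    """records: iterable of dicts with keys 'group', 'label', 'prediction'."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        if r["label"] == 1:                      # positive cases only
            totals[r["group"]] += 1
            hits[r["group"]] += int(r["prediction"] == 1)
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparities(recalls, max_gap=0.05):
    """Return groups whose recall falls more than max_gap below the best group."""
    best = max(recalls.values())
    return {g: r for g, r in recalls.items() if best - r > max_gap}

# Toy usage: group "B" is flagged because its recall trails group "A" by 0.5.
data = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 1},
]
recalls = per_group_recall(data)
print(recalls, flag_disparities(recalls))
```

An audit of this kind is only possible if the relevant group identities are recorded in the first place, which is exactly the tension the case study raises.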

The third overarching thematic issue relates to understanding accountability, both for the impacts of AI technologies and for governance decision-making regarding their use. Where global health research involving AI leads to longer-term harms that might fall outside the usual scope of issues considered by a REC, who is to be held accountable, and how? This question was raised as one that requires much further attention, with laws varying internationally regarding the mechanisms available to hold researchers, innovators, and their institutions accountable over the longer term. However, it was recognized in breakout group discussion that many jurisdictions are developing strong data protection regimes related specifically to international collaboration for research involving health data. For example, Kenya’s Data Protection Act requires that any internationally funded project have a local principal investigator who will hold accountability for how data are shared and used [ 25 ]. The issue of research partnerships with commercial entities was raised by many participants in the context of accountability, pointing toward the urgent need for clear principles related to strategies for engagement with commercial technology companies in global health research.

The fourth and final overarching thematic issue raised here is that of consent. The issue of consent was framed by the widely shared recognition that models of individual, explicit consent might not produce a supportive environment for AI innovation that relies on the secondary uses of health-related datasets to build AI algorithms. Given this recognition, approaches such as community oversight of health data uses were suggested as a potential solution. However, the details of implementing such community oversight mechanisms require much further attention, particularly given the unique perspectives on health data in different country settings in global health research. Furthermore, some uses of health data do continue to require consent. One case study spanning South Africa, Nigeria, Kenya, Ethiopia and Uganda suggested that individual consent remains necessary when health data are transferred across borders from certain countries (Nezerith Cengiz). Broader clarity is necessary to support the ethical governance of health data uses for AI in global health research.
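
As a purely illustrative sketch of the kind of rule such consent requirements imply, the snippet below shares records across borders only when the record originates in a (hypothetical) jurisdiction requiring explicit transfer consent and that consent has been recorded. The jurisdiction list and field names are assumptions for illustration and do not reflect any country's actual legal requirements or the case study's design.

```python
# Illustrative sketch only: a rule that records originating in certain
# jurisdictions may leave the country only with explicit individual consent.
# The jurisdiction codes and record fields below are hypothetical assumptions.
CONSENT_REQUIRED_FOR_EXPORT = {"ZA", "NG", "KE", "ET", "UG"}  # hypothetical list

def exportable(record, destination_country):
    origin = record["origin_country"]
    if origin == destination_country:
        return True                        # no cross-border transfer involved
    if origin in CONSENT_REQUIRED_FOR_EXPORT:
        return record.get("explicit_transfer_consent", False)
    return True                            # other origins governed by other rules

records = [
    {"id": 1, "origin_country": "ZA", "explicit_transfer_consent": True},
    {"id": 2, "origin_country": "ZA", "explicit_transfer_consent": False},
]
print([r["id"] for r in records if exportable(r, "US")])  # -> [1]
```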

Recommendations for ethical governance of AI in global health research

Dialogue at the forum led to a range of suggestions for promoting ethical conduct of AI research for global health, related to the various roles of actors involved in the governance of AI research broadly defined. The strategies are written for actors we refer to as “governance leaders”, those people distributed throughout the AI for global health research ecosystem who are responsible for ensuring the ethical and socially responsible conduct of global health research involving AI (including researchers themselves). These include RECs, government regulators, health care leaders, health professionals, corporate social accountability officers, and others. Enacting these strategies would bolster the ethical governance of AI for global health more generally, enabling multiple actors to fulfill their roles related to governing research and development activities carried out across multiple organizations, including universities, academic health sciences centers, start-ups, and technology corporations. Specific suggestions are summarized in Table  2 .

First, forum participants suggested that governance leaders, including RECs, should remain up to date on recent advances in the regulation of AI for health. Regulation of AI for health is advancing rapidly and takes different forms in jurisdictions around the world. RECs play an important role in governance, but only a partial one; it was deemed important for RECs to acknowledge how they fit within a broader governance ecosystem in order to more effectively address the issues within their scope. Not only RECs but also organizational leaders responsible for procurement, researchers, and commercial actors should commit to remaining up to date on the relevant approaches to regulating AI for health care and public health in jurisdictions internationally. In this way, governance practice can keep pace with advances in regulation.

Second, forum participants suggested that governance leaders should focus on ethical governance of health data as a basis for ethical global health AI research. Health data are considered the foundation of AI development, being used to train AI algorithms for various uses [ 26 ]. By focusing on ethical governance of health data generation, sharing, and use, multiple actors will help to build an ethical foundation for AI development among global health researchers.

Third, forum participants believed that governance processes should incorporate AI impact assessments where appropriate. An AI impact assessment is the process of evaluating the potential effects, both positive and negative, of implementing an AI algorithm on individuals, society, and various stakeholders, generally over time frames specified in advance of implementation [ 27 ]. Although not all types of AI research in global health would warrant an AI impact assessment, it is especially relevant for studies aiming to implement an AI system for intervention in health care or public health. Organizations such as RECs can use AI impact assessments to boost understanding of potential harms at the outset of a research project, encouraging researchers to more deeply consider potential harms in the development of their study.
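
As a hypothetical, simplified illustration of what such an assessment might look like when captured in a structured way alongside a research protocol, consider the sketch below. The field names and the default review interval are assumptions made for this example; they are not a template proposed by the forum or drawn from reference [ 27 ].

```python
# Illustrative sketch only: a minimal structured record of an AI impact
# assessment. All field names and defaults are assumptions for this example.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIImpactAssessment:
    project: str
    intended_use: str
    affected_groups: List[str]
    anticipated_benefits: List[str]
    anticipated_harms: List[str]
    mitigations: List[str]
    review_interval_months: int = 12      # reassess over a pre-specified time frame
    open_questions: List[str] = field(default_factory=list)

assessment = AIImpactAssessment(
    project="Triage-support algorithm (hypothetical)",
    intended_use="Prioritise referrals in outpatient clinics",
    affected_groups=["patients", "clinic staff", "referral hospitals"],
    anticipated_benefits=["shorter waiting times for urgent cases"],
    anticipated_harms=["systematic under-referral of under-represented groups"],
    mitigations=["per-group performance audits", "clinician override of outputs"],
)
print(assessment.project, assessment.review_interval_months)
```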

Fourth, forum participants suggested that governance decisions should incorporate the use of environmental impact assessments, or at least the incorporation of environmental values when assessing the potential impact of an AI system. An environmental impact assessment involves evaluating and anticipating the potential environmental effects of a proposed project to inform ethical decision-making that supports sustainability [ 28 ]. Although a relatively new consideration in research ethics conversations [ 29 ], the environmental impact of building technologies is a crucial consideration for the public health commitment to environmental sustainability. Governance leaders can use environmental impact assessments to boost understanding of potential environmental harms linked to AI research projects in global health over both the shorter and longer terms.
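
As a rough, hypothetical illustration of where such an assessment might begin, the sketch below estimates the energy use and carbon emissions attributable to training a model from GPU count, power draw, and training time. The data-centre efficiency (PUE) and grid carbon-intensity figures are placeholder assumptions; a real assessment would use measured, locally relevant values and a much broader scope (for example, hardware manufacture and inference at deployment).

```python
# Illustrative sketch only: a back-of-the-envelope training footprint estimate.
# The PUE and carbon-intensity defaults are placeholder assumptions, not data.
def training_footprint(gpu_count, gpu_power_watts, hours,
                       pue=1.5, kg_co2_per_kwh=0.4):
    energy_kwh = gpu_count * gpu_power_watts * hours / 1000.0 * pue
    return energy_kwh, energy_kwh * kg_co2_per_kwh

energy, co2 = training_footprint(gpu_count=8, gpu_power_watts=300, hours=72)
print(f"~{energy:.0f} kWh, ~{co2:.0f} kg CO2e")   # ~259 kWh, ~104 kg CO2e
```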

Fifth, forum participants suggested that governance leaders should require stronger transparency in the development of AI algorithms in global health research. Transparency was considered essential in the design and development of AI algorithms for global health to ensure ethical and accountable decision-making throughout the process. Furthermore, whether and how researchers have considered the unique contexts into which such algorithms may be deployed can be surfaced through stronger transparency, for example in describing what primary considerations were made at the outset of the project and which stakeholders were consulted along the way. Sharing information about data provenance and methods used in AI development will also enhance the trustworthiness of the AI-based research process.
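
One way to operationalize this kind of transparency is to publish a structured provenance and reporting record alongside the algorithm. The sketch below is a hypothetical, minimal example loosely inspired by published “model card” and “datasheet” proposals; the field names and contents are assumptions added for illustration and are not a format endorsed by the forum.

```python
# Illustrative sketch only: a minimal provenance/reporting record. All names
# and values below are hypothetical placeholders, not a recommended standard.
import json

model_card = {
    "model": "hypothetical-screening-model-v0.1",
    "developers": ["University X", "Ministry of Health Y"],
    "training_data": {
        "sources": ["routine clinic records, 2018-2022 (hypothetical)"],
        "consent_basis": "broad consent with community oversight",
        "known_gaps": ["rural facilities under-represented"],
    },
    "intended_deployment_contexts": ["urban outpatient clinics"],
    "stakeholders_consulted": ["patient advocacy group", "frontline nurses"],
    "evaluation": {"per_group_metrics_reported": True},
}
print(json.dumps(model_card, indent=2))
```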

Sixth, forum participants suggested that governance leaders can encourage or require community engagement at various points throughout an AI project. It was considered that engaging patients and communities is crucial in AI algorithm development to ensure that the technology aligns with community needs and values. However, participants acknowledged that this is not a straightforward process. Effective community engagement requires lengthy commitments to meeting with and hearing from diverse communities in a given setting, and demands a particular set of skills in communication and dialogue that are not possessed by all researchers. Encouraging AI researchers to begin this process early and build long-term partnerships with community members is a promising strategy to deepen community engagement in AI research for global health. One notable recommendation was that research funders have an opportunity to incentivize and enable community engagement with funds dedicated to these activities in AI research in global health.

Seventh, forum participants suggested that governance leaders can encourage researchers to build strong, fair partnerships between institutions and individuals across country settings. In a context of longstanding imbalances in geopolitical and economic power, fair partnerships in global health demand a priori commitments to share benefits related to advances in medical technologies, knowledge, and financial gains. Although enforcing this point might be beyond the remit of RECs, commentary from RECs can encourage researchers to consider stronger, fairer partnerships in global health over the longer term.

Eighth, it became evident that new forms of regulatory experimentation need to be explored, given the complexity of regulating a technology of this nature. In addition, the health sector has a series of particularities that make it especially complicated to generate rules that have not been previously tested. Several participants highlighted the desire to promote spaces for experimentation such as regulatory sandboxes or innovation hubs in health. These spaces can have several benefits for addressing issues surrounding the regulation of AI in the health sector, such as: (i) increasing the capacities and knowledge of health authorities about this technology; (ii) identifying the major problems surrounding AI regulation in the health sector; (iii) establishing possibilities for exchange and learning with other authorities; (iv) promoting innovation and entrepreneurship in AI in health; and (v) identifying the need to regulate AI in this sector and to update other existing regulations.

Ninth and finally, forum participants believed that the capabilities of governance leaders need to evolve to better incorporate expertise related to AI in ways that make sense within a given jurisdiction. With respect to RECs, for example, it might not make sense for every REC to recruit a member with expertise in AI methods. Rather, it will make more sense in some jurisdictions to consult with members of the scientific community with expertise in AI when research protocols are submitted that demand such expertise. Furthermore, RECs and other approaches to research governance in jurisdictions around the world will need to evolve in order to adopt the suggestions outlined above, developing processes that apply specifically to the ethical governance of research using AI methods in global health.

Discussion

Research involving the development and implementation of AI technologies continues to grow in global health, posing important challenges for ethical governance of AI in global health research around the world. In this paper we have summarized insights from the 2022 GFBR, focused specifically on issues in research ethics related to AI for global health research. We summarized four thematic challenges for governance related to AI in global health research and nine suggestions arising from presentations and dialogue at the forum. In this brief discussion section, we present an overarching observation about power imbalances that frames efforts to evolve the role of governance in global health research, and then outline two important opportunity areas as the field develops to meet the challenges of AI in global health research.

Dialogue about power is not unfamiliar in global health, especially given recent contributions exploring what it would mean to de-colonize global health research, funding, and practice [ 30 , 31 ]. Discussions of research ethics applied to AI research in global health contexts are deeply infused with power imbalances. The existing context of global health is one in which high-income countries primarily located in the “Global North” charitably invest in projects taking place primarily in the “Global South” while recouping knowledge, financial, and reputational benefits [ 32 ]. With respect to AI development in particular, recent examples of digital colonialism frame dialogue about global partnerships, raising attention to the role of large commercial entities and global financial capitalism in global health research [ 21 , 22 ]. Furthermore, the power of governance organizations such as RECs to intervene in the process of AI research in global health varies widely around the world, depending on the authorities assigned to them by domestic research governance policies. These observations frame the challenges outlined in our paper, highlighting the difficulties associated with making meaningful change in this field.

Despite these overarching challenges of the global health research context, there are clear strategies for progress in this domain. Firstly, AI innovation is rapidly evolving, which means approaches to the governance of AI for health are rapidly evolving too. Such rapid evolution presents an important opportunity for governance leaders to clarify their vision and influence over AI innovation in global health research, boosting the expertise, structure, and functionality required to meet the demands of research involving AI. Secondly, the research ethics community has strong international ties, linked to a global scholarly community that is committed to sharing insights and best practices around the world. This global community can be leveraged to coordinate efforts to produce advances in the capabilities and authorities of governance leaders to meaningfully govern AI research for global health given the challenges summarized in our paper.

Limitations

Our paper has two specific limitations that we address explicitly here. First, it is still early in the lifetime of the development of applications of AI for use in global health, and as such, the global community has had limited opportunity to learn from experience. For example, far fewer case studies, which detail experiences with the actual implementation of an AI technology, were submitted to GFBR 2022 for consideration than expected. In contrast, many more governance reports were submitted, which detail the processes and outputs of governance efforts that anticipate the development and dissemination of AI technologies. This observation represents both a success and a challenge. It is a success that so many groups are engaging in anticipatory governance of AI technologies, exploring evidence of their likely impacts and governing technologies in novel and well-designed ways. It is a challenge that there is little experience to build upon of the successful implementation of AI technologies in ways that have limited harms while promoting innovation. Further experience with AI technologies in global health will contribute to revising and enhancing the challenges and recommendations we have outlined in our paper.

Second, global trends in the politics and economics of AI technologies are evolving rapidly. Although some nations are advancing detailed policy approaches to regulating AI more generally, including for uses in health care and public health, the impacts of corporate investments in AI and political responses related to governance remain to be seen. The excitement around large language models (LLMs) and large multimodal models (LMMs) has drawn deeper attention to the challenges of regulating AI in any general sense, opening dialogue about health sector-specific regulations. The direction of this global dialogue, strongly linked to high-profile corporate actors and multi-national governance institutions, will strongly influence the development of boundaries around what is possible for the ethical governance of AI for global health. We have written this paper at a point when these developments are proceeding rapidly, and as such, we acknowledge that our recommendations will need updating as the broader field evolves.

Ultimately, coordination and collaboration between many stakeholders in the research ethics ecosystem will be necessary to strengthen the ethical governance of AI in global health research. The 2022 GFBR illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.

Data availability

All data and materials analyzed to produce this paper are available on the GFBR website: https://www.gfbr.global/past-meetings/16th-forum-cape-town-south-africa-29-30-november-2022/ .

Clark P, Kim J, Aphinyanaphongs Y. Marketing and US Food and Drug Administration clearance of artificial intelligence and machine learning enabled software in and as medical devices: a systematic review. JAMA Netw Open. 2023;6(7):e2321792.


Potnis KC, Ross JS, Aneja S, Gross CP, Richman IB. Artificial intelligence in breast cancer screening: evaluation of FDA device regulation and future recommendations. JAMA Intern Med. 2022;182(12):1306–12.

Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: a systematic review. Soc Sci Med. 2022;296:114782.

Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194.

Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1):120.

Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.

Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023.

Ho CWL, Malpani R. Scaling up the research ethics framework for healthcare machine learning as global health ethics and governance. Am J Bioeth. 2022;22(5):36–8.

Yeung K. Recommendation of the council on artificial intelligence (OECD). Int Leg Mater. 2020;59(1):27–34.

Maddox TM, Rumsfeld JS, Payne PR. Questions for artificial intelligence in health care. JAMA. 2019;321(1):31–2.

Dzau VJ, Balatbat CA, Ellaissi WF. Revisiting academic health sciences systems a decade later: discovery to health to population to society. Lancet. 2021;398(10318):2300–4.

Ferretti A, Ienca M, Sheehan M, Blasimme A, Dove ES, Farsides B, et al. Ethics review of big data research: what should stay and what should be reformed? BMC Med Ethics. 2021;22(1):1–13.

Rahimzadeh V, Serpico K, Gelinas L. Institutional review boards need new skills to review data sharing and management plans. Nat Med. 2023;1–3.

Kling S, Singh S, Burgess TL, Nair G. The role of an ethics advisory committee in data science research in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–3.


Cengiz N, Kabanda SM, Esterhuizen TM, Moodley K. Exploring perspectives of research ethics committee members on the governance of big data in sub-Saharan Africa. South Afr J Sci. 2023;119(5–6):1–9.

Doerr M, Meeder S. Big health data research and group harm: the scope of IRB review. Ethics Hum Res. 2022;44(4):34–8.

Ballantyne A, Stewart C. Big data and public-private partnerships in healthcare and research: the application of an ethics framework for big data in health and research. Asian Bioeth Rev. 2019;11(3):315–26.

Samuel G, Chubb J, Derrick G. Boundaries between research ethics and ethical research use in artificial intelligence health research. J Empir Res Hum Res Ethics. 2021;16(3):325–37.

Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22(1):1–17.

Teixeira da Silva JA. Handling ethics dumping and neo-colonial research: from the laboratory to the academic literature. J Bioethical Inq. 2022;19(3):433–43.

Ferryman K. The dangers of data colonialism in precision public health. Glob Policy. 2021;12:90–2.

Couldry N, Mejias UA. Data colonialism: rethinking big data’s relation to the contemporary subject. Telev New Media. 2019;20(4):336–49.

World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. 2021.

Metcalf J, Moss E. Owning ethics: corporate logics, silicon valley, and the institutionalization of ethics. Soc Res Int Q. 2019;86(2):449–76.

Office of the Data Protection Commissioner, Kenya. Data Protection Act [Internet]. 2021 [cited 2023 Sep 30]. Available from: https://www.odpc.go.ke/dpa-act/ .

Sharon T, Lucivero F. Introduction to the special theme: the expansion of the health data ecosystem – rethinking data ethics and governance. Big Data Soc. 2019;6(2):2053951719852969.

Reisman D, Schultz J, Crawford K, Whittaker M. Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute; 2018.

Morgan RK. Environmental impact assessment: the state of the art. Impact Assess Proj Apprais. 2012;30(1):5–14.

Samuel G, Richie C. Reimagining research ethics to include environmental sustainability: a principled approach, including a case study of data-driven health research. J Med Ethics. 2023;49(6):428–33.

Kwete X, Tang K, Chen L, Ren R, Chen Q, Wu Z, et al. Decolonizing global health: what should be the target of this movement and where does it lead us? Glob Health Res Policy. 2022;7(1):3.

Abimbola S, Asthana S, Montenegro C, Guinto RR, Jumbam DT, Louskieter L, et al. Addressing power asymmetries in global health: imperatives in the wake of the COVID-19 pandemic. PLoS Med. 2021;18(4):e1003604.

Benatar S. Politics, power, poverty and global health: systems and frames. Int J Health Policy Manag. 2016;5(10):599.


Acknowledgements

We would like to acknowledge the outstanding contributions of the attendees of GFBR 2022 in Cape Town, South Africa. This paper is authored by members of the GFBR 2022 Planning Committee. We would like to acknowledge additional members Tamra Lysaght, National University of Singapore, and Niresh Bhagwandin, South African Medical Research Council, for their input during the planning stages and as reviewers of the applications to attend the Forum.

Funding

This work was supported by Wellcome [222525/Z/21/Z], the US National Institutes of Health, the UK Medical Research Council (part of UK Research and Innovation), and the South African Medical Research Council through funding to the Global Forum on Bioethics in Research.

Author information

Authors and affiliations

Department of Physical Therapy, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada

Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA

Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA

Department of Philosophy and Classics, University of Ghana, Legon-Accra, Ghana

Caesar A. Atuire

Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK

Mahidol Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

Phaik Yeong Cheah

Berkman Klein Center, Harvard University, Bogotá, Colombia

Armando Guio Español

Department of Radiology and Informatics, Emory University School of Medicine, Atlanta, GA, USA

Judy Wawira Gichoya

Health Ethics & Governance Unit, Research for Health Department, Science Division, World Health Organization, Geneva, Switzerland

Adrienne Hunt & Katherine Littler

African Center of Excellence in Bioinformatics and Data Intensive Science, Infectious Diseases Institute, Makerere University, Kampala, Uganda

Daudi Jjingo

ISI Foundation, Turin, Italy

Daniela Paolotti

Department of Health Sciences and Technology, ETH Zurich, Zürich, Switzerland

Effy Vayena

Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada


Contributions

JS led the writing, contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. JA contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. CA contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. PYC contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. AE contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. JWG contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. AH contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. DJ contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. KL contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. DP contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper. EV contributed to conceptualization and analysis, critically reviewed and provided feedback on drafts of this paper, and provided final approval of the paper.

Corresponding author

Correspondence to James Shaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Shaw, J., Ali, J., Atuire, C.A. et al. Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research. BMC Med Ethics 25 , 46 (2024). https://doi.org/10.1186/s12910-024-01044-w


Received : 31 October 2023

Accepted : 01 April 2024

Published : 18 April 2024

DOI : https://doi.org/10.1186/s12910-024-01044-w


Keywords

  • Artificial intelligence
  • Machine learning
  • Research ethics
  • Global health



Stanford Prison Experiment Ethical Issues

This essay about the Stanford Prison Experiment scrutinizes the ethical dilemmas surrounding the study. It explores issues such as informed consent, researcher bias, and the decision not to intervene in the face of escalating harm. The summary emphasizes the blurred line between scientific inquiry and ethical misconduct, as well as the broader implications of the study’s findings. Ultimately, it underscores the importance of upholding ethical principles and prioritizing the well-being of participants in psychological research.

Delving into the annals of psychological research, one cannot bypass the notorious Stanford Prison Experiment, a venture orchestrated by the esteemed Dr. Philip Zimbardo in the early 1970s. Intended to unravel the intricate dynamics of power and authority within a simulated prison setting, the study instead unfurled a tapestry of ethical quandaries, inviting a probing analysis of the researcher’s obligations and the boundaries of scientific inquiry.

At the heart of the ethical maelstrom swirling around the Stanford Prison Experiment lies the issue of informed consent. While participants willingly entered the realm of the study, they were not fully apprised of the potential psychological toll that awaited them. This omission shrouded the experiment in a cloak of ethical ambiguity, as individuals were unwittingly thrust into distressing circumstances without a complete understanding of the risks involved. The foundational principles of autonomy and respect for persons were thus undermined, casting a shadow over the ethical integrity of the study.

Moreover, the conduct exhibited by both guards and prisoners within the simulated prison environment raises profound ethical concerns. The blurring of boundaries between research and reality gave rise to a crucible of abuse and mistreatment, where participants endured humiliation, coercion, and even physical violence. The ethical compass of the experiment appeared to falter as the line between scientific inquiry and ethical transgression became increasingly blurred. The ramifications of subjecting individuals to such distressing conditions in the name of academic pursuit resonate far beyond the confines of the laboratory.

Central to the ethical discourse surrounding the Stanford Prison Experiment is the role of the researcher and the influence wielded therein. Dr. Zimbardo’s dual role as both the principal investigator and the overseer of the simulated prison bestowed upon him a mantle of authority that extended beyond the confines of academic inquiry. This imbalance of power raises questions about the potential for coercion and undue influence, as participants may have felt compelled to conform to the expectations set forth by the experimenter. The ethical fabric of the study thus becomes entangled in a web of researcher bias and influence, challenging the integrity of the findings and the validity of the conclusions drawn.

Furthermore, the decision not to intervene in the face of escalating harm and abuse among participants casts a pall over the ethical landscape of the Stanford Prison Experiment. Despite bearing witness to distressing acts of aggression and suffering, the researchers opted to prioritize the observation of natural behavior over the well-being of the individuals involved. This ethical calculus, or lack thereof, raises fundamental questions about the researcher’s duty to protect human subjects from harm and the ethical boundaries that delineate acceptable research practices from ethical transgression.

Additionally, the broader implications and generalizability of the study’s findings have come under scrutiny in the wake of ethical scrutiny. The highly artificial nature of the simulated prison environment raises doubts about the applicability of the results to real-world contexts, casting a shadow of doubt over the validity and relevance of the findings. The ethical imperative to conduct research that yields meaningful insights while safeguarding the well-being of participants is thus called into question, as the ethical complexities of the Stanford Prison Experiment continue to reverberate through the corridors of academic discourse.

In summation, the Stanford Prison Experiment serves as a cautionary tale about the ethical complexities inherent in psychological research involving human subjects. From issues of informed consent and researcher bias to the failure to intervene in the face of escalating harm, the study underscores the importance of upholding ethical principles and prioritizing the well-being of participants above all else. As scholars and practitioners, it is our responsibility to navigate these ethical waters with care and diligence, ensuring that our research practices uphold the dignity and rights of all individuals involved.


Cite this page

Stanford Prison Experiment Ethical Issues. (2024, Apr 29). Retrieved from https://papersowl.com/examples/stanford-prison-experiment-ethical-issues/

"Stanford Prison Experiment Ethical Issues." PapersOwl.com , 29 Apr 2024, https://papersowl.com/examples/stanford-prison-experiment-ethical-issues/

PapersOwl.com. (2024). Stanford Prison Experiment Ethical Issues . [Online]. Available at: https://papersowl.com/examples/stanford-prison-experiment-ethical-issues/ [Accessed: 29 Apr. 2024]

"Stanford Prison Experiment Ethical Issues." PapersOwl.com, Apr 29, 2024. Accessed April 29, 2024. https://papersowl.com/examples/stanford-prison-experiment-ethical-issues/

"Stanford Prison Experiment Ethical Issues," PapersOwl.com , 29-Apr-2024. [Online]. Available: https://papersowl.com/examples/stanford-prison-experiment-ethical-issues/. [Accessed: 29-Apr-2024]

PapersOwl.com. (2024). Stanford Prison Experiment Ethical Issues . [Online]. Available at: https://papersowl.com/examples/stanford-prison-experiment-ethical-issues/ [Accessed: 29-Apr-2024]


