Research Design Review

A discussion of qualitative and quantitative research design, and of ethical considerations in case-centered qualitative research.

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 304–305).

Ethical considerations are important in every research method involving human subjects, but they take on added significance in case-centered research, where researchers often work closely with participants over a period of time, frequently face-to-face, and where researcher–participant relationships play an important role in the research outcomes. Both case study and narrative research gather a great deal of highly detailed information on each case: a case study may collect a detailed account of a particular social program, for example, while a narrative inquiry may result in long, very personal stories associated with a chronic illness. Beyond the ethical dilemma associated with drawing interpretations of narratives outside their temporal and social contexts (Brinkmann & Kvale, 2008), case-centered studies run the risk of inadvertently exposing participants’ identities (without their permission) unless preventive measures are taken.

This is why the use of informed and voluntary consent as well as approval from institutional review boards (when required) is critical in case-centered research.  Consent involves disclosing the various aspects of the research, emphasizing the voluntary component, promising to keep participants safe, and paying particular attention to vulnerable population segments (e.g., children).  Yet these efforts need to go further.  Case-centered researchers must also effectively communicate the confidential nature of the research and take extra precautions to ensure participants’ right to privacy – which can be particularly challenging when only one case is the focal point of the research (e.g., a city social program).  For this reason, it is not uncommon for case study and narrative researchers to maintain participants’ anonymity in their final reports by changing participants’ names as well as the names of the characters and places revealed in the course of the research.

The path that these ethical considerations – consent and anonymity – take in the research design is also important.  The skilled researcher will think carefully about how and when to incorporate these ethical standards while maintaining the quality and integrity of the data.  For instance, narrative researchers are reluctant to reveal “too much” regarding the study objectives at the outset of an interview for fear of biasing the participant’s narrative. The thinking tends to be that “the ‘scholarly good’ of framing the study in a way that makes possible the kind of narration the researcher needs outweighs the ‘moral’ good of telling the participant the exact nature of the study” (Josselson, 2007, p. 540). Many of these researchers balance the ethical obligation of informed consent with the need for quality outcomes by, among other things, gaining consent twice – before the interview and again at its completion – and by conducting a thoughtful debriefing with each participant.

Case-centered researchers also need to give thoughtful attention to anonymity and its impact on the final outcomes.  Specifically, researchers must address questions such as: How will anonymizing the data introduce bias or error by changing context? How will de-identifying the data alter its interpretation? These are important questions because the answers may determine how, or whether, the data are used.

Ethical considerations revolve around transparency and safety, with safety broadly defined in terms of both physical and psychological harm, including the potential harm associated with invasions of privacy and breaches of confidentiality.  However, ethical considerations cannot – and should not – be contemplated in a vacuum.  Researchers – particularly case-centered researchers – need to incorporate these ethics carefully while also ensuring the quality of the research results.

Brinkmann, S., & Kvale, S. (2008). Ethics in qualitative psychological research. In C. Willig & W. Stainton-Rogers (Eds.), The Sage handbook of qualitative psychology (pp. 263–279). London: Sage Publications.

Josselson, R. (2007). The ethical attitude in narrative research: Principles and practicalities. In D. J. Clandinin (Ed.), Handbook of narrative inquiry (pp. 537–566). Thousand Oaks, CA: Sage Publications.



In qualitative research, ethical principles are primarily centered on protecting research participants and the guiding foundation of “do no harm”.

Following is a list of core ethical principles that are important in qualitative research:

Respect for persons – Respect the autonomy, decision-making, and dignity of participants.

Beneficence – Minimize the risks (physical, psychological, and social) and maximize the benefits to research participants.

Justice – Select participants from groups of people whom the research may benefit.

Respect for communities – Protect and respect the values and interests of the community as a whole, and protect the community from harm.


McCombs School of Business


Case Studies

More than 70 cases pair ethics concepts with real world situations. From journalism, performing arts, and scientific research to sports, law, and business, these case studies explore current and historic ethical dilemmas, their motivating biases, and their consequences. Each case includes discussion questions, related videos, and a bibliography.

A Million Little Pieces

James Frey’s popular memoir stirred controversy and media attention after it was revealed to contain numerous exaggerations and fabrications.

Abramoff: Lobbying Congress

Super-lobbyist Abramoff was caught in a scheme to lobby against his own clients. Was a corrupt individual or a corrupt system – or both – to blame?

Apple Suppliers & Labor Practices

Is tech company Apple, Inc. ethically obligated to oversee the questionable working conditions of other companies further down its supply chain?

Approaching the Presidency: Roosevelt & Taft

Some presidents view their responsibilities in strictly legal terms, others according to duty. Roosevelt and Taft took two extreme approaches.

Appropriating “Hope”

Fairey’s portrait of Barack Obama raised debate over the extent to which an artist can use and modify another’s artistic work, yet still call it one’s own.

Arctic Offshore Drilling

Competing groups frame the debate over oil drilling off Alaska’s coast in varying ways depending on their environmental and economic interests.

Banning Burkas: Freedom or Discrimination?

The French law banning women from wearing burkas in public sparked debate about discrimination and freedom of religion.

Birthing Vaccine Skepticism

Wakefield published an article riddled with inaccuracies and conflicts of interest that created significant vaccine hesitancy regarding the MMR vaccine.

Blurred Lines of Copyright

Marvin Gaye’s Estate won a lawsuit against Robin Thicke and Pharrell Williams for the hit song “Blurred Lines,” which had a similar feel to one of his songs.

Bullfighting: Art or Not?

Bullfighting has been a prominent cultural and artistic event for centuries, but in recent decades it has faced increasing criticism for animal rights’ abuse.

Buying Green: Consumer Behavior

Does purchasing green products, such as organic foods and electric cars, give consumers the moral license to indulge in unethical behavior?

Cadavers in Car Safety Research

Engineers at Heidelberg University insist that the use of human cadavers in car safety research is ethical because their research can save lives.

Cardinals’ Computer Hacking

St. Louis Cardinals scouting director Chris Correa hacked into the Houston Astros’ webmail system, leading to legal repercussions and a lifetime ban from MLB.

Cheating: Atlanta’s School Scandal

Teachers and administrators at Parks Middle School adjust struggling students’ test scores in an effort to save their school from closure.

Cheating: Sign-Stealing in MLB

The Houston Astros’ sign-stealing scheme rocked the baseball world, leading to a game-changing MLB investigation and fallout.

Cheating: UNC’s Academic Fraud

UNC’s academic fraud scandal uncovered an 18-year scheme of unchecked coursework and fraudulent classes that enabled student-athletes to play sports.

Cheney v. U.S. District Court

A controversial case focuses on Justice Scalia’s personal friendship with Vice President Cheney and the possible conflict of interest it poses to the case.

Christina Fallin: “Appropriate Culturation?”

After Fallin posted a picture of herself wearing a Plains headdress on social media, uproar emerged over cultural appropriation and Fallin’s intentions.

Climate Change & the Paris Deal

While climate change poses many abstract problems, the actions (or inactions) of today’s populations will have tangible effects on future generations.

Cover-Up on Campus

While the Baylor University football team was winning on the field, university officials failed to take action when allegations of sexual assault by student athletes emerged.

Covering Female Athletes

Sports Illustrated stirs controversy when their cover photo of an Olympic skier seems to focus more on her physical appearance than her athletic abilities.

Covering Yourself? Journalists and the Bowl Championship

Can news outlets covering the Bowl Championship Series fairly report sports news if their own polls were used to create the news?

Cyber Harassment

After a student defames a middle school teacher on social media, the teacher confronts the student in class and posts a video of the confrontation online.

Defending Freedom of Tweets?

Running back Rashard Mendenhall receives backlash from fans after criticizing the celebration of the assassination of Osama Bin Laden in a tweet.

Dennis Kozlowski: Living Large

Dennis Kozlowski was an effective leader for Tyco in his first few years as CEO, but eventually faced criminal charges over his use of company assets.

Digital Downloads

File-sharing program Napster sparked debate over the legal and ethical dimensions of downloading unauthorized copies of copyrighted music.

Dr. V’s Magical Putter

Journalist Caleb Hannan outed Dr. V as a trans woman, sparking debate over the ethics of Hannan’s reporting, as well as its role in Dr. V’s suicide.

East Germany’s Doping Machine

From 1968 to the late 1980s, East Germany (GDR) doped some 9,000 athletes to gain success in international athletic competitions despite being aware of the unfortunate side effects.

Ebola & American Intervention

Did the dispatch of U.S. military units to Liberia to aid in humanitarian relief during the Ebola epidemic help or hinder the process?

Edward Snowden: Traitor or Hero?

Was Edward Snowden’s release of confidential government documents ethically justifiable?

Ethical Pitfalls in Action

Why do good people do bad things? Behavioral ethics is the science of moral decision-making, which explores why and how people make the ethical (and unethical) decisions that they do.

Ethical Use of Home DNA Testing

The rising popularity of at-home DNA testing kits raises questions about privacy and consumer rights.

Flying the Confederate Flag

A heated debate ensues over whether or not the Confederate flag should be removed from the South Carolina State House grounds.

Freedom of Speech on Campus

In the wake of racially motivated offenses, student protests sparked debate over the roles of free speech, deliberation, and tolerance on campus.

Freedom vs. Duty in Clinical Social Work

What should social workers do when their personal values come in conflict with the clients they are meant to serve?

Full Disclosure: Manipulating Donors

When an intern witnesses a donor making a large gift to a non-profit organization under misleading circumstances, she struggles with what to do.

Gaming the System: The VA Scandal

The Veterans Administration’s incentives were meant to spur more efficient and productive healthcare, but not all administrators complied as intended.

German Police Battalion 101

During the Holocaust, ordinary Germans became willing killers even though they could have opted out from murdering their Jewish neighbors.

Head Injuries & American Football

Many studies have linked traumatic brain injuries and related conditions to American football, creating controversy around the safety of the sport.

Head Injuries & the NFL

American football is a rough and dangerous game and its impact on the players’ brain health has sparked a hotly contested debate.

Healthcare Obligations: Personal vs. Institutional

A medical doctor must make a difficult decision when informing patients of the effectiveness of flu shots while upholding institutional recommendations.

High Stakes Testing

In the wake of the No Child Left Behind Act, parents, teachers, and school administrators take different positions on how to assess student achievement.

In-FUR-mercials: Advertising & Adoption

When the Lied Animal Shelter faces a spike in animal intake, an advertising agency uses its moral imagination to increase pet adoptions.

Krogh & the Watergate Scandal

Egil Krogh was a young lawyer working for the Nixon Administration whose ethics faded from view when asked to play a part in the Watergate break-in.

Limbaugh on Drug Addiction

Radio talk show host Rush Limbaugh argued that drug abuse was a choice, not a disease. He later became addicted to painkillers.

LochteGate

U.S. Olympic swimmer Ryan Lochte’s “over-exaggeration” of an incident at the 2016 Rio Olympics led to very real consequences.

Meet Me at Starbucks

Two black men were arrested after an employee called the police on them, prompting Starbucks to implement “racial-bias” training across all its stores.

Myanmar Amber

Buying amber could potentially fund an ethnic civil war, but refraining allows collectors to acquire important specimens that could be used for research.

Negotiating Bankruptcy

Bankruptcy lawyer Gellene successfully represented a mining company during a major reorganization, but failed to disclose potential conflicts of interest.

Pao & Gender Bias

Ellen Pao stirred debate in the venture capital and tech industries when she filed a lawsuit against her employer on grounds of gender discrimination.

Pardoning Nixon

One month after Richard Nixon resigned from the presidency, Gerald Ford made the controversial decision to issue Nixon a full pardon.

Patient Autonomy & Informed Consent

Nursing staff and family members struggle with informed consent when taking care of a patient who has been deemed legally incompetent.

Prenatal Diagnosis & Parental Choice

Debate has emerged over the ethics of prenatal diagnosis and reproductive freedom in instances where testing has revealed genetic abnormalities.

Reporting on Robin Williams

After Robin Williams took his own life, news media covered the story in great detail, leading many to argue that such reporting violated the family’s privacy.

Responding to Child Migration

An influx of child migrants posed logistical and ethical dilemmas for U.S. authorities while intensifying ongoing debate about immigration.

Retracting Research: The Case of Chandok v. Klessig

A researcher makes the difficult decision to retract a published, peer-reviewed article after the original research results cannot be reproduced.

Sacking Social Media in College Sports

In the wake of questionable social media use by college athletes, the head coach at University of South Carolina bans his players from using Twitter.

Selling Enron

Following the deregulation of electricity markets in California, private energy company Enron profited greatly, but at a dire cost.

Snyder v. Phelps

Freedom of speech was put on trial in a case involving the Westboro Baptist Church and their protesting at the funeral of U.S. Marine Matthew Snyder.

Something Fishy at the Paralympics

Rampant cheating has plagued the Paralympics over the years, compromising the credibility and sportsmanship of Paralympian athletes.

Sports Blogs: The Wild West of Sports Journalism?

Deadspin pays an anonymous source for information related to NFL star Brett Favre, sparking debate over the ethics of “checkbook journalism.”

Stangl & the Holocaust

Franz Stangl was the most effective Nazi administrator in Poland, killing nearly one million Jews at Treblinka, but he claimed he was simply following orders.

Teaching Blackface: A Lesson on Stereotypes

A teacher was put on leave for showing a blackface video during a lesson on racial segregation, sparking discussion over how to teach about stereotypes.

The Astros’ Sign-Stealing Scandal

The Houston Astros rode a wave of success, culminating in a World Series win, but it all came crashing down when their sign-stealing scheme was revealed.

The Central Park Five

Despite the indisputable and overwhelming evidence of the innocence of the Central Park Five, some involved in the case refuse to believe it.

The CIA Leak

Legal and political fallout follows from the leak of classified information that led to the identification of CIA agent Valerie Plame.

The Collapse of Barings Bank

When faced with growing losses, investment banker Nick Leeson took big risks in an attempt to get out from under the losses. He lost.

The Costco Model

How can companies promote positive treatment of employees and benefit from leading with the best practices? Costco offers a model.

The FBI & Apple Security vs. Privacy

How can tech companies and government organizations strike a balance between maintaining national security and protecting user privacy?

The Miss Saigon Controversy

When a white actor was cast for the half-French, half-Vietnamese character in the Broadway production of Miss Saigon, debate ensued.

The Sandusky Scandal

Following the conviction of assistant coach Jerry Sandusky for sexual abuse, debate continues on how much university officials and head coach Joe Paterno knew of the crimes.

The Varsity Blues Scandal

A college admissions prep advisor told wealthy parents that while there were front doors into universities and back doors, he had created a side door that was worth exploring.

Therac-25

Providing radiation therapy to cancer patients, Therac-25 had malfunctions that resulted in 6 deaths. Who is accountable when technology causes harm?

Welfare Reform

The Welfare Reform Act changed how welfare operated, intensifying debate over the government’s role in supporting the poor through direct aid.

Wells Fargo and Moral Emotions

In a settlement with regulators, Wells Fargo Bank admitted that it had created as many as two million accounts for customers without their permission.


Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality

  • Original Research/Scholarship
  • Open access
  • Published: 08 March 2021
  • Volume 27, article number 16 (2021)


  • Mark Ryan (ORCID: orcid.org/0000-0003-4850-0111)
  • Josephina Antoniou
  • Laurence Brooks
  • Tilimbe Jiya
  • Kevin Macnish
  • Bernd Stahl


This study investigates the ethical use of Big Data and Artificial Intelligence (AI) technologies (BD + AI) using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues, using qualitative tools to analyse findings from ten targeted case studies drawn from a range of domains. The analysis coalesces the singular ethical issues identified in the literature into clusters, offering a comparison with the classification proposed in the literature. The results show that, despite the variety of social domains, fields, and applications of AI, there is overlap and correlation between the organisations’ ethical concerns. This more detailed understanding of ethics in AI + BD is required to ensure that the multitude of suggested ways of addressing them can be targeted and can succeed in mitigating the pertinent ethical issues that are often discussed in the literature.


Introduction

Big Data and Artificial Intelligence (BD + AI) are emerging technologies that offer great potential for business, healthcare, the public sector, and development agencies alike. The increasing impact of these two technologies, and their combined potential in these sectors, can be seen in diverse organisational aspects such as the customisation of organisational processes and automated decision-making. The combination of Big Data and AI, often in the form of machine learning applications, can better exploit the granularity of data and analyse it to offer better insights into behaviours, incidents, and risk, eventually aiming at positive organisational transformation.

Big Data offers fresh and interesting insights into structural patterns, anomalies, and decision-making in a broad range of applications (Cuquet & Fensel, 2018), while AI provides predictive foresight, intelligent recommendations, and sophisticated modelling. The integration and combination of AI + BD offer phenomenal potential for correlating, predicting, and prescribing recommendations in insurance, human resources (HR), agriculture, and energy, as well as many other sectors. While BD + AI provide a wide range of benefits, they also pose risks to users, including but not limited to privacy infringements, threats of unemployment, discrimination, security concerns, and increasing inequalities (O’Neil, 2016). Adequate and timely policy needs to be implemented to prevent many of these risks from occurring.

One of the main limitations preventing key decision-making for ethical BD + AI use is that few rigorous empirical studies have been carried out on the ethical implications of these technologies across multiple application domains. This makes it difficult for policymakers and developers to identify whether ethical issues resulting from BD + AI use are relevant only for isolated domains and applications, or whether there are repeated, universal concerns that cut across different sectors. While the field lacks literature evaluating ethical issues ‘on the ground’, there are even fewer multi-case evaluations.

This paper provides a cohesive multi-case study analysis across ten different application domains, including domains such as government, agriculture, insurance, and the media. It reviews ethical concerns found within these case studies to establish cross-cutting thematic issues arising from the implementation and use of BD + AI. The paper collects relevant literature and proposes a simple classification of ethical issues (short term, medium term, long term), which is then juxtaposed with the ethical concerns highlighted from the multiple-case study analysis. This multiple-case study analysis of BD + AI offers an understanding of current organisational practices.

The work described in this paper makes an important contribution to the literature, based on its empirical findings. By presenting the ethical issues across an array of application areas, the paper provides much-needed rigorous empirical insight into the social and organisational reality of ethics of AI + BD. Our empirical research brings together a collection of domains that gives a broad oversight about issues that underpin the implementation of AI. Through its empirical insights the paper provides a basis for a broader discussion of how these issues can and should be addressed.

This paper is structured in six main sections: this introduction is followed by a literature review, which allows for an integrated review of ethical issues, contrasting them with those found in the cases. This provides the basis for a categorisation or classification of ethical issues in BD + AI. The third section contains a description of the interpretivist qualitative case study methodology used in this paper. The subsequent section provides an overview of the organisations participating in the cases to contrast similarities and divisions, while also comparing the diversity of their use of BD + AI. The fifth section provides a detailed analysis of the ethical issues derived from using BD + AI, as identified in the cases. The concluding section analyses the differences between the theoretical and empirical work and spells out implications and further work.

Literature Review

An initial challenge that any researcher faces when investigating ethical issues of AI + BD is that, due to the popularity of the topic, there is a vast and rapidly growing literature to be considered. Ethical issues of AI + BD are covered by a number of academic venues, including some specific ones such as the AAAI/ACM Conference on AI, Ethics, and Society ( https://dl.acm.org/doi/proceedings/10.1145/3306618 ), policy initiatives, and many publicly and privately financed research reports (Whittlestone, Nyrup, Alexandrova, Dihal, & Cave, 2019). Initial attempts to provide overviews of the area have been published (Jobin, 2019; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016), but there is no settled view on what counts as an ethical issue and why. In this paper we aim to provide a broad overview of issues found through the case studies. The paper puts forward what are commonly perceived to be ethical issues within the literature, or concerns that have ethical impacts and repercussions. We explicitly do not apply a particular philosophical framework of ethics but accept as ethical issues those issues that we encounter in the literature. This review is based on the authors' understanding of the current state of the literature. It is not a structured review and does not claim comprehensive coverage, but it does share some interesting insights.

To be able to undertake the analysis of ethical issues in our case studies, we sought to categorise the ethical issues found in the literature. There are potentially numerous ways of doing so, and our suggestion does not claim to be authoritative. Our suggestion is to order ethical issues in terms of their temporal horizon, i.e., the amount of time it is likely to take to address them. Time is a continuous variable, but we suggest that it is possible to sort the issues into three clusters: short term, medium term, and long term (see Fig. 1).

Fig. 1: Temporal horizon for addressing ethical issues

As suggested by Baum ( 2017 ), it is best to acknowledge that there will be ethical issues and related mitigating activities that cannot exclusively fit in as short, medium or long term.

Rather than seeing it as an authoritative classification, we see this as a heuristic that reflects aspects of the current discussion. One reason this categorisation is useful is that the temporal horizon of ethical issues is a potentially relevant variable: companies are often accused of favouring short-term gains over long-term benefits. Similarly, short-term issues must be addressable at the local level for short-term fixes to work.
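The three-cluster heuristic can be sketched as a simple tagging structure. This is a minimal illustration, not the paper's actual coding instrument: the issue names and horizon assignments below are invented for demonstration.

```python
# Sketch of the short/medium/long-term heuristic: each ethical issue is
# tagged with the horizon on which it can plausibly be addressed.
# Issue names and assignments are illustrative only.
from collections import defaultdict

HORIZONS = ("short", "medium", "long")

issues = {
    "privacy": "short",
    "transparency": "short",
    "algorithmic bias": "medium",
    "employment displacement": "medium",
    "machine consciousness": "long",
}

def by_horizon(tagged):
    """Group issues into the three temporal clusters."""
    clusters = defaultdict(list)
    for issue, horizon in tagged.items():
        assert horizon in HORIZONS, f"unknown horizon: {horizon}"
        clusters[horizon].append(issue)
    return dict(clusters)

clusters = by_horizon(issues)
```

As the surrounding text notes, the boundaries are fuzzy in practice (Baum, 2017): some issues straddle clusters, so a real coding scheme might allow multiple tags per issue rather than the single tag used here.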

Short-term issues

These are issues for which there is a reasonable assumption that they are capable of being addressed in the short term. We do not wish to quantify what exactly counts as short term, as any definition put forward will be contentious when analysing the boundaries and transition periods. A better definition of short term might therefore be that such issues can be expected to be successfully addressed in technical systems that are currently in operation or development. Many of the issues we discuss under the heading of short-term issues are directly linked to some of the key technologies driving the current AI debate, notably machine learning and some of its enabling techniques and approaches such as neural networks and reinforcement learning.

Many of the advantages promised by BD + AI involve the use of personal data, data which can be used to identify individuals. This includes health data; customer data; ANPR (Automated Number Plate Recognition) data; bank data; and even data about farmers’ land, livestock, and harvests. Issues surrounding privacy and control of data are widely discussed and recognised as major ethical concerns that need to be addressed (Boyd & Crawford, 2012; Tene & Polonetsky, 2012, 2013; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016; Jain, Gyanchandani, & Khare, 2016; Mai, 2016; Macnish, 2018). The concern surrounding privacy can be put down to a combination of a general awareness of privacy issues and the recently introduced General Data Protection Regulation (GDPR). Closely aligned with privacy are issues relating to the transparency of processes dealing with data, whose opacity can often be classified as internal, external, or deliberate (Burrell, 2016; Lepri, Staiano, Sangokoya, Letouzé, & Oliver, 2017; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016).

The Guidelines for Trustworthy AI Footnote 4 were released in 2018 by the High-Level Expert Group on Artificial Intelligence (AI HLEG Footnote 5), and address the need for technical robustness and safety, including accuracy, reproducibility, and reliability. Reliability is further linked to the requirements of diversity, fairness, and social impact, because it addresses freedom from bias from a technical point of view. The concept of reliability, when it comes to BD + AI, refers to the capability to verify the stability or consistency of a set of results (Bush, 2012; Ferraggine, Doorn, & Rivera, 2009; Meeker & Hong, 2014).

If a technology is unreliable, error-prone, or unfit for purpose, adverse ethical issues may result from decisions made by the technology. The accuracy of recommendations made by BD + AI is a direct consequence of the degree of reliability of the technology (Barolli, Takizawa, Xhafa, & Enokido, 2019). Bias and discrimination may be introduced into algorithms consciously or unconsciously by those employing the BD + AI, or because the algorithms reflect pre-existing biases (Barocas & Selbst, 2016). Documented examples of bias often reflect “an imbalance in socio-economic or other ‘class’ categories—ie, a certain group or groups are not sampled as much as others or at all” (Panch et al., 2019). Such systems have the potential to affect levels of inequality and discrimination, and if biases are not corrected they can reproduce existing patterns of discrimination and inherit the prejudices of prior decision makers (Barocas & Selbst, 2016, p. 674). An example of inherited prejudice is documented in the United States, where African-American citizens have, more often than not, been given longer prison sentences than Caucasians for the same crime.

Medium-term issues

Medium-term issues are not clearly linked to a particular technology but typically arise from the integration of AI techniques, including machine learning, into larger socio-technical systems and contexts. They are thus related to the way life in modern societies is affected by new technologies. These can be based on the specific issues listed above but have their main impact at the societal level. The use of BD + AI may allow individuals’ behaviour to be put under scrutiny and surveillance, leading to infringements on privacy, freedom, autonomy, and self-determination (Wolf, 2015). There is also the possibility that the increased use of algorithmic methods for societal decision-making may create a type of technocratic governance (Couldry & Powell, 2014; Janssen & Kuk, 2016), which could infringe on people’s decision-making processes (Kuriakose & Iyer, 2018). For example, because of the high levels of public data retrieval, BD + AI may harm people’s freedom of expression, association, and movement, through fear of surveillance and chilling effects (Latonero, 2018).

Corporations have a responsibility to the end-user to ensure compliance, accountability, and transparency of their BD + AI (Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016 ). However, when the source of a problem is difficult to trace, owing to issues of opacity, it becomes challenging to identify who is responsible for the decisions made by the BD + AI. It is worth noting that a large-scale survey in Australia in 2020 indicated that 57.9% of end-users are not at all confident that most companies take adequate steps to protect user data. The significance of understanding and employing responsibility is an issue targeted in many studies (Chatfield et al., 2017 ; Fothergill et al., 2019 ; Jirotka et al., 2017 ; Pellé & Reber, 2015 ). Trust and control over BD + AI as an issue is reiterated by a recent ICO report demonstrating that most UK citizens do not trust organisations with their data (ICO, 2017 ).

Justice is a central concern in BD + AI (Johnson, 2014 , 2018 ). As a starting point, justice consists in giving each person his or her due or treating people equitably (De George, p. 101). A key concern is that benefits will be reaped by powerful individuals and organisations, while the burden falls predominantly on poorer members of society (Taylor, 2017 ). BD + AI can also reflect human intentionality, deploying patterns of power and authority (Portmess & Tower, 2015 , p. 1). The knowledge offered by BD + AI is often in the hands of a few powerful corporations (Wheeler, 2016 ). Power imbalances are heightened because companies and governments can deploy BD + AI for surveillance, privacy invasions and manipulation, through personalised marketing efforts and social control strategies (Lepri, Staiano, Sangokoya, Letouzé, & Oliver, 2017 , p. 11). They play a role in the ascent of datafication, especially when specific groups (such as corporate, academic, and state institutions) have greater unrestrained access to big datasets (van Dijck, 2014 , p. 203).

Discrimination, in BD + AI use, can occur when individuals are profiled based on their online choices and behaviour, but also on their gender, ethnicity, and membership of specific groups (Calders, Kamiran, & Pechenizkiy, 2009; Cohen et al., 2014; Danna & Gandy, 2002). Data-driven algorithmic decision-making may lead to discrimination that is then adopted by decision-makers and those in power (Lepri, Staiano, Sangokoya, Letouzé, & Oliver, 2017, p. 4). Biases and discrimination can contribute to inequality. Some groups that are already disadvantaged may face worsening inequalities, especially if those belonging to historically marginalised groups have less access and representation (Barocas & Selbst, 2016, p. 685; Schradie, 2017). Inequality-enhancing biases can be reproduced in BD + AI, as in the use of predictive policing to target neighbourhoods of largely ethnic minorities or historically marginalised groups (O’Neil, 2016).

BD + AI offers great potential for increasing profit, reducing physical burdens on staff, and employing innovative sustainability practices (Badri, Boudreau-Trudel, & Souissi, 2018 ). They offer the potential to bring about improvements in innovation, science, and knowledge; allowing organisations to progress, expand, and economically benefit from their development and application (Crawford et al., 2014 ). BD + AI are being heralded as monumental for the economic growth and development of a wide diversity of industries around the world (Einav & Levin, 2014 ). The economic benefits accrued from BD + AI may be the strongest driver for their use, but BD + AI also holds the potential to cause economic harm to citizens and businesses or create other adverse ethical issues (Newman, 2013 ).

However, some in the literature view the co-development of employment and automation as a somewhat naïve outlook (Zuboff, 2015). BD + AI companies may benefit from a ‘post-labour’ automation economy, which may have a negative impact on the labour market (Bossman, 2016), replacing up to 47% of all US jobs within the next 20 years (Frey & Osborne, 2017). The professions most at risk of automation correspond to three of our case studies: farming, administration support, and the insurance sector (Frey & Osborne, 2017).

Long-term issues

Long-term issues are those pertaining to fundamental aspects of the nature of reality, society, or humanity: for example, the prospect that AI will develop capabilities far exceeding those of human beings (Kurzweil, 2006). At this point, sometimes called the ‘singularity’, machines that achieve human intelligence are expected to be able to improve on themselves, thereby surpassing human intelligence and becoming superintelligent (Bostrom, 2016). If this were to happen, it might have dystopian consequences for humanity, as often depicted in science fiction. It also stands to reason that superintelligent, or even just normally intelligent, machines may acquire a moral status.

It should be clear that these expectations are not universally shared. They refer to what is often called ‘ artificial general intelligence’ (AGI), a set of technologies that emulate human reasoning capacities more broadly. Footnote 6

Furthermore, humans may acquire new capabilities, e.g. by using technical implants to enhance human nature. The resulting being might be called a transhuman, the next step of human evolution or development. Again, it is important to underline that this is a contested idea (Livingstone, 2015), but one that has increasing traction in public discourse and popular science accounts (Harari, 2017).

We chose this distinction between three groups of issues to help contextualise mitigation strategies within organisations. We concede that this is one reading of the literature and that many others are possible. In this account of the literature we have tried to make sense of the current discourse, to allow us to understand our empirical findings, which are introduced in the following sections.

Case Study Methodology

Despite the impressive amount of research undertaken on the ethical issues of AI + BD (e.g. Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016; Zwitter, 2014), there are few case studies exploring such issues. This paper builds upon this research and employs an interpretivist methodology to do so, focusing on how, what, and why questions relevant to the ethical use of BD + AI (Walsham, 1995a, b). The primary research question for the case studies was: how do organisations perceive ethical concerns related to BD + AI, and in what ways do they deal with them?

We sought to elicit insights from interviews, rather than attempting to reach an objective truth about the ethical impacts of BD + AI. The interpretivist case study approach (Stake 2003) allowed the researchers ‘to understand ‘reality’ as the blending of the various (and sometimes conflicting) perspectives which coexist in social contexts, the common threads that connect the different perspectives and the value systems that give rise to the seeming contradictions and disagreements around the topics discussed. Whether one sees this reality as static (social constructivism) or dynamic (social constructionism) was also a point of consideration, as they both belong in the same “family” approach where methodological flexibility is as important a value as rigour’ (XXX).

Through extensive brainstorming within the research team, and evaluation of the relevant literature, 16 social application domains were established as topics for case study analysis. Footnote 7 The project focused on ten of these application domains, in accordance with the partners’ competencies. Each case study had its own unique focus, specifications, and niche, which added to the richness of the evaluations (Table 1).

The qualitative analysis approach adopted in this study focused on ten standalone operational case studies directly related to the application domains presented in Table 1. These individual case studies provide valuable insights (Yin, 2014, 2015); however, a multiple-case study approach offers a more comprehensive analysis of the ethical issues related to BD + AI use (Herriott & Firestone, 1983). Thus, this paper adopts a multiple-case study methodology to identify what insights can be obtained from the ten cases, whether any generalisable understandings can be retrieved, and how different organisations deal with issues pertaining to BD + AI development and use. In line with the principles of interpretive research, the paper does not attempt to derive universal findings from this analysis, but instead attempts to gain an in-depth understanding of the implications of selected BD + AI applications.

The data collection was guided by specific research questions identified through each case, including five desk research questions (see appendix 1); 24 interview questions (see appendix 2); and a checklist of 17 potential ethical issues, developed by the project leader Footnote 8 (see appendix 3). A thematic analysis framework was used to ‘highlight, expose, explore, and record patterns within the collected data. The themes were patterns across data sets that were important to describe several ethical issues which arise through the use of BD  +  AI across different types of organisations and application domains’ (XXX).

A workshop was held after the interviews had been carried out, bringing together the experts in the case study team to discuss their findings. This culminated in 26 ethical issues Footnote 9 that were inductively derived from the data collected throughout the interviews (see Fig. 2 and Table 3). Footnote 10 To ensure consistency and rigour in the multiple-case study approach, researchers followed a standardised case study protocol (Yin, 2014). Footnote 11

Figure 2

The Prevalence of Ethical Issues in the Case Studies

Thirteen different organisations were interviewed for 10 case studies, consisting of 22 interviews in total. Footnote 12 Interviews lasted from 30 min to one and a half hours and were conducted in person or via Skype. The participants selected for interview represented a very broad range of application domains and organisations that use BD + AI. The case study organisations were selected according to their relevance to the overall case study domains, considering their fit with those domains and their likelihood of providing interesting insights. The interviewees were then selected according to their ability to explain their BD + AI and its role in their organisation. In addition to the interviews, a document review provided supporting information about each organisation: websites and published material were used to provide background to the research.

Findings: Ten Case Studies

This section gives a brief overview of the cases, before analysing their similarities and differences. It also highlights the different types of BD + AI being used, and the types of data used by the BD + AI in the case study organisations, before conducting an ethical analysis of the cases. Table 2 presents an overview of the 10 cases to show the roles of the interviewees, the focus of the technologies being used, and the data retrieved by each organisation’s BD + AI. All interviews were conducted in English.

The types of organisations that were used in the case studies varied extensively. They included start-ups (CS10), niche software companies (CS1), national health insurers (Organisation X in CS6), national energy providers (CS7), a chemical/agricultural multinational (CS3), and national (CS9) and international (CS8) telecommunications providers. The case studies also included public (CS2, Organisations 1 and 4 in CS4) and semi-public (Organisation 2 in CS4) organisations, as well as a large scientific research project (CS5).

The types of individuals interviewed also varied extensively. For example, CS6 and CS7 did not have anyone with a specific technical background, which limited the possibility of analysing issues related to the technology itself. Some case studies only had technology experts (such as CS1, CS8, and CS9), who mostly concentrated on technical issues, with much less of a focus on ethical concerns. Other case studies had a combination of both technical and policy-focused experts (i.e. CS3, CS4, and CS5). Footnote 13

Therefore, it must be made clear that we are not proposing that all of the interviewees were authorities in the field, or even that collectively they represent a unified authority on the matter; instead, we hope to show what those currently working with AI on the ground view as ethical concerns. While the paper presents the ethical concerns found within an array of domains, we do not claim that any individual case study is representative of its entire industry; rather, our intent was to capture a wide diversity of viewpoints, domains, and applications of AI, to encompass a broad amalgamation of concerns. This is not a shortcoming of the study but the normal approach that social science often takes.

The organisations’ application focus areas also varied. Some organisations focused more on the Big Data component of their AI, while others focused more strictly on the AI programming and analytics. Even where organisations concentrated on a specific component, such as Big Data, its use varied immensely, including retrieval (CS1), analysis (CS2), predictive analytics (CS10), and transactional value (Organisation 2 in CS4). Some domains adopted BD + AI earlier and more emphatically than others (such as communications, healthcare, and insurance). The size, investment, and type of organisation also played a part in the level of BD + AI innovation (for example, the two large multinationals in CS3 and CS8 had well-developed BD + AI).

The maturity level of BD + AI was also determined by how it was integrated, and its importance, within an organisation. For instance, in organisations where BD + AI were fundamental for the success of the business (e.g. CS1 and CS10), they played a much more important role than in companies where there was less of a reliance (e.g. CS7). In some organisations, even when BD + AI was not central to success, the level of development was still quite advanced because of economic investment capabilities (e.g. CS3 and CS8).

These differences provided important questions to ask throughout this multi-case study analysis, such as: Do certain organisations respond to ethical issues relating to BD + AI in a certain way? Does the type of interviewee affect the ethical issues discussed—e.g. case studies without technical experts, those that only had technical experts, and those that had both? Does the type of BD + AI used impact the types of ethical issues discussed? What significance does the type of data retrieved have on ethical issues identified by the organisations? These inductive ethical questions provided a template for the qualitative analysis in the following section.

Ethical Issues in the Case Studies

Based on the interview data, the ethical issues identified in the case studies were grouped into six thematic sections to provide a concise and pragmatic structure for the analysis. Those six sections are: control of data, reliability of data, justice, economic issues, role of organisations, and human freedoms. Of the 26 ethical issues, privacy was the only one addressed in all 10 case studies, which was not surprising given the attention it has recently received because of the GDPR. Security, transparency, and algorithmic bias are also regularly discussed in the literature, so we expected them to be significant issues across many of the cases. However, many issues that receive less attention in the literature—such as access to BD + AI, trust, and power asymmetries—were discussed frequently in the interviews. In contrast, some ethical issues that are heavily discussed in the literature received far less attention in the interviews, such as employment, autonomy, and the criminal or malicious use of BD + AI (Fig. 2).

The ethical analysis was conducted using a combination of literature reviews and interviews carried out with stakeholders. The purpose of the interviews was to ensure that there were no obvious ethical issues faced by stakeholders in their day-to-day activities which had been missed in the academic literature. As such, the starting point was not an overarching normative theory, which might have meant that we looked for issues which fit well with the theory but ignored anything that fell outside of that theory. Instead the combined approach led to the identification of the 26 ethical issues, each labelled based on particular words or phrases used in the literature or by the interviewees. For example, the term "privacy" was used frequently and so became the label for references to and instances of privacy-relevant concerns. In this section we have clustered issues together based on similar problems faced (e.g. accuracy of data and accuracy of algorithms within the category of ‘reliability of data’).

To highlight similar ethical issues and better capture related perspectives, the research team decided to use clustering, a technique often used in data mining to group similar elements together. Through discussion in the research team, and bearing in mind that the purpose of the clustering process was to form clusters that would enhance understanding of the impact of these ethical issues, we arrived at the following six clusters: the control of data (covering privacy, security, and informed consent); the reliability of data (accuracy of data and accuracy of algorithms); justice (power asymmetries, justice, discrimination, and bias); economic issues (economic concerns, sustainability, and employment); the role of organisations (trust and responsibility); and human freedoms (autonomy, freedom, and human rights). Both the titles and the precise composition of each cluster are the outcome of a reasoned agreement within the research team; we could have used different titles and a different clustering. The point is not that each cluster forms a distinct group of ethical issues, independent of any other. Rather, the ethical issues overlap and play into one another, but to present them in a manageable format we have opted for this bottom-up clustering approach.
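The clustering itself is conceptual rather than computational, but the resulting issue-to-cluster mapping can be expressed as a simple lookup. The cluster names and member issues below are those listed in the text (the remaining labels among the 26 issues are omitted); the code is an illustrative sketch, not part of the study's method.

```python
# Illustrative sketch of the bottom-up clustering described above.
# Cluster names and member issues are taken from the text.
CLUSTERS = {
    "control of data": ["privacy", "security", "informed consent"],
    "reliability of data": ["accuracy of data", "accuracy of algorithms"],
    "justice": ["power asymmetries", "justice", "discrimination", "bias"],
    "economic issues": ["economic concerns", "sustainability", "employment"],
    "role of organisations": ["trust", "responsibility"],
    "human freedoms": ["autonomy", "freedom", "human rights"],
}

# Invert the mapping so any coded issue label resolves to its cluster.
ISSUE_TO_CLUSTER = {
    issue: cluster
    for cluster, issues in CLUSTERS.items()
    for issue in issues
}

print(ISSUE_TO_CLUSTER["privacy"])     # control of data
print(ISSUE_TO_CLUSTER["employment"])  # economic issues
```

Representing the clusters this way also makes the caveat in the text visible: an issue such as bias could plausibly sit in more than one cluster, and the single-valued lookup is a presentational simplification.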

Human Freedoms

An interviewee from CS10 stated that human rights were an integral part of the company’s ethics framework. This was beneficial to their business because incorporating human rights was a requirement for receiving public funding from the Austrian government. The company ensured that it would not grant ‘full exclusivity on generated social unrest event data to any single party, unless the data is used to minimise the risk of suppression of unrest events, or to protect the violation of human rights’ (XXX). This case demonstrates that while BD + AI has been criticised in the literature for infringing upon human rights, it also offers the opportunity to identify and prevent human rights abuses. The company’s moral framework clearly stemmed from regulatory and funding requirements, which speaks to the effectiveness of top-down approaches. This is a divisive topic in the literature, with diverging views about whether top-down or bottom-up approaches are better options for improved AI ethics.

Trust & Responsibility

Responsibility was a concern in five of the case studies, confirming the importance it is given in the literature (see Sect. 3). Trust appeared in seven of the case studies. The cases focused on concerns found in the literature, such as BD + AI use in policy development, public distrust of automated decision-making, and the integrity of corporations utilising datafication methods (van Dijck, 2014).

Trust in and control over BD + AI were issues throughout the case studies. The organisation from the predictive intelligence case study (CS10) identified that their use of social media data raised trust issues. They converged with perspectives found in the literature that when people feel disempowered to use or be part of the BD + AI development process, they tend to lose trust in the BD + AI (Accenture, 2016, 2017). In CS6, stakeholders (health insurers) trusted the decisions made by BD + AI when they were engaged and empowered to give feedback on how their data was used. Trust is enhanced when users can refuse the use of their data (CS7), which correlates with the literature. Companies discussed the benefits of establishing trustworthy relationships. For example, in CS9, they have “been trying really hard to avoid the existence of fake [mobile phone] base stations, because [these raise] an issue with the trust that people put in their networks” (XXX).

Corporations need to determine the objective of the data analysis (CS3), what data are required for the BD + AI to work (CS2), and who is accountable when it does not work as intended or causes undesirable outcomes (CS4). The issue here is whether the organisation takes direct responsibility for these outcomes or whether, if informed consent has been given, responsibility can be shared with the granter of that consent (CS3). The cases also raised the question of ‘responsible to whom’: the person whose data is being used, or the proxy organisation that provided the data (CS6)? For example, in the insurance case study, the company stated that it only had a responsibility towards the proxy organisation and not towards the sources of the data. All these issues are covered extensively in the literature in most application domains.

Control of Data

Concerns surrounding the control of data for privacy reasons can be put down to a general awareness of privacy issues in the press, reinforced by the recently-introduced GDPR. This was supported in the cases, where interviewees expressed the opinion that the GDPR had raised general awareness of privacy issues (CS1, CS9) or that it had lent weight to arguments concerning the importance of privacy (CS8).

The discussion of privacy ranged from stressing that it was not an issue for some interviewees, because there was no personal information in the data they used (CS4), to its being an issue for others, but one which was being dealt with (CS2 and CS8). One interviewee (CS5) expressed apprehension that privacy concerns conflicted with scientific innovation, introducing hitherto unforeseen costs. This view is not uncommon in scientific and medical innovation, where harms arising from the use of anonymised medical data are often seen as minimal and the potential benefits significant (Manson & O’Neill, 2007). In other cases (CS1), there was confusion between anonymisation (data which cannot be traced back to the originating source) and pseudonymisation (where data can be traced back, albeit with difficulty) of users’ data. A common response from the cases was that providing informed consent for the use of personal data waived some of the rights to privacy of the user.
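The anonymisation/pseudonymisation distinction can be made concrete with a minimal sketch. The record, the keyed-hash scheme, and the secret key are hypothetical illustrations, and real anonymisation requires far more care (e.g. against re-identification via quasi-identifiers); the point is only that pseudonymised data remain re-identifiable by whoever holds the key, whereas anonymised data do not.

```python
import hashlib
import hmac

SECRET_KEY = b"example-key"  # hypothetical key, held by the data controller

def pseudonymise(name: str) -> str:
    """Replace a direct identifier with a keyed token. Whoever holds
    SECRET_KEY can recompute the mapping, so the data can be traced
    back and remain personal data."""
    return hmac.new(SECRET_KEY, name.encode(), hashlib.sha256).hexdigest()[:12]

def anonymise(record: dict) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers so the
    record can no longer be traced back to the originating source."""
    return {
        "age_band": f"{(record['age'] // 10) * 10}s",  # 47 -> "40s"
        "diagnosis": record["diagnosis"],
    }

record = {"name": "Jane Doe", "age": 47, "diagnosis": "asthma"}

print(pseudonymise(record["name"]))  # stable token: same name, same token
print(anonymise(record))             # {'age_band': '40s', 'diagnosis': 'asthma'}
```

The confusion noted in CS1 is precisely the gap between these two functions: the first still yields data that the GDPR treats as personal, while only something like the second (done properly) takes the data outside that scope.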

Consent may come in the form of a company contract Footnote 14 or an individual agreement. Footnote 15 In the former, the company often has the advantage of legal support prior to entering a contract and so should be fully aware of the information provided. In individual agreements, though, the individual is less likely to be legally supported, and so may be at risk of exploitation through not reading the information sufficiently (CS3), or of responding without adequate understanding (CS9). In one case (CS5), referring to anonymised data, consent was implied rather than given: the interviewee suggested that those involved in the project may have contributed data without giving clear informed consent. The interviewee also noted that some data may have been shared without the permission, or indeed knowledge, of those contributing individuals. This was acknowledged by the interviewee as a potential issue.

In one case (CS6), data was used without informed consent for fraud detection purposes. The interviewees noted that their organisation was working within the parameters of national and EU legislation, which allows for non-consensual use of data for these ends. One interviewee in this case stated that informed consent was sought for every novel use of the data they held. However, this was sought from the perceived owner of the data (an insurance company) rather than from the originating individuals. This case demonstrates how people may expect their data to be used without having a full understanding of the legal framework under which the data are collected. For example, data relating to individuals may legally be accessed for fraud detection without notifying the individual and without relying on the individual’s consent.

This use of personal data for fraud detection in CS6 also led to concerns regarding opacity. In both CS6 and CS10 there was transparency within the organisations (a shared understanding among staff as to the various uses of the data), but this did not extend to the public outside those organisations. In some cases (CS5) this internal transparency/external opacity meant that those responsible for developing BD + AI were often hard to meet. Of those who were interviewed in CS5, many did not know the provenance of the data or the algorithms they were using. Equally, some organisations saw external opacity as integral to the business environment in which they were operating (CS9, CS10), for reasons of commercial advantage. The interviewee in CS9 cautioned that this approach, coupled with a lack of public education and the speed of transformation within the industry, would challenge any meaningful level of public accountability. This would render processes effectively opaque to the public, despite their being transparent to experts.

Reliability of Data

There can be multiple sources of unreliability in BD + AI. Unreliability originating from faults in the technology can lead to algorithmic bias, which can cause ethical issues such as unfairness, discrimination, and general negative social impact (CS3 and CS6). Considering algorithmic bias as a key input to data reliability, there are two types of issue that may need to be addressed. First, bias may stem from the input (training) data, if such data do not adequately represent the world, e.g. gender-biased datasets (CS6). Second, an inadequate representation of the world may result from a lack of data: a correctly designed algorithm built to learn from and predict a rare disease may not have sufficient representative data to achieve correct predictions (CS5). In either case the input data are biased and may result in inaccurate decision-making and recommendations.
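The first kind of bias, an under-sampled group in the training data, can be illustrated with a deliberately minimal sketch. The data are synthetic and the "model" is a toy majority-class rule; none of this is drawn from the case studies.

```python
from collections import Counter, defaultdict

# Synthetic training set of (group, outcome) pairs. Group "B" is
# heavily under-sampled relative to group "A".
training = [("A", "repay")] * 95 + [("B", "default")] * 4 + [("B", "repay")] * 1

def fit_global_majority(data):
    # Ignores group structure entirely: predicts the overall majority class.
    return Counter(outcome for _, outcome in data).most_common(1)[0][0]

def fit_per_group(data):
    # Learns a separate majority class for each group.
    by_group = defaultdict(Counter)
    for group, outcome in data:
        by_group[group][outcome] += 1
    return {group: c.most_common(1)[0][0] for group, c in by_group.items()}

global_model = fit_global_majority(training)
per_group_model = fit_per_group(training)

print(global_model)          # 'repay': group B's pattern is drowned out
print(per_group_model["B"])  # 'default': visible once B is examined separately
```

The globally fitted rule is accurate for the over-sampled group A but systematically wrong for group B, and the error only becomes visible once group-level performance is inspected, which echoes the need for representative data noted above.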

Issues of data reliability stemming from data accuracy and/or algorithmic bias may escalate depending on their use, as for example in predictive or risk-assessment algorithms (CS10). Consider the risks of unreliable data in employee monitoring (CS1), in detecting pests and diseases in agriculture (CS3), in human brain research (CS5), or in cybersecurity applications (CS8). Such issues are not singular in nature but closely linked to other ethical issues such as information asymmetries, trust, and discrimination. Consequently, the umbrella issue of data reliability must be approached from different perspectives to ensure the validity of the decision-making processes of the BD + AI.

Justice

Data may over-represent some people or social groups who are likely to be already privileged, or under-represent disadvantaged and vulnerable groups (CS3). Furthermore, people who are better positioned to gain access to data, and who have the expertise to interpret them, may have an unfair advantage over people devoid of such competencies. In addition, BD + AI can work as a tool of disciplinary power, used to evaluate people’s conformity to norms representing the standards of disciplinary systems (CS5). We focus on the following aspects of justice in our case study analysis: power asymmetries, discrimination, inequality, and access.

The fact that issues of power can arise in public as well as private organisations was discussed in our case studies. The smart city case (CS4) showed that the public organisations were aware of potential problems arising from companies using public data and were trying to put legal safeguards in place to avoid such misuse. Without such safeguards, there is the potential that cities, or the companies with which they contract, may use data in harmful or discriminatory ways. Our case study on the use of BD + AI in scientific research showed that the interviewees were acutely aware of the potential for discrimination (CS10). They stated that biases in the data may not be easy to identify and may lead to misclassification or misinterpretation of findings, which may in turn skew results. Discrimination refers to the recognition of difference, but it may also refer to the unjust treatment of different categories of people based on their gender, sex, religion, race, class, or disability. BD + AI are often employed to distinguish between different cases, e.g. between normal and abnormal behaviour in cybersecurity. Determining whether such classification entails discrimination in the latter sense can be difficult, due to the nature of the data and algorithms involved.

Examples of potential inequality based on BD + AI could be seen in several case studies. The agricultural case (CS3) highlighted the power differential between farmers and companies, with potential implications for inequality, but also the global inequality between farmers, linked to farming practices in different countries. Subsistence farmers in developing countries, for example, might find it more difficult to benefit from these technologies than large agro-businesses. The diverging levels of access to BD + AI entail different levels of ability to benefit from them and to counteract possible disadvantages (CS3). Some companies restrict access to their data entirely, others sell access for a fee, while still others offer small datasets to university-based researchers (Boyd & Crawford, 2012, p. 674).

Economic Issues

One economic impact of BD + AI outlined in the agriculture case study (CS3) concerned whether these technologies, and their ethical implementation, were economically affordable. If BD + AI could not improve economic efficiency, they would be rejected by the end-user, even if they were the more productive, sustainable, and ethical option. This is striking, as it raises a serious challenge for the AI ethics literature and industry: no matter how well intentioned and principled AI ethics guidelines and charters are, unless they can be implemented in an economically viable way, their implementation will be challenged and resisted by those footing the bill.

The telecommunications case study (CS9) focused on how GDPR legislation may economically impact businesses using BD + AI by creating disparities in competitiveness between EU and non-EU companies developing BD + AI. Owing to the larger data pools of the latter, their BD + AI may prove more effective than European-manufactured alternatives, which cannot bypass the ethical boundaries of European law in the same way (CS8). This is also being addressed in the literature and is a very serious concern for the future profitability and development of AI in Europe (Wallace & Castro, 2018). The literature notes additional issues in this area that were not covered in the cases: the GDPR may increase the costs of European AI companies, which have to manually review algorithmic decision-making; the right to explanation could reduce AI accuracy; and the right to erasure could damage AI systems (Wallace & Castro, 2018, p. 2).

One interviewee stated that public–private BD + AI projects should be conducted in a collaborative manner, rather than as a sale of service (CS4). However, this harmonious partnership is often not possible. Another interviewee discussed the tension between public and private interests on their project: while the municipality tried to focus on citizen value, the ICT company focused on the project's economic success. The interviewee stated that the project would have terminated earlier if it had been the company's decision, because it was unprofitable (CS4). This is a major concern in the literature: that private interests will cloud, influence, and damage public decision-making within the city because of their sometimes-incompatible goals (citizen value vs. economic growth) (Sadowski & Pasquale, 2015). One interviewee said that the municipality officials were aware of the problems of corporate influence and were thus attempting to implement the approach of 'data sovereignty' (CS2).

During our interviews, some viewed BD + AI as complementary to human employment (CS3), collaborative with such employment (CS4), or as a replacement for employment (CS6). The interviewees from the agriculture case study (CS3) stated that their BD + AI were not sufficiently advanced to replace humans and were meant to complement the agronomist rather than replace them. However, they did not indicate what would happen when the technology is advanced enough and it becomes profitable to replace the agronomist. The insurance company interviewee (CS6) stated that they use BD + AI to reduce flaws in personal judgement. The literature also supports this viewpoint, where BD + AI are seen to offer the potential to evaluate cases impartially, which is beneficial to the insurance industry (Belliveau, Gray, & Wilson, 2019). The interviewee reiterated this and also stated that BD + AI would reduce the number of people required to work on fraud cases. The interviewee stated that BD + AI are designed to replace these individuals, but did not indicate whether their jobs were secure or whether they would be retrained for different positions, highlighting a concern found in the literature about the replacement and unemployment of workers by AI (Bossman, 2016). In contrast, a municipality interviewee from CS4 stated that their chat-bots are used in a collaborative way to assist customer service agents, allowing them to concentrate on higher-level tasks, and that clear policies are in place to protect their jobs.

Sustainability was only explicitly discussed in two interviews (CS3 and CS4). The agriculture interviewees stated that they wanted to be the 'first' to incorporate sustainability metrics into agricultural BD + AI, indicating a competitive and innovative rationale for their company (CS3). The interviewee from the sustainable development case study (CS4), by contrast, stated that their goal in using BD + AI was to reduce CO2 emissions and improve energy and air quality. He stated that there are often tensions between ecological and economic goals and that this tension tends to slow down the efforts of BD + AI public–private projects, an observation also supported by the literature (Keeso, 2014). This tension between public and private interests in BD + AI projects was a recurring issue throughout the cases, and it will be the focus of the next section on the role of organisations.

Discussion and Conclusion

The motivation behind this paper is to come to a better understanding of ethical issues related to BD + AI based on a rich empirical basis across different application domains. The exploratory and interpretive approach chosen for this study means that we cannot generalise from our research to all possible examples of BD + AI, but it does allow us to generalise to theory and rich insights (Walsham, 1995a , b , 2006 ). These theoretical insights can then provide the basis for further empirical research, possibly using other methods to allow an even wider set of inputs to move beyond some of the limitations of the current study.

Organisational Practice and the Literature

The first point worth stating is that there is a high level of consistency both among the case studies and between cases and literature. Many of the ethical issues identified cut across the cases and are interpreted in similar ways by different stakeholders. The frequency distribution of ethical issues indicates that very few, if any, issues are relevant to all cases but many, such as privacy, have a high level of prevalence. Despite appearing in all case studies, privacy was not seen as overly problematic and could be dealt with in the context of current regulatory principles (GDPR). Most of the issues that we found in the literature (see Sect.  2 ) were also present in the case studies. In addition to privacy and data protection, this included accuracy, reliability, economic and power imbalances, justice, employment, discrimination and bias, autonomy and human rights and freedoms.

Beyond the general confirmation of the relevance of topics discussed in the literature, though, the case studies provide some further interesting insights. From the perspective of an individual case, some societal factors are taken for granted and outside of the control of individual actors. For example, intellectual property regimes have significant and well-recognised consequences for justice, as demonstrated in the literature. However, there is often little that individuals or organisations can do about them. Even in cases where individuals may be able to make a difference and the problem is clear, it is not always obvious how to do this. Some well-publicised discrimination cases may be easy to recognise, for example where an HR system discriminates against women or where a facial recognition system discriminates against black people. But in many cases it may be exceedingly difficult to recognise discrimination, because it is not clear how a person is discriminated against. If, for example, an image-based medical diagnostic system leads to disadvantages for people with certain genetic profiles, this may not be easy to identify.

With regard to the classification of the literature suggested in Sect. 2 along the temporal dimension, we can see that the attention of the case study respondents seems to be correlated with the temporal horizon of the issues. The issues we see as short-term figure most prominently, whereas the medium-term issues, while still relevant and recognisable, appear to be less pronounced. The long-term questions are least visible in the cases. This is not very surprising, as the short-term issues are those that are at least potentially capable of being addressed relatively quickly and thus must be accessible at the local level. Organisations deploying or using AI are therefore likely to have a responsibility to address these issues, and our case studies have shown that they are aware of this and are putting measures in place. This is clearly true for data protection and security issues. The medium-term issues that are less likely to find local resolutions still figure prominently, even though an individual organisation has less influence on how they can be addressed. Examples of this would be questions of unemployment, justice, or fairness. There was little reference to what we call long-term issues, which can partly be explained by the fact that the type of AI user organisations we investigated have very limited influence on how such issues are perceived and how they may be addressed.

Interpretative Differences on Ethical Issues

Despite general agreement on the terminology used to describe ethical issues, there are often important differences in interpretation and understanding. In the first ethics theme, control of data, perceptions of privacy ranged from 'not an issue' to an issue that was being dealt with. Some of this arose from the question of informed consent and the GDPR. However, a reliance on legislation such as the GDPR, without full knowledge of the intricacies of its details (i.e. that informed consent is only one of several legal bases of lawful data processing), may give rise to a false sense of security over people's perceived privacy. This was also linked to the issue of transparency (of processes dealing with data), which may be external to the organisation (do people outside understand how an organisation holds and processes their data), internal (how well does the organisation understand the algorithms developed internally), and may sometimes involve deliberate opacity (used in specific contexts where it is perceived as necessary, such as in monitoring political unrest and its possible consequences). Therefore, a clearer and more nuanced understanding of privacy and the other ethical terms raised here might well be useful, albeit tricky to derive in a public setting (for an example of complications in defining privacy, see Macnish, 2018).

Some issues from the literature were not mentioned in the cases, such as warfare. This can easily be explained by our choice of case studies, none of which drew on work done in this area. It indicates that even a set of 10 case studies falls short of covering all issues.

A further empirical insight is in the category we called ‘role of organisations’, which covers trust and responsibility. Trust is a key term in the discussion of the ethics of AI, prominently highlighted by the focus on trustworthy AI by the EU’s High-Level Expert Group, among others. We put this into the ‘role of organisations’ category because our interaction with the case study respondents suggested that they felt it was part of the role of their organisations to foster trust and establish responsibilities. But we are open to the suggestion that these are concepts on a slightly different level that may provide the link between specific issues in applications and broader societal debate.

Next Steps: Addressing the Ethics of AI and Big Data

This paper is predominantly descriptive, and it aims to provide a theoretically sound and empirically rich account of ethical concerns in AI + BD. While we hope that it proves to be insightful it is only a first step in the broader journey towards addressing and resolving these issues. The categorisation suggested here gives an initial indication of which type of actor may be called upon to address which type of issue. The distinction between micro-, meso- and macro perspectives suggested by Haenlein and Kaplan ( 2019 ) resonates to some degree with our categorisation of issues.

This points to the question of what can be done to address these ethical issues, and by whom. We have not touched on this question in the theoretical or empirical part of the paper, but the question of mitigation is the motivating force behind much of the AI + BD ethics research. The purpose of understanding these ethical questions is to find ways of addressing them.

This calls for a more detailed investigation of the ethical nature of the issues described here. As indicated earlier, we did not begin with a specific ethical theoretical framework imposed onto the case studies, but we did have some derived ethics concepts, which we explored within the context of the cases while allowing others to emerge over the course of the interviews. One issue is the philosophical question of whether the different ethical issues discussed here are of a similar or comparable nature, and what characterises them as ethical issues. This is not only a philosophical question but also a practical one for policymakers and decision-makers. We have alluded to the idea that privacy and data protection are ethical issues, but they also have strong legal implications and can be human rights issues as well. It would therefore be beneficial to undertake a further analysis to investigate which of these ethical issues are already regulated, to what degree current regulation covers BD + AI, and how this varies across the various EU nations and beyond.

Another step could be to expand an investigation like the one presented here to cover the ethics of AI + BD debate with a focus on suggested resolutions and policies. This could be achieved by adopting the categorisation and structure presented here and extending it to the currently discussed options for addressing the ethical issues. These include individual and collective activities ranging from technical measures to detect bias in data and individual professional guidance, to standardisation, legislation, the creation of a specific regulator, and many more. It will be important to understand how these measures are conceptualised, as well as which ones are already used and to what effect. Any such future work, however, will need to be based on a sound understanding of the issues themselves, to which this paper contributes. The key contribution of the paper, namely the presentation of empirical findings from 10 case studies, shows in more detail how ethical issues play out in practice. While this work can and should be expanded by including an even broader variety of cases and could be supplemented by other empirical research methods, it marks an important step in the development of our understanding of these ethical issues. This should form a part of the broader societal debate about what these new technologies can and should be used for and how we can ensure that their consequences are beneficial for individuals and society.

Throughout the paper, XXX will be used to anonymise relevant text that may identify the authors, either through the project and/or publications resulting from the individual case studies. All case studies have been published individually. Several of the XXX references in the findings refer to these individual publications, which provide more detail on the cases than can be provided in this cross-case analysis.

The ethical issues discussed throughout the case studies refer to issues broadly construed as ethical, or issues that have ethical significance. While it may not be immediately obvious how some of these are ethical issues, they may give rise to significant harm relevant to ethics. For example, accuracy of data may not explicitly be an ethical issue, but if inaccurate data are used in algorithms, this may lead to discrimination, unfair bias, or harms to individuals.

Such as chat-bots, natural language processing AI, IoT data retrieval, predictive risk analysis, cybersecurity machine-learning, and large dataset exchanges.

https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1 .

https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence .

The type of AI currently in vogue, as outlined earlier, is based on machine learning, typically employing artificial neural networks for big data analysis. This is typically seen as ‘narrow AI’ and it is not clear whether there is a way from narrow to general AI, even if one were to accept that achieving general AI is fundamentally possible.

The 16 social domains were: Banking and securities; Healthcare; Insurance; Retail and wholesale trade; Science; Education; Energy and utilities; Manufacturing and natural resources; Agriculture; Communications, media and entertainment; Transportation; Employee monitoring and administration; Government; Law enforcement and justice; Sustainable development; and Defence and national security.

This increased to 26 ethical issues following a group brainstorming session at the case study workshop.

The nine additional ethical issues from the initial 17 drafted by the project leader were: human rights, transparency, responsibility, ownership of data, algorithmic bias, integrity, human rights, human contact, and accuracy of data.

The additional ethical issues were access to BD + AI, accuracy of data, accuracy of recommendations, algorithmic bias, economic, human contact, human rights, integrity, ownership of data, responsibility, and transparency. Two of the initial ethical concerns were removed (inclusion of stakeholders and environmental impact). The issues raised concerning inclusion of stakeholders were deemed to be sufficiently included in access to BD + AI, and those relating to environmental impact were felt to be sufficiently covered by sustainability.

The three appendices attached in this paper comprise much of this case study protocol.

CS4 evaluated four organisations, but one of these organisations was also part of CS2 – Organisation 1. CS6 analysed two insurance organisations.

Starting out, we aimed to have both policy/ethics-focused experts within the organisation and individuals that could also speak with us about the technical aspects of the organisation’s BD + AI. However, this was often not possible, due to availability, organisations’ inability to free up resources (e.g. employee’s time) for interviews, or lack of designated experts in those areas.

For example, in CS1, CS6, and CS8.

For example, in CS2, CS3, CS4, CS5, CS6, and CS9.

As is discussed elsewhere in this paper, algorithms also hold the possibility of reinforcing our prejudices and biases or creating new ones entirely.

Accenture. (2016). Building digital trust: The role of data ethics in the digital age. Retrieved December 1, 2020 from https://www.accenture.com/t20160613T024441__w__/us-en/_acnmedia/PDF-22/Accenture-Data-Ethics-POV-WEB.pdf .

Accenture. (2017). Embracing artificial intelligence. Enabling strong and inclusive AI driven growth. Retrieved December 1, 2020 from https://www.accenture.com/t20170614T130615Z__w__/us-en/_acnmedia/Accenture/next-gen-5/event-g20-yea-summit/pdfs/Accenture-Intelligent-Economy.pdf .

Antoniou, J., & Andreou, A. (2019). Case study: The Internet of Things and Ethics. The Orbit Journal, 2 (2), 67.


Badri, A., Boudreau-Trudel, B., & Souissi, A. S. (2018). Occupational health and safety in the industry 4.0 era: A cause for major concern? Safety Science, 109, 403–411. https://doi.org/10.1016/j.ssci.2018.06.012


Barolli, L., Takizawa, M., Xhafa, F., & Enokido, T. (Eds.). (2019). Web, artificial intelligence and network applications: Proceedings of the workshops of the 33rd international conference on advanced information networking and applications. Springer.

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104 (671), 671–732. https://doi.org/10.15779/Z38BG31

Baum, S. D. (2017). Reconciliation between factions focused on near-term and long-term artificial intelligence. AI Society, 2018 (33), 565–572.

Belliveau, K. M., Gray, L. E., & Wilson, R. J. (2019). Busting the black box: Big data employment and privacy. https://www.iadclaw.org/publications-news/defensecounseljournal/busting-the-black-box-big-data-employment-and-privacy/ . Accessed 10 May 2019.

Bossman, J. (2016). Top 9 ethical issues in artificial intelligence. World Economic Forum . https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/ . Accessed 10 May 2019.

Bostrom, N. (2016). Superintelligence: Paths, dangers, strategies . Oxford University Press.

Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication and Society, 15 (5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data and Society, 3 (1), 2053951715622512.

Bush, T. (2012). Authenticity in research: Reliability, validity and triangulation. In A. R. J. Briggs, M. Coleman, & M. Morrison (Eds.), Research methods in educational leadership and management . SAGE Publications.

Calders, T., Kamiran, F., & Pechenizkiy, M. (2009). Building classifiers with independency constraints. In IEEE international conference data mining workshops , ICDMW’09, Miami, USA.

Chatfield, K., Iatridis, K., Stahl, B. C., & Paspallis, N. (2017). Innovating responsibly in ICT for ageing: Drivers, obstacles and implementation. Sustainability, 9 (6), 971. https://doi.org/10.3390/su9060971 .

Cohen, I. G., Amarasingham, R., Shah, A., et al. (2014). The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Affairs, 33 (7), 1139–1147.

Couldry, N., & Powell, A. (2014). Big Data from the bottom up. Big Data and Society, 1 (2), 205395171453927. https://doi.org/10.1177/2053951714539277

Crawford, K., Gray, M. L., & Miltner, K. (2014). Big data| critiquing big data: Politics, ethics, epistemology | special section introduction. International Journal of Communication, 8, 10.

Cuquet, M., & Fensel, A. (2018). The societal impact of big data: A research roadmap for Europe. Technology in Society, 54, 74–86.

Danna, A., & Gandy, O. H., Jr. (2002). All that glitters is not gold: Digging beneath the surface of data mining. Journal of Business Ethics, 40 (4), 373–438.

European Convention for the Protection of Human Rights and Fundamental Freedoms, pmbl., Nov. 4, 1950, 213 UNTS 221.

Herriott, R. E., & Firestone, W. (1983). Multisite qualitative policy research: Optimizing description and generalizability. Educational Researcher, 12, 14–19. https://doi.org/10.3102/0013189X012002014

Einav, L., & Levin, J. (2014). Economics in the age of big data. Science, 346 (6210), 1243089. https://doi.org/10.1126/science.1243089

Ferraggine, V. E., Doorn, J. H., & Rivera, L. C. (2009). Handbook of research on innovations in database technologies and applications: Current and future trends (pp. 1–1124). IGI Global.

Fothergill, B. T., Knight, W., Stahl, B. C., & Ulnicane, I. (2019). Responsible data governance of neuroscience big data. Frontiers in Neuroinformatics, 13 . https://doi.org/10.3389/fninf.2019.00028

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019

Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61 (4), 5–14.

Harari, Y. N. (2017). Homo deus: A brief history of tomorrow (1st ed.). Vintage.


ICO. (2017). Big data, artificial intelligence, machine learning and data protection. Retrieved December 1, 2020 from Information Commissioner’s Office website: https://iconewsblog.wordpress.com/2017/03/03/ai-machine-learning-and-personal-data/ .

Ioannidis, J. P. (2013). Informed consent, big data, and the oxymoron of research that is not research. The American Journal of Bioethics., 2, 15.

Jain, P., Gyanchandani, M., & Khare, N. (2016). Big data privacy: A technological perspective and review. Journal of Big Data, 3 (1), 25.

Janssen, M., & Kuk, G. (2016). The challenges and limits of big data algorithms in technocratic governance. Government Information Quarterly, 33 (3), 371–377. https://doi.org/10.1016/j.giq.2016.08.011

Jirotka, M., Grimpe, B., Stahl, B., Hartswood, M., & Eden, G. (2017). Responsible research and innovation in the digital age. Communications of the ACM, 60 (5), 62–68. https://doi.org/10.1145/3064940

Jiya, T. (2019). Ethical Implications Of Predictive Risk Intelligence. ORBIT Journal, 2 (2), 51.

Jiya, T. (2019). Ethical reflections of human brain research and smart information systems. The ORBIT Journal, 2 (2), 1–24.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Johnson, J. A. (2014). From open data to information justice. Ethics and Information Technology, 4 (16), 263–274.

Johnson, J. A. (2018). Open data, big data, and just data. In J. A. Johnson (Ed.), Toward information justice (pp. 23–49). Berlin: Springer.


Kancevičienė, N. (2019). Insurance, smart information systems and ethics: a case study. The ORBIT Journal, 2 (2), 1–27.

Keeso, A. (2014). Big data and environmental sustainability: A conversation starter . Accessed 10 May 2019.

Kuriakose, F., & Iyer, D. (2018). Human Rights in the Big Data World (SSRN Scholarly Paper No. ID 3246969). Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=3246969 . Accessed 13 May 2019.

Kurzweil, R. (2006). The singularity is near . Gerald Duckworth & Co Ltd.

Latonero, M. (2018). Big data analytics and human rights. New Technologies for Human Rights Law and Practice. https://doi.org/10.1017/9781316838952.007

Lepri, B., Staiano, J., Sangokoya, D., Letouzé, E., & Oliver, N. (2017). The tyranny of data? the bright and dark sides of data-driven decision-making for social good. In Transparent data mining for big and small data (pp. 3–24). Springer.

Livingstone, D. (2015). Transhumanism: The history of a dangerous idea . CreateSpace Independent Publishing Platform.

Macnish, K. (2018). Government surveillance and why defining privacy matters in a post-snowden world. Journal of Applied Philosophy, 35 (2), 417–432.

Macnish, K., & Inguanzo, A. (2019). Case study-customer relation management, smart information systems and ethics. The ORBIT Journal, 2 (2), 1–24.

Macnish, K., Inguanzo, A. F., & Kirichenko, A. (2019). Smart information systems in cybersecurity. ORBIT Journal, 2 (2), 15.

Mai, J. E. (2016). Big data privacy: The datafication of personal information. The Information Society, 32 (3), 192–199.

Manson, N. C., & O’Neill, O. (2007). Rethinking informed consent in bioethics . Cambridge University Press.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data and Society, 3 (2), 2053951716679679.

Meeker, W. Q., & Hong, Y. (2014). Reliability meets big data: Opportunities and challenges. Quality Engineering, 26 (1), 102–116.

Newman, N. (2013). The costs of lost privacy: Consumer harm and rising economic inequality in the age of google (SSRN Scholarly Paper No. ID 2310146). Rochester: Social Science Research Network. https://papers.ssrn.com/abstract=2310146 . Accessed 10 May 2019.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy . Crown Publishers.

Panch, T., Mattie, H., & Atun, R. (2019). Artificial intelligence and algorithmic bias: implications for health systems. Journal of global health, 9 (2).

Pellé, S., & Reber, B. (2015). Responsible innovation in the light of moral responsibility. Journal on Chain and Network Science, 15 (2), 107–117. https://doi.org/10.3920/JCNS2014.x017

Portmess, L., & Tower, S. (2015). Data barns, ambient intelligence and cloud computing: The tacit epistemology and linguistic representation of Big Data. Ethics and Information Technology, 17 (1), 1–9. https://doi.org/10.1007/s10676-014-9357-2

Ryan, M. (2019). Ethics of public use of AI and big data. ORBIT Journal, 2 (2), 15.

Ryan, M. (2019). Ethics of using AI and big data in agriculture: The case of a large agriculture multinational. The ORBIT Journal, 2 (2), 1–27.

Ryan, M., & Gregory, A. (2019). Ethics of using smart city AI and big data: The case of four large European cities. The ORBIT Journal, 2 (2), 1–36.

Sadowski, J., & Pasquale, F. A. (2015). The spectrum of control: A social theory of the smart city. First Monday, 20 (7), 16.

Schradie, J. (2017). Big data is too small: Research implications of class inequality for online data collection. In D. June & P. Andrea (Eds.), Media and class: TV, film and digital culture . Abingdon: Taylor and Francis.

Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data and Society , 1–14. https://doi.org/10.1177/2053951717736335

Tene, O., & Polonetsky, J. (2012). Big data for all: Privacy and user control in the age of analytics. The Northwestern Journal of Technology and Intellectual Property, 11, 10.

Tene, O., & Polonetsky, J. (2013). A theory of creepy: technology, privacy and shifting social norms. Yale JL and Technology, 16, 59.

Van Dijck, J., & Poell, T. (2013). Understanding social media logic. Media and Communication, 1 (1), 2–14.

Voinea, C., & Uszkai, R. (n.d.). An assessement of algorithmic accountability methods .

Walsham, G. (1995). Interpretive case studies in IS research: nature and method. European Journal of Information Systems, 4 (2), 74–81.

Wallace, N., & Castro, D. (2018) The Impact of the EU’s New Data Protection Regulation on AI, Centre for Data Innovation .

Walsham, G. (1995). Interpretive case-studies in IS research-nature and method. European Journal of Information Systems, 4 (2), 74–81.

Walsham, G. (2006). Doing interpretive research. European Journal of Information Systems, 15 (3), 320–330.

Wheeler, G. (2016). Machine epistemology and big data. In L. McIntyre & A. Rosenburg (Eds.), Routledge Companion to Philosophy of Social Science . Routledge.

Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: A roadmap for research. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf .

Wolf, B. (2015). Burkhardt Wolf: Big data, small freedom? / Radical Philosophy. Radical Philosophy . https://www.radicalphilosophy.com/commentary/big-data-small-freedom . Accessed 13 May 2019.

Yin, R. K. (2014). Case study research: Design and methods (5th ed.). SAGE.

Yin, R. K. (2015). Qualitative research from start to finish . Guilford Publications.

Zwitter, A. (2014). Big data ethics. Big Data and Society, 1 (2), 51.

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization (April 4, 2015). Journal of Information Technology, 2015 (30), 75–89. https://doi.org/10.1057/jit.2015.5


Acknowledgements

This SHERPA Project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 786641. The author(s) acknowledge the contribution of the consortium to the development and design of the case study approach.

Author information

Authors and Affiliations

Wageningen Economic Research, Wageningen University and Research, Wageningen, The Netherlands

UCLan Cyprus, Larnaka, Cyprus

Josephina Antoniou

De Montfort University, Leicester, UK

Laurence Brooks & Bernd Stahl

Northampton University, Northampton, UK

Tilimbe Jiya

The University of Twente, Enschede, The Netherlands

Kevin Macnish


Corresponding author

Correspondence to Mark Ryan .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix 1: Desk Research Questions

1. In which sector is the organisation located (e.g. industry, government, NGO, etc.)?

2. What is the name of the organisation?

3. What is the geographic scope of the organisation?

4. What is the name of the interviewee?

5. What is the interviewee’s role within the organisation?

Appendix 2: Interview Research Questions

1. What involvement has the interviewee had with BD + AI within the organisation?

2. What type of BD + AI is the organisation using? (e.g. IBM Watson, Google DeepMind)

3. What is the field of application of the BD + AI? (e.g. administration, healthcare, retail)

4. Does the BD + AI work as intended, or are there problems with its operation?

5. What are the innovative elements introduced by the BD + AI? (e.g. what has the technology enabled within the organisation?)

6. What is the level of maturity of the BD + AI? (i.e. has the technology been used for long at the organisation? Is it a recent development or an established approach?)

7. How does the BD + AI interact with other technologies within the organisation?

8. What are the parameters/inputs used to inform the BD + AI? (e.g. which sorts of data are input, and how is the data understood within the algorithm?) Does the BD + AI collect and/or use data which identifies, or can be used to identify, a living person (personal data)? Does the BD + AI collect personal data without the consent of the person to whom those data relate?

9. What are the principles informing the algorithm used in the BD + AI? (e.g. does the algorithm assume that people walk in similar ways, or that loitering involves not moving outside a particular radius in a particular time frame?) Does the BD + AI classify people into groups? If so, how are these groups determined? Does the BD + AI identify abnormal behaviour? If so, what counts as abnormal behaviour to the BD + AI?

10. Are there policies in place governing the use of the BD + AI?

11. How transparent is the technology to administrators and users within the organisation?

12. Who are the stakeholders in the organisation?

13. What has been the impact of the BD + AI on stakeholders?

14. How transparent is the technology to people outside the organisation?

15. Are those stakeholders engaged with the BD + AI? (e.g. are those affected aware of the BD + AI, and do they have any say in its operation?) If so, what is the nature of this engagement? (focus groups, feedback, etc.)

16. In what way are stakeholders impacted by the BD + AI? (e.g. what is the societal impact: are there issues of inequality, fairness, safety, filter bubbles, etc.?)

17. What are the costs of using the BD + AI to stakeholders? (e.g. potential loss of privacy, loss of potential to sell information, potential loss of reputation)

18. What is the expected longevity of this impact? (e.g. is this expected to be temporary or long-term?)

Appendix 3: Checklist of Ethical Issues

Rights and Permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Ryan, M., Antoniou, J., Brooks, L. et al. Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality. Sci Eng Ethics 27, 16 (2021). https://doi.org/10.1007/s11948-021-00293-x


Received: 26 August 2019

Accepted: 10 February 2021

Published: 08 March 2021


Keywords:

  • Smart information systems
  • Big data analytics
  • Artificial intelligence ethics
  • Multiple-case study analysis
  • Philosophy of technology


Addressing ethical issues in your research proposal

This article explores the ethical issues that may arise in your proposed study during your doctoral research degree.

What ethical principles apply when planning and conducting research?

Research ethics are the moral principles that govern how researchers conduct their studies (Wellcome Trust, 2014). As there are elements of uncertainty and risk involved in any study, every researcher has to consider how they can uphold these ethical principles and conduct the research in a way that protects the interests and welfare of participants and other stakeholders (such as organisations).  

You will need to consider the ethical issues that might arise in your proposed study. Consideration of the fundamental ethical principles that underpin all research will help you to identify the key issues and how these could be addressed. As you are probably a practitioner who wants to undertake research within your workplace, consider how your role as an ‘insider’ influences how you will conduct your study. Think about the ethical issues that might arise when you become an insider researcher (for example, relating to trust, confidentiality and anonymity).  

What key ethical principles do you think will be important when planning or conducting your research, particularly as an insider? Principles that come to mind might include autonomy, respect, dignity, privacy, informed consent and confidentiality. You may also have identified principles such as competence, integrity, wellbeing, justice and non-discrimination.  

Key ethical issues that you will address as an insider researcher include:

  • Gaining trust
  • Avoiding coercion when recruiting colleagues or other participants (such as students or service users)
  • Practical challenges relating to ensuring the confidentiality and anonymity of organisations and staff or other participants.

(Heslop et al, 2018)

A fuller discussion of ethical principles is available from the British Psychological Society’s Code of Human Research Ethics (BPS, 2021).

You can also refer to guidance from the British Educational Research Association and the British Association for Applied Linguistics.


Ethical principles are essential for protecting the interests of research participants, including maximising the benefits and minimising any risks associated with taking part in a study. These principles describe ethical conduct which reflects the integrity of the researcher, promotes the wellbeing of participants and ensures high-quality research is conducted (Health Research Authority, 2022).  

Research ethics is therefore not simply about gaining ethical approval for your study to be conducted. Research ethics relates to your moral conduct as a doctoral researcher and will apply throughout your study from design to dissemination (British Psychological Society, 2021). When you apply to undertake a doctorate, you will need to clearly indicate in your proposal that you understand these ethical principles and are committed to upholding them.  

Where can I find ethical guidance and resources? 

Professional bodies, learned societies, health and social care authorities, academic publications, Research Ethics Committees and research organisations provide a range of ethical guidance and resources. International codes such as the Universal Declaration of Human Rights underpin ethical frameworks (United Nations, 1948).  

You may be aware of key legislation in your own country or the country where you plan to undertake the research, including laws relating to consent, data protection and decision-making capacity, for example, the Data Protection Act 2018 (UK). If you want to find out more about becoming an ethical researcher, see The Open University's free short course, Becoming an ethical researcher.

You should be able to justify the research decisions you make. Utilising these resources will guide your ethical judgements when writing your proposal and ultimately when designing and conducting your research study. The Ethical Guidelines for Educational Research (British Educational Research Association, 2018) identifies the key responsibilities you will have when you conduct your research, including the range of stakeholders that you will have responsibilities to, as follows:   

  • your participants (e.g. appropriately informing them, facilitating their participation and supporting them)
  • clients, stakeholders and sponsors
  • the community of educational or health and social care researchers
  • publication and dissemination
  • your own wellbeing and development

The National Institute for Health and Care Research (no date) has emphasised the need to promote equality, diversity and inclusion when undertaking research, particularly to address long-standing social and health inequalities. Research should be informed by the diversity of people’s experiences and insights, so that it will lead to the development of practice that addresses genuine need. A commitment to equality, diversity and inclusion aims to eradicate prejudice and discrimination on the basis of an individual's or group's protected characteristics, such as sex (gender), disability, race or sexual orientation, in line with the Equality Act 2010.

The NIHR (2020) has produced guidance for enhancing the inclusion of ‘under-served groups’ when designing a research study. Although the guidance refers to clinical research, it is relevant to research more broadly.

You should consider how you will promote equality and diversity in your planned study, including through aspects such as your research topic or question, the methodology you will use, the participants you plan to recruit and how you will analyse and interpret your data.    

What ethical issues do I need to consider when writing my research proposal?


You might be planning to undertake research in a health, social care, educational or other setting, including observations and interviews. The following prompts should help you to identify key ethical issues that you need to bear in mind when undertaking research in such settings.  

1. Imagine you are a potential participant. Think about the questions and concerns that you might have:

  • How would you feel if a researcher sat in your space and took notes, completed a checklist, or made an audio or film recording?
  • What harm might a researcher cause by observing or interviewing you and others?
  • What would you want to know about the researcher and ask them about the study before giving consent?
  • When imagining you are the participant, how could the researcher make you feel more comfortable to be observed or interviewed? 

2. Having considered the perspective of your potential participant, how would you take account of concerns such as privacy, consent, wellbeing and power in your research proposal?

[Adapted from the OpenLearn course Becoming an ethical researcher, Week 2, Activity 3]

The ethical issues to be considered will vary depending on your organisational context/role, the types of participants you plan to recruit (for example, children, adults with mental health problems), the research methods you will use, and the types of data you will collect. You will need to decide how to recruit your participants so you do not inappropriately exclude anyone.  Consider what methods may be necessary to facilitate their voice and how you can obtain their consent to taking part or ensure that consent is obtained from someone else as necessary, for example, a parent in the case of a child. 

You should also think about how to avoid imposing an unnecessary burden or costs on your participants. For example, by minimising the length of time they will have to commit to the study and by providing travel or other expenses. Identify the measures that you will take to store your participants’ data safely and maintain their confidentiality and anonymity when you report your findings. You could do this by storing interview and video recordings in a secure server and anonymising their names and those of their organisations using pseudonyms.  
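As a concrete sketch of the pseudonymisation step described above (not part of the original article, and with illustrative function and salt names assumed here), a salted one-way hash can turn each participant name into a stable code: the same participant keeps the same pseudonym throughout a report, while real names never appear in it. The salt, and any lookup table from real names to pseudonyms, should live only on the secure server, separate from the reported findings.

```python
import hashlib

def pseudonymise(name: str, salt: str = "project-secret-salt") -> str:
    """Map a participant name to a stable pseudonym.

    The same name (case-insensitive) always yields the same pseudonym,
    so quotes from one participant remain linkable across a report
    without revealing their identity.
    """
    digest = hashlib.sha256((salt + name.strip().lower()).encode()).hexdigest()
    return f"Participant-{digest[:6].upper()}"

# Example: repeated names receive the same code automatically.
names = ["Alice Jones", "Bob Smith", "Alice Jones"]
pseudonyms = [pseudonymise(n) for n in names]
```

Because the hash is one-way, a reader of the report cannot recover the original names from the pseudonyms; only someone holding the secure lookup table (or the salt plus the original names) can re-identify participants.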

Professional codes such as the Code of Human Research Ethics (BPS, 2021) provide guidance on undertaking research with children. Being an ‘insider’ researching within your own organisation has advantages. However, you should also consider how this might impact on your research, such as power dynamics, consent, potential bias and any conflict of interest between your professional and researcher roles (Sapiro and Matthews, 2020).  

How have other researchers addressed any ethical challenges?

The literature provides researchers’ accounts explaining how they addressed ethical challenges when undertaking studies. For example, Turcotte-Tremblay and McSween-Cadieux (2018) discuss strategies for protecting participants’ confidentiality when disseminating findings locally, such as undertaking fieldwork in multiple sites and providing findings in a generalised form. In addition, professional guidance includes case studies illustrating how ethical issues can be addressed, including when researching online forums (British Sociological Association, 2016).

Watch the videos below and consider what insights the postgraduate researcher and supervisor provide  regarding issues such as being an ‘insider researcher’, power relations, avoiding intrusion, maintaining participant anonymity and complying with research ethics and professional standards. How might their experiences inform the design and conduct of your own study?

Postgraduate researcher and supervisor talk about ethical considerations

Your thoughtful consideration of the ethical issues that might arise and how you would address these should enable you to propose an ethically informed study and conduct it in a responsible, fair and sensitive manner. 

British Educational Research Association (2018)  Ethical Guidelines for Educational Research.  Available at:  https://www.bera.ac.uk/publication/ethical-guidelines-for-educational-research-2018  (Accessed: 9 June 2023).

British Psychological Society (2021)  Code of Human Research Ethics . Available at:  https://cms.bps.org.uk/sites/default/files/2022-06/BPS%20Code%20of%20Human%20Research%20Ethics%20%281%29.pdf  (Accessed: 9 June 2023).

British Sociological Association (2016)  Researching online forums . Available at:  https://www.britsoc.co.uk/media/24834/j000208_researching_online_forums_-cs1-_v3.pdf  (Accessed: 9 June 2023).

Health Research Authority (2022)  UK Policy Framework for Health and Social Care Research . Available at:  https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/uk-policy-framework-health-and-social-care-research/#chiefinvestigators  (Accessed: 9 June 2023).

Heslop, C., Burns, S. and Lobo, R. (2018) ‘Managing qualitative research as insider-research in small rural communities’,  Rural and Remote Health , 18: 4576.

Equality Act 2010, c. 15.  Available at:   https://www.legislation.gov.uk/ukpga/2010/15/introduction   (Accessed: 9 June 2023).

National Institute for Health and Care Research (no date)  Equality, Diversity and Inclusion (EDI) . Available at:  https://arc-kss.nihr.ac.uk/public-and-community-involvement/pcie-guide/how-to-do-pcie/equality-diversity-and-inclusion-edi  (Accessed: 9 June 2023).

National Institute for Health and Care Research (2020)  Improving inclusion of under-served groups in clinical research: Guidance from INCLUDE project.  Available at:   https://www.nihr.ac.uk/documents/improving-inclusion-of-under-served-groups-in-clinical-research-guidance-from-include-project/25435  (Accessed: 9 June 2023).

Sapiro, B. and Matthews, E. (2020) ‘Both Insider and Outsider. On Conducting Social Work Research in Mental Health Settings’,  Advances in Social Work , 20(3). Available at:  https://doi.org/10.18060/23926

Turcotte-Tremblay, A. and McSween-Cadieux, E. (2018) ‘A reflection on the challenge of protecting confidentiality of participants when disseminating research results locally’,  BMC Medical Ethics,  19(supplement 1), no. 45. Available at:   https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-018-0279-0

United Nations General Assembly (1948)  The Universal Declaration of Human Rights . Resolution A/RES/217/A. Available at:  https://www.un.org/en/about-us/universal-declaration-of-human-rights  (Accessed: 9 June 2023).

Wellcome Trust (2014)  Ensuring your research is ethical: A guide for Extended Project Qualification students . Available at:  https://wellcome.org/sites/default/files/wtp057673_0.pdf  (Accessed: 9 June 2023).


Publication details

  • Originally published: Tuesday, 27 June 2023
  • Body text - Creative Commons BY-NC-SA 4.0 : The Open University



Ethical Considerations – Types, Examples and Writing Guide

Ethical Considerations

Ethical considerations in research refer to the principles and guidelines that researchers must follow to ensure that their studies are conducted in an ethical and responsible manner. These considerations are designed to protect the rights, safety, and well-being of research participants, as well as the integrity and credibility of the research itself.

Some of the key ethical considerations in research include:

  • Informed consent: Researchers must obtain informed consent from study participants, which means they must inform participants about the study’s purpose, procedures, risks, benefits, and their right to withdraw at any time.
  • Privacy and confidentiality: Researchers must ensure that participants’ privacy and confidentiality are protected. This means that personal information should be kept confidential and not shared without the participant’s consent.
  • Harm reduction: Researchers must ensure that the study does not harm the participants physically or psychologically. They must take steps to minimize the risks associated with the study.
  • Fairness and equity: Researchers must ensure that the study does not discriminate against any particular group or individual. They should treat all participants equally and fairly.
  • Use of deception: Researchers must use deception only if it is necessary to achieve the study’s objectives. They must inform participants of the deception as soon as possible.
  • Use of vulnerable populations: Researchers must be especially cautious when working with vulnerable populations, such as children, pregnant women, prisoners, and individuals with cognitive or intellectual disabilities.
  • Conflict of interest: Researchers must disclose any potential conflicts of interest that may affect the study’s integrity. This includes financial or personal relationships that could influence the study’s results.
  • Data manipulation: Researchers must not manipulate data to support a particular hypothesis or agenda. They should report the results of the study objectively, even if the findings are not consistent with their expectations.
  • Intellectual property: Researchers must respect intellectual property rights and give credit to previous studies and research.
  • Cultural sensitivity: Researchers must be sensitive to the cultural norms and beliefs of the participants. They should avoid imposing their values and beliefs on the participants and should be respectful of their cultural practices.

Types of Ethical Considerations

Types of Ethical Considerations are as follows:

Research Ethics

This includes ethical principles and guidelines that govern research involving human or animal subjects, ensuring that the research is conducted in an ethical and responsible manner.

Business Ethics

This refers to ethical principles and standards that guide business practices and decision-making, such as transparency, honesty, fairness, and social responsibility.

Medical Ethics

This refers to ethical principles and standards that govern the practice of medicine, including the duty to protect patient autonomy, informed consent, confidentiality, and non-maleficence.

Environmental Ethics

This involves ethical principles and values that guide our interactions with the natural world, including the obligation to protect the environment, minimize harm, and promote sustainability.

Legal Ethics

This involves ethical principles and standards that guide the conduct of legal professionals, including issues such as confidentiality, conflicts of interest, and professional competence.

Social Ethics

This involves ethical principles and values that guide our interactions with other individuals and society as a whole, including issues such as justice, fairness, and human rights.

Information Ethics

This involves ethical principles and values that govern the use and dissemination of information, including issues such as privacy, accuracy, and intellectual property.

Cultural Ethics

This involves ethical principles and values that govern the relationship between different cultures and communities, including issues such as respect for diversity, cultural sensitivity, and inclusivity.

Technological Ethics

This refers to ethical principles and guidelines that govern the development, use, and impact of technology, including issues such as privacy, security, and social responsibility.

Journalism Ethics

This involves ethical principles and standards that guide the practice of journalism, including issues such as accuracy, fairness, and the public interest.

Educational Ethics

This refers to ethical principles and standards that guide the practice of education, including issues such as academic integrity, fairness, and respect for diversity.

Political Ethics

This involves ethical principles and values that guide political decision-making and behavior, including issues such as accountability, transparency, and the protection of civil liberties.

Professional Ethics

This refers to ethical principles and standards that guide the conduct of professionals in various fields, including issues such as honesty, integrity, and competence.

Personal Ethics

This involves ethical principles and values that guide individual behavior and decision-making, including issues such as personal responsibility, honesty, and respect for others.

Global Ethics

This involves ethical principles and values that guide our interactions with other nations and the global community, including issues such as human rights, environmental protection, and social justice.

Applications of Ethical Considerations

Ethical considerations are important in many areas of society, including medicine, business, law, and technology. Here are some specific applications of ethical considerations:

  • Medical research: Ethical considerations are crucial in medical research, particularly when human subjects are involved. Researchers must ensure that their studies are conducted in a way that does not harm participants and that participants give informed consent before participating.
  • Business practices: Ethical considerations are also important in business, where companies must make decisions that are socially responsible and avoid activities that are harmful to society. For example, companies must ensure that their products are safe for consumers and that they do not engage in exploitative labor practices.
  • Environmental protection: Ethical considerations play a crucial role in environmental protection, as companies and governments must weigh the benefits of economic development against the potential harm to the environment. Decisions about land use, resource allocation, and pollution must be made in an ethical manner that takes into account the long-term consequences for the planet and future generations.
  • Technology development: As technology continues to advance rapidly, ethical considerations become increasingly important in areas such as artificial intelligence, robotics, and genetic engineering. Developers must ensure that their creations do not harm humans or the environment and that they are developed in a way that is fair and equitable.
  • Legal system: The legal system relies on ethical considerations to ensure that justice is served and that individuals are treated fairly. Lawyers and judges must abide by ethical standards to maintain the integrity of the legal system and to protect the rights of all individuals involved.

Examples of Ethical Considerations

Here are a few examples of ethical considerations in different contexts:

  • In healthcare : A doctor must ensure that they provide the best possible care to their patients and avoid causing them harm. They must respect the autonomy of their patients, and obtain informed consent before administering any treatment or procedure. They must also ensure that they maintain patient confidentiality and avoid any conflicts of interest.
  • In the workplace: An employer must ensure that they treat their employees fairly and with respect, provide them with a safe working environment, and pay them a fair wage. They must also avoid any discrimination based on race, gender, religion, or any other characteristic protected by law.
  • In the media : Journalists must ensure that they report the news accurately and without bias. They must respect the privacy of individuals and avoid causing harm or distress. They must also be transparent about their sources and avoid any conflicts of interest.
  • In research: Researchers must ensure that they conduct their studies ethically and with integrity. They must obtain informed consent from participants, protect their privacy, and avoid any harm or discomfort. They must also ensure that their findings are reported accurately and without bias.
  • In personal relationships : People must ensure that they treat others with respect and kindness, and avoid causing harm or distress. They must respect the autonomy of others and avoid any actions that would be considered unethical, such as lying or cheating. They must also respect the confidentiality of others and maintain their privacy.

How to Write Ethical Considerations

When writing about research involving human subjects or animals, it is essential to include ethical considerations to ensure that the study is conducted in a manner that is morally responsible and in accordance with professional standards. Here are some steps to help you write ethical considerations:

  • Describe the ethical principles: Start by explaining the ethical principles that will guide the research. These could include principles such as respect for persons, beneficence, and justice.
  • Discuss informed consent : Informed consent is a critical ethical consideration when conducting research. Explain how you will obtain informed consent from participants, including how you will explain the purpose of the study, potential risks and benefits, and how you will protect their privacy.
  • Address confidentiality : Describe how you will protect the confidentiality of the participants’ personal information and data, including any measures you will take to ensure that the data is kept secure and confidential.
  • Consider potential risks and benefits : Describe any potential risks or harms to participants that could result from the study and how you will minimize those risks. Also, discuss the potential benefits of the study, both to the participants and to society.
  • Discuss the use of animals : If the research involves the use of animals, address the ethical considerations related to animal welfare. Explain how you will minimize any potential harm to the animals and ensure that they are treated ethically.
  • Mention the ethical approval : Finally, it’s essential to acknowledge that the research has received ethical approval from the relevant institutional review board or ethics committee. State the name of the committee, the date of approval, and any specific conditions or requirements that were imposed.

When to Write Ethical Considerations

Ethical considerations should be written whenever research involves human subjects or has the potential to impact human beings, animals, or the environment in some way. Ethical considerations are also important when research involves sensitive topics, such as mental health, sexuality, or religion.

In general, ethical considerations should be an integral part of any research project, regardless of the field or subject matter. This means that they should be considered at every stage of the research process, from the initial planning and design phase to data collection, analysis, and dissemination.

Ethical considerations should also be written in accordance with the guidelines and standards set by the relevant regulatory bodies and professional associations. These guidelines may vary depending on the discipline, so it is important to be familiar with the specific requirements of your field.

Purpose of Ethical Considerations

Ethical considerations are an essential aspect of many areas of life, including business, healthcare, research, and social interactions. The primary purposes of ethical considerations are:

  • Protection of human rights: Ethical considerations help ensure that people’s rights are respected and protected. This includes respecting their autonomy, ensuring their privacy is respected, and ensuring that they are not subjected to harm or exploitation.
  • Promoting fairness and justice: Ethical considerations help ensure that people are treated fairly and justly, without discrimination or bias. This includes ensuring that everyone has equal access to resources and opportunities, and that decisions are made based on merit rather than personal biases or prejudices.
  • Promoting honesty and transparency : Ethical considerations help ensure that people are truthful and transparent in their actions and decisions. This includes being open and honest about conflicts of interest, disclosing potential risks, and communicating clearly with others.
  • Maintaining public trust: Ethical considerations help maintain public trust in institutions and individuals. This is important for building and maintaining relationships with customers, patients, colleagues, and other stakeholders.
  • Ensuring responsible conduct: Ethical considerations help ensure that people act responsibly and are accountable for their actions. This includes adhering to professional standards and codes of conduct, following laws and regulations, and avoiding behaviors that could harm others or damage the environment.

Advantages of Ethical Considerations

Here are some of the advantages of ethical considerations:

  • Builds Trust : When individuals or organizations follow ethical considerations, it creates a sense of trust among stakeholders, including customers, clients, and employees. This trust can lead to stronger relationships and long-term loyalty.
  • Reputation and Brand Image : Ethical considerations are often linked to a company’s brand image and reputation. By following ethical practices, a company can establish a positive image and reputation that can enhance its brand value.
  • Avoids Legal Issues: Ethical considerations can help individuals and organizations avoid legal issues and penalties. By adhering to ethical principles, companies can reduce the risk of facing lawsuits, regulatory investigations, and fines.
  • Increases Employee Retention and Motivation: Employees tend to be more satisfied and motivated when they work for an organization that values ethics. Companies that prioritize ethical considerations tend to have higher employee retention rates, leading to lower recruitment costs.
  • Enhances Decision-making: Ethical considerations help individuals and organizations make better decisions. By considering the ethical implications of their actions, decision-makers can evaluate the potential consequences and choose the best course of action.
  • Positive Impact on Society: Ethical considerations have a positive impact on society as a whole. By following ethical practices, companies can contribute to social and environmental causes, leading to a more sustainable and equitable society.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Ethical Considerations In Psychology Research

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Ethics refers to the correct rules of conduct necessary when carrying out research. We have a moral responsibility to protect research participants from harm.

However important the issue under investigation, psychologists must remember that they have a duty to respect the rights and dignity of research participants. This means that they must abide by certain moral principles and rules of conduct.

What are Ethical Guidelines?

In Britain, ethical guidelines for research are published by the British Psychological Society, and in America, by the American Psychological Association. The purpose of these codes of conduct is to protect research participants, the reputation of psychology, and psychologists themselves.

Moral issues rarely yield a simple, unambiguous, right or wrong answer. It is, therefore, often a matter of judgment whether the research is justified or not.

For example, it might be that a study causes psychological or physical discomfort to participants; maybe they suffer pain or perhaps even come to serious harm.

On the other hand, the investigation could lead to discoveries that benefit the participants themselves or even have the potential to increase the sum of human happiness.

Rosenthal and Rosnow (1984) also discuss the potential costs of failing to carry out certain research. Who is to weigh up these costs and benefits? Who is to judge whether the ends justify the means?

Finally, if you are ever in doubt as to whether research is ethical or not, it is worthwhile remembering that if there is a conflict of interest between the participants and the researcher, it is the interests of the subjects that should take priority.

Studies must now undergo an extensive review by an institutional review board (US) or ethics committee (UK) before they are implemented. All UK research requires ethical approval by one or more of the following:

  • Department Ethics Committee (DEC) : for most routine research.
  • Institutional Ethics Committee (IEC) : for non-routine research.
  • External Ethics Committee (EEC) : for research that is externally regulated (e.g., NHS research).

Committees review proposals to assess if the potential benefits of the research are justifiable in light of the possible risk of physical or psychological harm.

These committees may request researchers make changes to the study’s design or procedure or, in extreme cases, deny approval of the study altogether.

The British Psychological Society (BPS) and American Psychological Association (APA) have issued a code of ethics in psychology that provides guidelines for conducting research.  Some of the more important ethical issues are as follows:

Informed Consent

Before the study begins, the researcher must outline to the participants what the research is about and then ask for their consent (i.e., permission) to participate.

An adult (18 years+) capable of giving consent can agree to participate in a study. Parents/legal guardians of minors can also provide consent to allow their children to participate in a study.

Whenever possible, investigators should obtain the consent of participants. In practice, this means it is not sufficient to get potential participants to say “Yes.”

They also need to know what it is that they agree to. In other words, the psychologist should, so far as is practicable, explain what is involved in advance and obtain the informed consent of participants.

Informed consent must be informed, voluntary, and rational. Participants must be given relevant details to make an informed decision, including the purpose, procedures, risks, and benefits. Consent must be given voluntarily without undue coercion. And participants must have the capacity to rationally weigh the decision.

Components of informed consent include clearly explaining the risks and expected benefits, addressing potential therapeutic misconceptions about experimental treatments, allowing participants to ask questions, and describing methods to minimize risks like emotional distress.

Investigators should tailor the consent language and process appropriately for the study population. Obtaining meaningful informed consent is an ethical imperative for human subjects research.

The voluntary nature of participation should not be compromised through coercion or undue influence. Inducements should be fair and not excessive/inappropriate.

However, it is not always possible to gain informed consent.  Where the researcher can’t ask the actual participants, a similar group of people can be asked how they would feel about participating.

If they think it would be OK, then it can be assumed that the real participants will also find it acceptable. This is known as presumptive consent.

However, a problem with this method is that there might be a mismatch between how people think they would feel/behave and how they actually feel and behave during a study.

In order for consent to be ‘informed,’ consent forms may need to be accompanied by an information sheet for participants, setting out information about the proposed study (in lay terms), along with details about the investigators and how they can be contacted.

Special considerations exist when obtaining consent from vulnerable populations with decisional impairments, such as psychiatric patients, intellectually disabled persons, and children/adolescents. Capacity can vary widely so should be assessed individually, but interventions to improve comprehension may help. Legally authorized representatives usually must provide consent for children.

Participants must be given information relating to the following:

  • A statement that participation is voluntary and that refusal to participate will not result in any consequences or any loss of benefits that the person is otherwise entitled to receive.
  • Purpose of the research.
  • All foreseeable risks and discomforts to the participant (if there are any). These include not only physical injury but also possible psychological harm.
  • Procedures involved in the research.
  • Benefits of the research to society and possibly to the individual human subject.
  • Length of time the subject is expected to participate.
  • Person to contact for answers to questions or in the event of injury or emergency.
  • Subjects’ right to confidentiality and the right to withdraw from the study at any time without any consequences.

Debriefing

Debriefing after a study involves informing participants about the purpose, providing an opportunity to ask questions, and addressing any harm from participation. Debriefing serves an educational function and allows researchers to correct misconceptions. It is an ethical imperative.

After the research is over, the participant should be able to discuss the procedure and the findings with the psychologist. They must be given a general idea of what the researcher was investigating and why, and their part in the research should be explained.

Participants must be told if they have been deceived and given reasons why. They must be asked if they have any questions, which should be answered honestly and as fully as possible.

Debriefing should occur as soon as possible and be as full as possible; experimenters should take reasonable steps to ensure that participants understand debriefing.

“The purpose of debriefing is to remove any misconceptions and anxieties that the participants have about the research and to leave them with a sense of dignity, knowledge, and a perception of time not wasted” (Harris, 1988).

The debriefing aims to provide information and help the participant leave the experimental situation in a similar frame of mind as when he/she entered it (Aronson, 1988).

Exceptions may exist if debriefing seriously compromises study validity or causes harm itself, like negative emotions in children. Consultation with an institutional review board guides exceptions.

Debriefing indicates investigators’ commitment to participant welfare. Harms may not be raised in the debriefing itself, so responsibility continues after data collection. Following up demonstrates respect and protects persons in human subjects research.

Protection of Participants

Researchers must ensure that those participating in research will not be caused distress. They must be protected from physical and mental harm. This means you must not embarrass, frighten, offend or harm participants.

Normally, the risk of harm must be no greater than in ordinary life, i.e., participants should not be exposed to risks greater than or additional to those encountered in their normal lifestyles.

The researcher must also ensure that if vulnerable groups are to be used (elderly, disabled, children, etc.), they must receive special care. For example, if studying children, ensure their participation is brief as they get tired easily and have a limited attention span.

Researchers are not always accurately able to predict the risks of taking part in a study, and in some cases, a therapeutic debriefing may be necessary if participants have become disturbed during the research (as happened to some participants in Zimbardo’s prisoners/guards study).

Deception

Deception research involves purposely misleading participants or withholding information that could influence their participation decision. This method is controversial because it limits informed consent and autonomy, but can provide otherwise unobtainable valuable knowledge.

Types of deception include (i) deliberate misleading, e.g. using confederates, staged manipulations in field settings, deceptive instructions; (ii) deception by omission, e.g., failure to disclose full information about the study, or creating ambiguity.

The researcher should avoid deceiving participants about the nature of the research unless there is no alternative – and even then, this would need to be judged acceptable by an independent expert. However, some types of research cannot be carried out without at least some element of deception.

For example, in Milgram’s study of obedience, the participants thought they were giving electric shocks to a learner when they answered a question wrongly. In reality, no shocks were given, and the learners were confederates of Milgram.

This is sometimes necessary to avoid demand characteristics (i.e., the clues in an experiment that lead participants to think they know what the researcher is looking for).

Another common example is when a stooge or confederate of the experimenter is used (this was the case in both the experiments carried out by Asch).

According to ethics codes, deception must have strong scientific justification, and non-deceptive alternatives should not be feasible. Deception that causes significant harm is prohibited. Investigators should carefully weigh whether deception is necessary and ethical for their research.

However, participants must be deceived as little as possible, and any deception must not cause distress. Researchers can gauge whether participants are likely to be distressed when the deception is disclosed by consulting culturally relevant groups.

Participants should immediately be informed of the deception without compromising the study’s integrity. Reactions to learning of deception can range from understanding to anger. Debriefing should explain the scientific rationale and social benefits to minimize negative reactions.

If the participant is likely to object or be distressed once they discover the true nature of the research at debriefing, then the study is unacceptable.

If you have gained participants’ informed consent by deception, then they will have agreed to take part without actually knowing what they were consenting to.  The true nature of the research should be revealed at the earliest possible opportunity or at least during debriefing.

Some researchers argue that deception can never be justified and object to this practice as it (i) violates an individual’s right to choose to participate; (ii) is a questionable basis on which to build a discipline; and (iii) leads to distrust of psychology in the community.

Confidentiality

Protecting participant confidentiality is an ethical imperative that demonstrates respect, ensures honest participation, and prevents harms like embarrassment or legal issues. Methods like data encryption, coding systems, and secure storage should match the research methodology.

Participants, and the data gained from them, must be kept anonymous unless they give their full consent. No names must be used in a lab report.

Researchers must clearly describe to participants the limits of confidentiality and methods to protect privacy. With internet research, threats exist like third-party data access; security measures like encryption should be explained. For non-internet research, other protections should be noted too, like coding systems and restricted data access.

High-profile data breaches have eroded public trust. Methods that minimize identifiable information can further guard confidentiality. For example, researchers can consider whether birthdates are necessary or just ages.

Generally, reducing personal details collected and limiting accessibility safeguards participants. Following strong confidentiality protections demonstrates respect for persons in human subjects research.
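The "coding systems" and data-minimization ideas above can be sketched in code. This is an illustrative example, not from the source: the field names, the salt value, and the fixed reference date are all hypothetical, and a real study would follow its ethics committee's approved data-management plan.

```python
# Sketch of a coding system plus data minimisation (hypothetical fields).
import hashlib
from datetime import date

SALT = "project-specific-secret"  # stored separately from the data files


def pseudonym(name: str) -> str:
    """Replace an identifying name with a stable, non-reversible code."""
    digest = hashlib.sha256((SALT + name).encode()).hexdigest()
    return "P-" + digest[:8]


def minimise(record: dict) -> dict:
    """Drop direct identifiers; keep only what the analysis needs.

    Birthdate is reduced to an age (the document's own example of
    collecting ages rather than birthdates).
    """
    birth = date.fromisoformat(record["birthdate"])
    ref = date(2024, 1, 1)  # fixed reference date for reproducibility
    age = ref.year - birth.year - ((ref.month, ref.day) < (birth.month, birth.day))
    return {
        "id": pseudonym(record["name"]),
        "age": age,
        "responses": record["responses"],
    }


raw = {"name": "Jane Doe", "birthdate": "1990-06-15", "responses": [3, 4, 2]}
safe = minimise(raw)
# 'safe' now holds a code and an age instead of a name and a birthdate
```

The key design point is that the code table (or salt) is kept apart from the research data, so a breach of one does not expose the other.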

What do we do if we discover something that should be disclosed (e.g., a criminal act)? Researchers have no legal obligation to disclose criminal acts and must determine the most important consideration: their duty to the participant vs. their duty to the wider community.

Ultimately, decisions to disclose information must be set in the context of the research aims.

Withdrawal from an Investigation

Participants should be able to leave a study anytime if they feel uncomfortable. They should also be allowed to withdraw their data. They should be told at the start of the study that they have the right to withdraw.

They should not have pressure placed upon them to continue if they do not want to (a guideline flouted in Milgram’s research).

Participants may feel they shouldn’t withdraw as this may ‘spoil’ the study. Many participants are paid or receive course credits; they may worry they won’t get this if they withdraw.

Even at the end of the study, the participant has a final opportunity to withdraw the data they have provided for the research.

Ethical Issues in Psychology & Socially Sensitive Research

Over the years, many psychologists have assumed that, provided they follow the BPS or APA guidelines when using human participants (everyone leaves in a similar state of mind to how they arrived, no one has been deceived or humiliated, a debrief has been given, and confidentiality has not been breached), their research raises no ethical concerns.

But consider the following examples:

a) Caughy et al. (1994) found that middle-class children in daycare at an early age generally score less on cognitive tests than children from similar families reared in the home.

Assuming all guidelines were followed, neither the parents nor the children participating would have been unduly affected by this research. Nobody would have been deceived, consent would have been obtained, and no harm would have been caused.

However, consider the wider implications of this study when the results are published, particularly for parents of middle-class infants who are considering placing their young children in daycare or those who recently have!

b)  IQ tests administered to black Americans show that they typically score 15 points below the average white score.

When black Americans are given these tests, they presumably complete them willingly and are not harmed as individuals. However, when published, findings of this sort can serve to reinforce racial stereotypes and be used to discriminate against the black population in the job market, etc.

Sieber and Stanley (1988), the main names associated with socially sensitive research (SSR), outline four groups that may be affected by psychological research. It is the first of these groups that we are most concerned with:
  • Members of the social group being studied, such as racial or ethnic group. For example, early research on IQ was used to discriminate against US Blacks.
  • Friends and relatives of those participating in the study, particularly in case studies, where individuals may become famous or infamous. Cases that spring to mind would include Genie’s mother.
  • The research team. There are examples of researchers being intimidated because of the line of research they are in.
  • The institution in which the research is conducted.

Sieber and Stanley also suggest there are four main ethical concerns when conducting SSR:
  • The research question or hypothesis.
  • The treatment of individual participants.
  • The institutional context.
  • How the findings of the research are interpreted and applied.

Ethical Guidelines For Carrying Out SSR

Sieber and Stanley suggest the following ethical guidelines for carrying out SSR. There is some overlap between these and research on human participants in general.

Privacy : This refers to people rather than data. Asking people questions of a personal nature (e.g., about sexuality) could offend.

Confidentiality: This refers to data. Information (e.g., about H.I.V. status) leaked to others may affect the participant’s life.

Sound & valid methodology : This is even more vital when the research topic is socially sensitive. Academics can detect flaws in methods, but the lay public and the media often don’t.

When research findings are publicized, people are likely to consider them fact, and policies may be based on them. Examples are Bowlby’s maternal deprivation studies and intelligence testing.

Deception : Causing the wider public to believe something that isn’t true through the findings you report (e.g., that parents are responsible for how their children turn out).

Informed consent : Participants should be made aware of how participating in the research may affect them.

Justice & equitable treatment : Examples of unjust treatment are (i) publicizing an idea that creates prejudice against a group, and (ii) withholding a treatment you believe is beneficial from some participants so that you can use them as controls.

Scientific freedom : Science should not be censored, but there should be some monitoring of sensitive research. The researcher should weigh their responsibilities against their rights to do the research.

Ownership of data : When research findings could be used to make social policies, which affect people’s lives, should they be publicly accessible? Sometimes, a party commissions research with their interests in mind (e.g., an industry, an advertising agency, a political party, or the military).

Some people argue that scientists should be compelled to disclose their results so that other scientists can re-analyze them. If this had happened in Burt’s day, there might not have been such widespread belief in the genetic transmission of intelligence. George Miller (Miller’s Magic 7) famously argued that we should give psychology away.

The values of social scientists : Psychologists can be divided into two main groups: those who advocate a humanistic approach (individuals are important and worthy of study, quality of life is important, intuition is useful) and those advocating a scientific approach (rigorous methodology, objective data).

The researcher’s values may conflict with those of the participant/institution. For example, if someone with a scientific approach was evaluating a counseling technique based on a humanistic approach, they would judge it on criteria that those giving & receiving the therapy may not consider important.

Cost/benefit analysis : It is unethical if the costs outweigh the potential/actual benefits. However, it isn’t easy to assess costs & benefits accurately & the participants themselves rarely benefit from research.

Sieber & Stanley advise that researchers should not avoid researching socially sensitive issues. Scientists have a responsibility to society to find useful knowledge.

  • They need to take more care over consent, debriefing, etc. when the issue is sensitive.
  • They should be aware of how their findings may be interpreted & used by others.
  • They should make explicit the assumptions underlying their research so that the public can consider whether they agree with these.
  • They should make the limitations of their research explicit (e.g., ‘the study was only carried out on white middle-class American male students,’ ‘the study is based on questionnaire data, which may be inaccurate,’ etc.).
  • They should be careful how they communicate with the media and policymakers.
  • They should be aware of the balance between their obligations to participants and those to society (e.g. if the participant tells them something which they feel they should tell the police/social services).
  • They should be aware of their own values and biases and those of the participants.

Arguments for SSR

  • Psychologists have devised methods to resolve the issues raised.
  • SSR is the most scrutinized research in psychology. Ethical committees reject more SSR than any other form of research.
  • By gaining a better understanding of issues such as gender, race, and sexuality, we are able to gain greater acceptance and reduce prejudice.
  • SSR has been of benefit to society, for example, EWT. This has made us aware that EWT can be flawed and should not be used without corroboration. It has also made us aware that the EWT of children is every bit as reliable as that of adults.
  • Most research is still carried out on white, middle-class Americans (about 90% of the research quoted in texts). SSR is helping to redress the balance and make us more aware of other cultures and outlooks.

Arguments against SSR

  • Flawed research has been used to dictate social policy and put certain groups at a disadvantage.
  • Research has been used to discriminate against groups in society, such as the sterilization of people in the USA between 1910 and 1920 because they were of low intelligence, criminal, or suffered from psychological illness.
  • The guidelines used by psychologists to control SSR lack power and, as a result, are unable to prevent indefensible research from being carried out.

American Psychological Association. (2002). American Psychological Association ethical principles of psychologists and code of conduct. www.apa.org/ethics/code2002.html

Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral study of obedience.” American Psychologist, 19(6), 421.

Caughy, M. O. B., DiPietro, J. A., & Strobino, D. M. (1994). Day‐care participation as a protective factor in the cognitive development of low‐income children.  Child development ,  65 (2), 457-471.

Harris, B. (1988). Key words: A history of debriefing in social psychology. In J. Morawski (Ed.), The rise of experimentation in American psychology (pp. 188-212). New York: Oxford University Press.

Rosenthal, R., & Rosnow, R. L. (1984). Applying Hamlet’s question to the ethical conduct of research: A conceptual addendum. American Psychologist, 39(5) , 561.

Sieber, J. E., & Stanley, B. (1988). Ethical and professional dimensions of socially sensitive research.  American psychologist ,  43 (1), 49.

The British Psychological Society. (2010). Code of Human Research Ethics. www.bps.org.uk/sites/default/files/documents/code_of_human_research_ethics.pdf

Further Information

  • MIT Psychology Ethics Lecture Slides

BPS Documents

  • Code of Ethics and Conduct (2018)
  • Good Practice Guidelines for the Conduct of Psychological Research within the NHS
  • Guidelines for Psychologists Working with Animals
  • Guidelines for ethical practice in psychological research online

APA Documents

APA Ethical Principles of Psychologists and Code of Conduct



Ethical Considerations in Research | Types & Examples

Published on 7 May 2022 by Pritha Bhandari.

Ethical considerations in research are a set of principles that guide your research designs and practices. Scientists and researchers must always adhere to a certain code of conduct when collecting data from people.

The goals of human research often include understanding real-life phenomena, studying effective treatments, investigating behaviours, and improving lives in other ways. What you decide to research and how you conduct that research involve key ethical considerations.

These considerations work to:

  • Protect the rights of research participants
  • Enhance research validity
  • Maintain scientific integrity

Table of contents

  • Why do research ethics matter?
  • Getting ethical approval for your study
  • Types of ethical issues
  • Voluntary participation
  • Informed consent
  • Anonymity
  • Confidentiality
  • Potential for harm
  • Results communication
  • Examples of ethical failures
  • Frequently asked questions about research ethics

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe for research subjects.

You’ll balance pursuing important research aims with using ethical research methods and procedures. It’s always necessary to prevent permanent or excessive harm to participants, whether inadvertent or not.

Defying research ethics will also lower the credibility of your research because it’s hard for others to trust your data if your methods are morally questionable.

Even if a research idea is valuable to society, it doesn’t justify violating the human rights or dignity of your study participants.


Before you start any study involving data collection with people, you’ll submit your research proposal to an institutional review board (IRB).

An IRB is a committee that checks whether your research aims and research design are ethically acceptable and follow your institution’s code of conduct. They check that your research materials and procedures are up to code.

If successful, you’ll receive IRB approval, and you can begin collecting data according to the approved procedures. If you want to make any changes to your procedures or materials, you’ll need to submit a modification application to the IRB for approval.

If unsuccessful, you may be asked to re-submit with modifications or your research proposal may receive a rejection. To get IRB approval, it’s important to explicitly note how you’ll tackle each of the ethical issues that may arise in your study.

There are several ethical issues you should always pay attention to in your research design, and these issues can overlap with each other.

You’ll usually outline ways you’ll deal with each issue in your research proposal if you plan to collect data from participants.

Voluntary participation

Voluntary participation means that all research subjects are free to choose to participate without any pressure or coercion.

All participants are able to withdraw from, or leave, the study at any point without feeling an obligation to continue. Your participants don’t need to provide a reason for leaving the study.

It’s important to make it clear to participants that there are no negative consequences or repercussions to their refusal to participate. After all, they’re taking the time to help you in the research process, so you should respect their decisions without trying to change their minds.

Voluntary participation is an ethical principle protected by international law and many scientific codes of conduct.

Take special care to ensure there’s no pressure on participants when you’re working with vulnerable groups of people who may find it hard to stop the study even when they want to.

Informed consent

Informed consent refers to a situation in which all potential participants receive and understand all the information they need to decide whether they want to participate, including the study’s benefits, risks, funding, and institutional approval. At a minimum, you’ll tell participants:

  • What the study is about
  • The risks and benefits of taking part
  • How long the study will take
  • Your supervisor’s contact information and the institution’s approval number

Usually, you’ll provide participants with a text for them to read and ask them if they have any questions. If they agree to participate, they can sign or initial the consent form. Note that this may not be sufficient for informed consent when you work with particularly vulnerable groups of people.

If you’re collecting data from people with low literacy, make sure to verbally explain the consent form to them before they agree to participate.

For participants with very limited English proficiency, you should always translate the study materials or work with an interpreter so they have all the information in their first language.

In research with children, you’ll often need informed permission for their participation from their parents or guardians. Although children cannot give informed consent, it’s best to also ask for their assent (agreement) to participate, depending on their age and maturity level.

Anonymity

Anonymity means that you don’t know who the participants are and you can’t link any individual participant to their data.

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, and videos.

In many cases, it may be impossible to truly anonymise data collection. For example, data collected in person or by phone cannot be considered fully anonymous because some personal identifiers (demographic information or phone numbers) are impossible to hide.

You’ll also need to collect some identifying information if you give your participants the option to withdraw their data at a later stage.

Data pseudonymisation is an alternative method where you replace identifying information about participants with pseudonymous, or fake, identifiers. The data can still be linked to participants, but it’s harder to do so because you separate personal information from the study data.
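As a rough sketch of this idea (all names, field labels, and values below are invented for illustration), pseudonymisation can be as simple as generating a random, non-guessable ID per participant, keeping the key table separate from the analysis data, and using it only for tasks such as honouring a later withdrawal request:

```python
import secrets

# Hypothetical raw records collected during a study (invented data).
raw_records = [
    {"name": "Alice Smith", "email": "alice@example.com", "score": 72},
    {"name": "Bob Jones", "email": "bob@example.com", "score": 65},
]

key_table = {}   # identifier -> pseudonym; stored separately, under access control
study_data = []  # de-identified records used for analysis

for record in raw_records:
    pseudonym = "P-" + secrets.token_hex(4)  # random, non-guessable ID
    key_table[record["name"]] = pseudonym
    # Only the pseudonym and study variables enter the analysis dataset.
    study_data.append({"id": pseudonym, "score": record["score"]})

def withdraw(name):
    """Honour a withdrawal request: remove the participant's study record."""
    pid = key_table.pop(name, None)
    if pid is not None:
        study_data[:] = [r for r in study_data if r["id"] != pid]

withdraw("Bob Jones")  # one record remains, with no identifying fields
```

Storing `key_table` apart from `study_data` (ideally on separate, access-controlled systems) is what makes re-identification harder while still allowing withdrawal.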

Confidentiality

Confidentiality means that you know who the participants are, but you remove all identifying information from your report.

All participants have a right to privacy, so you should protect their personal data for as long as you store or use it. Even when you can’t collect data anonymously, you should secure confidentiality whenever you can.

Some research designs aren’t conducive to confidentiality, but it’s important to make all attempts and inform participants of the risks involved.

Potential for harm

As a researcher, you have to consider all possible sources of harm to participants. Harm can come in many different forms:

  • Psychological harm: Sensitive questions or tasks may trigger negative emotions such as shame or anxiety.
  • Social harm: Participation can involve social risks, public embarrassment, or stigma.
  • Physical harm: Pain or injury can result from the study procedures.
  • Legal harm: Reporting sensitive data could lead to legal risks or a breach of privacy.

It’s best to consider every possible source of harm in your study, as well as concrete ways to mitigate them. Involve your supervisor to discuss steps for harm reduction.

Make sure to disclose all possible risks of harm to participants before the study to get informed consent. If there is a risk of harm, prepare to provide participants with resources, counselling, or medical services if needed.

If a survey includes sensitive questions that may bring up negative emotions, inform participants about the sensitive nature of the survey beforehand and assure them that their responses will be confidential.

Results communication

The way you communicate your research results can sometimes involve ethical issues. Good science communication is honest, reliable, and credible. It’s best to make your results as transparent as possible.

Take steps to actively avoid plagiarism and research misconduct wherever possible.

Plagiarism

Plagiarism means submitting others’ work as your own. Although it can be unintentional, copying someone else’s work without proper credit amounts to stealing. It’s an ethical problem in research communication because you may benefit by harming other researchers.

Self-plagiarism is when you republish or re-submit parts of your own papers or reports without properly citing your original work.

This is problematic because you may benefit from presenting your ideas as new and original even though they’ve already been published elsewhere in the past. You may also be infringing on your previous publisher’s copyright, violating an ethical code, or wasting time and resources by doing so.

In extreme cases of self-plagiarism, entire datasets or papers are sometimes duplicated. These are major ethical violations because they can skew research findings if taken as original data.

For example, you might notice that two published studies have similar characteristics even though they are from different years: their sample sizes, locations, treatments, and results are highly similar, and they share one author in common.

Research misconduct

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement about data analyses.

Research misconduct is a serious ethical issue because it can undermine scientific integrity and institutional credibility. It leads to a waste of funding and resources that could have been used for alternative research.

A notorious example is Andrew Wakefield’s retracted study claiming a link between the MMR vaccine and autism. Later investigations revealed that he had fabricated and manipulated his data to show this nonexistent link. Wakefield also neglected to disclose important conflicts of interest, and his medical licence was taken away.

This fraudulent work sparked vaccine hesitancy among parents and caregivers. The rate of MMR vaccinations in children fell sharply, and measles outbreaks became more common due to a lack of herd immunity.

Research scandals with ethical failures are littered throughout history, but some took place not that long ago.

Some scientists in positions of power have historically mistreated or even abused research participants in order to investigate research problems at any cost. These participants were prisoners, patients under the researchers’ care, or people who otherwise trusted the researchers to treat them with dignity.

To demonstrate the importance of research ethics, we’ll briefly review two research studies that violated human rights in modern history.

During the Second World War, Nazi doctors performed medical experiments on concentration camp prisoners without their consent. These experiments were inhumane and resulted in trauma, permanent disabilities, or death in many cases.

After some Nazi doctors were put on trial for their crimes, the Nuremberg Code of research ethics for human experimentation was developed in 1947 to establish a new standard for human experimentation in medical research.

In the Tuskegee syphilis study, researchers recruited Black men in Alabama under the pretence of treating them for “bad blood”. In reality, the actual goal was to study the effects of the disease when left untreated, and the researchers never informed participants about their diagnoses or the research aims.

Although participants experienced severe health problems, including blindness and other complications, the researchers only pretended to provide medical care.

When treatment became possible in 1943, 11 years after the study began, none of the participants were offered it, despite their health conditions and high risk of death.

Ethical failures like these resulted in severe harm to participants, wasted resources, and lower trust in science and scientists. This is why all research institutions have strict ethical guidelines for performing research.

Frequently asked questions about research ethics

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.

These considerations protect the rights of research participants, enhance research validity, and maintain scientific integrity.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations.

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
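A minimal sketch of aggregate reporting (the participant IDs, groups, and scores below are invented): compute per-group summaries so the report describes groups, never an individual response:

```python
# Hypothetical confidential responses (invented data).
responses = [
    {"participant": "P-01", "group": "control", "anxiety": 4},
    {"participant": "P-02", "group": "control", "anxiety": 6},
    {"participant": "P-03", "group": "treatment", "anxiety": 3},
    {"participant": "P-04", "group": "treatment", "anxiety": 5},
]

# Collect scores per group so no individual row appears in the report.
by_group = {}
for r in responses:
    by_group.setdefault(r["group"], []).append(r["anxiety"])

# Report only group size and mean, not individual values.
summary = {group: {"n": len(vals), "mean": sum(vals) / len(vals)}
           for group, vals in by_group.items()}

print(summary)
# {'control': {'n': 2, 'mean': 5.0}, 'treatment': {'n': 2, 'mean': 4.0}}
```

In a real study you would also check that groups are large enough that a reported mean cannot be traced back to one person.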

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Cite this Scribbr article


Bhandari, P. (2022, May 07). Ethical Considerations in Research | Types & Examples. Scribbr. Retrieved 29 April 2024, from https://www.scribbr.co.uk/research-methods/ethical-considerations/


