
Ethical Issues in Research: Perceptions of Researchers, Research Ethics Board Members and Research Ethics Experts

Marie-Josée Drolet

1 Department of Occupational Therapy (OT), Université du Québec à Trois-Rivières (UQTR), Trois-Rivières (Québec), Canada

Eugénie Rose-Derouin

2 Bachelor OT program, Université du Québec à Trois-Rivières (UQTR), Trois-Rivières (Québec), Canada

Julie-Claude Leblanc

Mélanie Ruest

Bryn Williams-Jones

3 Department of Social and Preventive Medicine, School of Public Health, Université de Montréal, Montréal (Québec), Canada

Abstract

In the context of academic research, a diversity of ethical issues arises, conditioned by the different roles that members occupy within academic institutions. Previous studies on this topic have addressed mainly the perceptions of researchers. However, to our knowledge, no study has explored transversal ethical issues from a wider spectrum that includes other members of academic institutions, such as research ethics board (REB) members and research ethics experts. The present study used a descriptive phenomenological approach to document the ethical issues experienced by a heterogeneous group of Canadian researchers, REB members, and research ethics experts. Data collection involved socio-demographic questionnaires and individual semi-structured interviews. Following the triangulation of these different perspectives (researchers, REB members and ethics experts), the emerging ethical issues were synthesized into ten units of meaning: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. This study highlighted several problematic elements that can support the identification of future solutions to resolve transversal ethical issues in research that affect the heterogeneous members of the academic community.

Introduction

Research includes a set of activities in which researchers use various structured methods to contribute to the development of knowledge, whether this knowledge is theoretical, fundamental, or applied (Drolet & Ruest, accepted). University research is carried out in a highly competitive environment characterized by ever-increasing demands (e.g., on time and productivity), insufficient access to research funds, and a market economy that values productivity and speed, often to the detriment of quality or rigour. This research context creates a perfect recipe for breaches in research ethics, such as research misbehaviour or misconduct (i.e., conduct that is ethically questionable or unacceptable because it contravenes the accepted norms of responsible conduct of research or compromises the respect of core ethical values that are widely held by the research community) (Drolet & Girard, 2020; Sieber, 2004). Problematic ethics and integrity issues – e.g., conflicts of interest, falsification of data, non-respect of participants’ rights, and plagiarism, to name but a few – have the potential both to undermine the credibility of research and to lead to negative consequences for many stakeholders, including researchers, research assistants and personnel, research participants, academic institutions, and society as a whole (Drolet & Girard, 2020). It is thus evident that the academic community should be able to identify these different ethical issues in order to evaluate the nature of the risks that they pose (and for whom), and then work towards their prevention or management (e.g., education, enhanced policies and procedures, risk mitigation strategies).

In this article, we define an “ethical issue” as any situation that may compromise, in whole or in part, the respect of at least one moral value (Swisher et al., 2005 ) that is considered socially legitimate and should thus be respected. In general, ethical issues occur at three key moments or stages of the research process: (1) research design (i.e., conception, project planning), (2) research conduct (i.e., data collection, data analysis) and (3) knowledge translation or communication (e.g., publications of results, conferences, press releases) (Drolet & Ruest, accepted ). According to Sieber ( 2004 ), ethical issues in research can be classified into five categories, related to: (a) communication with participants and the community, (b) acquisition and use of research data, (c) external influence on research, (d) risks and benefits of the research, and (e) selection and use of research theories and methods. Many of these issues are related to breaches of research ethics norms, misbehaviour or research misconduct. Bruhn et al., ( 2002 ) developed a typology of misbehaviour and misconduct in academia that can be used to judge the seriousness of different cases. This typology takes into consideration two axes of reflection: (a) the origin of the situation (i.e., is it the researcher’s own fault or due to the organizational context?), and (b) the scope and severity (i.e., is this the first instance or a recurrent behaviour? What is the nature of the situation? What are the consequences, for whom, for how many people, and for which organizations?).

A previous detailed review of the international literature on ethical issues in research revealed several interesting findings (Beauchemin et al., 2021). Indeed, the current literature is dominated by descriptive ethics, i.e., the sharing by researchers from various disciplines of the ethical issues they have personally experienced. While such anecdotal documentation is relevant, it is insufficient because it does not provide a global view of the situation. Among the reviewed literature, empirical studies were in the minority (Table 1) – only about one fifth of the sample (n = 19) presented empirical research findings on ethical issues in research. The first of these studies was conducted nearly 40 years ago (Hunt et al., 1984), with the remainder conducted in the 1990s. Eight studies were conducted in the United States (n = 8), five in Canada (n = 5), three in England (n = 3), two in Sweden (n = 2) and one in Ghana (n = 1).

Table 1 Summary of Empirical Studies on Ethical Issues in Research, by Year of Publication

Further, the majority of studies in our sample (n = 12) collected the perceptions of a homogeneous group of participants, usually researchers (n = 14) and sometimes health professionals (n = 6). A minority of studies (n = 7) triangulated the perceptions of diverse research stakeholders (i.e., researchers and research participants, or students). To our knowledge, only one study has examined perceptions of ethical issues in research held by research ethics board (REB) members (known as Institutional Review Boards [IRBs] in the USA), and none to date has documented the perceptions of research ethics experts. Finally, nine studies (n = 9) adopted a qualitative design, seven (n = 7) a quantitative design, and three (n = 3) a mixed-methods design.

More studies using empirical research methods are needed to better identify broader trends, to enrich discussions on the values that should govern responsible conduct of research in the academic community, and to evaluate the means by which these values can be supported in practice (Bahn, 2012 ; Beauchemin et al., 2021 ; Bruhn et al., 2002 ; Henderson et al., 2013 ; Resnik & Elliot, 2016; Sieber 2004 ). To this end, we conducted an empirical qualitative study to document the perceptions and experiences of a heterogeneous group of Canadian researchers, REB members, and research ethics experts, to answer the following broad question: What are the ethical issues in research?

Research Methods

Research Design

A qualitative research approach involving individual semi-structured interviews was used to systematically document ethical issues (DePoy & Gitlin, 2010; Hammell et al., 2000). Specifically, a descriptive phenomenological approach inspired by the philosophy of Husserl was used (Husserl, 1970, 1999), as it is recommended for documenting the perceptions of ethical issues raised by various practices (Hunt & Carnevale, 2011).

Ethical Considerations

The principal investigator obtained ethics approval for this project from the Research Ethics Board of the Université du Québec à Trois-Rivières (UQTR). All members of the research team signed a confidentiality agreement, and research participants signed the consent form after reading an information letter explaining the nature of the research project.

Sampling and Recruitment

As indicated above, three types of participants were sought: (1) researchers from different academic disciplines conducting research (i.e., theoretical, fundamental or empirical) in Canadian universities; (2) REB members working in Canadian organizations responsible for the ethical review, oversight or regulation of research; and (3) research ethics experts, i.e., academics or ethicists who teach research ethics, conduct research in research ethics, or have acquired a specialization in research ethics. To be included in the study, participants had to work in Canada, speak and understand English or French, and be willing to participate in the study. Following Thomas and Pollio’s (2002) recommendation to recruit between six and twelve participants to ensure data saturation in a homogeneous sample, we aimed to recruit approximately twelve participants for our heterogeneous sample. In our experience of using this method in related projects on professional ethics, data saturation is usually achieved with 10 to 15 participants (Drolet & Goulet, 2018; Drolet & Girard, 2020; Drolet et al., 2020), and larger samples only serve to increase the degree of data saturation, especially in heterogeneous samples (Drolet et al., 2017, 2019; Drolet & Maclure, 2016).

Purposive sampling facilitated the identification of participants relevant to documenting the phenomenon in question (Fortin, 2010). To ensure as rich and complete a representation of perceptions as possible, we sought participants with varied and complementary characteristics with regard to the social roles they occupy in research practice (Drolet & Girard, 2020). A triangulation of sources was used for recruitment (Bogdan & Biklen, 2006). The websites of Canadian universities and Canadian health institution REBs, as well as those of major Canadian granting agencies (i.e., the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, and the Fonds de recherche du Québec), were searched to identify individuals who might be interested in participating in the study. Further, people known to the research team for their knowledge of and sensitivity to ethical issues in research were asked to participate. Research participants were also asked to suggest other individuals who met the study criteria.

Data Collection

Two tools were used for data collection: (a) a socio-demographic questionnaire, and (b) a semi-structured individual interview guide. English and French versions of these two documents were made available, depending on participant preferences. In addition, although the interview guides contained the same questions, the questions were adapted to participants’ specific roles (i.e., researcher, REB member, research ethics expert). When contacted by email by the research assistant, participants were asked to confirm under which role they wished to participate (because some participants might have multiple, overlapping responsibilities) and were then sent the appropriate interview guide.

The interview guides each had two parts: an introduction and a section on ethical issues. The introduction consisted of general questions to put the participant at ease (e.g., “Tell me what a typical day at work is like for you”). The section on ethical issues was designed to capture the participant’s perceptions through questions such as: “Tell me three stories you have experienced at work that involve an ethical issue” and “Do you feel that your organization is doing enough to address, manage, and resolve ethical issues in your work?”. Although some interviews were conducted in person, the majority were conducted by videoconference to promote accessibility and because of the COVID-19 pandemic. Interviews were digitally recorded so that they could be transcribed in full, and lasted between 40 and 120 min, with an average of 90 min. Research assistants conducted the interviews and transcribed them verbatim.

Data Analysis

The socio-demographic questionnaires were subjected to simple descriptive statistical analyses (i.e., means and totals), and the semi-structured interviews were subjected to qualitative analysis. The steps proposed by Giorgi (1997) for a Husserlian phenomenological reduction of the data were used. After collecting, recording, and transcribing the interviews, all transcripts were analyzed by at least two analysts: a research assistant (2nd author of this article) and the principal investigator (1st author) or a postdoctoral fellow (3rd author). Repeated reading of the transcripts allowed the first analyst to write a synopsis, i.e., an initial extraction of units of meaning. The second analyst then read the synopses, commenting on and improving them where necessary. Agreement between the analysts allowed the final drafting of the interview synopses, which were then analyzed by three analysts to generate and organize the units of meaning that emerged from the qualitative data.

Participants

Sixteen individuals (n = 16) participated in the study, of whom nine (9) identified as female and seven (7) as male (Table 2). Participants ranged in age from 22 to 72 years, with a mean age of 47.5 years. Participants had between one (1) and 26 years of experience in the research setting, with an average of 14.3 years of experience. Participants held a variety of roles, including: REB member (n = 11), researcher (n = 10), research ethics expert (n = 4), and research assistant (n = 1). As mentioned previously, seven (7) participants held more than one role, i.e., REB member, research ethics expert, and researcher. The majority (87.5%) of participants were working in Quebec, with the remainder working in other Canadian provinces. Although all participants considered themselves to be francophone, one quarter (n = 4) identified themselves as belonging to a cultural minority group.

Table 2 Description of Participants

With respect to their academic background, most participants (n = 9) had a PhD, three (3) had a post-doctorate, two (2) had a master’s degree, and two (2) had a bachelor’s degree. Participants came from a variety of disciplines: nine (9) had a specialty in the humanities or social sciences, four (4) in the health sciences and three (3) in the natural sciences. In terms of their knowledge of ethics, five (5) participants reported having taken one university course entirely dedicated to ethics, four (4) reported having taken several university courses entirely dedicated to ethics, three (3) had a university degree dedicated to ethics, while two (2) only had a few hours or days of training in ethics and two (2) reported having no knowledge of ethics.

Ethical Issues

As Fig. 1 illustrates, ten units of meaning emerged from the data analysis, namely: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. To illustrate the results, excerpts from the interviews are presented verbatim in the following sub-sections. Most of the excerpts have been translated into English, as the majority of interviews were conducted with French-speaking participants.

Fig. 1 Ethical issues in research according to the participants

Research Integrity

The research environment is highly competitive and performance-based. Several participants, in particular researchers and research ethics experts, felt that this environment can lead both researchers and research teams to engage in unethical behaviour that reflects a lack of research integrity. For example, as some participants indicated, competition for grants and scientific publications is sometimes so intense that researchers falsify research results or plagiarize from colleagues to achieve their goals.

Some people will lie or exaggerate their research findings in order to get funding. Then, you see it afterwards, you realize: “ah well, it didn’t work, but they exaggerated what they found and what they did” (participant 14). Another problem in research is the identification of authors when there is a publication. Very often, there are authors who don’t even know what the publication is about and that their name is on it. (…) The time that it surprised me the most was just a few months ago when I saw someone I knew who applied for a teaching position. He got it; I was super happy for him. Then I looked at his publications and … there was one that caught my attention much more than the others, because I was in it and I didn’t know what that publication was. I was the second author of a publication that I had never read (participant 14). I saw a colleague who had plagiarized another colleague. [When the colleague] found out about it, he complained. So, plagiarism is a serious [ethical breach]. I would also say that there is a certain amount of competition in the university faculties, especially for grants (…). There are people who want to win at all costs or get as much as possible. They are not necessarily going to consider their colleagues. They don’t have much of a collegial spirit (participant 10).

These examples of research misbehaviour or misconduct are sometimes due to or associated with situations of conflicts of interest, which may be poorly managed by certain researchers or research teams, as noted by many participants.

Conflicts of Interest

The actors and institutions involved in research have diverse interests, like all humans and institutions. As noted in Chap. 7 of the Canadian Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS2, 2018),

“researchers and research students hold trust relationships, either directly or indirectly, with participants, research sponsors, institutions, their professional bodies and society. These trust relationships can be put at risk by conflicts of interest that may compromise independence, objectivity or ethical duties of loyalty. Although the potential for such conflicts has always existed, pressures on researchers (i.e., to delay or withhold dissemination of research outcomes or to use inappropriate recruitment strategies) heighten concerns that conflicts of interest may affect ethical behaviour” (p. 92).

The sources of these conflicts are varied and can include interpersonal conflicts, financial partnerships, third-party pressures, academic or economic interests, a researcher holding multiple roles within an institution, or any other incentive that may compromise a researcher’s independence, integrity, and neutrality (TCPS2, 2018). While it is not possible to eliminate all conflicts of interest, it is important to manage them properly and to avoid temptations to behave unethically.

Ethical temptations correspond to situations in which people are tempted to prioritize their own interests to the detriment of the ethical goods that should, in their own context, govern their actions (Swisher et al., 2005 ). In the case of researchers, this refers to situations that undermine independence, integrity, neutrality, or even the set of principles that govern research ethics (TCPS2, 2018) or the responsible conduct of research. According to study participants, these types of ethical issues frequently occur in research. Many participants, especially researchers and REB members, reported that conflicts of interest can arise when members of an organization make decisions to obtain large financial rewards or to increase their academic profile, often at the expense of the interests of members of their research team, research participants, or even the populations affected by their research.

A company that puts money into making its drug work wants its drug to work. So, homeopathy is a good example, because there are not really any consequences of homeopathy, there are not very many side effects, because there are no effects at all. So, it’s not dangerous, but it’s not a good treatment either. But some people will want to make it work. And that’s a big issue when you’re sitting at a table and there are eight researchers, and there are two or three who are like that, and then there are four others who are neutral, and I say to myself, this is not science. I think that this is a very big ethical issue (participant 14). There are also times in some research where there will be more links with pharmaceutical companies. Obviously, there are then large amounts of money that will be very interesting for the health-care institutions because they still receive money for clinical trials. They’re still getting some compensation because it’s time consuming for the people involved and all that. The pharmaceutical companies have money, so they will compensate, and that is sometimes interesting for the institutions, and since we are a bit caught up in this, in the sense that we have no choice but to accept it. (…) It may not be the best research in the world, there may be a lot of side effects due to the drugs, but it’s good to accept it, we’re going to be part of the clinical trial (participant 3). It is integrity, what we believe should be done or said. Often by the pressure of the environment, integrity is in tension with the pressures of the environment, so it takes resistance, it takes courage in research. (…) There were all the debates there about the problems of research that was funded and then the companies kept control over what was written. That was really troubling for a lot of researchers (participant 5).

Further, these situations sometimes have negative consequences for research participants as reported by some participants.

Respect for Research Participants

Many research projects, whether they are psychosocial or biomedical in nature, involve human participants. Relationships between the members of research teams and their research participants raise ethical issues that can be complex. Research projects must always be designed to respect the rights and interests of research participants, and not just those of researchers. However, participants in our study – i.e., REB members, researchers, and research ethics experts – noted that some research teams seem to put their own interests ahead of those of research participants. They also emphasized the importance of ensuring the respect, well-being, and safety of research participants. The ethical issues related to this unit of meaning are: respect for free, informed and ongoing consent of research participants; respect for and the well-being of participants; data protection and confidentiality; over-solicitation of participants; ownership of the data collected on participants; the sometimes high cost of scientific innovations and their accessibility; balance between the social benefits of research and the risks to participants (particularly in terms of safety); balance between collective well-being (development of knowledge) and the individual rights of participants; exploitation of participants; paternalism when working with populations in vulnerable situations; and the social acceptability of certain types of research. The following excerpts present some of these issues.

Where it disturbs me ethically is in the medical field – because it’s more in the medical field that we’re going to see this – when consent forms are presented to patients to solicit them as participants, and then [these forms] have an average of 40 pages. That annoys me. When they say that it has to be easy to understand and all that, adapted to the language, and then the hyper-technical language plus there are 40 pages to read, I don’t understand how you’re going to get informed consent after reading 40 pages. (…) For me, it doesn’t work. I read them to evaluate them and I have a certain level of education and experience in ethics, and there are times when I don’t understand anything (participant 2). There is a lot of pressure from researchers who want to recruit research participants (…). The idea that when you enter a health care institution, you become a potential research participant, when you say “yes to a research, you check yes to all research”, then everyone can ask you. I think that researchers really have this fantasy of saying to themselves: “as soon as people walk through the door of our institution, they become potential participants with whom we can communicate and get them involved in all projects”. There’s a kind of idea that, yes, it can be done, but it has to be somewhat supervised to avoid over-solicitation (…). Researchers are very interested in facilitating recruitment and making it more fluid, but perhaps to the detriment of confidentiality, privacy, and respect; sometimes that’s what it is, to think about what type of data you’re going to have in your bank of potential participants? Is it just name and phone number or are you getting into more sensitive information? (participant 9).

In addition, one participant reported that their university does not provide the resources required to respect the confidentiality of research participants.

The issue is as follows: researchers, of course, commit to protecting data with passwords and all that, but we realize that in practice, it is more difficult. It is not always as protected as one might think, because professor-researchers will run out of space. Will the universities make rooms available to researchers, places where they can store these things, especially when they have paper documentation, and is there indeed a guarantee of confidentiality? Some researchers have told me: “Listen; there are even filing cabinets in the corridors”. So, that certainly poses a concrete challenge. How do we go about challenging the administrative authorities? Tell them it’s all very well to have an ethics committee, but you have to help us, you also have to make sure that the necessary infrastructures are in place so that what we are proposing is really put into practice (participant 4).

If the relationships with research participants are likely to raise ethical issues, so too are the relationships with students, notably research assistants. On this topic, several participants discussed the lack of supervision or recognition offered to research assistants by researchers as well as the power imbalances between members of the research team.

Lack of Supervision and Power Imbalances

Many research teams are composed not only of researchers, but also of students who work as research assistants. The relationship between research assistants and other members of research teams can sometimes be problematic and raise ethical issues, particularly because of the inevitable power asymmetries. In the context of this study, several participants – including a research assistant, REB members, and researchers – discussed the lack of supervision or recognition of the work carried out by students, psychological pressure, and the more or less well-founded promises that are sometimes made to students. Participants also mentioned the exploitation of students by certain research teams, which manifests when students are paid inadequately (i.e., a wage that does not reflect the number of hours actually worked or is not a fair wage), or are not paid at all.

[As a research assistant], it was more of a feeling of distress that I felt then because I didn’t know what to do. (…) I was supposed to get coaching or be supported, but I didn’t get anything in the end. It was like, “fix it by yourself”. (…) All research assistants were supposed to be supervised, but in practice they were not (participant 1). Very often, we have a master’s or doctoral student that we put on a subject and we consider that the project will be well done, while the student is learning. So, it happens that the student will do a lot of work and then we realize that the work is poorly done, and it is not necessarily the student’s fault. He wasn’t necessarily well supervised. There are directors who have 25 students, and they just don’t supervise them (participant 14). I think it’s really the power relationship. I thought to myself, how I saw my doctorate, the beginning of my research career, I really wanted to be in that laboratory, but they are the ones who are going to accept me or not, so what do I do to be accepted? I finally accept their conditions [which was to work for free]. If these are the conditions that are required to enter this lab, I want to go there. So, what do I do, well I accepted. It doesn’t make sense, but I tell myself that I’m still privileged, because I don’t have so many financial worries, one more reason to work for free, even though it doesn’t make sense (participant 1). In research, we have research assistants. (…). The fact of using people… so that’s it, you have to take into account where they are, respect them, but at the same time they have to show that they are there for the research. In English, we say “carry” or take care of people. With research assistants, this is often a problem that I have observed: for grant machines, the person is the last to be found there. Researchers, who will take, use student data, without giving them the recognition for it (participant 5). The problem at our university is that they reserve funding for Canadian students. The doctoral clientele in my field is mostly foreign students. So, our students are poorly funded. I saw one student end up in the shelter, in a situation of poverty. It ended very badly for him because he lacked financial resources. Once you get into that dynamic, it’s very hard to get out. I was made aware of it because the director at the time had taken him under her wing and wanted to try to find a way to get him out of it. So, most of my students didn’t get funded (participant 16). There I wrote “manipulation”, but it’s kind of all promises all the time. I, for example, was promised a lot of advancement, like when I got into the lab as a graduate student, it was said that I had an interest in [this particular area of research]. I think there are a lot of graduate students who must have gone through that, but it is like, “Well, your CV has to be really good, if you want to do a lot of things and big things. If you do this, if you do this research contract, the next year you could be the coordinator of this part of the lab and supervise this person, get more contracts, be paid more. Let’s say: you’ll be invited to go to this conference, this big event”. They were always dangling something, but you have to do that first to get there. But now, when you’ve done that, you have to do this business. It’s like a bit of manipulation, I think. That was very hard to know who is telling the truth and who is not (participant 1).

These ethical issues have significant negative consequences for students. Indeed, they sometimes find themselves at the mercy of researchers, for whom they work, struggling to be recognized and included as authors of an article, for example, or to receive the salary that they are due. For their part, researchers also sometimes find themselves trapped in research structures that can negatively affect their well-being. As many participants reported, researchers work in organizations that set very high productivity standards and in highly competitive contexts, all within a general culture characterized by individualism.

Individualism and Performance

Participants, especially researchers, discussed the culture of individualism and performance that characterizes the academic environment. In glorifying excellence, some universities value performance and productivity, often at the expense of psychological well-being and work-life balance (i.e., work overload and burnout). Participants noted that there are ethical silences in their organizations on this issue, and that the culture of individualism and performance is not challenged for fear of retribution or simply to survive, i.e., to perform as expected. Participants felt that this culture can have a significant negative impact on the quality of the research conducted, as research teams try to maximize the quantity of their work (instead of quality) in a highly competitive context, which is then exacerbated by a lack of resources and support, and where everything must be done too quickly.

The work-life balance with the professional ethics related to work in a context where you have too much and you have to do a lot, it is difficult to balance all that and there is a lot of pressure to perform. If you don’t produce enough, that’s it; after that, you can’t get any more funds, so that puts pressure on you to do more and more and more (participant 3). There is a culture, I don’t know where it comes from, and that is extremely bureaucratic. If you dare to raise something, you’re going to have many, many problems. They’re going to make you understand it. So, I don’t talk. It is better: your life will be easier. I think there are times when you have to talk (…) because there are going to be irreparable consequences. (…) I’m not talking about a climate of terror, because that’s exaggerated, it’s not true, people are not afraid. But people close their office door and say nothing because it’s going to make their work impossible and they’re not going to lose their job, they’re not going to lose money, but researchers need time to be focused, so they close their office door and say nothing (participant 16).

Researchers must produce more and more, yet they feel little support regarding how to achieve such production ethically, or how much exactly they are expected to produce. As this participant reports, the expectation is an unspoken rule: more is always better.

It’s sometimes the lack of a clear line on what the expectations are as a researcher, like, “ah, we don’t have any specific expectations, but produce, produce, produce, produce.” So, in that context, it’s hard to be able to put the line precisely: “have I done enough for my work?” (participant 3).

Inadequate Ethical Guidance

While productivity expectations are unclear, some participants – including researchers, research ethics experts, and REB members – also felt that the ethical expectations of some REBs were unclear. The issue of inadequate ethical guidance in research encompasses the administrative mechanisms meant to ensure that research projects respect the principles of research ethics. According to those participants, the forms required for both researchers and REB members are increasingly long and numerous, and one participant noted that the standards to be met are sometimes outdated and disconnected from the reality of the field. Multicentre ethics review (by several REBs) was also critiqued by a participant as an inefficient method that encumbers the processes for reviewing research projects. Bureaucratization imposes an ever-increasing number of forms and ethics guidelines that actually hinder researchers’ ethical reflection on the issues at stake, leading the ethics review process to be perceived as purely bureaucratic in nature.

The ethical dimension and the ethical review of projects have become increasingly bureaucratized. (…) When I first started working (…) it was less bureaucratic, less strict then. I would say [there are now] tons of forms to fill out. Of course, we can’t do without it, it’s one of the ways of marking out ethics and ensuring that there are ethical considerations in research, but I wonder if it hasn’t become too bureaucratized, so that it’s become a kind of technical reflex to fill out these forms, and I don’t know if people really do ethical reflection as such anymore (participant 10). The fundamental structural issue, I would say, is the mismatch between the normative requirements and the real risks posed by the research, i.e., we have many, many requirements to meet; we have very long forms to fill out but the research projects we evaluate often pose few risks (participant 8). People [in vulnerable situations] were previously unable to participate because of overly strict research ethics rules that were to protect them, but in the end [these rules] did not protect them. There was a perverse effect, because in the end there was very little research done with these people and that’s why we have very few results, very little evidence [to support practices with these populations] so it didn’t improve the quality of services. (…) We all understand that we have to be careful with that, but when the research is not too risky, we say to ourselves that it would be good because for once a researcher who is interested in that population, because it is not a very popular population, it would be interesting to have results, but often we are blocked by the norms, and then we can’t accept [the project] (participant 2).

Moreover, as one participant noted, accessing ethics training can be a challenge.

There is no course on research ethics. […] Then, I find that it’s boring because you go through university and you come to do your research and you know how to do quantitative and qualitative research, but all the research ethics, where do you get this? I don’t really know (participant 13).

Yet, such training could provide relevant tools to resolve, to some extent, the ethical issues that commonly arise in research. That said, and as noted by many participants, many ethical issues in research are related to social injustices over which research actors have little influence.

Social Injustices

For many participants, notably researchers, the issues that concern social injustices are those related to power asymmetries, stigma, or issues of equity, diversity, and inclusion, i.e., social injustices related to people’s identities (Blais & Drolet, 2022). Participants reported experiencing or witnessing discrimination from peers, administration, or lab managers. Such oppression is sometimes intersectional and related to a person’s age, cultural background, gender, or social status.

I have my African colleague who was quite successful when he arrived but had a backlash from colleagues in the department. I think it’s unconscious, nobody is overtly racist. But I have a young person right now who is the same, who has the same success, who got exactly the same early career award and I don’t see the same backlash. He’s just as happy with what he’s doing. It’s normal, they’re young and they have a lot of success starting out. So, I think there is discrimination. Is it because he is African? Is it because he is black? I think it’s on a subconscious level (participant 16).

Social injustices were experienced or reported by many participants, and included issues related to difficulties in obtaining grants or disseminating research results in one’s native language (i.e., even when there is official bilingualism), or being considered credible and fundable in research when the researcher is a woman.

If you do international research, there are things you can’t talk about (…). It is really a barrier to research to not be able to (…) address this question [i.e. the question of inequalities between men and women]. Women’s inequality is going to be addressed [but not within the country where the research takes place as if this inequality exists elsewhere but not here]. There are a lot of women working on inequality issues, doing work and it’s funny because I was talking to a young woman who works at Cairo University and she said to me: “Listen, I saw what you had written, you’re right. I’m willing to work on this but guarantee me a position at your university with a ticket to go”. So yes, there are still many barriers [for women in research] (participant 16).

Because of the varied contextual characteristics that intervene in their occurrence, these social injustices are also related to distributive injustices, as discussed by many participants.

Distributive Injustices

Although there are several views of distributive justice, a classical definition, such as that of Aristotle (2012), describes distributive justice as consisting in distributing honours, wealth, and other social resources or benefits among the members of a community in proportion to their alleged merit. Justice, then, is about determining an equitable distribution of common goods. Contemporary theories of distributive justice are numerous and varied. Indeed, many authors (e.g., Fraser, 2011; Mills, 2017; Sen, 2011; Young, 2011) have, since Rawls (1971), proposed different visions of how social burdens and benefits should be shared within a community to ensure equal respect, fairness, and distribution. In our study, what emerges from participants’ narratives is a definite concern for this type of justice. Women researchers, francophone researchers, early career researchers and researchers belonging to racialized groups all discussed inequities in the distribution of research grants and awards, and the extra work they need to do to somehow prove their worth. These inequities are related to how granting agencies determine which projects will be funded.

These situations make me work 2–3 times harder to prove myself and to show people in power that I have a place as a woman in research (participant 12). Number one: it’s conservative thinking. The older ones control what comes in. So, the younger people have to adapt or they don’t get funded (participant 14).

Whether it is discrimination against stigmatized or marginalized populations or interest in certain hot topics, granting agencies judge research projects according to criteria that those participants sometimes considered questionable. Faced with difficulties in obtaining funding for their projects, researchers use several strategies – some of which are unethical – to cope with these situations.

Sometimes there are subjects that everyone goes to, such as nanotechnology (…), artificial intelligence or (…) the therapeutic use of cannabis, which are very fashionable, and this is sometimes to the detriment of other research that is just as relevant, but which is (…), less sexy, less in the spirit of the time. (…) Sometimes this can lead to inequities in the funding of certain research sectors (participant 9). When we use our funds, we get them given to us, we pretty much say what we think we’re going to do with them, but things change… So, when these things change, sometimes it’s an ethical decision, but by force of circumstances I’m obliged to change the project a little bit (…). Is it ethical to make these changes or should I just let the money go because I couldn’t use it the way I said I would? (participant 3).

Moreover, these distributive injustices are linked not only to social injustices, but also to epistemic injustices. Indeed, the way in which research honours and grants are distributed within the academic community depends on the epistemic authority of researchers, which seems to vary notably according to their language of use, their age or their gender, but also according to the research design used (inductive versus deductive), whether they use animals in research, and whether they conduct activist research.

Epistemic Injustices

The philosopher Fricker (2007) conceptualized the notions of epistemic justice and injustice. Epistemic injustice refers to a form of social inequality that manifests itself in the access to, recognition of, and production of knowledge, as well as in the various forms of ignorance that arise from it (Godrie & Dos Santos, 2017). Addressing epistemic injustice requires acknowledging the wrongs suffered by certain groups of socially stigmatized individuals who have been excluded from knowledge production, an exclusion that limits their ability to interpret, understand, be heard, and account for their experiences. In this study, epistemic injustices were experienced or reported by some participants, notably in relation to difficulties in obtaining grants or disseminating research results in one’s native language (i.e., even when there is official bilingualism), or in being considered credible and fundable in research when a researcher is a woman or an early career researcher.

I have never sent a grant application to the federal government in English. I have always done it in French, even though I know that when you receive the review, you can see that reviewers didn’t understand anything because they are English-speaking. I didn’t want to get in the boat. It’s not my job to translate, because let’s be honest, I’m not as good in English as I am in French. So, I do them in my first language, which is the language I’m most used to. Then, technically at the administrative level, they are supposed to be able to do it, but they are not good in French. (…) Then, it’s a very big Canadian ethical issue, because basically there are technically two official languages, but Canada is not a bilingual country, it’s a country with two languages, either one or the other. (…) So I was not funded (participant 14).

Researchers who use inductive (or qualitative) methods observed that their projects are sometimes less well reviewed or understood, while research that adopts a hypothetical-deductive (or quantitative) or mixed methods design is better perceived, considered more credible and therefore more easily funded. Of course, regardless of whether a research project adopts an inductive, deductive or mixed-methods scientific design, or whether it deals with qualitative or quantitative data, it must respect a set of scientific criteria. A research project should achieve its objectives by using proven methods that, in the case of inductive research, are credible, reliable, and transferable or, in the case of deductive research, generalizable, objective, representative, and valid (Drolet & Ruest, accepted ). Participants discussing these issues noted that researchers who adopt a qualitative design or those who question the relevance of animal experimentation or are not militant have sometimes been unfairly devalued in their epistemic authority.

There is a mini war between quantitative versus qualitative methods, which I think is silly because science is a method. If you apply the method well, it doesn’t matter what the field is, it’s done well and it’s perfect ” (participant 14). There is also the issue of the place of animals in our lives, because for me, ethics is human ethics, but also animal ethics. Then, there is a great evolution in society on the role of the animal… with the new law that came out in Quebec on the fact that animals are sensitive beings. Then, with the rise of the vegan movement, [we must ask ourselves]: “Do animals still have a place in research?” That’s a big question and it also means that there are practices that need to evolve, but sometimes there’s a disconnection between what’s expected by research ethics boards versus what’s expected in the field (participant 15). In research today, we have more and more research that is militant from an ideological point of view. And so, we have researchers, because they defend values that seem important to them, we’ll talk for example about the fight for equality and social justice. They have pressure to defend a form of moral truth and have the impression that everyone thinks like them or should do so, because they are defending a moral truth. This is something that we see more and more, namely the lack of distance between ideology and science (participant 8).

The combination or intersectionality of these inequities is experienced in the highly competitive and individualistic context of research, which seems to be characterized by a lack of ethical support and guidance; it therefore provides the perfect recipe for researchers to experience ethical distress.

Ethical Distress

The concept of “ethical distress” refers to situations in which people know what they should do to act ethically, but encounter barriers, generally of an organizational or systemic nature, that limit their power to act according to their moral or ethical values (Drolet & Ruest, 2021; Jameton, 1984; Swisher et al., 2005). People then run the risk of finding themselves in a situation where they do not act as their ethical conscience dictates, which in the long term can lead to exhaustion and distress. The examples reported by participants in this study point to the fact that researchers in particular may be experiencing significant ethical distress. This distress occurs in a context of extreme competition and constant injunctions to perform, where administrative demands are increasingly numerous and complex, while researchers paradoxically lack the time to accomplish all their tasks and responsibilities. Added to these demands are a lack of resources (human, ethical, and financial), a lack of support and recognition, and interpersonal conflicts.

We are in an environment, an elite one, you are part of it, you know what it is: “publish or perish” is the motto. Grants, there is a high level of performance required, to do a lot, to publish, to supervise students, to supervise them well, so yes, it is clear that we are in an environment that is conducive to distress. (…). Overwork, definitely, can lead to distress and eventually to exhaustion. When you know that you should take the time to read the projects before sharing them, but you don’t have the time to do that because you have eight that came in the same day, and then you have others waiting… Then someone rings a bell and says: “ah but there, the protocol is a bit incomplete”. Oh yes, look at that, you’re right. You make up for it, but at the same time it’s a bit because we’re in a hurry, we don’t necessarily have the resources or are able to take the time to do things well from the start, we have to make up for it later. So yes, it can cause distress (participant 9). My organization wanted me to apply in English, and I said no, and everyone in the administration wanted me to apply in English, and I always said no. Some people said: “Listen, I give you the choice”, then some people said: “Listen, I agree with you, but if you’re not [submitting] in English, you won’t be funded”. Then the fact that I am young too, because very often they will look at the CV, they will not look at the project: “ah, his CV is not impressive, we will not finance him”. This is complete nonsense. The person is capable of doing the project, the project is fabulous: we fund the project. So, that happened, organizational barriers: that happened a lot. I was not eligible for Quebec research funds (…). I had big organizational barriers unfortunately (participant 14). At the time of my promotion, some colleagues were not happy with the type of research I was conducting. I learned – you learn this over time when you become friends with people after you enter the university – that someone was against me. He had another candidate in mind, and he was angry about the selection. I was under pressure for the first three years until my contract was renewed. I almost quit at one point, but another colleague told me, “No, stay, nothing will happen”. Nothing happened, but these issues kept me awake at night (participant 16).

This difficult context for many researchers affects not only the conduct of their own research, but also their participation in research. We faced this problem in our study, despite the use of multiple recruitment methods, including more than 200 emails – of which 191 were individual solicitations – sent to potential participants by the two research assistants. REB members and organizations overseeing or supporting research (n = 17) were also approached to see if some of their employees would consider participating. While it was relatively easy to recruit REB members and research ethics experts, our team received a high number of non-responses to emails (n = 175) and some refusals (n = 5), especially by researchers. The reasons given by those who replied were threefold: (a) fear of being easily identified should they take part in the research, (b) being overloaded and lacking time, and (c) the intrusive aspect of certain questions (i.e., “Have you experienced a burnout episode? If so, have you been followed up medically or psychologically?”). In light of these difficulties and concerns, some questions in the socio-demographic questionnaire were removed or modified. Talking about burnout in research remains a taboo for many researchers, which paradoxically can only contribute to the unresolved problem of unhealthy research environments.

Returning to the Research Question and Objective

The question that prompted this research was: What are the ethical issues in research? The purpose of the study was to describe these issues from the perspective of researchers (from different disciplines), research ethics board (REB) members, and research ethics experts. The previous section provided a detailed portrait of the ethical issues experienced by different research stakeholders: these issues are numerous, diverse and were recounted by a range of stakeholders.

The results of the study are generally consistent with the literature. For example, as in our study, the literature discusses the lack of research integrity on the part of some researchers (Al-Hidabi et al., 2018; Swazey et al., 1993), the numerous conflicts of interest experienced in research (Williams-Jones et al., 2013), the issues of recruiting and obtaining the free and informed consent of research participants (Provencher et al., 2014; Keogh & Daly, 2009), the sometimes difficult relations between researchers and REBs (Drolet & Girard, 2020), the epistemological issues experienced in research (Drolet & Ruest, accepted; Sieber, 2004), as well as the harmful academic context in which researchers work, insofar as it is linked to a culture of performance and work overload in a context of accountability (Berg & Seeber, 2016; FQPPU, 2019) that is conducive to ethical distress and even burnout.

While the results of the study are generally in line with those of previous publications on the subject, our findings also bring new elements to the discussion and complement those already documented. In particular, our results highlight the role of systemic injustices – be they social, distributive, or epistemic – within the environments in which research is carried out, at least in Canada. To summarize, the results of our study point to the fact that relationships between researchers and research participants are still likely to raise worrying ethical issues, despite widely accepted research ethics norms and institutionalized review processes. Further, the context in which research is carried out is not only conducive to breaches of ethical norms and instances of misbehaviour or misconduct, but is also likely to be significantly detrimental to the health and well-being of researchers and research assistants. Another element that our research highlighted is the instrumentalization and even exploitation of students and research assistants, which is another important and worrying social injustice given the inevitable power imbalances between students and researchers.

Moreover, in a context in which ethical issues are often discussed from a micro perspective, our study helps shed light on both the micro- and macro-level ethical dimensions of research (Bronfenbrenner, 1979 ; Glaser 1994 ). However, given that ethical issues in research are not only diverse, but also and above all complex, a broader perspective that encompasses the interplay between the micro and macro dimensions can enable a better understanding of these issues and thereby support the identification of the multiple factors that may be at their origin. Triangulating the perspectives of researchers with those of REB members and research ethics experts enabled us to bring these elements to light, and thus to step back from and critique the way that research is currently conducted. To this end, attention to socio-political elements such as the performance culture in academia or how research funds are distributed, and according to what explicit and implicit criteria, can contribute to identifying the sources of the ethical issues described above.

A contemporary culture characterized by social acceleration

The German sociologist and philosopher Rosa (2010) argues that late modernity – the period from the 1980s to today – is characterized by a phenomenon of social acceleration that causes various forms of alienation in our relationship to time, space, actions, things, others and ourselves. Rosa distinguishes three types of acceleration: technical acceleration, the acceleration of social change and the acceleration of the rhythm of life. According to Rosa, social acceleration is the central problem of late modernity, in that the invisible social norm of doing more and faster, supposedly to save time, operates unchallenged at all levels of individual, collective, organizational and social life. Although we all, researchers and non-researchers alike, perceive this unspoken pressure to be ever more productive, the process of social acceleration as a new invisible social norm is our blind spot, a kind of tyrant over which we have little control. This conceptualization of contemporary culture can help us understand the context in which research, like other professional practices, is conducted. To this end, Berg and Seeber (2016) invite faculty researchers to slow down in order to reflect better and, in the process, take care of their health and of their relationships with colleagues and students. Many women professors encourage their fellow researchers, especially young women researchers, to learn to “say No” in order to protect their mental and physical health and to remain in their academic careers (Allaire & Deschenaux, 2022). These authors also remind us of the relevance of Kahneman’s (2012) work, which shows that analytical, thorough, and logical thinking takes time. Conversely, thinking quickly exposes humans to cognitive and implicit biases that lead to errors in reasoning (e.g., in the analysis of one’s own research data or in the evaluation of grant applications or student curricula vitae). The phenomenon of social acceleration, which pushes researchers to think ever faster, is thus likely to produce unethical, poor-quality science that can ultimately harm humankind. In sum, Rosa’s invitation to contemporary critical theorists to take the problem of social acceleration seriously is particularly insightful for understanding the ethical issues of research. It provides a lens through which to view the toxic context in which research is conducted today, a context that was described by the participants in our study.

As Clark and Sousa (2022) note, it is important that criteria other than the volume of researchers’ contributions be valued in research, notably quality. Ultimately, what matters is the value of the knowledge produced and its influence on the concrete lives of humans and other living beings, not the number of publications. An interesting articulation of this view in research governance can be seen in a change of practice by Australia’s national health research funder: researchers may now list on their curriculum vitae only their top ten publications from the past ten years, rather than all of their publications, so that the quality of contributions is evaluated rather than their quantity. To create environments conducive to quality research, it is important to challenge the phenomenon of social acceleration, which insidiously imposes a quantitative normativity that is both alienating and detrimental to the quality and the ethical conduct of research. Based on our experience, we observe that the social norm of acceleration actively disfavours the conduct of empirical research on ethics in research: researchers are so busy that it is almost impossible for them to find time to participate in such studies. Further, operating in highly competitive environments while trying to respect the values and ethical principles of research creates ethical paradoxes for members of the research community. According to Malherbe (1999), an ethical paradox is a situation in which an individual faces contradictory injunctions (e.g., do more, faster, and better). Eventually, ethical paradoxes lead individuals to distress and burnout, or even to ethical failures (i.e., misbehaviour or misconduct), in the face of the impossibility of responding to contradictory injunctions.

Strengths and Limitations of the study

The triangulation of the perceptions and experiences of different actors involved in research is a strength of our study. While there are many studies on the experiences of researchers, members of REBs and experts in research ethics are rarely given space to discuss their views of the ethical issues in research. Giving each of these stakeholders a voice and comparing their points of view shed a different and complementary light on the ethical issues that arise in research. That said, it would have been helpful to give more space to the issues experienced by students and research assistants: as one participant noted, relationships between researchers and research assistants are at times very worrying, and much work remains to be done to eliminate the exploitative situations that seem to prevail in certain research settings. In addition, no Indigenous or gender-diverse researchers participated in the study. Given the ethical issues and systemic injustices that many people from these groups face in Canada (Drolet & Goulet, 2018; Nicole & Drolet, in press), research that gives voice to these researchers would be relevant and would contribute to knowledge development and, hopefully, to change in research culture.

Further, although most of the ethical issues discussed in this article may be transferable to the realities experienced by researchers in other countries, the epistemic injustice reported by Francophone researchers who persist in doing research in French in Canada – an officially bilingual country that is, in practice, predominantly English-speaking – is likely specific to the Canadian context. In addition, as mentioned above, recruitment proved exceedingly difficult, particularly among researchers. Despite this difficulty, we reached data saturation for all but two themes, namely the exploitation of students and the ethical issues of research involving animals. Further empirical research is therefore needed to improve our understanding of these specific issues, as they may diverge to some extent from those documented here and will likely vary across countries and academic research contexts.

Conclusions

This study, which gave voice to researchers, REB members, and research ethics experts, reveals that the ethical issues in research are related to several problematic elements, such as power imbalances and authority relations. Researchers and research assistants are subject to external pressures that give rise to integrity problems, among other ethical issues. Moreover, the current context of social acceleration shapes the performance indicators valued by academic institutions and has led their members to face several ethical issues, including social, distributive, and epistemic injustices, at different stages of the research process. In this study, ten categories of ethical issues were identified, described and illustrated: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. The triangulation of the perspectives of the different actors involved in the research process (i.e., researchers from different disciplines, REB members, research ethics experts, and one research assistant) made it possible to lift the veil on some of these ethical issues and to identify additional ones, especially the systemic injustices experienced in research. To our knowledge, this is the first time that these injustices (social, distributive, and epistemic) have been clearly identified in this context.

Finally, this study brought to the fore several problematic elements that must be addressed if the research community is to develop and implement the solutions needed to resolve the diverse and transversal ethical issues that arise in research institutions. A good starting point is to reject the corollary norms of “publish or perish” and “do more, faster, and better” and to replace them with “publish quality rather than quantity”, which necessarily entails “do less, slower, and better”. It is also important to pay more attention to the systemic injustices within which researchers work, because these have the potential to significantly harm not only the academic careers of many researchers – including women researchers, early career researchers, and those belonging to racialized groups – but also the health and well-being of students and research participants and the respect they are owed.

Acknowledgements

The team warmly thanks the participants who took part in the research and who made this study possible. Marie-Josée Drolet thanks the five research assistants who participated in the data collection and analysis: Julie-Claude Leblanc, Élie Beauchemin, Pénéloppe Bernier, Louis-Pierre Côté, and Eugénie Rose-Derouin, all students at the Université du Québec à Trois-Rivières (UQTR), two of whom were active in the writing of this article. MJ Drolet and Bryn Williams-Jones also acknowledge the financial contribution of the Social Sciences and Humanities Research Council of Canada (SSHRC), which supported this research through a grant. We would also like to thank the reviewers of this article who helped us improve it, especially by clarifying and refining our ideas.

Competing Interests and Funding

As noted in the Acknowledgements, this research was supported financially by the Social Sciences and Humanities Research Council of Canada (SSHRC).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Al-Hidabi, Abdulmalek, M. D., & The, P. L. (2018). Multiple Publications: The Main Reason for the Retraction of Papers in Computer Science. In K. Arai, S. Kapoor, & R. Bhatia (Eds.), Future of Information and Communication Conference (FICC): Advances in Information and Communication, Advances in Intelligent Systems and Computing (AISC), Springer, vol. 886, pp. 511–526.
  • Allaire, S., & Deschenaux, F. (2022). Récits de professeurs d’université à mi-carrière. Si c’était à refaire… Presses de l’Université du Québec.
  • Aristotle (2012). Aristotle’s Nicomachean Ethics. The University of Chicago Press.
  • Bahn, S. (2012). Keeping Academic Field Researchers Safe: Ethical Safeguards. Journal of Academic Ethics, 10, 83–91. doi: 10.1007/s10805-012-9159-2
  • Balk, D. E. (1995). Bereavement Research Using Control Groups: Ethical Obligations and Questions. Death Studies, 19, 123–138. doi: 10.1080/07481189508252720
  • Beauchemin, É., Côté, L. P., Drolet, M. J., & Williams-Jones, B. (2021). Conceptualizing Ethical Issues in the Conduct of Research: Results from a Critical and Systematic Literature Review. Journal of Academic Ethics, Early Online. doi: 10.1007/s10805-021-09411-7
  • Berg, M., & Seeber, B. K. (2016). The Slow Professor. University of Toronto Press.
  • Birchley, G., Huxtable, R., Murtagh, M., Meulen, R. T., Flach, P., & Gooberman-Hill, R. (2017). Smart homes, private homes? An empirical study of technology researchers’ perceptions of ethical issues in developing smart-home health technologies. BMC Medical Ethics, 18(23), 1–13. doi: 10.1186/s12910-017-0183-z
  • Blais, J., & Drolet, M. J. (2022). Les injustices sociales vécues en camp de réfugiés: les comprendre pour mieux intervenir auprès de personnes ayant séjourné dans un camp de réfugiés. Recueil annuel belge d’ergothérapie, 14, 37–48.
  • Bogdan, R. C., & Biklen, S. K. (2006). Qualitative research in education: An introduction to theory and methods. Allyn & Bacon.
  • Bouffard, C. (2000). Le développement des pratiques de la génétique médicale et la construction des normes bioéthiques. Anthropologie et Sociétés, 24(2), 73–90. doi: 10.7202/015650ar
  • Bronfenbrenner, U. (1979). The Ecology of Human Development: Experiments by Nature and Design. Harvard University Press.
  • Bruhn, J. G., Zajac, G., Al-Kazemi, A. A., & Prescott, L. D. (2002). Moral positions and academic conduct: Parameters of tolerance for ethics failure. Journal of Higher Education, 73(4), 461–493. doi: 10.1353/jhe.2002.0033
  • Clark, A., & Sousa (2022). It’s time to end Canada’s obsession with research quantity. University Affairs/Affaires universitaires, February 14th. https://www.universityaffairs.ca/career-advice/effective-successfull-happy-academic/its-time-to-end-canadas-obsession-with-research-quantity/
  • Colnerud, G. (2015). Ethical dilemmas in research in relation to ethical review: An empirical study. Research Ethics, 10(4), 238–253. doi: 10.1177/1747016114552339
  • Davison, J. (2004). Dilemmas in Research: Issues of Vulnerability and Disempowerment for the Social Workers/Researcher. Journal of Social Work Practice, 18(3), 379–393. doi: 10.1080/0265053042000314447
  • DePoy, E., & Gitlin, L. N. (2010). Introduction to Research. Elsevier Mosby.
  • Drolet, M. J., & Goulet, M. (2018). Travailler avec des patients autochtones du Canada? Perceptions d’ergothérapeutes du Québec des enjeux éthiques de cette pratique. Recueil annuel belge francophone d’ergothérapie, 10, 25–56.
  • Drolet, M. J., & Girard, K. (2020). Les enjeux éthiques de la recherche en ergothérapie: un portrait préoccupant. Revue canadienne de bioéthique, 3(3), 21–40. doi: 10.7202/1073779ar
  • Drolet, M. J., Girard, K., & Gaudet, R. (2020). Les enjeux éthiques de l’enseignement en ergothérapie: des injustices au sein des départements universitaires. Revue canadienne de bioéthique, 3(1), 22–36.
  • Drolet, M. J., & Maclure, J. (2016). Les enjeux éthiques de la pratique de l’ergothérapie: perceptions d’ergothérapeutes. Revue Approches inductives, 3(2), 166–196. doi: 10.7202/1037918ar
  • Drolet, M. J., Pinard, C., & Gaudet, R. (2017). Les enjeux éthiques de la pratique privée: des ergothérapeutes du Québec lancent un cri d’alarme. Ethica – Revue interdisciplinaire de recherche en éthique, 21(2), 173–209.
  • Drolet, M. J., & Ruest, M. (2021). De l’éthique à l’ergothérapie: un cadre théorique et une méthode pour soutenir la pratique professionnelle. Presses de l’Université du Québec.
  • Drolet, M. J., & Ruest, M. (accepted). Quels sont les enjeux éthiques soulevés par la recherche scientifique? In M. Lalancette & J. Luckerhoff (dir.), Initiation au travail intellectuel et à la recherche. Presses de l’Université du Québec, 18 p.
  • Drolet, M. J., Sauvageau, A., Baril, N., & Gaudet, R. (2019). Les enjeux éthiques de la formation clinique en ergothérapie. Revue Approches inductives, 6(1), 148–179. doi: 10.7202/1060048ar
  • Fédération québécoise des professeures et des professeurs d’université (FQPPU) (2019). Enquête nationale sur la surcharge administrative du corps professoral universitaire québécois. Principaux résultats et pistes d’action. FQPPU.
  • Fortin, M. H. (2010). Fondements et étapes du processus de recherche. Méthodes quantitatives et qualitatives. Chenelière éducation.
  • Fraser, D. M. (1997). Ethical dilemmas and practical problems for the practitioner researcher. Educational Action Research, 5(1), 161–171. doi: 10.1080/09650799700200014
  • Fraser, N. (2011). Qu’est-ce que la justice sociale? Reconnaissance et redistribution. La Découverte.
  • Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.
  • Giorgi, A., et al. (1997). De la méthode phénoménologique utilisée comme mode de recherche qualitative en sciences humaines: théories, pratique et évaluation. In J. Poupart, L. H. Groulx, J. P. Deslauriers, et al. (Eds.), La recherche qualitative: enjeux épistémologiques et méthodologiques (pp. 341–364). Gaëtan Morin.
  • Giorgini, V., Mecca, J. T., Gibson, C., Medeiros, K., Mumford, M. D., Connelly, S., & Devenport, L. D. (2016). Researcher Perceptions of Ethical Guidelines and Codes of Conduct. Accountability in Research, 22(3), 123–138. doi: 10.1080/08989621.2014.955607
  • Glaser, J. W. (1994). Three realms of ethics: Individual, institutional, societal. Theoretical model and case studies. Kansas City: Sheed & Ward.
  • Godrie, B., & Dos Santos, M. (2017). Présentation: inégalités sociales, production des savoirs et de l’ignorance. Sociologie et sociétés, 49(1), 7. doi: 10.7202/1042804ar
  • Hammell, K. W., Carpenter, C., & Dyck, I. (2000). Using Qualitative Research: A Practical Introduction for Occupational and Physical Therapists. Churchill Livingstone.
  • Henderson, M., Johnson, N. F., & Auld, G. (2013). Silences of ethical practice: dilemmas for researchers using social media. Educational Research and Evaluation, 19(6), 546–560. doi: 10.1080/13803611.2013.805656
  • Husserl, E. (1970). The Crisis of European Sciences and Transcendental Phenomenology. Northwestern University Press.
  • Husserl, E. (1999). The train of thoughts in the lectures. In E. C. Polifroni & M. Welch (Eds.), Perspectives on Philosophy of Science in Nursing. Lippincott.
  • Hunt, S. D., Chonko, L. B., & Wilcox, J. B. (1984). Ethical problems of marketing researchers. Journal of Marketing Research, 21, 309–324. doi: 10.1177/002224378402100308
  • Hunt, M. R., & Carnevale, F. A. (2011). Moral experience: A framework for bioethics research. Journal of Medical Ethics, 37(11), 658–662. doi: 10.1136/jme.2010.039008
  • Jameton, A. (1984). Nursing Practice: The Ethical Issues. Prentice-Hall.
  • Jarvis, K. (2017). Dilemmas in International Research and the Value of Practical Wisdom. Developing World Bioethics, 17(1), 50–58. doi: 10.1111/dewb.12121
  • Kahneman, D. (2012). Système 1, système 2: les deux vitesses de la pensée. Flammarion.
  • Keogh, B., & Daly, L. (2009). The ethics of conducting research with mental health service users. British Journal of Nursing, 18(5), 277–281. doi: 10.12968/bjon.2009.18.5.40539
  • Lierville, A. L., Grou, C., & Pelletier, J. F. (2015). Enjeux éthiques potentiels liés aux partenariats patients en psychiatrie: État de situation à l’Institut universitaire en santé mentale de Montréal. Santé mentale au Québec, 40(1), 119–134. doi: 10.7202/1032386ar
  • Lynöe, N., Sandlund, M., & Jacobsson, L. (1999). Research ethics committees: A comparative study of assessment of ethical dilemmas. Scandinavian Journal of Public Health, 27(2), 152–159. doi: 10.1177/14034948990270020401
  • Malherbe, J. F. (1999). Compromis, dilemmes et paradoxes en éthique clinique. Éditions Fides.
  • McGinn, R. (2013). Discernment and denial: Nanotechnology researchers’ recognition of ethical responsibilities related to their work. NanoEthics, 7, 93–105. doi: 10.1007/s11569-013-0174-6
  • Mills, C. W. (2017). Black Rights/White Wrongs: The Critique of Racial Liberalism. Oxford University Press.
  • Miyazaki, A. D., & Taylor, K. A. (2008). Researcher interaction biases and business ethics research: Respondent reactions to researcher characteristics. Journal of Business Ethics, 81(4), 779–795. doi: 10.1007/s10551-007-9547-5
  • Mondain, N., & Bologo, E. (2009). L’intentionnalité du chercheur dans ses pratiques de production des connaissances: les enjeux soulevés par la construction des données en démographie et santé en Afrique. Cahiers de recherche sociologique, 48, 175–204. doi: 10.7202/039772ar
  • Nicole, M., & Drolet, M. J. (in press). Fitting transphobia and cisgenderism in occupational therapy. Occupational Therapy Now.
  • Pope, K. S., & Vetter, V. A. (1992). Ethical dilemmas encountered by members of the American Psychological Association: A national survey. The American Psychologist, 47(3), 397–411. doi: 10.1037/0003-066X.47.3.397
  • Provencher, V., Mortenson, W. B., Tanguay-Garneau, L., Bélanger, K., & Dagenais, M. (2014). Challenges and strategies pertaining to recruitment and retention of frail elderly in research studies: A systematic review. Archives of Gerontology and Geriatrics, 59(1), 18–24. doi: 10.1016/j.archger.2014.03.006
  • Rawls, J. (1971). A Theory of Justice. Harvard University Press.
  • Resnik, D. B., & Elliott, K. C. (2016). The Ethical Challenges of Socially Responsible Science. Accountability in Research, 23(1), 31–46. doi: 10.1080/08989621.2014.1002608
  • Rosa, H. (2010). Accélération et aliénation. Vers une théorie critique de la modernité tardive. La Découverte.
  • Sen, A. K. (2011). The Idea of Justice. The Belknap Press of Harvard University Press.
  • Sen, A. K. (1995). Inequality Reexamined. Oxford University Press.
  • Sieber, J. E. (2004). Empirical Research on Research Ethics. Ethics & Behavior, 14(4), 397–412. doi: 10.1207/s15327019eb1404_9
  • Sigmon, S. T. (1995). Ethical practices and beliefs of psychopathology researchers. Ethics & Behavior, 5(4), 295–309. doi: 10.1207/s15327019eb0504_1
  • Swazey, J. P., Anderson, M. S., & Lewis, K. S. (1993). Ethical Problems in Academic Research. American Scientist, 81(6), 542–553.
  • Swisher, L. L., Arsalanian, L. E., & Davis, C. M. (2005). The realm-individual-process-situation (RIPS) model of ethical decision-making. HPA Resource, 5(3), 3–8.
  • Tri-Council Policy Statement (TCPS2) (2018). Ethical Conduct for Research Involving Humans. Government of Canada, Secretariat on Responsible Conduct of Research. https://ethics.gc.ca/eng/documents/tcps2-2018-en-interactive-final.pdf
  • Thomas, S. P., & Pollio, H. R. (2002). Listening to Patients: A Phenomenological Approach to Nursing Research and Practice. Springer Publishing Company.
  • Wiegand, D. L., & Funk, M. (2012). Consequences of clinical situations that cause critical care nurses to experience moral distress. Nursing Ethics, 19(4), 479–487. doi: 10.1177/0969733011429342
  • Williams-Jones, B., Potvin, M. J., Mathieu, G., & Smith, E. (2013). Barriers to research on research ethics review and conflicts of interest. IRB: Ethics & Human Research, 35(5), 14–20.
  • Young, I. M. (2011). Justice and the Politics of Difference. Princeton University Press.


Research Article

First do no harm: An exploration of researchers’ ethics of conduct in Big Data behavioral studies

Maddalena Favaretto, Eva De Clercq, Jens Gaab, Bernice Simone Elger

Affiliations: Institute for Biomedical Ethics, University of Basel, Basel, Switzerland; Division of Clinical Psychology and Psychotherapy, Faculty of Psychology, University of Basel, Basel, Switzerland

  • Published: November 5, 2020
  • https://doi.org/10.1371/journal.pone.0241865


Research ethics has traditionally been guided by well-established documents such as the Belmont Report and the Declaration of Helsinki. At the same time, the introduction of Big Data methods, which are having a great impact on behavioral research, is raising complex ethical issues that make the protection of research participants an increasingly difficult challenge. Drawing on 39 semi-structured interviews with academic scholars in Switzerland and the United States, our research explores the code of ethics and research practices of academic scholars involved in Big Data studies in the fields of psychology and sociology, in order to understand whether the principles set out in the Belmont Report are still considered relevant in Big Data research. Our study shows that scholars generally find traditional principles to be a suitable guide for performing ethical data research but, at the same time, they recognized and elaborated on the challenges embedded in their practical application. In addition, because new actors, such as data holders and owners, are increasingly involved in scholarly research, it was also questioned whether the responsibility to protect research participants should fall solely on investigators. In order to appropriately address ethical issues in Big Data research projects, education in ethics as well as exchange and dialogue between research teams and scholars from different disciplines should be enhanced. In addition, models of consultancy and shared responsibility between investigators, data owners and review boards should be implemented to ensure better protection of research participants.

Citation: Favaretto M, De Clercq E, Gaab J, Elger BS (2020) First do no harm: An exploration of researchers’ ethics of conduct in Big Data behavioral studies. PLoS ONE 15(11): e0241865. https://doi.org/10.1371/journal.pone.0241865

Editor: Daniel Jeremiah Hurst, Rowan University School of Osteopathic Medicine, UNITED STATES

Received: July 22, 2020; Accepted: October 21, 2020; Published: November 5, 2020

Copyright: © 2020 Favaretto et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The raw data and the transcripts related to the project cannot be openly released due to ethical constraints (such as easy re-identification of the participants and the sensitive nature of parts of the interviews). The main points of contact for fielding data access requests for this manuscript are: the Head of the Institute for Biomedical Ethics (Bernice Elger: [email protected] ), the corresponding author (Maddalena Favaretto: [email protected] ), and Anne-Christine Loschnigg ( [email protected] ). Data sharing is contingent on the data being handled appropriately by the data requester and in accordance with all applicable local requirements. Upon request, a data sharing agreement will be stipulated between the Institute for Biomedical Ethics and the one requesting the data that will state that: 1) The shared data must be deleted by the end of 2023 as stipulated in the recruitment email sent to the study participants designed in accordance to the project proposal of the NRP 75 sent to the Ethics Committee northwest/central Switzerland (EKNZ); 2) The people requesting the data agree to ensure its confidentiality, they should not attempt to re-identify the participants and the data should not be shared with any further third stakeholder not involved in the data sharing agreement signed between the Institute for Biomedical Ethics and those requesting the data; 3) The data will be shared only after the Institute for Biomedical Ethics has received specific written consent for data sharing from the study participants.

Funding: The funding for this study was provided by the Swiss National Science Foundation in the framework of the National Research Program “Big Data”, NRP 75 (Grant-No: 407540_167211, recipient: Prof. Bernice Simone Elger). We confirm that the Swiss National Science Foundation had no involvement in the study design, collection, analysis, and interpretation of data, the writing of the manuscript and the decision to submit the paper for publication.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Big Data methods are having a great impact on the behavioral sciences [ 1 – 3 ], but they challenge the traditional interpretation and validity of research principles in psychology and sociology by raising new and unpredictable ethical concerns. Traditionally, research ethics has been guided by well-established reports and declarations such as the Belmont Report and the Declaration of Helsinki [ 4 – 6 ]. At the core of these documents are three fundamental principles – respect for persons, beneficence, and justice – and their related interpretations and practices, such as the acknowledgment of participants’ autonomous participation and the need to obtain informed consent, the minimization of harm, risk-benefit assessment, fairness in the distribution and dissemination of research outcomes, and fair participant selection (e.g., to avoid placing additional burdens on vulnerable populations) [ 7 ].

As data stemming from human interactions becomes more and more available to scholars, thanks to a) the increased distribution of technological devices, b) the growing use of digital services, and c) the implementation of new digital technologies [ 8 , 9 ], researchers and institutional bodies are confronted with novel ethical questions. These include the harm that might be caused by linking publicly available datasets on research participants [ 10 ], the level of privacy users expect on digital platforms such as social media [ 11 ], the level of protection that investigators should ensure for the anonymity of their participants in research using sensing devices and tracking technologies [ 12 ], and the role of individuals in consenting to participate in large-scale data studies [ 13 ].

Consent is one of the most challenged practices in data research. Subjects are often unaware that their data are being collected and analyzed, and they lack appropriate control over their data, which deprives them of the possibility to withdraw from a study and thus of autonomous participation [ 14 , 15 ]. With regard to the principle of beneficence, Big Data raises issues concerning the appropriate risk-benefit ratio for participants, as it becomes more difficult for researchers to anticipate unintended harmful consequences [ 8 ]. For example, it is increasingly complicated to ensure the anonymity of participants, as risks of re-identification abound in Big Data practices [ 12 ]. Finally, interventions and knowledge developed from Big Data research might benefit only part of the population, thus creating issues of justice and fairness [ 10 ]; this is mainly due to the deepening of the digital divide between people who have access to digital resources and those who do not, along a significant number of demographic variables such as income, ethnicity, age, skills, geographical location and gender [ 10 , 16 ].

There is evidence that researchers and regulatory bodies are struggling to appropriately address these novel ethical questions raised by Big Data. For instance, a group of researchers based at Queen Mary University of London in the UK applied a geographic profiling model to a series of publicly available datasets in order to reveal the identity of the famous British artist Banksy [ 17 ]. The study was criticized by scholars for being disrespectful of the privacy of a private citizen and their family and for deliberately violating the artist’s right to, and preference for, remaining anonymous [ 18 ]. Another example is the now infamous Emotional Contagion study. Using dedicated software, a research team manipulated the News Feeds of 689,003 Facebook users in order to investigate how “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness” [ 19 ]. Ethics scholars and the public criticized this study because it was performed without obtaining appropriate consent from Facebook users and could have caused psychological harm by showing participants only negative feeds on their homepage [ 20 , 21 ].

Given these substantial challenges, it is legitimate to ask whether the principles set out in the Belmont Report are still relevant for digital research practices. Scholars advocate for the construction of flexible guidelines and for the need to revise, reshape and update the guiding principles of research ethics in order to overcome the challenges raised by data research and provide adequate assistance to investigators [ 22 – 24 ].

As the ethics governance of Big Data research is currently under debate, researchers’ own ethical attitudes significantly influence how ethical issues are presently dealt with. Because researchers are experts on the technical details of their own research, it is also useful for research ethicists and members of ethics committees and Institutional Review Boards (IRBs) to be knowledgeable about these attitudes. Therefore, this paper explores the code of ethics and research practices of behavioral scientists involved in Big Data studies in order to investigate perceived strategies for promoting the ethical and responsible conduct of Big Data research. We conducted interviews with researchers in the fields of sociology and psychology from eminent universities in both Switzerland and the United States, in which we asked them to share details about the strategies they develop to protect research participants in their projects, the ethical principles they apply to their projects, their opinions on how Big Data research should ideally be conducted, and the ethical challenges they have faced in their research. The present study contributes to the existing literature on the code of conduct of researchers involved in digital research in different countries and on the value of traditional ethical principles [ 14 , 22 , 23 ], with the aim of informing the discussion around the construction of harmonized and applicable principles for Big Data studies. This manuscript investigates the following research questions: 1) which ethical principles can still be considered relevant for Big Data research in the behavioral sciences; 2) what challenges are Big Data methods posing to traditional ethical principles; and 3) what are investigators’ responsibilities and roles in reflecting upon strategies to protect research participants.

Material and methods

This study is part of a larger research project that investigated the ethical and regulatory challenges of Big Data research. We decided to focus on the behavioral sciences, specifically psychology and sociology, for two main reasons. First, the larger research project aimed to investigate the challenges introduced by Big Data methods for regulatory bodies such as Research Ethics Committees (RECs) and Institutional Review Boards (IRBs) [ 25 ]. Both in Switzerland and in the United States, Big Data research methods in these two fields are calling into question the concept of the human research subject, owing to the increased distance and detachment between research subjects and investigators brought about by digitalized means of data collection (e.g., social media profiles, data networks, transaction logs) and analysis [ 18 ]. As a consequence, the current legislation regulating academic research, such as the Human Research Act (HRA) [ 26 ], the Federal Act on Data Protection [ 27 ] and the Common Rule [ 18 ], is being increasingly challenged. Second, especially in Switzerland, behavioral studies using Big Data methods are at the moment among the most underregulated types of research projects [ 26 , 28 , 29 ]. In fact, the current definition of the human subject leaves many Big Data projects outside the scope of regulatory review despite the possible ethical challenges they pose. For instance, according to the HRA, research that involves anonymized data from research participants does not need ethics approval [ 26 ].

In addition, we selected Switzerland and the United States as recruitment sites. Switzerland, where Big Data research is a fairly recent phenomenon, was chosen because the study was designed, funded and conducted there. The United States was selected as a comparative sample, since advanced Big Data research has been taking place there in the academic environment for several years, as evidenced by the numerous grants for Big Data research projects awarded by federal institutions such as the National Science Foundation (NSF) [ 30 , 31 ] and the National Institutes of Health (NIH) [ 32 ].

For the purpose of our study, we defined Big Data as an overarching umbrella term designating a set of advanced digital techniques (e.g., data mining, neural networks, deep learning, artificial intelligence, natural language processing, profiling, scoring systems) that are increasingly used in research to analyze large datasets with the aim of revealing patterns, trends and associations about individuals, groups and society in general [ 33 ]. Within this definition, we selected participants who conducted heterogeneous Big Data research projects: from internet-based research and social media studies, to aggregate analysis of corporate datasets, to behavioral research using sensing devices. Participant selection was based on involvement in Big Data research and was conducted systematically by browsing the professional pages of all professors affiliated with the departments of psychology and sociology of all twelve Swiss universities and the top ten American universities according to the Times Higher Education University Ranking 2018. Other candidates were identified through snowballing. Through our systematic selection we also identified a considerable number of researchers with a background in data science who were involved in research projects in the behavioral sciences (sociology, psychology and similar fields) at the time of their interview. Since their profiles matched the selection criteria, we included them in our sample.

We conducted 39 semi-structured interviews with academic scholars involved in research projects that adopt Big Data methodologies. Twenty participants were from Swiss universities and 19 from American institutions. The sample comprised a majority of professors (n = 34) and a few senior researchers or postdocs (n = 5). Ethics approval was sought from the Ethics Committee northwest/central Switzerland (EKNZ), which deemed our study exempt. Oral informed consent was obtained prior to the start of each interview. Interviews were administered using a semi-structured interview guide developed through consensus and discussion, after the research team had familiarized itself with the literature and studies on Big Data research and data ethics. The questions explored topics such as: ethical issues related to Big Data studies in the behavioral sciences; codes of conduct with regard to Big Data research projects; institutional regulatory practices; definitions and understandings of the term Big Data; and opinions on data-driven studies ( Table 1 ).

Table 1. Semi-structured interview guide topics. https://doi.org/10.1371/journal.pone.0241865.t001

Interviews were tape-recorded and transcribed verbatim. We subsequently transferred the transcripts into the qualitative software MAXQDA (version 2018) to support data management and the analytic process [ 34 ]. The dataset was analyzed using thematic analysis [ 35 ]. The first four interviews were independently read and coded by two members of the research team in order to explore the thematic elements of the interviews. To ensure consistency during the analysis, the two researchers then compared their preliminary open-ended coding and developed an expanded coding scheme that was used for all of the remaining transcripts. Several themes relevant for this study were agreed upon during the coding sessions, such as: a) responsibility and the role of the researcher in Big Data research; b) research standards for Big Data studies; c) attitudes towards the use of publicly available data; and d) emerging ethical issues in Big Data studies. Since part of the data has already been published, we refer to a previous publication [ 33 ] for additional information on methodology, project design, data collection and data analysis.
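To illustrate how theme occurrences of the kind later reported in Table 4 can be tallied from coded transcript segments, the minimal Python sketch below counts the number of distinct interviews in which each theme appears. The segment data, theme labels and participant identifiers are purely hypothetical and are not drawn from the study’s dataset or from the authors’ actual MAXQDA workflow.

```python
# Minimal sketch (illustrative only): tallying how many interviews mention
# each theme, from a hypothetical export of coded transcript segments.
from collections import defaultdict

# Each tuple: (interview identifier, theme assigned by a coder)
coded_segments = [
    ("P2", "informed consent"), ("P2", "privacy and anonymity"),
    ("P14", "informed consent"), ("P14", "transparency"),
    ("P38", "transparency"), ("P38", "informed consent"),
    ("P37", "privacy and anonymity"), ("P37", "transparency"),
]

interviews_per_theme = defaultdict(set)
for interview_id, theme in coded_segments:
    interviews_per_theme[theme].add(interview_id)

# Occurrence = number of distinct interviews in which a theme appears,
# which is how occurrence tables are typically reported.
for theme, interviews in sorted(interviews_per_theme.items(),
                                key=lambda kv: len(kv[1]), reverse=True):
    print(f"{theme}: mentioned in {len(interviews)} interviews")
```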

Researchers’ codes of ethics for Big Data studies were chosen as a topic to explore because participants, in identifying several ethical challenges related to Big Data, expressed concerns regarding the protection of human subjects in digital research and shared strategies and opinions on how to conduct Big Data studies ethically. Consequently, all the interview passages coded within the aforementioned topics were read again, analyzed and sorted into sub-topics. This phase was performed by the first author, while the second author supervised it by checking for consistency and accuracy.

For this study we conducted 39 interviews: with 21 sociologists (9 from CH and 12 from the US), 11 psychologists (6 from CH and 5 from the US), and 7 data scientists (5 from CH and 2 from the US). Among them, 27 scholars (9 from CH and 18 from the US) stated that they were working on Big Data research projects or on projects that involve Big Data methodologies, four participants (all from CH) noted that they were not involved in Big Data research, and eight (7 from CH and one from the US) were unsure whether their research could be described or considered as Big Data research ( Table 2 ).

Table 2. Participants’ self-reported involvement in Big Data research. https://doi.org/10.1371/journal.pone.0241865.t002

While discussing codes of ethics and ethical practices for Big Data research, respondents both a) shared the personal strategies they had implemented in their own research projects to protect research subjects, and b) discussed more generally the research practices they considered appropriate for Big Data research. Table 3 illustrates the types of Big Data our participants were working with at the time of the interview.

Table 3. Types of Big Data participants were working with at the time of the interview. https://doi.org/10.1371/journal.pone.0241865.t003

Our analysis identified several themes and subthemes. They were then divided and analyzed within three major thematic clusters: a) ethical principles for Big Data research; b) challenges that Big Data is introducing for research principles; c) ethical reflection and responsibility in research. Table 4 reports the themes and subthemes that emerged from the interviews and their occurrence in the dataset. Representative anonymized quotes were taken from the interviews to further illustrate the reported results.

Table 4. Themes and subthemes and their occurrence in the dataset. https://doi.org/10.1371/journal.pone.0241865.t004

Ethical principles for digital research

Belmont principles, beneficence and avoiding harm.

First, many of the respondents shared their opinions on which ethical guidelines and principles they consider important for conducting ethical research in the digital era. Table 5 shows the number of researchers who mentioned a specific ethical principle or research practice as relevant for Big Data research.

Table 5. Number of researchers mentioning each ethical principle or research practice as relevant for Big Data research. https://doi.org/10.1371/journal.pone.0241865.t005

Three of our participants referred in general terms to the principles stated in the Belmont Report and those related to the Declaration of Helsinki.

I think the Belmont Report principles. The starting point so. . . .you know beneficence, respect for the individuals, justice… and applying those and they would take some work for how to apply those exactly or what it would mean translating to this context but that would be the starting point (P18, US–data science).

A common concern was minimization of harm for research participants and the importance of beneficence as prominent components of scholarly research.

And…on an ethical point of view… and I guess we should be careful that experiment doesn’t harm people or not offend people for example if it's about religion or something like that it can be tricky (P25, CH–psychology).

Beneficence, in the context of digital Big Data research, was sometimes associated with the possibility of giving back to the community as a sort of tradeoff for the inconvenience that research might cause to research participants. On this, P9, an American sociologist, shared:

I mean it's interesting that the ethical challenges that I faced… (pause) had more to do with whether I feel, for instance in working in the developing world…is it really beneficial to the people that I'm working with, I mean what I'm doing. You know I make heavy demands on these people so one of the ethical challenges that I face is, am I giving back enough to the community.

While another American scholar, a psychologist, was concerned about how to define acceptable risks in digital research and finding the right balance between benefit and risks for research projects.

P17: Expecting benefit from a study that should outweigh the respective risks. I mean, I think that's a pretty clear one. This is something I definitely I don't know the answer to and I'm curious about how much other people have thought about it. Because like what is an acceptable sort of variation in expected benefits and risks. Like, you could potentially say “on average my study is expected to deliver higher benefits than risks”… there's an open question of like, … some individuals might regardless suffer under your research or be hurt. Even if some others are benefitting in some sense.

For two researchers, respect for the participant and their personhood was deemed particularly important irrespective of the type of research conducted. P19, an American sociologist, commented:

What I would like to see is integrity and personhood of every single individual who is researched, whether they are dead or alive, that that be respected in a very fundamental way. And that is the case whether it's Big Data, and whether is interviews, archival, ethnographic, textual or what have you. And I think this is a permanent really deep tension in wissenshaftlich ( scientific research ) activities because we are treating the people as data. And that's a fundamental tension. And I think it would be deeply important to explicitly sanitize that tension from the get-go and to hang on to that personhood and the respect for that personhood.

Informed consent and transparency.

Consent was by far the most prominent practice that emerged from the interviews as three quarters of our participants mentioned it, equally distributed among American and Swiss researchers. Numerous scholars emphasized how informed consent is at the foundation of appropriate research practices. P2, a Swiss psychologist, noted:

But of course it's pretty clear to me informed consent is very important and it’s crucial that people know what it is what kind of data is collected and when they would have the possibility of saying no and so on. I think that’s pretty standard for any type of data. (…) I mean it all goes down to informed consent.

For a few of our participants, in the era of Big Data it becomes not so much a matter of consent as a matter of awareness. Since research with Big Data could theoretically be performed without the knowledge of the participant, research subjects at least have to be made aware that they are part of a research project, as claimed by P38, a Swiss sociologist, who said:

I think that everything comes down to the awareness of the subject about what is collected about them. I mean, we have collected data for ages, right? And I mean, before it was using pen and paper questionnaires, phone interviews or…there’s been data collection about private life of people for, I mean, since social science exists. So, I think the only difference now is the awareness.

Another practice that our participants considered fundamental was the right of participants to withdraw from a research study which, in turn, was translated into giving participants more control over their data in the context of Big Data research. For example, while describing their study with social media, a Swiss sociologist (P38) explained that “the condition was that everybody who participated was actually able to look at his own data and decide to drop from the survey any time”. Another Swiss sociologist (P37), when describing a study design in which they asked participants to install a browser add-on to collect data on their Facebook interactions, underlined the importance of giving participants control over their data and of teaching them how to manage it, in order to create a trust-based exchange between them and the investigators:

And there you'd have to be sure that people…it's not just anonymizing them, people also need to have a control over their data, that's kind of very important because you need kind of an established trust between the research and its subjects as it were. So they would have the opportunity of uninstall the…if they're willing to take part, that's kind of the first step, and they would need to download that add-on and they'd also be instructed on how to uninstall the add-on at any point in time. They'd be also instructed on how to pause the gathering of their data at any point in time and then again also delete data that well…at first I thought it was a great study now I'm not so sure about, I want to delete everything I've ever collected.

The same researcher suggested creating regulations that grant participants ownership of their research data, in order to give them actual power over their participation beyond the point of initial consent.

And legal parameters then should be constructed as such that it has to be transparent, that it guards the rights of the individual (…) in terms of having ownership of their data. Particularly if it's private data they agree to give away. And they become part of a research process that only ends where their say. And they can always withdraw the data at any point in time and not just at the beginning with agreeing or not agreeing to taking part in that. But also at different other points in time. So that i think the…you have to include them more throughout your research process. Which is more of a hassle, costs more money and more time, but in the end you kind of. . . .it makes it more transparent and perhaps it makes it more interesting for them as well and that would have kind of beneficial effects for the larger public I suppose.

In addition, transparency of motives and practices was also considered a fundamental principle for digital research. For instance, transparency was seen as a way for research participants to be fully informed about the research procedures and methods used by investigators. According to a few participants, transparency is key to guaranteeing people’s trust in the research system and to minimizing their worries and reservations about participating in research studies. On this point, P14, an American psychologist, noted:

I think we need to have greater transparency and more. . . . You know our system, we have in the United States is that…well not a crisis, the problem that we face in the United States which you also face I'm sure, is that…you know, people have to believe that this is good stuff to do (participating in a study). And if they don't believe that this is good stuff to do then it's a problem. And so. . . .so I think that that. . . .and I think that the consent process is part of it but I think that the other part of it is that the investigators and the researchers, the investigators and the institutions, you know, need to be more transparent and more accountable and make the case that this is something worth doing and that they're being responsible about it.

A Swiss sociologist, P38, who described how they implemented transparency in their research project by giving participants control over the data being collected on them, highlighted that the fear individuals might have of digital and Big Data research may stem from a lack of information and understanding about what data investigators are collecting on them and how they are using it. In this sense, transparency of practices not only ensures that more individuals trust the research system, but also assists them in making a truly informed decision about their participation in a study.

And if I remember correctly the conditions were: transparency, so every subject had to have access to the full data that we were collecting. They had also the possibility to erase everything if they wanted to and to drop from the campaign. I guess it's about transparency. (…) So, I think this is key, so you need to be transparent about what kind of data you collect and why and maybe what will happen to the data. Because people are afraid of things they don't understand so the better they understand what's happening the more they would be actually. . . . not only they will be willing to participate but also the more they will put the line in the right place. So, this I agree, this I don't agree. But the less you understand the further away you put the line and you just want to be on the safe side. So, the better they understand the better they can draw the line at the right place, and say ok: this is not your business, this I'm willing to share with you.

In addition, one of our participants considered transparency to be an important value between scholars from different research teams as well. According to this participant, open and transparent communication and exchange between researchers would help implement appropriate ethical norms for digital research. They shared:

But I think part of it is just having more transparency among researchers themselves. I think you need to have like more discussions like: here's what I'm doing…here's what I'm doing…just more sharing in general, I think, and more discussion. (…) People being more transparent on how they're doing their work would just create more norms around it. Because I think in many cases people don't know what other people have been doing. And that's part of the issues that, you know, it's like how do I apply these abstract standards to this case, I mean that can be though. But if you know what everybody is doing it makes a little bit easier. (P3-US, Sociologist)

On the other hand, a sociologist from Switzerland (P37) noted that the drive towards research transparency might become problematic for ensuring the anonymity of research participants: the more information is shared about research practices and methods, the greater the possibility of backtracking and re-identifying the participants in a study.

It’s problematic also because modern social science, or science anyway, has a strong and very good drive towards transparency. But transparency also means, that the more we become transparent the less we can guarantee anonymity (…) If you say: "well, we did a crawl study", people will ask "well, where are you starting, what are your seeds for the crawler?". And it's important to, you know, to be transparent in that respect.

Privacy and anonymity.

Respect for the privacy of research participants, and protection from possible identification, usually achieved through the anonymization of data, were the second most mentioned standards to consider when conducting Big Data research. P33, a Swiss sociologist, underlined how “If ever, then privacy has…like it’s never been more important than now”, since information about individuals is becoming increasingly available thanks to digital technologies, and how institutions now have a responsibility to ensure that such privacy is respected. A Swiss data scientist, P29, described the privacy aspects embedded in their research with social media and how their team is constantly developing strategies to ensure the anonymity of research subjects. They said:

Yeah, there is a privacy aspect of course, that's the main concern, that you basically…if you’re able to reconstruct like the name of the person and then the age of the person, the address of the person, of course you can link it then to the partner of the person, right? If she or he has, they're sharing the same address. And then you can easily create the story out of that, right? And then this could be an issue but…again, like we try to reapply some kind of anonymization techniques. We have some people working mostly on that. There is a postdoc in our group who is working on anonymization techniques.

Similarly, P6, an American sociologist, underlined that considering and implementing practices to protect human participants from possible re-identification should become routine for every research project:

In the social science world people have to be at least sensitive to the fact that they could be collecting data that allows for the deductive identification of individuals. And that probably…that should be a key focus of every proposal of how do you protect against that.

Challenges introduced by Big Data to research ethics and ethical principles

A considerable number of our researchers, on the other hand, recognized that Big Data research and methods introduce numerous challenges to the principles and practices they consider fundamental for ethical research, and reflected upon the limits of traditional ethical principles.

When discussing informed consent, participants noted that it might no longer be the main standard to refer to when creating ethical frameworks for research practices, as it can no longer be ensured in much digital research. For instance, P14, an American psychologist, noted:

I think that that the kind of informed consent that we, you know, when we sign on to Facebook or Reddit or Twitter or whatever, you know, people have no idea of what that means and they don't have any idea of what they're agreeing to. And so, you know the idea that that can bear the entire weight of all this research is, I think…I think notification is really important, you can ask for consent but the idea that that can bear the whole weight for allowing people to do whatever/ researchers to do whatever they want, I think it's misguided.

Similarly, P18, an American scholar with a background in data science, felt that although there is still a place for informed consent in the digital era, the practice should be revisited and reconsidered, as it can no longer be applied in the strict sense, for instance when analyzing aggregated databases from which personal identifiers have been removed and it would be impossible to trace back individuals to ask for their consent. Data aggregation is the process of gathering data from multiple sources and presenting it in a summarized format. Through aggregation, data can be stripped of personal identifiers, ensuring anonymization of the dataset, and analyzing aggregate data should, in theory, not reveal personal information about any individual user. The participant shared:

Certainly, I think there is [space for informed consent in digital research]. And like I said I think we should require people to have informed consent about their data being used in aggregate analysis. And I think right now we do not have informed consent. (…) So, I think again, under the strictest interpretation even to consent to have one’s data involved in an aggregate analysis should involve that. But I don't know, short of that, what would be an acceptable tradeoff or level of treatment. Whether simply aggregating the analysis is good enough and if so what level of aggregation is necessary.
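
To make the idea of aggregation concrete, the following is a minimal sketch (our illustration, not drawn from any participant's project; all column names and values are hypothetical) of how individual-level records can be reduced to group-level summaries from which direct identifiers have been removed.

```python
import pandas as pd

# Hypothetical individual-level records containing a direct identifier.
raw = pd.DataFrame({
    "user_id":     ["u1", "u2", "u3", "u4"],        # direct identifier
    "age_group":   ["18-25", "18-25", "26-35", "26-35"],
    "daily_posts": [4, 7, 2, 5],
})

# Aggregation: drop the identifier and keep only group-level statistics.
aggregate = (
    raw.drop(columns=["user_id"])
       .groupby("age_group")
       .agg(mean_daily_posts=("daily_posts", "mean"),
            n_users=("daily_posts", "size"))
)
print(aggregate)  # only summary rows per age group remain
```

In such a summarized form it is no longer possible to contact the individuals behind the rows, which is precisely why the participant questions how consent could still be obtained.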

As with consent, many of our participants, while recognizing the importance of privacy and anonymity, also reflected on some of the challenges that Big Data and the digitalization of research create for these research standards. First, a few respondents highlighted that in digital research the risk of identifying participants is quite high, since anonymized datasets can almost always be de-anonymized, especially if the data are not adequately secured. On this, P1, an American sociologist, explained:

I understand and recognize that there are limits to anonymization. And that under certain circumstances almost every anonymized dataset can be de-anonymized. That's what the research that shows us. I mean sometimes that requires significant effort and then you ask yourself would someone really invest like, you know, supercomputers to solve this problem to de-anonymize…

A Swiss sociologist (P38) described how anonymization practices intended to protect the privacy of research participants can, on the other hand, diminish the value of the data for research, since anonymization destroys some of the information the researcher is actually interested in.

You know, we cannot do much about it. So… there is a tendency now to anonymize the data but basically ehm…anonymization means destruction of information in the data. And sometimes the information that is destroyed is really the information we need…
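
A minimal sketch of this trade-off, assuming a simple tabular dataset with hypothetical columns: generalizing quasi-identifiers (age bands, truncated postal codes) lowers re-identification risk, but the detail that is removed may be exactly what an analysis needs.

```python
import pandas as pd

df = pd.DataFrame({
    "age":      [23, 24, 37, 52],
    "postcode": ["8001", "8002", "3005", "1204"],
    "outcome":  [1, 0, 1, 0],
})

# Generalize quasi-identifiers: exact values are replaced by coarser categories.
anonymized = df.assign(
    age=pd.cut(df["age"], bins=[0, 30, 50, 120], labels=["<=30", "31-50", ">50"]),
    postcode=df["postcode"].str[:2] + "**",
)
print(anonymized)
# The exact ages and locations are now unrecoverable: safer for participants,
# but any analysis that depended on fine-grained age or location is lost.
```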

Moreover, participants also claimed that digital research practices are blurring the line between private and public spaces, creating additional challenges for protecting the privacy of research participants and for practices of informed consent. A few of our researchers highlighted that research subjects may have an expectation of privacy even in public digital spaces such as social media and public records. In this context, an American sociologist, P9, noted that participants could object to researchers linking publicly available datasets together, as they would prefer the information stemming from such linkage to remain private:

P9USR: Well because the question is…even if you have no expectation of privacy in your Twitter account, you know Twitter is public. And even if you have no expectation of privacy in terms of whether you voted or not, I don't know, in Italy maybe it's a public record whether if you show up at the pool or not. Right? I can go to the city government and see who voted in the last elections right? (…) So…who voted is listed or what political party they're member of is listed, is public information. But you might have expectation of privacy when it comes to linking those data. So even though you don't expect privacy in Twitter and you don't expect privacy in your voting records, maybe you don't like it when someone links those things together.

In addition, a sociologist from the US, P19, noted that merely linking information from publicly available datasets can make research subjects easily identifiable.

However, when one goes to the trouble of linking up some of the aspects of these publicly available sets it may make some individuals identifiable in a way that they haven't been before. Even though one is purely using publicly available data. So, you might say that it kind of falls into an intermediate zone. And raises practical and ethical questions on protection when working with publicly available data. I don't know how many other people you have interviewed who are working in this particular grey zone.
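
The kind of linkage these participants describe can be illustrated with a minimal, entirely fictitious sketch: two datasets that are each public on their own reveal something new once they are joined on shared quasi-identifiers (all names and values below are invented).

```python
import pandas as pd

# Publicly visible social media profiles (fictitious).
social_media = pd.DataFrame({
    "handle": ["@anna_b", "@marc_d"],
    "name":   ["Anna B.", "Marc D."],
    "city":   ["Springfield", "Riverton"],
})

# Publicly available voter registration records (fictitious).
voter_roll = pd.DataFrame({
    "name":  ["Anna B.", "Marc D."],
    "city":  ["Springfield", "Riverton"],
    "party": ["Party X", "Party Y"],
})

# Neither table alone connects an online identity to a party registration;
# the join on the shared quasi-identifiers does exactly that.
linked = social_media.merge(voter_roll, on=["name", "city"])
print(linked[["handle", "party"]])
```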

Two of our participants, while describing personal strategies for handling expectations of privacy and consent, discussed the increasing blur between private and public spaces and how adequately handling privacy on social media is becoming ever more contextual.

P2USR: So, for example when I study journalists, I assume that their Tweets are public data just because Twitter is the main platform for journalists to kind of present their public and professional accomplishments and so I feel fine kind of using their tweets, like in the context of my research. I will say the same thing, about Facebook data for example. So, some of the journalists kind of… that I interviewed are… are not on Facebook anymore, but at the time we became friends on Facebook and there were postings and I… I wouldn't feel as comfortable, I wouldn't use their Facebook data. I just think that somehow besides the norms of the Facebook platform is that it's more private data, from…especially when it's not a public page so… But it's like… it's fuzzy.

Responsibility and ethical reflection in research

Given the challenges introduced by digital methods, some of our participants elaborated on the role of ethical reflection and on their own responsibility in addressing these challenges to ensure the protection of research participants.

Among them, some researchers emphasized the importance of investigators applying ethical standards to conduct their research projects appropriately. However, a couple of them recognized that not all researchers have the background and expertise to recognize the ethical issues stemming from their research projects, or are adequately familiar with ethical frameworks. On this, P12, an American sociologist, highlighted the importance of ethics education for research practitioners:

I also want to re-emphasize that I think that as researchers in this field we need to have training in ethics because a lot of the work that we're doing (pause) you know can be on the border of infringing on people’s privacy.

In addition, self-reflection, ethical interrogation and evaluation of the appropriateness of certain research practices was a theme that emerged quite often during our interviews. For an American psychologist, P4, concerned about issues of consent in digital research, it is paramount that investigators begin to ask themselves what types of analysis are ethically appropriate without the explicit consent of participants.

And it is interesting by the way around Big Data because in many cases those data were generated by people who didn't sign any consent form. And they have their data used for research. Even (for the) secondary analysis of our own data the question is: what can you do without consent?

Similarly, P26, a sociologist from Switzerland, reflected upon the difficulties researchers might encounter in evaluating what types of data can be considered unproblematic to collect and analyze, even in digital public spaces like social media:

Even though again, it's often not as clear cut, but I think if people make information public that is slightly different from when you are posting privately within a network and assume that the only people really seeing that are your friends. I see that this has its own limits as well because certain things…well A: something like a profile image I think is always by default public on Facebook…so… there you don't really have a choice to post it privately. I guess your only choice is not to change it ever. And then the other thing is that…I know it because I study (…) internet skills, I know a lot of people are not very skilled. So, there are a lot of instances where people don't realize they're posting publicly. So even if something is public you can't assume people had meant it to be public.

Moreover, P31, a Swiss data scientist, considered reflection on and evaluation of the intent behind a research study important for ethical Big Data research. The researcher recognized that this is difficult to put into practice, since investigators with ill intent might lie about their motivations, and negative consequences can arise even from the noblest of intents.

I find it really difficult to answer that. I would say, the first thing that comes to my mind is the evaluation of intent… rather than other technicality. And I think that's a lacking point. But also the reason why I don't give that answer immediately is like…intent is really difficult to probe… and it's probably for some people quite easy to know what is the accepted intent. And then I can of course give you a story that is quite acceptable to you. And also with good intent you can do evil things. So, it's difficult but I would say that discussion about the intent is very important. So that would be maybe for me a minimal requirement. At least in the discussions.

In this context, some scholars also discussed their perceptions of the responsibility for protecting research participants in digital studies and the role investigators play in overcoming ethical issues.

For a few of them it was clear that the responsibility for protecting data subjects should fall on the investigators themselves. For instance, P22, an American sociologist, while discussing the importance of creating an ethical framework for digital research that uses citizens' publicly available data, shared:

So, I do think (the responsibility) it's on researchers (…) and I get frustrated sometimes when people say "well it's not up to us, if they post it there then it's public". It's like well it is up to us, it's literally our job, we do it all day, try to decide, you know, what people want known about them and what people don't. So, we should apply those same metrics here.

However, other researchers also pointed out that the introduction of digital technologies and methods for behavioral research is shifting the responsibility scholars perceive themselves to have. P16, an American sociologist, shared concerns regarding the use of sensor devices for behavioral research and reflected on how much responsibility they, as investigators, have for ensuring the data protection of their research subjects, since the data they work with are owned by the company that provided the device for data collection:

There's still seems to be this question about…whether. . . .what the Fitbit corporation is doing with those data and whether we as researchers should be concerned about that. We're asking people to wear Fitbits for a study. Or whether that's just a separate issue. And I don't know what the answer to that is, I just know that it seems like the type of question that it's going to come up over and over and over again.

On a similar note, P14, an American psychologist, noted that while researchers do have a responsibility to prevent harm that might derive from data research, this responsibility should in part be shared with data holders. They claimed:

Do I think that the holders of data have a responsibility to try to you know, try to prevent misuse of data? Yeah, I think they probably do. (…) I think there is a notion of stewardship there. Then I think that investigators also have an independent obligation to make sure to think about the data they're analyzing and trying to get and think about what they're using it for. So not to use data in order to harm other people or those kinds of things.

Finally, a few participants suggested that research ethics boards such as Institutional Review Boards (IRBs) and Ethics Committees (ECs) should take on greater responsibility for ensuring that investigators actually conduct their research ethically. For instance, P16, an American sociologist, complained that IRBs do not provide adequate follow-up with researchers to ensure that they are appropriately following the approved research protocols.

There does seem to be kind of a big gap even in the existing system. Which is that a researcher proposes a project, the IRB hopefully works with the researcher and the project gets approved and there's very little follow-up and very little support for sort of making sure that the things that are laid out at the IRB actually in the proposal and the project protocol actually happen. And not that I don't believe that most researchers have good intensions to follow the rules and all of that but there are so many of kind of different projects and different pressures that things can slip by and there's… there's nobody.

As Big Data methodologies become widespread in research, it is important to reach international consensus on whether and how traditional principles of research ethics, such as those described in the Belmont Report, remain relevant to the new ethical questions introduced by Big Data and internet research [22, 23]. Our study offers a relevant contribution to this debate, as it investigated the methodological strategies and codes of ethics that researchers from different jurisdictions—Swiss and American investigators—apply in their Big Data research projects. It is interesting to note that, despite regional differences, participants shared very similar ethical priorities. This might be due to the international nature of academic research, where scholars share similar codes of ethics and apply similar strategies for the protection of research participants.

Our results point out that, in their codes of conduct, researchers mainly referred to the traditional ethical principles enshrined in the Belmont Report and the Declaration of Helsinki, such as respect for persons in the practice of informed consent, beneficence, minimization of harm through protection of privacy and anonymization, and justice. This finding shows that such principles are still considered relevant in the behavioral sciences for addressing the ethical issues of Big Data research, despite the critique of some that rules designed for medical research cannot be applied to sociological research [36]. Even before the advent of Big Data, the practical implementation of the Belmont Report principles was never an easy endeavor, as they were originally conceived to be flexible in order to accommodate a wide range of different research settings and methods. However, it has been argued that precisely this flexibility makes them the perfect framework in which investigators can “clarify trade-offs, suggest improvements to research designs, and enable researchers to explain their reasoning to each other and the public” in digital behavioral research [2].

Our study shows that scholars still place great importance on the practice of informed consent. They considered it crucial that participants are appropriately notified of their research participation, are adequately informed about at least some of the details and procedures of the study, and are given the possibility to withdraw at any point in time. A recent study, however, has highlighted that there is currently no consensus among investigators on how to collect meaningful informed consent from participants in digital research [37]. Similarly, a few researchers from our study recognized that consent, although preferable in theory, might not be the most adequate practice to refer to when designing ethical frameworks. In the era of Big Data behavioral research, informed consent becomes an extremely complex practice that is intrinsically dependent on the context of the study and the type of Big Data used. For instance, in certain behavioral studies that analyze tracking data from the devices of a limited number of participants, it would be feasible to ask for consent prior to the beginning of the study. However, recombination and reanalysis of the data, possibly across ecosystems far removed from the original source of the data, make it very difficult to fully inform participants about the range of uses to which their data might be put, the type of information that could emerge from the analysis of the data, and the unforeseeable harms that the disclosure of such information could cause [38]. In online studies and internet-mediated research, consent often amounts to an agreement to unread terms of service or a vague privacy policy provided by digital platforms [18]. Sometimes valid informed consent is not even required by official guidelines when the analyzed data can be considered ‘in the public domain’ [39], leaving participants unaware that research is being performed on their data. It has been argued, however, that researchers should not simply assume that public information is freely available for collection and research just because it is public. Researchers should take into consideration what the subject might have intended or desired regarding the possibility of their data being used for research purposes [40]. By the same token, we can also argue that even when information is harvested with consent, the subject might a) not wish for their data to be analyzed or reused outside the purview of the original research purpose and b) fail to understand the extent of the information that analysis of the dataset might reveal about them.

Matzner and Ochs argue that practices of informed consent “are widely accepted since they cohere with notions of the individual that we have been trained to adopt for several centuries” [41]; however, they also emphasize that such notions are being altered and challenged by the openness and transience of data analytics, which prevent us from continuing to consider the subject and the researcher within a self-contained dynamic. Since respect for persons, in the form of informed consent, is just one of the principles that needs to be balanced when considering research ethics [42], it becomes of utmost importance to find the right balance between the perceived necessity of still ensuring consent from participants and the reality that such consent is sometimes impossible to obtain properly. Salganik [2], for instance, suggests that in the context of digital behavioral research, rather than “informed consent for everything”, researchers should follow a more complex rule: “some form of consent for most things”. This means that, assuming informed consent is required, it should be evaluated on a case-by-case basis whether consent is a) practically feasible and b) actually necessary. This practice might, however, leave too much to the discretion of the investigator, who might not have the skills to appropriately evaluate the ethical facets of their research project [43].

Beyond consent, participants from our study also argued in favor of giving participants more control over their own data. In recent years, in fact, it has been argued that individuals often lack the means to manage, protect and delete their data [20, 28]. Strategies of dynamic consent could be considered a potential tool to address ethical issues related to consent in Big Data behavioral research. Dynamic consent, a model in which online tools are developed so that individuals can engage in decisions about how their personal information should be used and retain some degree of control over the use of their data, is currently mainly developed for biomedical Big Data research [44, 45]. Additional research could investigate whether such models can be translated and applied to behavioral digital research as well.
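
As an illustration of the idea behind dynamic consent (our sketch of the concept, not an implementation of the platforms cited above), one can imagine a per-purpose permission record that a participant can update at any time and that researchers must check before each use of the data.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    participant_id: str
    permissions: dict = field(default_factory=dict)  # purpose -> bool

    def grant(self, purpose: str) -> None:
        self.permissions[purpose] = True

    def revoke(self, purpose: str) -> None:
        self.permissions[purpose] = False

    def allows(self, purpose: str) -> bool:
        # Anything not explicitly granted is treated as refused.
        return self.permissions.get(purpose, False)

# A participant consents to the primary analysis, then later revokes data sharing;
# the researcher checks the record before each use of the data.
record = ConsentRecord("P-0042")
record.grant("primary_analysis")
record.grant("sharing_with_partners")
record.revoke("sharing_with_partners")
print(record.allows("primary_analysis"))       # True
print(record.allows("sharing_with_partners"))  # False
```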

Strictly linked to consent is the matter of privacy. Many researchers underlined the importance of respecting the privacy and anonymity of research participants to protect them from possible harm. At the same time, they also recognized the many challenges related to this practice. They highlighted the difficulty of ensuring complete anonymity of the data and of preventing re-identification of participants in Big Data research, especially since a high level of anonymization can cause the loss of information essential to the research project. The appropriate trade-off between ensuring maximum anonymization for participants and maintaining the quality of the dataset is still hotly debated [12]. Growing research in data science strives towards developing data models that ensure maximum protection for participants [46]. On the other hand, our participants also referred to the current debate surrounding the private nature of personal data as opposed to publicly available data and how Big Data and digital technologies are blurring the line between private and public spheres. Some respondents expressed concern or reservation towards the analysis of publicly available data–especially without informed consent–as it could still be considered an infringement of the privacy of research participants and could also cause them harm. This shows that researchers are well aware of the problems of treating privacy as a binary concept (private vs public data) and that they are also willing to reflect upon strategies to protect the identity of participants even when handling publicly available data. According to Zook et al. [47], breaches of privacy are the main means by which Big Data can do harm, as it might reveal sensitive information about people. Besides the already mentioned “Tagging Banksy” project [17], another distressing example is what happened in 2013, after the New York City Taxi & Limousine Commission released an anonymized dataset of 173 million individual cab rides–including pickup and drop-off times, locations, fares and tip amounts. Many researchers who freely accessed this database showed how easy it was to process the dataset so that it revealed private information about the taxi drivers, such as their religious beliefs, average income and even an estimate of their home address [48]. It therefore becomes increasingly crucial that investigators in the behavioral sciences recognize that privacy is contextual and situational and changes over time, as it depends on multiple factors such as the context in which the data were created and obtained and the expectations of those whose data are used [2, 47, 49, 50]. For instance, as reported by one of our respondents, users might not have expectations of privacy for some publicly available information when taken singly or separately–e.g. social media and voter data–but they might have privacy concerns about the information that linking these data might reveal–e.g. who they voted for. This difficulty, if not impossibility, of defining a single widespread norm or rule for protecting privacy again shows the intrinsic context dependency of Big Data studies, and highlights how researchers are increasingly called to critically evaluate their decisions on a case-by-case basis rather than blindly applying a common rule.

As new methods of data collection and analysis in the behavioral sciences create controversy, and as appropriately balancing and evaluating ethical principles becomes a source of difficult decisions for researchers [2], our participants underlined the importance of ethical reflection and education for the appropriate development of research projects. They also recognized that investigators are called to reflect critically on the design of their studies and the consequences these might have for research participants [51]. However, as claimed by one of our participants, not all researchers, especially those coming from more technical disciplines like data science, have the expertise and tools to proactively think about ethical issues when designing a research project [22], and they might need additional guidance. We therefore argue that education in ethics, and exchange and dialogue between research teams and scholars from different disciplines, must be implemented. As suggested by Zook et al. [47], discussion and debate of ethical issues are an essential part of establishing a community of ethical practitioners, and integrating ethical reflection into coursework and training can enable a greater number of scholars to raise appropriate ethical questions when reviewing or developing a project.

Within the current discussion, we have seen how context dependency, although never spelled out explicitly by our participants, becomes a major theme in the debate over ethical practices in Big Data studies. Our results have in fact highlighted that a one-size-fits-all approach to research ethics, or a definitive overarching set of norms or rules to protect research participants, is not suited to appropriately handling the multifaceted ethical issues of Big Data. The context-dependent nature of some of the ethical challenges of Big Data studies, such as consent and privacy, might require a higher level of flexibility, together with a more situational and dialogic approach to research ethics [23]. For instance, the Association of Internet Researchers (AoIR), in the development of their Ethical Guidelines for Internet Research, agrees that the adequate process approach for ethical internet research is one that is reflective and dialogical, “as it begins with reflection on own research practices and associated risks and is continuously discussed against the accumulated experience and ethical reflections of researchers in the field and existing studies carried out” [52]. As a consequence, we argue that applying context-specific assessments increases the chances of solving ethical issues and appropriately protecting research participants [53]. Many authors in the field are thus promoting methodological approaches that focus on contextually driven decision-making for Big Data research. Zimmer, for example, suggests applying contextual integrity’s decision heuristic to different research studies to appropriately assess the ethical impact of the study on the privacy of its participants and consequently overcome the conceptual gaps left by the Belmont Report for Big Data research ethics [50]. Similarly, Steinmann et al. [53] provide a heuristic tool in the form of a “privacy matrix” to assist researchers in the contextual assessment of their research projects.

But what should drive investigators’ ethical reflection and decision making? Despite the multifaceted challenges introduced by Big Data and digital research, we argue that the principles stated in the Belmont Report can still be considered valuable guidance for academic investigators. Like Rothstein [28], we believe Big Data exceptionalism is not a viable option and that new challenges should not serve as a catalyst for abandoning foundational principles of research ethics. This is in line with the current best practices suggested by institutional bodies like the American Psychological Association (APA), which claim that the core ethical principles set by the Belmont Report should be expanded to address the risks and benefits of today’s data [6]. Numerous research groups are striving to design ethical frameworks for Big Data research that stay true to the foundational principles of research ethics but at the same time accommodate the needs and changes introduced by Big Data methods. Steinmann et al. [53], for instance, suggest considering five principles (non-maleficence, beneficence, justice, autonomy, and trust) as a well-defined pluralism of values that, by having clear and direct utility in designating practical strategies for protecting privacy, should guide researchers in the evaluation of their research projects. Xafis et al. [38], in the development of an ethical framework for biomedical Big Data research, provide a set of 16 values relevant to many Big Data uses, divided into substantive values (such as justice, public benefit, solidarity or minimization of harm) and procedural values (accountability, consistency, transparency and trustworthiness), that should be used by investigators to identify and solve ethical issues within their research project. Vitak et al. [22] recommend implementing the principle of transparency, intended as a flexible principle that applies to different ethical components related both to the intent of research (what you are doing with the data and why) and to practice (how you are getting the data–informed consent, disclosing purpose and potential use–and how you are processing the data–data anonymity). Also, according to some of our participants, enhancing transparency in research practices would be positive on different levels. First, it would help participants trust the research system and minimize their worry about participating in research studies; in addition, enhanced transparency between research teams would help build up the knowledge needed to face the ethical issues that emerge in heterogeneous research projects. Although the principle of transparency is becoming increasingly embedded in research practices as something highly recommended, there is still some uncertainty regarding how this principle would actually translate into practice in order to overcome the challenges posed to ethical practices like consent. At the moment, much of the debate on transparency focuses mainly on the implementation of algorithmic transparency with Big Data [54]; more research should focus on how to put research transparency into practice.

Finally, a very relevant theme that our participants reflected upon, and one that is rarely addressed by the current literature on Big Data studies, was the topic of responsibility. Some of our respondents asked themselves whether the introduction of digital technologies and methods implies a shift of responsibility in protecting research participants. Although all those who discussed responsibility agreed that at least part of it should fall on investigators themselves, some pointed out that other actors involved in Big Data research, such as data holders and data owners (in the case of corporate data), could also share some of this responsibility. Digital research has in fact changed the traditional research subject/investigator dynamic [18] by introducing other factors and actors into the process (social media platforms, private firms, etc.), and therefore raises ethical challenges that researchers do not always have the necessary skills to anticipate or face [25, 43]. To the best of our knowledge, this aspect of responsibility has not yet entered the ethics debate. This might be due to the practical difficulties that such a debate would necessarily imply, such as communication, coordination and compromise between stakeholders with very different goals and interests at stake [55, 56]. However, our results show that there are relevant questions and issues that should be further addressed, such as: who should bear the responsibility for protecting the research subject in Big Data studies? How much should data owners, data holders, ethics committees and even users be involved in sharing such responsibility? We believe that academic investigators should not bear all the responsibility for the ethical design of research projects alone, or single-handedly confront the ethical implications of digital research [57]. At the moment, models of consultancy between ethics committees and researchers are advocated to help investigators foresee ethical issues [25, 43]. These models, together with the implementation of sustainable and transparent collaborations/partnerships with data holders and owners [58], could support the creation of appropriate paradigms of shared responsibility that could play a significant role in the development of ethically sound research projects.

Limitations

First, since our respondents were mainly from the fields of psychology and sociology, the study might have overlooked the perspectives of other fields relevant to human subject research that make use of Big Data methodologies (e.g., medicine, nursing sciences, geography, urban planning, computer science, linguistics, etc.). In addition, the findings of this study are based on a small sample of researchers from only two countries that share similar ethical norms and values. For these reasons, the findings from this analysis are not generalizable globally. Future research that takes into account additional disciplines and different countries might contribute to delivering a more comprehensive understanding of the opinions and attitudes of researchers. Finally, a limitation must be acknowledged regarding the definition of Big Data used for this study. Using Big Data as an umbrella term prevented us from undertaking a more nuanced analysis of the different types of data used by our participants and their specific characteristics (for instance, the different ethical challenges posed by online social media data as compared to sensor data obtained with the consent of the participants). In our discussion we referred to the contextual dependency of the ethical issues of Big Data and the necessity of a continuous ethical reflection that assesses the specific nuances of the different types of Big Data in heterogeneous research projects. However, we have already recognized the risks of conceptualizing Big Data as a broad overarching concept [33]. As a consequence, we believe that future research on Big Data ethics will benefit from a deconstruction of the term into its different constituents in order to provide a more nuanced analysis of the topic.

This study investigated the codes of ethics and research strategies that researchers apply when performing Big Data research in the behavioral sciences, and it also illustrated some of the challenges scholars encounter in practically applying ethical principles and practices. Our results point out that researchers find the traditional principles of the Belmont Report to be a suitable guide for performing ethical data research. At the same time, they also recognized that Big Data methods and practices are increasingly challenging such principles. Consent and protection of privacy were still considered paramount practices in research. However, they were also considered the most challenged practices, since the digitalization of research has blurred the boundary between “public” and “private” and made obtaining consent from participants impossible in certain cases.

Based on the results and discussion of our study, we suggest three key items that future research and policymaking should focus on:

  • Development of research ethics frameworks that stay true to the principles of the Belmont Report but also accommodate the context dependent nature of the ethical issues of Big Data research;
  • Implementation of education in ethical reasoning and training in ethics for investigators from diversified curricula: from social science and psychology to more technical fields such as data science and informatics;
  • Design of models of consultancy and shared responsibility between the different stakeholders involved in the research endeavor (e.g. investigators, data owners and review boards) in order to enhance protection of research participants.

Supporting information

S1 File. Interview guide.

Semi-structured interview guide illustrating the main questions and themes that the researchers asked the participants (questions relevant to this study are highlighted in yellow).

https://doi.org/10.1371/journal.pone.0241865.s001

  • 2. Salganik MJ. Bit by bit: Social research in the digital age. Princeton: Princeton University Press; 2019.
  • 6. Paxton A. The Belmont Report in the Age of Big Data: Ethics at the Intersection of Psychological Science and Data Science. In: Woo SE, Tay L, Proctor RW, editors. Big data methods for psychological research: New horizons and challenges. American Psychological Association; 2020.
  • 16. Hargittai E, editor Whose data traces, whose voices? Inequality in online participation and why it matters for recommendation systems research. Proceedings of the 13th ACM Conference on Recommender Systems; 2019.
  • 22. Vitak J, Shilton K, Ashktorab Z, editors. Beyond the Belmont principles: Ethical challenges, practices, and beliefs in the online data research community. Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing; 2016.
  • 24. Markham A, Buchanan E. Ethical decision-making and internet research: Version 2.0. Recommendations from the AoIR Ethics Working Committee. Available online: https://aoir.org/reports/ethics2.pdf. 2012.
  • 26. Research with human subjects. A manual for practitioners. Bern: Swiss Academy of Medical Sciences (SAMS); 2015.
  • 30. National Science Foundation. Core Techniques and Technologies for Advancing Big Data Science & Engineering (BIGDATA) (NSF-12-499) 2012. Available from: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf12499 (Accessed July 2019).
  • 31. National Science Foundation. Critical Techniques and Technologies for Advancing Big Data Science & Engineering (BIGDATA) (NSF-14-543) 2014. Available from: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf14543&org=NSF (Accessed July 2019).
  • 32. National Institute of Health. Big Data to Knowledge 2019. Available from: https://commonfund.nih.gov/bd2k (Accessed November 19, 2019).
  • 34. Guest G, MacQueen KM, Namey EE. Applied thematic analysis: Sage Publications; 2011.
  • 37. Shilton K, Sayles S, editors. "We Aren't All Going to Be on the Same Page about Ethics": Ethical Practices and Challenges in Research on Digital and Social Media. 2016 49th Hawaii International Conference on System Sciences (HICSS); 2016: IEEE.
  • 39. British Psychological Society. Ethics Guidelines for Internet-mediated Research 2017. Available from: www.bps.org.uk/publications/policy-and-guidelines/research-guidelines-policy-documents/research-guidelines-poli (Accessed September 2020).
  • 41. Matzner T, Ochs C. Sorting Things Out Ethically, Privacy as a Research Issue beyond the Individual In: Zimmer M, Kinder-Kurlanda K, editors. Internet Research Ethics for the Social Age. Oxford: Peter Lang; 2017.
  • 48. Franceschi-Bicchierai L. Redditor cracks anonymous data trove to pinpoint Muslim cab drivers 2015. Available from: https://mashable.com/2015/01/28/redditor-muslim-cab-drivers/#0_uMsT8dnPqP (Accessed June 2020).
  • 49. Nissenbaum H. Privacy in context: Technology, policy, and the integrity of social life: Stanford University Press; 2009.
  • 51. Goel V. As Data Overflows Online, Researchers Grapple With Ethics 2014. Available from: https://www.nytimes.com/2014/08/13/technology/the-boon-of-online-data-puts-social-science-in-a-quandary.html (Accessed May 2020).
  • 52. Franzke AS, Bechmann A, Zimmer M, Ess C, Association of Internet Researchers. Internet Research: Ethical Guidelines 3.0. 2019. Available from: https://aoir.org/reports/ethics3.pdf (Accessed July 2020).
  • 53. Steinmann M, Matei SA, Collmann J. A theoretical framework for ethical reflection in big data research. In: Collmann J, Matei SA, editors. Ethical Reasoning in Big Data. Switzerland: Springer; 2016. p. 11–27.
  • 54. Rader E, Cotter K, Cho J, editors. Explanations as mechanisms for supporting algorithmic transparency. Proceedings of the 2018 CHI conference on human factors in computing systems; 2018.
Published: 13 July 2021

Assisting you to advance with ethics in research: an introduction to ethical governance and application procedures

  • Shivadas Sivasubramaniam 1 ,
  • Dita Henek Dlabolová 2 ,
  • Veronika Kralikova 3 &
  • Zeenath Reza Khan 3  

International Journal for Educational Integrity, volume 17, Article number: 14 (2021)


Ethics and ethical behaviour are the fundamental pillars of a civilised society. The focus on ethical behaviour is indispensable in certain fields such as medicine, finance, or law. In fact, ethics takes precedence in anything that would include, affect, transform, or influence individuals, communities or any living creatures. Many institutions within Europe have set up their own committees to focus on or approve activities that have ethical impact. In contrast, less developed countries (worldwide) are trying to set up these committees to govern their academia and research. As the first European consortium established to assist academic integrity, the European Network for Academic Integrity (ENAI), we felt it important to guide those institutions and communities that are trying to conduct research according to ethical principles. We have established an ethical advisory working group within ENAI with the aim of promoting ethics within curricula, research and institutional policies. We are constantly researching available data on this subject and are committed to helping academia convey and conduct ethical behaviour. Upon preliminary review and discussion, the group found a disparity in understanding, practice and teaching approaches to the ethical aspects of research projects among peers. Therefore, this short paper preliminarily aims to critically review the available information on ethics, the history behind establishing ethical principles and the international guidelines that govern research.

The paper is based on a workshop conducted at the 5th International Conference Plagiarism across Europe and Beyond, at Mykolas Romeris University, Lithuania, in 2019. During the workshop, we detailed a) the basic needs of an ethical committee within an institution; b) a typical ethical approval process (with examples from three different universities); and c) ways to obtain informed consent, with some examples. These are summarised in this paper with example comparisons of ethical approval processes from different universities. We believe this paper will provide guidelines for preparing and training both researchers and research students to appropriately uphold ethical practices through ethical approval processes.

Introduction

Ethics and ethical behaviour (often linked to “responsible practice”) are the fundamental pillars of a civilised society. Ethical behaviour with integrity is important for maintaining academic and research activities. It affects everything we do, and takes precedence in anything that would include, affect, transform, or impact upon individuals, communities or any living creatures. In other words, ethics helps us improve our living standards (LaFollette, 2007). The focus on ethical behaviour is indispensable in certain fields such as medicine, finance, or law, but is also gaining recognition in all disciplines engaged in research. Therefore, institutions are expected to develop ethical guidelines in research to maintain quality, instil integrity and, above all, be transparent, thereby limiting any allegation of misconduct (Flite and Harman, 2013). This is especially true for higher education organisations that promote research and scholarly activities. Many European institutions have developed their own regulations for ethics by incorporating international codes (Getz, 1990). Less developed countries are trying to set up these committees to govern their academia and research. The World Health Organization has stated that adhering to “ethical principles … [is central and important]... in order to protect the dignity, rights and welfare of research participants” (WHO, 2021). Ethical guidelines taught to students can help develop ethical researchers and members of society who uphold the values of ethical principles in practice.

As the first Europe-wide consortium established to assist academic integrity (the European Network for Academic Integrity – ENAI), we felt it important to guide institutions and communities that are trying to teach, research, and embed ethical principles by providing an overarching understanding of the ethical guidelines that may influence policy. Therefore, we set up an advisory working group within ENAI in 2018 to support matters related to ethics and ethical committees and to assist with ethics-related teaching activities.

Upon preliminary review and discussion, the group found a disparity in understanding, practice and teaching approaches to ethical applications among peers. This became the premise for this research paper. We first carried out a literature survey to review and summarise existing ethical governance (with historical perspectives) and the procedures already in place to guide researchers in different discipline areas. By doing so, we attempted to consolidate, document and provide the important steps in a typical ethical application process, with example procedures from different universities. Finally, we attempted to provide insights and findings from practical workshops carried out at the 5th International Conference Plagiarism across Europe and Beyond, at Mykolas Romeris University, Lithuania, in 2019, focussing on:

• highlighting the basic needs of an ethical committee within an institution,

• discussing and sharing examples of a typical ethical approval process,

• providing guidelines on the ways to teach research ethics with some examples.

We believe this paper provides guidelines on preparing and training both researchers and research students in appropriately upholding ethical practices through ethical approval processes.

Background literature survey

Responsible research practice (RRP) is scrutinised through the lens of ethical principles and professional standards (WHO’s Code of Conduct for Responsible Research, 2017). The Singapore Statement on Research Integrity (2010) has provided internationally accepted guidance for RRP. The statement is based on maintaining honesty, accountability and professional courtesy in all aspects of research, and on maintaining fairness during collaborations. In other words, it does not simply focus on the procedural part of research; instead, it covers wider aspects of “integrity” beyond the operational aspects (Israel and Drenth, 2016).

Institutions should focus on providing ethical guidance based on principles and values reflecting all aspects/stages of research (from the funding application/project development stage up to or beyond the project closing stage). Figure 1 summarizes the different aspects/stages of typical research and highlights the need for RRP in compliance with ethical governance at each stage, with examples (the figure is based on Resnik, 2020; Žukauskas et al., 2018; Anderson, 2011; Fouka and Mantzorou, 2011).

Fig. 1

Summary of the enabling of ethical governance at different stages of research. Note that it is imperative for researchers to proactively consider the ethical implications before, during and after the actual research process. The summary shows that RRP should be in line with ethical considerations even long before the ethical approval stage.

Individual responsibilities to enhance RRP

As explained in Fig. 1, successfully governed research should consider ethics at the planning stages, prior to the research itself. Many international guidelines are compatible in enforcing/recommending the 14 different “responsibilities” that were first highlighted in the Singapore Statement (2010) for researchers to follow in order to achieve competency in RRP. To understand the purpose and expectations of these ethical guidelines, we carried out an initial literature survey on expected individual responsibilities. These are summarised in Table 1.

By following these directives, researchers can carry out accountable research by maximising ethical self-governance whilst minimising misconduct. In our own experience of working with many researchers, their focus usually revolves around ethical “clearance” rather than behaviour. In other words, they perceive this as a paper exercise rather than trying to “own” ethical behaviour in everything they do. Although the ethical principles and responsibilities are explicitly highlighted in the majority of international guidelines [such as the UK’s Research Governance Policy (NICE, 2018), the Australian Government’s National Statement on Ethical Conduct in Human Research (Difn website a - NSECHR, 2018), the Singapore Statement (2010), etc.], and the importance of a holistic approach to ethical decision making has been argued, many researchers and/or institutions only focus on ethics linked to the procedural aspects.

Past studies have also highlighted inconsistencies in institutional guidelines, pointing to the fact that these inconsistencies may hinder expected research progress (Desmond & Dierickx, 2021; Alba et al., 2020; Dellaportas et al., 2014; Speight, 2016). It may also be that these were, and still are, linked to institutional perceptions/expectations or to pre-empting contextual conditions imposed by individual countries. In fact, it is interesting to note that many research organisations and HE institutions establish their own policies based on these directives.

Research governance - origins, expectations and practices

Ethical governance in clinical medicine helps by providing a structure for analysis and decision-making. By providing workable definitions of benefits and risks, as well as guidance for evaluating and balancing benefits against risks, it supports researchers in protecting participants and the general population.

According to the definition given by the National Institute for Health and Care Excellence, UK (NICE, 2018), “research governance can be defined as the broad range of regulations, principles and standards of good practice that ensure high quality research”. As stated above, our literature-based survey showed that most ethical definitions essentially evolved from the medical field, and other disciplines have used these principles to develop their own ethical guidance. Interestingly, historical data show that medical research was long “self-governed”, or in other words guided by the moral behaviour of individual researchers (Fox, 2017; Shaw et al., 2005; Getz, 1990). For example, early human vaccination trials conducted in the 1700s used immediate family members as test subjects (Fox, 2017). Here the moral justification might have been that the subjects at risk were either the scientists themselves or their immediate families, whereas those who would reap the benefits of vaccination were the general public and wider communities. However, according to current ethical principles, this assumption is entirely unacceptable.

Historically, ambiguous decision-making and the resulting incidents of research misconduct led to the need for ethical research governance as early as the 1940s. For instance, the importance of international governance was realised only after World War II, when people were astonished to learn of the unethical research practices carried out by Nazi scientists. As a result, the Nuremberg Code was published in 1947. The code mainly focussed on the following:

Informed consent is essential, and research involving humans should be based on prior animal work;

The anticipated benefits should outweigh the risks;

Research should be conducted only by qualified scientists;

Physical and mental suffering should be avoided; and

Human research that would result in death or disability should be avoided.

(Weindling, 2001 ).

Unfortunately, it was reported that many researchers in the USA and elsewhere considered the Nuremberg Code a document condemning the Nazi atrocities rather than a code for ethical governance, and therefore ignored its directives (Ghooi, 2011). It was only in 1964 that the World Medical Association published the Helsinki Declaration, which set the stage for ethical governance and the implementation of the Institutional Review Board (IRB) process (Shamoo and Irving, 1993). This declaration was based on the Nuremberg Code. In addition, the declaration also paved the way for requiring that research be conducted in accordance with these guidelines.

Incidentally, the focus on research/ethical governance gained momentum in 1974. As a result, a report on ethical principles and guidelines for the protection of human subjects of research was published in 1979 (The Belmont Report, 1979). This report paved the way for the current forms of ethical governance in biomedical and behavioural research by providing guidance.

Since 1994, the WHO itself has provided several guidance documents to health care policy-makers, researchers and other stakeholders detailing the key concepts of medical ethics. These are specific to applying ethical principles in global public health.

Likewise, the World Organization for Animal Health (WOAH) and the International Convention for the Protection of Animals (ICPA) provide guidance on animal welfare in research. Thanks to this continuous guidance, together with accepted practices, there are internationally established ethical guidelines for carrying out medical research. Our literature survey further identified freely available guidance from independent organisations such as COPE (Committee on Publication Ethics) and ALLEA (All European Academies), which provide support for maintaining research ethics in other fields such as education, sociology and psychology. In reality, ethical governance is practiced differently in different countries. In the UK, clinical research governance oversees all NHS-related medical research (Mulholland and Bell, 2005). Although governance in other disciplines is not entirely centralised, many research funding councils and organisations [such as UKRI (UK Research and Innovation), BBSRC (Biotechnology and Biological Sciences Research Council), MRC (Medical Research Council) and ESRC (Economic and Social Research Council)] provide ethical governance and expect institutional adherence and monitoring. They expect local institutional (i.e. university) research governance for day-to-day monitoring of the research conducted within the organisation, reported back to these funding bodies monthly or annually (Department of Health, 2005). Likewise, there are nationally coordinated/regulated ethics governing bodies such as the US Office for Human Research Protections (US-OHRP), the National Institutes of Health (NIH) and the Canadian Institutes of Health Research (CIHR) in the USA and Canada respectively (Mulholland and Bell, 2005). The OHRP in the USA formally reviews all research activities involving human subjects. In Canada, the CIHR works with the Natural Sciences and Engineering Research Council (NSERC) and the Social Sciences and Humanities Research Council (SSHRC). Together they have produced the Tri-Council Policy Statement (TCPS) (Stephenson et al., 2020) as ethical governance. All Canadian institutions are expected to adhere to this policy when conducting research. As for Australia, research is governed by the Australian Code for the Responsible Conduct of Research (2008). It identifies the responsibilities of institutions and researchers in all areas of research. The code was jointly developed by the National Health and Medical Research Council (NHMRC), the Australian Research Council (ARC) and Universities Australia (UA). This information is summarized in Table 2.

Basic structure of an institutional ethical advisory committee (EAC)

The WHO published an article defining the basic concepts of an ethical advisory committee in 2009 (WHO, 2009 - see above). According to this, many countries have established research governance and monitor ethical practice in research via national and/or regional review committees. The main aims of research ethics committees include reviewing study proposals, examining the justifications for human/animal use, weighing the merits and demerits of that use (linking risks to potential benefits) and ensuring that local ethical guidelines are followed (Difn website b - Enago Academy, Importance of Ethics Committees in Scholarly Research, 2020 ; Guide for Research Ethics - Council of Europe, 2014 ). Once the research has started, the committee needs to carry out periodic surveillance to ensure that institutional ethical norms are followed during and beyond the study. It may also be involved in setting up and/or reviewing institutional policies.

For these aspects, an IRB (or institutional ethical advisory committee - IEAC) is essential for local governance to enhance best practices. The advantage of an IRB/IEAC is that its members understand the institutional conditions and can closely monitor the ongoing research, including any changes in research directions. On the other hand, the IRB may be overly inclined to accept applications, influenced by the local agenda of achieving research excellence and disregarding ethical issues (Kotecha et al., 2011 ; Kayser-Jones, 2003 ), or it may be influenced by financial interests in attracting external funding. In this respect, regional and national ethics committees are advantageous for ensuring ethical practice. Owing to their impartiality, they provide greater consistency and legitimacy to the research (WHO, 2009 ). However, the ethical approval process of regional and national ethics committees can be time-consuming, as they lack local knowledge.

As for membership of IRBs, most of the guidelines [WHO, NICE, Council of Europe (2012), European Commission - Facilitating Research Excellence in FP7 ( 2013 ) and OHRP] insist on a variety of representation, including experts in different fields of research and non-experts with an understanding of local, national and international conflicts of interest. The former are able to understand and clarify the procedural elements of research in different fields, whilst the latter help to make neutral and impartial decisions. These non-experts are usually not affiliated to the institution and are individuals representing the broader community (particularly with respect to social, legal or cultural considerations). IRBs with this variety of representation are not only in a position to understand study procedures and their potential direct or indirect consequences for participants, but are also able to identify any community, cultural or religious implications of the study.

Understanding the subtle differences between ethics and morals

Interestingly, many ethical guidelines are grounded in society’s moral “beliefs”, to the extent that the words “ethics” and “morals” are often used to define each other. There are, however, several subtle differences between them, which we attempt to compare and contrast herein. In the past, many authors have used the words “morals” and “ethics” interchangeably (Warwick, 2003 ; Kant, 2018 ; Hazard, GC (Jr)., 1994 ; Larry, 1982 ). However, ethics is linked to rules governed by an external source, such as codes of conduct in workplaces (Kuyare et al., 2014 ), whereas morals refer to an individual’s own principles regarding right and wrong. Quinn ( 2011 ) defines morality as “ rules of conduct describing what people ought and ought not to do in various situations … ” while ethics is “... the philosophical study of morality, a rational examination into people’s moral beliefs and behaviours ”. For instance, in a case in which parents demanded that schools overturn a ban on the corporal punishment of children by schools and teachers (Children’s Rights Alliance for England, 2005 ), the parents believed that teachers should assume the role of parent in school and use corporal or physical punishment on children who misbehaved; this demand stemmed from the parents’ own beliefs. The children’s right to be protected from “inhuman and degrading” treatment, by contrast, is based on the ethical principles of the society and governed by the law of the land. Similarly, the media have recently highlighted some parents’ opposition to LGBT (Lesbian, Gay, Bisexual, and Transgender) education for their children (BBC News, 2019 ). One parent argued that “teaching young children about LGBT at a very early stage is ‘morally’ wrong” and that schools should “let them learn by themselves as they grow”; this stance is linked to and governed by the morals of a community, whereas LGBT rights are based on the ethical principles of that society and governed by the law of the land. Thus, morals are tied to the beliefs of individuals or groups, while individuals, especially those working in medical or judicial professions, have to follow an ethical code laid down by their profession, regardless of their own feelings or preferences. For instance, a lawyer is expected to follow professional ethics and represent a defendant, despite the fact that their morals indicate the defendant is guilty.

In fact, we as a group could not find many scholarly articles clearly comparing or contrasting ethics with morals. However, a table presented by Surbhi ( 2015 ) (Difn website c ) tries to differentiate these two terms (see Table  3 ).

Although Table 3 gives some insight into the differences between these two terms, in practice they are often used loosely, mainly because of their ambiguity. As a group focused on the application of these principles, we recommend using the term “ethics” and avoiding “morals” in research and academia.

Based on the literature survey carried out, we were able to identify the following gaps:

there is some disparity in existing literature on the importance of ethical guidelines in research

there is a lack of consensus on what code of conduct should be followed, where it should be derived from and how it should be implemented

The mission of ENAI’s ethical advisory working group

The Ethical Advisory Working Group of ENAI was established in 2018 to promote ethical codes of conduct and practice amongst higher education organisations within Europe and beyond (European Network for Academic Integrity, 2018 ). We aim to provide unbiased advice and consultancy on embedding ethical principles within all types of academic, research and public engagement activities. Our main objective is to promote ethical principles and share good practice in this field. This advisory group aims to standardise ethical norms and to offer strategic support to activities including (but not limited to):

● rendering advice and assistance to develop institutional ethical committees and their regulations in member institutions,

● sharing good practice in research and academic ethics,

● acting as a critical guide to institutional review processes, assisting them to maintain/achieve ethical standards,

● collaborating with similar bodies in establishing collegiate partnerships to enhance awareness and practice in this field,

● providing support within and outside ENAI to develop materials to enhance teaching activities in this field,

● organising training for students and early-career researchers about ethical behaviours in the form of lectures, seminars, debates and webinars,

● enhancing research and dissemination of the findings in matters and topics related to ethics.

The following sections present our suggestions based on our collective experiences, the literature review provided in earlier sections and the workshop feedback collected:

a) basic needs of an ethical committee within an institution;

b) a typical ethical approval process (with examples from three different universities); and

c) the ways to obtain informed consent with some examples. This would give advice on preparing and training both researchers and research students in appropriately upholding ethical practices through ethical approval processes.

Setting up institutional ethical committees (ECs)

Institutional Ethical Committees (ECs) are essential for governing every aspect of the activities undertaken by an institution. For higher education organisations, they are vital for establishing ethical behaviour among students and staff across the research, education and scholarly activities they undertake. These committees should be knowledgeable about international laws relating to different fields of study (such as science, medicine, business, finance, law, and social sciences). The advantages and disadvantages of institutional, subject-specific or common (statutory) ECs are summarised in Fig.  2 . Some institutions have developed individual ECs linked to specific fields (or subject areas), whilst others have one institutional committee that oversees the entire ethical behaviour and approval process. There is no clear preference between the two, as both have their own advantages and disadvantages (see Fig. 2 ). Subject-specific ECs are attractive to medical, law and business provisions, as it is perceived that the members of such committees understand the subject and can therefore comprehend the need for the proposed research/activity (Kadam, 2012 ; Schnyder et al., 2018 ). However, others argue that, because of this “ specificity ”, the committee may fail to foresee the wider implications of an application. On the other hand, university-wide ECs do consider the wider implications, yet they may find it difficult to understand the purpose and specific applications of the research. Not everyone understands the dynamics of all types of research methodologies, data collection, etc., and therefore a proposal might be rejected merely because the EC could not understand the research application (Getz, 1990 ).

figure 2

Summary of advantages and disadvantages of three different forms of ethical committees

[N.B. for Fig. 2 : Examples of different types of ethical application procedures and forms used were discussed with the workshop attendees to enhance their understanding of the differences. GDPR = General Data Protection Regulation].

Although we recommend a designated EC with relevant professional, academic and ethical expertise to deal with particular types of applications, the membership of any EC should include some non-experts who represent the wider community (see above). Having some non-experts on an EC not only prompts researchers to explain their research in layperson’s terms (by thinking outside the box) but also ensures efficiency without compromising participant/animal safety. They may even help to address common ethical issues outside the research culture. Some UK universities offer this membership to a member of the clergy, a councillor or a parliamentarian who has no links to the institution. Most importantly, it is vital for all EC members to undertake further training in addition to having previous experience in the relevant field of research ethics.

Another issue that raises concerns is multi-centre research involving several institutions, where institutional ethical approval is needed from each partner. In some cases, such as clinical research within the UK, a common statutory EC called the National Health Service (NHS) Research Ethics Committee (NREC) is in place to cover research ethics involving all partner institutions (NHS, 2018 ). The process of obtaining approval from this type of EC takes time, so advance planning is needed.

Ethics approval forms and process

During the workshop, we discussed, as examples, some anonymised application forms for qualitative and quantitative research obtained from open-access sources. For the purpose of understanding research ethics, we arbitrarily divided research into two categories: research based on (a) quantitative and (b) qualitative methodologies. As the names suggest, their research approaches differ considerably. The discussion elicited how ECs devise different types of ethical application forms and questions. Qualitative research is often conducted through “face-to-face” interviews, which have implications for volunteer anonymity.

Furthermore, the discussions posited that when interviews are replaced by online surveys, the surveys have to be administered through registered university staff to maintain confidentiality. This becomes difficult when the research is a multi-centre study. These types of issues are also common in medical research regarding participants’ anonymity, confidentiality and, above all, their right to withdraw consent to be involved in research.

Storing and protecting data collected in the process of the study is also a point of consideration when applying for approval.

Finally, the ethical processes for invasive (involving humans/animals) and non-invasive (questionnaire-based) research may differ slightly from one another. The following research areas are considered investigations that need ethical approval:

research that involves human participants (see below)

use of the ‘products’ of human participants (see below)

work that potentially impacts on humans (see below)

research that involves animals

In addition, it is important to provide a disclaimer even if ethical approval is deemed unnecessary. The following word cloud (Fig.  3 ) shows the important variables that need to be considered at the brainstorming stage before an ethical application. It is worth noting the importance of proactive planning to predict the “unexpected” during different phases of a research project (such as planning, execution, publication, and future directions). Some applications (such as working with vulnerable individuals or children) will require safety protection clearance (such as DBS - Disclosure and Barring Service - clearance, commonly obtained from the local police). Please see the section on Research involving humans - informed consents for further discussion.

figure 3

Examples of important variables that need to be considered for an ethical approval

It is also imperative to report, or re-apply for ethical approval for, any minor or major post-approval changes made to the original proposal. In the case of methodological changes, evidence of risk assessments for the changes and/or COSHH (Control of Substances Hazardous to Health Regulations) assessments should also be given. Likewise, the addition of new collaborative partners or the removal of researchers should be notified to the IEAC.

Other findings include:

in the case of complete changes to the project, the research must be stopped and new approval sought,

if any adverse effects on project participants (human or non-human) are noticed, these should be reported to the committee for appropriate clearance to continue the work, and

the completion of the project must also be notified, with an indication of whether the researchers may restart the project at a later stage.

Research involving humans - informed consents

Our discussions of research involving humans and the literature review highlight that human subjects/volunteers must participate in research willingly, after being adequately informed about the project. Research involving humans and animals therefore takes precedence in obtaining ethical clearance, and its requirements must be strictly adhered to; one such requirement is providing a participant information sheet/leaflet. This sheet should contain a full explanation of the research being carried out, written in layperson’s terms (Manti and Licari 2018 ; Hardicre 2014 ). Measures should also be in place to explain and clarify any doubts raised by participants. In addition, there should be a clear statement on how the participants’ anonymity is protected. We provide some example questions below to help researchers write this participant information sheet:

What is the purpose of the study?

Why have they been chosen?

What will happen if they take part?

What do they have to do?

What happens when the research stops?

What if something goes wrong?

What will happen to the results of the research study?

Will taking part be kept confidential?

How to handle “vulnerable” participants?

How to mitigate risks to participants?

Many institutional ethics committees expect researchers to produce a FAQ (frequently asked questions) document in addition to the information about the research. Most importantly, researchers also need to provide an informed consent form, which should be signed by each human participant. The five elements identified as needing to be considered in an informed consent statement are summarized in Fig.  4 below (slightly modified from the Federal Policy for the Protection of Human Subjects ( 2018 ) - Diffn website c ).

figure 4

Five basic elements to consider for an informed consent [figure adapted from Diffn website c ]

The informed consent form should always contain a clause allowing the participant to withdraw their consent at any time. Should this happen, all data from that participant should be removed from the study without affecting their anonymity.

Typical research ethics approval process

In this section, we provide an example flow chart explaining how researchers may choose the appropriate application and process, as highlighted in Fig.  5 . It is imperative to note, however, that these are examples only and that some institutions may have one unified application with separate sections to demarcate qualitative and quantitative research criteria.

figure 5

Typical ethical approval processes for quantitative and qualitative research. [N/B for Fig. 5 - This simplified flow chart shows that fundamental process for invasive and non-invasive EC application is same, the routes and the requirements for additional information are slightly different]

Once the ethical application is submitted, the EC should ensure a clear approval procedure with a distinctly defined timeline. An example flow chart showing the procedure for ethical approval, obtained as open access from the University of Leicester, is presented in Fig.  6 . Further examples of the ethical approval process and governance were discussed in the workshop.

figure 6

An example of the ethical approval procedure conducted within the University of Leicester (figure obtained from the University of Leicester research pages - Difn website d - open access)

Strategies for ethics educations for students

Educating students about the importance of ethics and ethical behaviour in research and scholarly activities is essential. The literature on medical research posits that many universities incorporate ethics in postgraduate degrees, but there is less appetite to deliver modules or even lectures focusing on research ethics at undergraduate level (Seymour et al., 2004 ; Willison and O’Regan, 2007 ). This may be because the undergraduate degree structure does not really focus on research (DePasse et al., 2016 ). However, as Orr ( 2018 ) suggested, institutions should focus more on educating all students about ethics/ethical behaviour and their importance in research than on enforcing punitive measures for unethical behaviour. Therefore, as an advisory committee, and based on our preliminary literature survey and workshop results, we strongly recommend incorporating ethics education within the undergraduate curriculum. Among institutions that focus on ethics education for both undergraduate and postgraduate courses, the approaches are either (a) lecture-based delivery, (b) a case-study-based approach or (c) a combined delivery starting with a lecture on the basic principles of ethics followed by a debate-based discussion using interesting case studies. According to our findings, the combined method is much more effective than the other two, as explained next.

As many academics who have been involved in teaching ethics and/or research ethics agree, the underlying principles of ethics are often perceived as a boring subject. Therefore, lecture-based delivery may not be suitable. On the other hand, a debate-based approach, though attractive and able to generate instant student interest, cannot be effective without students understanding the underlying basic principles. In addition, when selecting case studies, it is advisable to choose cases addressing all the different types of ethical dilemmas. As an advisory group within ENAI, we are in the process of collating supporting materials to help develop institutional policies, creating advisory documents to help in obtaining ethical approvals, and preparing teaching materials to enhance debate-based lesson plans that can be used by member and other institutions.

Concluding remarks

In summary, our literature survey and workshop findings highlight that researchers should accept that ethics underpins everything we do, especially in research. Although obtaining ethical approval is tedious, it is an imperative process in which proactive thinking is essential to identify ethical issues that might affect the project. Our findings further lead us to state that the ethical approval process differs from institution to institution, and we strongly recommend that researchers follow their institutional guidelines and the underlying ethical principles. The ENAI workshop in Vilnius highlighted the importance of ethical governance through establishing ECs, discussed different types of ECs and procedures with some examples, and highlighted the importance of student education in imparting an ethical culture within research communities, an area that needs further study in future.

Declarations


Availability of data and materials

Authors confirm that the data supporting the findings of this study are available within the article.

Abbreviations

ALLEA: All European Academies
ARC: Australian Research Council
BBSRC: Biotechnology and Biological Sciences Research Council
CIHR: Canadian Institutes for Health Research
COPE: Committee on Publication Ethics
EC: Ethical Committee
ENAI: European Network for Academic Integrity
ESRC: Economic and Social Research Council
ICPA: International Convention for the Protection of Animals
IEAC: Institutional Ethical Advisory Committee
IRB: Institutional Review Board
IUP: Immaculata University of Pennsylvania
LGBT: Lesbian, Gay, Bisexual, and Transgender
MRC: Medical Research Council
NHS: National Health Service
NIH: National Institutes of Health
NICE: National Institute for Health and Care Excellence
NHMRC: National Health and Medical Research Council
NSERC: Natural Sciences and Engineering Research Council
NREC: National Research Ethics Committee
NSECHR: National Statement on Ethical Conduct in Human Research
RRP: Responsible Research Practice
SSHRC: Social Sciences and Humanities Research Council
TCPS: Tri-Council Policy Statement
WOAH: World Organization for Animal Health
UA: Universities Australia
UKRI: UK Research and Innovation
US-OHRP: US Office for Human Research Protections

Alba S, Lenglet A, Verdonck K, Roth J, Patil R, Mendoza W, Juvekar S, Rumisha SF (2020) Bridging research integrity and global health epidemiology (BRIDGE) guidelines: explanation and elaboration. BMJ Glob Health 5(10):e003237. https://doi.org/10.1136/bmjgh-2020-003237


Anderson MS (2011) Research misconduct and misbehaviour. In: Bertram Gallant T (ed) Creating the ethical academy: a systems approach to understanding misconduct and empowering change in higher education. Routledge, pp 83–96

BBC News. (2019). Birmingham school LGBT LESSONS PROTEST investigated. March 8, 2019. Retrieved February 14, 2021, available online. URL: https://www.bbc.com/news/uk-england-birmingham-47498446

Children’s Rights Alliance for England. (2005). R (Williamson and others) v Secretary of State for Education and Employment. Session 2004–05. [2005] UKHL 15. Available Online. URL: http://www.crae.org.uk/media/33624/R-Williamson-and-others-v-Secretary-of-State-for-Education-and-Employment.pdf

Council of Europe. (2014). Texts of the Council of Europe on bioethical matters. Available Online. https://www.coe.int/t/dg3/healthbioethic/Texts_and_documents/INF_2014_5_vol_II_textes_%20CoE_%20bio%C3%A9thique_E%20(2).pdf

Dellaportas S, Kanapathippillai S, Khan A, Leung P (2014) Ethics education in the Australian accounting curriculum: a longitudinal study examining barriers and enablers. 23(4):362–382. https://doi.org/10.1080/09639284.2014.930694

DePasse JM, Palumbo MA, Eberson CP, Daniels AH (2016) Academic characteristics of orthopaedic surgery residency applicants from 2007 to 2014. JBJS 98(9):788–795. https://doi.org/10.2106/JBJS.15.00222

Desmond H, Dierickx K (2021) Research integrity codes of conduct in Europe: understanding the divergences. https://doi.org/10.1111/bioe.12851

Difn website a - National Statement on Ethical Conduct in Human Research (NSECHR). (2018). Available Online. URL: https://www.nhmrc.gov.au/about-us/publications/australian-code-responsible-conduct-research-2018

Difn website b - Enago academy Importance of Ethics Committees in Scholarly Research (2020, October 26). Available online. URL: https://www.enago.com/academy/importance-of-ethics-committees-in-scholarly-research/

Difn website c - Ethics vs Morals - Difference and Comparison. Retrieved July 14, 2020. Available online. URL: https://www.diffen.com/difference/Ethics_vs_Morals

Difn website d - University of Leicester. (2015). Staff ethics approval flowchart. May 1, 2015. Retrieved July 14, 2020. Available Online. URL: https://www2.le.ac.uk/institution/ethics/images/ethics-approval-flowchart/view

European Commission - Facilitating Research Excellence in FP7 (2013) https://ec.europa.eu/research/participants/data/ref/fp7/89888/ethics-for-researchers_en.pdf

European Network for Academic Integrity. (2018). Ethical advisory group. Retrieved February 14, 2021. Available online. URL: http://www.academicintegrity.eu/wp/wg-ethical/

Federal Policy for the Protection of Human Subjects. (2018). Retrieved February 14, 2021. Available Online. URL: https://www.federalregister.gov/documents/2017/01/19/2017-01058/federal-policy-for-the-protection-of-human-subjects#p-855

Flite CA, Harman LB (2013) Code of ethics: principles for ethical leadership. Perspect Health Inf Manag 10(Winter):1d. PMID: 23346028

Fouka G, Mantzorou M (2011) What are the major ethical issues in conducting research? Is there a conflict between the research ethics and the nature of nursing. Health Sci J 5(1) Available Online. URL: https://www.hsj.gr/medicine/what-are-the-major-ethical-issues-in-conducting-research-is-there-a-conflict-between-the-research-ethics-and-the-nature-of-nursing.php?aid=3485

Fox G (2017) History and ethical principles. The University of Miami and the Collaborative Institutional Training Initiative (CITI) Program URL  https://silo.tips/download/chapter-1-history-and-ethical-principles # (Available Online)

Getz KA (1990) International codes of conduct: An analysis of ethical reasoning. J Bus Ethics 9(7):567–577

Ghooi RB (2011) The nuremberg code–a critique. Perspect Clin Res 2(2):72–76. https://doi.org/10.4103/2229-3485.80371

Hardicre J (2014) Valid informed consent in research: an introduction. Br J Nurs 23(11):564–567. https://doi.org/10.12968/bjon.2014.23.11.564

Hazard, GC (Jr). (1994). Law, morals, and ethics. Yale law school legal scholarship repository. Faculty Scholarship Series. Yale University. Available Online. URL: https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=3322&context=fss_papers

Israel, M., & Drenth, P. (2016). Research integrity: perspectives from Australia and Netherlands. In T. Bretag (Ed.), Handbook of academic integrity (pp. 789–808). Springer, Singapore. https://doi.org/10.1007/978-981-287-098-8_64

Kadam R (2012) Proactive role for ethics committees. Indian J Med Ethics 9(3):216. https://doi.org/10.20529/IJME.2012.072

Kant I (2018) The metaphysics of morals. Cambridge University Press, UK https://doi.org/10.1017/9781316091388

Kayser-Jones J (2003) Continuing to conduct research in nursing homes despite controversial findings: reflections by a research scientist. Qual Health Res 13(1):114–128. https://doi.org/10.1177/1049732302239414

Kotecha JA, Manca D, Lambert-Lanning A, Keshavjee K, Drummond N, Godwin M, Greiver M, Putnam W, Lussier M-T, Birtwhistle R (2011) Ethics and privacy issues of a practice-based surveillance system: need for a national-level institutional research ethics board and consent standards. Can Fam physician 57(10):1165–1173.  https://europepmc.org/article/pmc/pmc3192088

Kuyare, MS., Taur, SR., Thatte, U. (2014). Establishing institutional ethics committees: challenges and solutions–a review of the literature. Indian J Med Ethics. https://doi.org/10.20529/IJME.2014.047

LaFollette, H. (2007). Ethics in practice (3rd edition). Blackwell

Larry RC (1982) The teaching of ethics and moral values in teaching. J High Educ 53(3):296–306. https://doi.org/10.1080/00221546.1982.11780455

Manti S, Licari A (2018) How to obtain informed consent for research. Breathe (Sheff) 14(2):145–152. https://doi.org/10.1183/20734735.001918

Mulholland MW, Bell J (2005) Research Governance and Research Funding in the USA: What the academic surgeon needs to know. J R Soc Med 98(11):496–502. https://doi.org/10.1258/jrsm.98.11.496

National Institute of Health (NIH) Ethics in Clinical Research. n.d. Available Online. URL: https://clinicalcenter.nih.gov/recruit/ethics.html

NHS (2018) Flagged Research Ethics Committees. Retrieved February 14, 2021. Available online. URL: https://www.hra.nhs.uk/about-us/committees-and-services/res-and-recs/flagged-research-ethics-committees/

NICE (2018) Research governance policy. Retrieved February 14, 2021. Available online. URL: https://www.nice.org.uk/Media/Default/About/what-we-do/science-policy-and-research/research-governance-policy.pdf

Orr, J. (2018). Developing a campus academic integrity education seminar. J Acad Ethics 16(3), 195–209. https://doi.org/10.1007/s10805-018-9304-7

Quinn, M. (2011). Introduction to Ethics. Ethics for an Information Age. 4th Ed. Ch 2. 53–108. Pearson. UK

Resnik. (2020). What is ethics in Research & why is it Important? Available Online. URL: https://www.niehs.nih.gov/research/resources/bioethics/whatis/index.cfm

Schnyder S, Starring H, Fury M, Mora A, Leonardi C, Dasa V (2018) The formation of a medical student research committee and its impact on involvement in departmental research. Med Educ Online 23(1):1. https://doi.org/10.1080/10872981.2018.1424449

Seymour E, Hunter AB, Laursen SL, DeAntoni T (2004) Establishing the benefits of research experiences for undergraduates in the sciences: first findings from a three-year study. Sci Educ 88(4):493–534. https://doi.org/10.1002/sce.10131

Shamoo AE, Irving DN (1993) Accountability in research using persons with mental illness. Account Res 3(1):1–17. https://doi.org/10.1080/08989629308573826

Shaw S, Boynton PM, Greenhalgh T (2005) Research governance: where did it come from, what does it mean? Research governance framework for health and social care, 2nd ed. London: Department of Health. J R Soc Med 98(11):496–502. https://doi.org/10.1258/jrsm.98.11.496


Speight JG (2016) Ethics in the university. Scrivener Publishing LLC. https://doi.org/10.1002/9781119346449

Stephenson GK, Jones GA, Fick E, Begin-Caouette O, Taiyeb A, Metcalfe A (2020) What’s the protocol? Canadian university research ethics boards and variations in implementing tri-Council policy. Can J Higher Educ 50(1):68–81

Surbhi, S. (2015). Difference between morals and ethics [weblog]. March 25, 2015. Retrieved February 14, 2021. Available Online. URL: http://keydifferences.com/difference-between-morals-and-ethics.html

The Belmont Report (1979). Ethical Principles and Guidelines for the Protection of Human Subjects of Research. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Retrieved February 14, 2021. Available online. URL: https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf

The Singapore Statement on Research Integrity. (2020). Nicholas Steneck and Tony Mayer, Co-chairs, 2nd World Conference on Research Integrity; Melissa Anderson, Chair, Organizing Committee, 3rd World Conference on Research Integrity. Retrieved February 14, 2021. Available online. URL: https://wcrif.org/documents/327-singapore-statement-a4size/file

Warwick K (2003) Cyborg morals, cyborg values, cyborg ethics. Ethics Inf Technol 5(3):131–137. https://doi.org/10.1023/B:ETIN.0000006870.65865.cf

Weindling P (2001) The origins of informed consent: the international scientific commission on medical war crimes, and the Nuremberg code. Bull Hist Med 75(1):37–71. https://doi.org/10.1353/bhm.2001.0049

WHO. (2009). Research ethics committees Basic concepts for capacity-building. Retrieved February 14, 2021. Available online. URL: https://www.who.int/ethics/Ethics_basic_concepts_ENG.pdf

WHO. (2021). Chronological list of publications. Retrieved February 14, 2021. Available online. URL: https://www.who.int/ethics/publications/year/en/

Willison, J. and O’Regan, K. (2007). Commonly known, commonly not known, totally unknown: a framework for students becoming researchers. High Educ Res Dev 26(4). 393–409. https://doi.org/10.1080/07294360701658609

Žukauskas P, Vveinhardt J, Andriukaitienė R (2018) Research ethics. In: Vveinhardt J (ed) Management culture and corporate social responsibility. IntechOpen. https://doi.org/10.5772/intechopen.70629


Acknowledgements

The authors wish to thank the organising committee of the 5th International Conference “Plagiarism across Europe and Beyond” in Vilnius, Lithuania, for accepting this paper for presentation at the conference.

Funding: Not applicable, as this is an independent study that is not funded by any internal or external bodies.

Author information

Authors and affiliations.

School of Human Sciences, University of Derby, DE22 1, Derby, GB, UK

Shivadas Sivasubramaniam

Department of Informatics, Mendel University in Brno, Zemědělská, 1665, Brno, Czechia

Dita Henek Dlabolová

Centre for Academic Integrity in the UAE, Faculty of Engineering & Information Sciences, University of Wollongong in Dubai, Dubai, UAE

Veronika Kralikova & Zeenath Reza Khan


Contributions

The manuscript was written by the corresponding author with contributions from co-authors, who contributed equally to the presentation of this paper at the 5th International Conference “Plagiarism across Europe and Beyond” in Vilnius, Lithuania. The authors contributed equally to the information collection, which was then summarised as narrative explanations by the corresponding author and Dr. Zeenath Reza Khan, and checked and verified by Dr. Dlabolová and Ms. Králíková. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Shivadas Sivasubramaniam .

Ethics declarations

Competing interests.

We confirm that there are no potential competing interests with other organisations.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Sivasubramaniam, S., Dlabolová, D.H., Kralikova, V. et al. Assisting you to advance with ethics in research: an introduction to ethical governance and application procedures. Int J Educ Integr 17 , 14 (2021). https://doi.org/10.1007/s40979-021-00078-6


Received : 17 July 2020

Accepted : 25 April 2021

Published : 13 July 2021

DOI : https://doi.org/10.1007/s40979-021-00078-6


  • Higher education
  • Ethical codes
  • Ethics committee
  • Post-secondary education
  • Institutional policies
  • Research ethics

International Journal for Educational Integrity

ISSN: 1833-2595



The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool

  • Original Research
  • Open access
  • Published: 27 May 2024




  • David B. Resnik   ORCID: orcid.org/0000-0002-5139-9555 1 &
  • Mohammad Hosseini 2 , 3  



Using artificial intelligence (AI) in research offers many important benefits for science and society but also creates novel and complex ethical issues. While these ethical issues do not necessitate changing established ethical norms of science, they require the scientific community to develop new guidance for the appropriate use of AI. In this article, we briefly introduce AI and explain how it can be used in research, examine some of the ethical issues raised when using it, and offer nine recommendations for responsible use, including: (1) Researchers are responsible for identifying, describing, reducing, and controlling AI-related biases and random errors; (2) Researchers should disclose, describe, and explain their use of AI in research, including its limitations, in language that can be understood by non-experts; (3) Researchers should engage with impacted communities, populations, and other stakeholders concerning the use of AI in research to obtain their advice and assistance and address their interests and concerns, such as issues related to bias; (4) Researchers who use synthetic data should (a) indicate which parts of the data are synthetic; (b) clearly label the synthetic data; (c) describe how the data were generated; and (d) explain how and why the data were used; (5) AI systems should not be named as authors, inventors, or copyright holders but their contributions to research should be disclosed and described; (6) Education and mentoring in responsible conduct of research should include discussion of ethical use of AI.



1 Introduction: exponential growth in the use of artificial intelligence in scientific research

In just a few years, artificial intelligence (AI) has taken the world of scientific research by storm. AI tools have been used to perform or augment a variety of scientific tasks, including Footnote 1 :

Analyzing data and images [ 34 , 43 , 65 , 88 , 106 , 115 , 122 , 124 , 149 , 161 ].

Interpreting data and images [ 13 , 14 , 21 , 41 ].

Generating hypotheses [ 32 , 37 , 41 , 107 , 149 ].

Modelling complex phenomena [ 32 , 41 , 43 , 122 , 129 ].

Designing molecules and materials [ 15 , 37 , 43 , 205 ].

Generating data for use in validation of hypotheses and models [ 50 , 200 ].

Searching and reviewing the scientific literature [ 30 , 72 ].

Writing and editing scientific papers, grant proposals, consent forms, and institutional review board applications [ 3 , 53 , 54 , 82 , 163 ].

Reviewing scientific papers and other research outputs [ 53 , 54 , 98 , 178 , 212 ].

The applications of AI in scientific research appear to be limitless, and in the next decade AI is likely to completely transform the process of scientific discovery and innovation [ 6 , 7 , 8 , 9 , 105 , 201 ].

Although using AI in scientific research has steadily grown, ethical guidance has lagged far behind. With the exception of using AI to draft or edit scientific papers (see discussion in Sect.  7.6 ), most codes and policies do not explicitly address ethical issues related to using AI in scientific research. For example, the 2023 revision of the European Code of Conduct for Research Integrity [ 4 ] briefly discusses the importance of transparency. The code stipulates that researchers should report “their results and methods including the use of external services or AI and automated tools” (Ibid., p. 7) and considers “hiding the use of AI or automated tools in the creation of content or drafting of publications” as a violation of research integrity (Ibid. p. 10). One of the most thorough and up-to-date institutional documents, the National Institutes of Health Guidelines and Policies for the Conduct of Research provides guidance for using AI to write and edit manuscripts but not for other tasks [ 158 ]. Footnote 2 Codes of AI ethics, such as UNESCO’s [ 223 ] Ethics of Artificial Intelligence and the Office of Science and Technology Policy’s [ 168 , 169 ] Blueprint for an AI Bill of Rights, provide useful guidance for the development and use of AI in general without including specific guidance concerning the development and use of AI in scientific research [ 215 ].

There is therefore a gap in ethical and policy guidance concerning AI use in scientific research that needs to be filled to promote its appropriate use. Moreover, the need for guidance is urgent because using AI raises novel epistemological and ethical issues related to objectivity, reproducibility, transparency, accountability, responsibility, and trust in science [ 9 , 102 ]. In this paper, we will examine important questions related to AI’s impact on ethics of science. We will argue that while the use of AI does not require a radical change in the ethical norms of science, it will require the scientific community to develop new guidance for the appropriate use of AI. To defend this thesis, we will provide an overview of AI and an account of ethical norms of science, and then we will discuss the implications of AI for ethical norms of science and offer recommendations for its appropriate use.

2 What is AI?

AI can be defined as “a technical and scientific field devoted to the engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives [ 114 ].” AI is a subfield within the discipline of computer science [ 144 ]. However, the term ‘AI’ is also commonly used to refer to technologies (or tools) that can perform human tasks that require intelligence, such as perception, judgment, reasoning, or decision-making. We will use both senses of ‘AI’ in this paper, depending on the context.

While electronic calculators, cell phone apps, and programs that run on personal computers can perform functions associated with intelligence, they are not generally considered to be AI because they do not “learn” from the data [ 108 ]. As discussed below, AI systems can learn from the data insofar as they can adapt their programming in response to input data. While applying the term ‘learning’ to a machine may seem misleadingly anthropomorphic, it does make sense to say that a machine can learn if learning is regarded as a change in response to information about the environment [ 151 ]. Many different entities can learn in this sense of the term, including the immune system, which changes after being exposed to molecular information about pathogens, foreign objects, and other things that provoke an immune response.

This paper will focus on what is commonly referred to as narrow (or weak) AI, which is already being extensively used in science. Narrow AI has been designed and developed to do a specific task, such as playing chess, modelling complex phenomena, or identifying possible brain tumors in diagnostic images [ 151 ]. See Fig.  1 . Footnote 3 Other types of AI discussed in the literature include broad AI (also known as artificial general intelligence or AGI), which is a machine that can perform multiple tasks requiring human-like intelligence; and artificial consciousness (AC), which is a form of AGI with characteristics widely considered to be essential for consciousness [ 162 , 219 ]. Because there are significant technical and conceptual obstacles to developing AGI and AC, it may be years before machines have this degree of human-like intelligence [ 206 , 227 ]. Footnote 4

figure 1

Levels of Artificial Intelligence, according to Turing [ 219 ]

3 What is machine learning?

Machine learning (ML) can be defined as a branch of AI “that focuses on using data and algorithms to enable AI to imitate the way that humans learn, gradually improving its accuracy [ 112 ].” There are several types of ML, including support vector machines, decision trees, and neural networks. In this paper we will focus on ML that uses artificial neural networks (ANNs).

An ANN is composed of artificial neurons, which are modelled after biological neurons. An artificial neuron receives a series of computational inputs, Footnote 5 applies a function, and produces an output. The inputs have different weightings. In most applications, a specific output is generated only when a certain threshold value for the inputs is reached. In the example below, an output of ‘1’ would be produced if the threshold is reached; otherwise, the output would be ‘0’. See Fig.  2 . A pair of statements describing how a very simple artificial neuron processes inputs could be as follows:

U = 1 if w1x1 + w2x2 + w3x3 + w4x4 ≥ T

U = 0 if w1x1 + w2x2 + w3x3 + w4x4 < T

where x1, x2, x3, and x4 are inputs; w1, w2, w3, and w4 are weightings; T is a threshold value; and U is an output value (1 or 0). An artificial neuron is represented schematically in Fig.  2 , below.

figure 2

Artificial neuron
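To illustrate the threshold behaviour described above, the following minimal Python sketch implements a single artificial neuron with four inputs; the weightings and threshold value are arbitrary and purely illustrative.

```python
# Minimal sketch of the threshold neuron described above: weighted inputs are
# summed and compared against a threshold T to produce a binary output U.
def artificial_neuron(x, w, T):
    """Return U = 1 if the weighted sum of inputs reaches the threshold T, else U = 0."""
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if weighted_sum >= T else 0

# Example with arbitrary inputs x1..x4, weightings w1..w4 and threshold T = 1.0
print(artificial_neuron([0.2, 0.9, 0.4, 0.1], [0.5, 1.0, 0.3, 0.2], T=1.0))  # prints 1
```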

A single neuron may have dozens of inputs. An ANN may consist of thousands of interconnected neurons. In a deep learning ANN, there may be many hidden layers of neurons between the input and output layers. See Fig.  3 .

figure 3

Deep learning artificial neural network [ 38 ]

Training (or reinforcement) occurs when the weightings on inputs are changed in response to the system’s output. Changes in the weightings are based on their contribution to the neuron’s error, which can be understood as the difference between the output value and the correct value as determined by the human trainers (see discussion of error in Sect.  5 ). Training can occur via supervised or unsupervised learning. In supervised learning, the ANN works with labelled data and becomes adept at correctly representing structures in the data recognized by human trainers. In unsupervised learning, the ANN works with unlabeled data and discovers structures inherent in the data that might not have been recognized by humans [ 59 , 151 ]. For example, to use supervised learning to train an ANN to recognize dogs, human beings could present the system with various images and evaluate the accuracy of its output accordingly. If the ANN labels an image a “dog” that human beings recognize as a dog, then its output would be correct; otherwise, it would be incorrect (see discussion of error in Sects. 5.1 and 5.5). In unsupervised learning, the ANN would be presented with images and would be reinforced for accurately modelling structures inherent in the data, which may or may not correspond to patterns, properties, or relationships that humans would recognize or conceive of.
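The contrast between supervised and unsupervised learning can be sketched with a small, hypothetical example using the scikit-learn library (an illustration only, not one of the systems discussed in this article): a classifier is corrected against human-provided labels, whereas a clustering algorithm discovers structure in the same data without any labels.

```python
# Minimal sketch contrasting supervised and unsupervised learning on a toy dataset.
from sklearn.datasets import make_blobs
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=2, random_state=0)  # toy data with labels y

# Supervised: the network's outputs are corrected against the human-provided labels.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels are given; the algorithm discovers clusters on its own,
# which may or may not correspond to categories a human would have chosen.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("discovered cluster assignments:", km.labels_[:10])
```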

For an example of the disconnect between ML and human processing of information, consider research conducted by Roberts et al. [ 195 ]. In this study, researchers trained an ML system on radiologic images from hospital patients so that it would learn to identify patients with COVID-19 and predict the course of their illness. Since the patients who were sicker tended to be lying down when their images were taken, the ML system identified lying down as a diagnostic criterion and disease predictor [ 195 ]. However, lying down is a confounding factor that has nothing to do with the likelihood of having COVID-19 or getting very sick from it [ 170 ]. The error occurred because the ML system did not account for this fundamental fact of clinical medicine.

Despite problems like the one discovered by Roberts et al. [ 195 ], the fact that ML systems process and analyze data differently from human beings can be a great benefit to science and society because these systems may be able to identify useful and innovative structures, properties, patterns, and relationships that human beings would not recognize. For example, ML systems have been able to design novel compounds and materials that human beings might not be able to conceive of [ 15 ]. That said, the disconnect between AI/ML and human information processing can also make it difficult to anticipate, understand, control, and reduce errors produced by ML systems. (See discussion of error in Sects. 5.1–5.5).

Training ANNs is a resource-intensive activity that involves gigabytes of data, thousands of computers, and hundreds of thousands of hours of human labor [ 182 , 229 ]. A system can continue to learn after the initial training period as it processes new data [ 151 ]. ML systems can be applied to any dataset that has been properly prepared for manipulation by computer algorithms, including digital images, audio and video recordings, natural language, medical records, chemical formulas, electromagnetic radiation, business transactions, stock prices, and games [ 151 ].

One of the most impressive feats accomplished by ML systems is their contribution to solving the protein folding problem [ 41 ]. See Fig.  4 . A protein is composed of one or more long chains of amino acids known as polypeptides. The three-dimensional (3-D) structure of the protein is produced by folding of the polypeptide(s), which is caused by the interplay of hydrogen bonds, Van der Waals attractive forces, and conformational entropy between different parts of the polypeptide [ 2 ]. Molecular biologists and biochemists have been trying to develop rules for predicting the 3-D structures of proteins from amino acid sequences since the 1960s, but this is, computationally speaking, a very hard problem, due to the immense number of possible ways that polypeptides can fold [ 52 , 76 ]. Tremendous progress on the protein-folding problem was made in 2022, when scientists demonstrated that an ML system, DeepMind’s AlphaFold, can predict 3-D structures from amino acid sequences with 92.4% accuracy [ 118 , 204 ]. AlphaFold, which built upon available knowledge of protein chemistry [ 176 ], was trained on thousands of amino acid sequences and their corresponding 3-D structures. Although human researchers still need to test and refine AlphaFold’s output to ensure that the proposed structure is 100% accurate, the ML system greatly improves the efficiency of protein chemistry research [ 216 ]. Recently developed ML systems can generate new proteins by going in the opposite direction and predicting amino acid sequences from 3-D protein structures [ 156 ]. Since proteins play a key role in the structure and function of all living things, these advances in protein science are likely to have important applications in different areas of biology and medicine [ 204 ].

figure 4

Protein folding. CC BY-SA 4.0 DEED [ 45 ]

4 What is generative AI?

ML image-processing systems can not only recognize patterns in the data that correspond to objects (e.g., cat, dog, car) but, when coupled with appropriate algorithms, can also generate images in response to visual or linguistic prompts [ 87 ]. The term ‘generative AI’ refers to “deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on” [ 111 ].

Perhaps the most well-known types of generative AI are those that are based on large language models (LLMs), such as chatbots like OpenAI’s ChatGPT and Google’s Gemini, which analyze, paraphrase, edit, translate, and generate text, images and other types of content. LLMs are statistical algorithms trained on huge sets of natural language data, such as text from the internet, books, journal articles, and magazines. By processing this data, LLMs can learn to estimate probabilities associated with possible responses to text and can rank responses according to the probability that they will be judged to be correct by human beings [ 151 ]. In just a few years, some types of generative AI, such as ChatGPT, have become astonishingly proficient at responding to text data. ChatGPT has passed licensing exams for medicine and law and scored in the 93rd percentile on the Scholastic Aptitude Test reading exam and in the 89th percentile on the math exam [ 133 , 138 , 232 ]. Some researchers have used ChatGPT to write scientific papers and have even named them as authors [ 48 , 53 , 54 , 167 ]. Footnote 6 Some LLMs are so adept at mimicking the type of discourse associated with conscious thought that computer scientists, philosophers, and cognitive psychologists are updating the Turing test (see Fig.  5 ) to more reliably distinguish between humans and machines [ 5 , 22 ].

figure 5

The Turing test. Computer scientist Alan Turing [ 220 ] proposed a famous test for determining whether a machine can think. The test involves a human interrogating another person and a computer. The interrogator poses questions to the interviewees, who are in different rooms, so that the interrogator cannot see where the answers are coming from. If the interrogator cannot distinguish between answers to questions given by the other person and answers provided by the computer, then the computer passes the Turing test
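As a rough illustration of how an LLM estimates and ranks probabilities for possible continuations of a text, the following sketch queries the small, openly available GPT-2 model through the Hugging Face transformers library; this is a toy example for illustration, not one of the proprietary systems named above.

```python
# Minimal sketch: next-token probabilities from a small open language model (GPT-2).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Research ethics committees review", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # one score per vocabulary token
probs = torch.softmax(logits[0, -1], dim=-1)   # convert scores to probabilities
top = torch.topk(probs, k=5)                   # the five most probable next tokens
for p, idx in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(idx))), round(float(p), 4))
```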

5 Challenges of using AI

It has long been known that AI systems are not error-free. To understand this topic, it is important to define ‘error’ and to distinguish between systemic errors and random errors. The word ‘error’ has various meanings: we speak of grammatical errors, reasoning errors, typographical errors, measurement errors, etc. What these different senses of ‘error’ have in common is that (1) errors involve divergence from a standard of correctness; and (2) errors, when committed by conscious beings, are unintentional; that is, they are accidents or mistakes and are different from frauds, deceptions, or jokes.

If we set aside questions related to intent on the grounds that AI systems are not moral agents (see discussion in Sect. 7.6), we can think of AI error as the difference between the output of an AI system and the correct output. The difference between an AI output and the correct output can be measured quantitatively or qualitatively, depending on what is being measured and the purpose of the measurement [ 151 ]. For example, if an ML image recognition tool is presented with 50 images of wolves and 50 images of dogs, and it labels 98 of them correctly, we could measure its error quantitatively (i.e., 2%). In other cases, we might measure (or describe) error qualitatively. For example, if we ask ChatGPT to write a 12-line poem about a microwave oven in the style of Edgar Allan Poe, we could rate its performance as ‘excellent,’ ‘very good,’ ‘good,’ ‘fair,’ or ‘poor.’ We could also assign numbers to these ratings to convert qualitative measurements into quantitative assessments (e.g., 5 = excellent, 4 = very good).
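The two ways of measuring error just described can be made concrete with a short sketch; the predictions and ratings below are hypothetical.

```python
# Minimal sketch of quantitative and qualitative error measurement (hypothetical data).
predictions = ["dog"] * 49 + ["wolf"] * 49 + ["wolf", "dog"]    # 2 mislabelled out of 100
truth       = ["dog"] * 50 + ["wolf"] * 50
error_rate = sum(p != t for p, t in zip(predictions, truth)) / len(truth)
print(f"quantitative error: {error_rate:.0%}")                   # -> 2%

# Qualitative ratings converted to numbers (5 = excellent ... 1 = poor).
rating_scale = {"excellent": 5, "very good": 4, "good": 3, "fair": 2, "poor": 1}
reviewer_ratings = ["very good", "good", "excellent"]             # hypothetical ratings
mean_score = sum(rating_scale[r] for r in reviewer_ratings) / len(reviewer_ratings)
print("mean qualitative score:", mean_score)                      # -> 4.0
```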

The correct output of an AI system is ultimately defined by its users and others who may be affected. For example, radiologists define correctness for reading diagnostic images; biochemists define the standard for modeling proteins; and attorneys, judges, clients, and law professors define the standard for writing legal briefs. In some contexts, such as testing hypotheses or reading radiologic images, ‘correct’ may mean ‘true’; in other contexts, such as generating text or creating models, it may simply mean ‘acceptable’ or ‘desirable.’ Footnote 7 While AI systems can play a key role in providing information that is used to define correct outputs (for example, when a system is used to discover new chemical compounds or solve complex math problems), human beings are ultimately responsible for determining whether outputs are correct (see discussion of moral agency in Sect.  7.6 ).

5.2 Random versus systemic errors (bias)

We can use an analogy with target shooting to think about the difference between random and systemic errors [ 94 ]. If error is understood as the distance of a bullet hole from a target, then random error would be a set of holes distributed randomly around the target without a discernable pattern (Fig.  6 A), while systemic error (or bias) would be a set of holes with a discernable pattern, for example holes skewed in a particular direction (Fig.  6 B). The accuracy of a set of bullet holes would be a function of their distance from the target, while their precision would be a function of their distance from each other [ 27 , 172 , 184 ].

figure 6

Random errors versus systemic errors
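To make the target-shooting analogy concrete, the following sketch (our own illustration, with invented numbers) simulates two sets of shots: one scattered randomly around the target and one tightly clustered but offset from it. The mean offset from the target captures systemic error (bias), while the spread of the shots captures precision.

```python
# Minimal sketch of the target-shooting analogy: systemic error (bias) is the
# average offset of the shots from the target, while precision reflects their
# spread around their own mean. Numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.0, 0.0])

random_only = target + rng.normal(0, 1.0, size=(50, 2))                     # scattered, centered on target
biased = target + np.array([2.0, 1.0]) + rng.normal(0, 0.3, size=(50, 2))   # tight cluster, skewed off target

def describe(shots, label):
    bias = shots.mean(axis=0) - target   # systemic offset (accuracy)
    spread = shots.std(axis=0).mean()    # scatter around the cluster's own mean (precision)
    print(f"{label}: bias={bias.round(2)}, spread={spread:.2f}")

describe(random_only, "random error only")
describe(biased, "systemic error (bias)")
```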

The difference between systemic and random errors can be ambiguous because errors that appear to be random may be shown to be systemic when one acquires more information about how they were generated or once a pattern is discerned. Footnote 8 Nevertheless, the distinction is useful. Systemic errors are often more detrimental to science and society than random ones, because they may negatively affect many different decisions involving people, projects, and paradigms. For example, racist biases distorted most research on human intelligence from the 1850s to the 1960s, including educational policies based on the applications of intelligence research. As will be discussed below, AI systems can make systemic and random errors [ 70 , 174 ].

5.3 AI biases

Since AI systems are designed to accurately represent the data on which they are trained, they can reproduce or even amplify racial, ethnic, gender, political, or other biases in the training data and subsequent data received [ 131 ]. The computer science maxim “garbage in, garbage out” applies here. Studies have shown that racial and ethnic biases impact the use of AI/ML in medical imaging, diagnosis, and prognosis due to biases in healthcare databases [ 78 , 154 ]. Bias is also a problem in using AI systems to find relationships between genomics and disease due to racial and ethnic prejudices in genomic databases [ 55 ]. LLMs are also impacted by various biases inherent in their training data, and when used in generative AI models like ChatGPT, can propagate biases related to race, ethnicity, nationality, gender, sexuality, age, and politics [ 25 , 171 ]. Footnote 9

Because scientific theories, hypotheses, and models are based on human perceptual categories, concepts, and assumptions, bias-free research is not possible [ 121 , 125 , 137 ]. Nevertheless, scientists can (and should) take steps to understand sources of bias and control them, especially those that can lead to discrimination, stigmatization, harm, or injustice [ 89 , 154 , 188 ]. Indeed, bias reduction and management is essential to promoting public trust in AI (discussed in Sects.  5.5 and 5.7 ).

Scientists have dealt with bias in research for years and have developed methods and strategies for minimizing and controlling bias in experimental design, data analysis, model building, and theory construction [ 79 , 89 , 104 ]. However, bias related to using AI in science can be subtle and difficult to detect due to the size and complexity of research data and interactions between data, algorithms, and applications [ 131 ]. See Fig.  7 . Scientists who use AI systems in research should take appropriate steps to anticipate, identify, control, and minimize biases by ensuring that datasets reflect the diversity of the investigated phenomena and disclosing the variables, algorithms, models, and parameters used in data analysis [ 56 ]. Managing bias related to the use of AI should involve continuous testing of the outputs in real world applications and adjusting systems accordingly [ 70 , 131 ]. For example, if a ML tool is used to read radiologic images, software developers, radiologists, and other stakeholders should continually evaluate the tool and its output to improve accuracy and precision.

figure 7

Sources of bias in AI/ML
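One concrete way to implement the continuous testing described above is to disaggregate an AI system's performance by subgroup. The sketch below (with hypothetical data and group labels) computes per-group accuracy; a large gap between groups is a warning sign that the training data or model may be biased and warrants further investigation before deployment.

```python
# Minimal sketch of one bias check: comparing a model's accuracy across
# demographic subgroups. The data, labels, and group names are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "true_label": [1, 0, 1, 0, 1, 0, 1, 0],
    "predicted":  [1, 0, 1, 0, 0, 0, 0, 1],
})

results["correct"] = results["true_label"] == results["predicted"]
per_group_accuracy = results.groupby("group")["correct"].mean()
print(per_group_accuracy)

# A large gap between groups suggests possible bias and should trigger a
# closer look at the training data, sampling, and model before deployment.
gap = per_group_accuracy.max() - per_group_accuracy.min()
print(f"accuracy gap between groups: {gap:.2f}")
```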

5.4 Random errors in AI

AI systems can make random errors even after extensive training [ 51 , 151 ]. Nowhere has this problem been more apparent and concerning than in the use of LLMs in business, law, and scientific research. ChatGPT, for example, is prone to making random factual and citation errors. Bhattacharyya et al. [ 24 ], for instance, used ChatGPT 3.5 to generate 30 short papers (200 words or less) on medical topics: 47% of the references produced by the chatbot were fabricated, 46% were authentic but inaccurately used, and only 7% were correct. Although ChatGPT 4.0 performs significantly better than ChatGPT 3.5, it still produces fabricated and inaccurate citations [ 230 ]. Another example of a random error was seen in a now-retracted paper published in Frontiers in Cell and Developmental Biology , which included an AI-generated image of a rat with anatomically impossible genitals [ 179 ]. Concerns raised by researchers led to OpenAI [ 173 ] warning users that “ChatGPT may produce inaccurate information about people, places, or facts.” The current interface includes the following disclaimer underneath the input box: “ChatGPT can make mistakes. Consider checking important information.” Two US lawyers learned this lesson the hard way after a judge fined them $5,000 for submitting a court filing prepared by ChatGPT that included fake citations. The judge said that there was nothing improper about using ChatGPT but that the lawyers should have exercised due care in checking its work for accuracy [ 150 ].

One widely discussed example of random errors made by generative AI is the fake citation. Footnote 10 One reason why LLM-based systems such as ChatGPT produce fake but realistic-looking citations is that they process text data differently from human beings. Researchers produce citations by reading a specific text and citing it, but ChatGPT produces citations by processing a huge amount of text data and generating a highly probable response to a request for a citation. Software developers at OpenAI, Google, and other chatbot companies have been trying to fix this problem, but it is not easy to solve due to differences between human and LLM processing of language [ 24 , 230 ]. AI companies advise users to use context-specific GPTs installed on top of ChatGPT. For instance, by using the Consensus.ai GPT ( https://consensus.app/ ), which claims to be connected to “200M + scientific papers”, users can ask for specific citations for a given input (e.g., “coffee is good for human health”). While the offered citations are likely to be correct bibliographically, errors and biases may not be fully removed because it is unclear how these systems come to their conclusions and offer specific citations (see discussion of the black box problem in Sect.  5.7 ). Footnote 11
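Because chatbot-generated references must ultimately be verified by a human, part of that checking can be automated. The sketch below is our own illustration (the listed DOIs are placeholders): it queries the public Crossref API to confirm that a DOI resolves to a real record. Such a check confirms only that a reference exists, not that the source actually supports the claim being cited.

```python
# Minimal sketch: verifying AI-suggested references against the public Crossref
# API. A missing record suggests a fabricated citation; a found record still
# needs human review to confirm that it supports the claim it is cited for.
import requests

def check_doi(doi):
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None  # DOI not found in Crossref: possibly fabricated
    titles = resp.json().get("message", {}).get("title") or ["(no title)"]
    return titles[0]

# Hypothetical references suggested by a chatbot (placeholders for illustration).
suggested = ["10.1000/example-doi-1", "10.1234/this-doi-does-not-exist"]
for doi in suggested:
    title = check_doi(doi)
    print(doi, "->", title if title else "NOT FOUND - verify manually")
```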

5.5 Prospects for reducing AI errors

If AI systems follow the path taken by most other technologies, it is likely that errors will decrease over time as improvements are made [ 151 ]. For example, early versions of ChatGPT were very bad at solving math problems, but newer versions are much better at math because they include special GPTs for performing this task [ 210 ]. AI systems also make errors in reading, classifying, and reconstructing radiological images, but the error rate is decreasing, and AI systems will soon outperform humans in terms of speed and accuracy of image reading [ 12 , 17 , 103 , 228 ]. However, it is also possible that AI systems will make different types of errors as they evolve or that there will be limits to their improvement. For example, newer versions of ChatGPT are prone to reasoning errors associated with intuitive thinking that older versions did not make [ 91 ]. Also, studies have shown that LLMs are not good at self-correcting and need human supervision and fine-tuning to perform this task well [ 61 ].

Some types of errors may be difficult to eliminate due to differences between human perception/understanding and AI data processing. As discussed previously, AI systems, such as the system that generated the implausible hypothesis that lying down while having a radiologic image taken is a COVID-19 risk factor, make errors because they process information differently from humans. The AI system made this implausible inference because it did not factor in basic biological and medical facts that would be obvious to doctors and scientists [ 170 ]. Another salient example of this phenomenon occurred when an image recognition AI was trained to distinguish between wolves and huskies, but it had difficulty recognizing huskies in the snow or wolves on the grass, because it had learned to distinguish between wolves and huskies by attending to the background of the images [ 222 ]. Humans are less prone to this kind of error because they use concepts to process perceptions and can therefore recognize objects in different settings. Consider, for example, CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), which are used by many websites for security purposes and take advantage of some AI image-processing deficiencies to authenticate whether a user is human [ 109 ]. Humans can pass CAPTCHA tests because they learn to recognize images in various contexts and can apply what they know to novel situations [ 23 ].

Some of the factual and reasoning errors made by LLM-based systems occur because they lack human-like understanding of language [ 29 , 135 , 152 , 153 ]. ChatGPT, for example, can perform well when it comes to processing language that has already been curated by humans, such as describing the organelles in a cell or explaining known facts about photosynthesis, but it may perform sub-optimally (and sometimes very badly) when dealing with novel text that requires reasoning and problem-solving, because it does not have a human-like understanding of language. When a person processes language, they usually form a mental model that provides meaning and context for the words [ 29 ]. This mental model is based on implicit facts and assumptions about the natural world, human psychology, society, and culture, or what we might call common sense [ 119 , 152 , 153 , 197 ]. LLMs do not do this; they only process symbols and predict the most likely string of symbols from linguistic prompts. Thus, to perform optimally, LLMs often need human supervision and input to provide the necessary meaning and context for language [ 61 ].

As discussed in Sect.  4 , because AI systems do not process information in the way that humans do, it can be difficult to anticipate, understand and detect the errors these tools make. For this reason, continual monitoring of AI performance in real-world applications, including feedback from end-users, developers, and other stakeholders, is essential to AI quality control and quality improvement and public trust in AI [ 131 , 174 ].

5.6 Lack of moral agency

As mentioned in Sect.  2 , narrow AI systems, such as LLMs, lack the capacities regarded as essential for moral agency, such as consciousness, self-concepts, personal memory, life experiences, goals, and emotions [ 18 , 139 , 151 ]. While this is not a problem for most technologies, it is for AI systems because they may be used to perform activities with significant moral and social consequences, such as reading radiological images or writing scientific papers (see discussion in Sect.  7.6 ), even though AI cannot be held morally or legally responsible or accountable. The lack of moral agency, when combined with other AI limitations, such as the lack of a meaningful and human-like connection to the physical world, can produce dangerous results. For example, in 2021, Alexa, Amazon’s LLM-based voice assistant, instructed a 10-year-old girl to stick a penny into an electric outlet when she asked it for a challenge [ 20 ]. In 2023, the widow of a Belgian man who committed suicide claimed that he had been depressed and had been chatting with an LLM that encouraged him to kill himself [ 44 , 69 ]. OpenAI and other companies have tried to put guardrails in place to prevent their systems from giving dangerous advice, but the problem is not easy to fix. A recent study found that while ChatGPT can pass medical boards, it can give dangerous medical advice due to its tendency to make factual errors and its lack of understanding of the meaning and context of language [ 51 ].

5.7 The black box problem

Suppose ChatGPT produces erroneous output, and a computer scientist or engineer wants to know why. As a first step, they could examine the training data and algorithms to determine the source of the problem. Footnote 12 However, to fully understand what ChatGPT is doing they need to probe deeply into the system and examine not only the code but also the weightings attached to inputs in the ANN layers and the mathematical computations produced from the inputs. While an expert computer scientist or engineer could troubleshoot the code, they will not be able to interpret the thousands of numbers used in the weightings and the billions of calculations from those numbers [ 110 , 151 , 199 ]. This is what is meant when an AI system is described as a “black box.” See Fig.  8 . Trying to understand the meaning of the weightings and calculations in ML is very different from trying to understand other types of computer programs, such as those used in most cell phones or personal computers, in which an expert could examine the system (as a whole) to determine what it is doing and why [ 151 , 199 ]. Footnote 13

figure 8

The black box: AI incorrectly labels a picture of a dog as a picture of a wolf but a complete investigation of this error is not possible due to a “black box” in the system

The opacity of AI systems is ethically problematic because one might argue that we should not use these devices if we cannot trust them, and we cannot trust them if even the best experts do not completely understand how they work [ 6 , 7 , 39 , 47 , 63 , 186 ]. Trust in a technology is partially based on understanding that technology. If we do not understand how a telescope works, then we should not trust what we see with it. Footnote 14 Likewise, if computer experts do not completely understand how AI/ML systems work, then perhaps we should not use them for important tasks, such as making hiring decisions, diagnosing diseases, analyzing data, or generating scientific hypotheses or theories [ 63 , 74 ].

The black box problem raises important ethical issues for science (discussed further in Sect.  7.4 ), because it can undermine public trust in science, which is already in decline, due primarily to the politicization of topics with significant social implications, such as climate change, COVID-19 vaccines and public health measures [ 123 , 189 ].

One way of responding to the black box problem is to argue that we do not need to completely understand AI systems to trust them; what matters is an acceptably low rate of error [ 136 , 186 ]. Proponents of this view draw an analogy between using AI systems and using other artifacts, such as using aspirin for pain relief, without fully understanding how they work. All that really matters for trusting a machine or tool is that we have evidence that it works well for our purposes, not that we completely understand how it works. This line of argument implies that it is justifiable to use AI systems to read radiological images, model the 3-D structures of proteins, or write scientific papers provided that we have evidence that they perform these tasks as well as human beings [ 136 ].

This response to the black box problem does not solve the problem but simply tells us not to worry about it [ 63 ]. Footnote 15 There are several reasons to be concerned about the black box problem. First, if something goes wrong with a tool or technology, regulatory agencies, injured parties, insurers, politicians, and others want to know precisely how it works to prevent similar problems in the future and hold people and organizations legally accountable [ 141 ]. For example, when the National Transportation Safety Board [ 160 ] investigates an airplane crash, they want to know what precisely went wrong. Was the crash due to human error? Bad weather? A design flaw? A defective part? The NTSB will not be satisfied with an explanation that appeals to a mysterious technology within the airplane.

Second, when regulatory agencies, such as the Food and Drug Administration (FDA), make decisions concerning the approval of new products, they want to know how the products work, so they can make well-informed, publicly defensible decisions and inform consumers about risks. To obtain FDA approval for a new drug, a manufacturer must submit a vast amount of information to the agency, including information about the drug’s chemistry, pharmacology, and toxicology; the results of pre-clinical and clinical trials; processes for manufacturing the drug; and proposed labelling and advice to healthcare providers [ 75 ]. Indeed, dealing with the black box problem has been a key issue in FDA approval of medical devices that use AI/ML [ 74 , 183 ].

Third, end-users of technologies, such as consumers, professionals, researchers, government officials, and business leaders, may not be satisfied with black boxes. Although most laypeople comfortably use technologies without fully understanding their inner workings, they usually assume that experts who understand how these technologies work have assessed them and deemed them to be safe. End-users may become highly dissatisfied with a technology when it fails to perform its function, especially when not even the experts can explain why. Public dissatisfaction with responses to the black box problem may undermine the adoption of AI/ML technologies, especially when these technologies cause harm, invade privacy, or produce biased claims and results [ 60 , 85 , 134 , 175 ].

5.8 Explainable AI

An alternative to the non-solution approach is to make AI explainable [ 11 , 96 , 151 , 186 ]. The basic idea behind explainability is to develop “processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms” [ 110 ]. Transparency of algorithms, models, parameters, and data is essential to making AI explainable, so that users can understand an AI system’s accuracy and precision and the types of errors it is prone to making. Explainable AI does not attempt to “peer inside” the black box, but it can make AI behavior more understandable to developers, users, and other stakeholders. Explainability, according to proponents of this approach, helps to promote trust in AI because it allows users and other stakeholders to make rational and informed decisions about it [ 77 , 83 , 110 , 186 ].
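As one illustration of what explainability can mean in practice, the sketch below (a generic example of our own, not tied to any particular system discussed above) uses permutation feature importance: it measures how much a trained model's accuracy drops when each input feature is shuffled, which gives users a sense of which variables drive the model's output without inspecting its internal weights.

```python
# Minimal sketch of one common explainability technique: permutation feature
# importance. It describes which inputs most influence a model's predictions
# without attempting to interpret the model's internal weights.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

# Larger values indicate features whose shuffling most degrades accuracy.
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {importance:.3f}")
```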

While the explainable AI approach is preferable to the non-solution approach, it still has some shortcomings. First, it is unclear whether making AI explainable will satisfy non-experts because considerable expertise in computer science and/or data analytics may be required to understand what is being explained [ 120 , 186 ]. For transparency to be effective, it must address the audience’s informational needs [ 68 ]. Explainable AI, at least in its current form, may not address the informational needs of laypeople, politicians, professionals, or scientists because the information is too technical [ 58 ]. To be explainable to non-experts, the information should be expressed in plain, jargon-free language that describes what the AI did and why [ 96 ].

Second, it is unclear whether explainable AI completely solves issues related to accountability and legal liability because we have yet to see how legal systems will deal with AI lawsuits in which information pertaining to explainability (or the lack thereof) is used as evidence in court [ 141 ]. However, it is conceivable that the information conveyed to make AI explainable will satisfy the courts in some cases and set judicial precedent, so that legal doctrines and practices related to liability for AI-caused harms will emerge, much in the same way that doctrines and practices for medical technologies emerged.

Third, there is also the issue of whether explainable AI will satisfy the requirements of regulatory agencies, such as the FDA. However, regulatory agencies have been making some progress toward addressing the black box problem and explainability is likely to play a key role in these efforts [ 183 ].

Fourth, private companies uninterested in sharing information about their systems may not comply with explainable AI requirements, or they may “game” the requirements to give the appearance of compliance without actually complying. ChatGPT, for example, is a highly opaque system: OpenAI has yet to disclose its training data, and it is unclear whether or when the company will open its technology to external scrutiny [ 28 , 66 , 130 ].

Despite these shortcomings, the explainable AI approach is a reasonable way of dealing with transparency issues, and we encourage its continued development and application to AI/ML systems.

6 Ethical norms of science

With this overview of AI in mind, we can now consider how using AI in research impacts the ethical norms of science. But first, we need to describe these norms. Ethical norms of science are principles, values, or virtues that are essential for conducting good research [ 147 , 180 , 187 , 191 ]. See Table  1 . These norms apply to various practices, including research design; experimentation and testing; modelling; concept formation; data collection and storage; data analysis and interpretation; data sharing; publication; peer review; hypothesis/theory formulation and acceptance; communication with the public; as well as mentoring and education [ 207 ]. Many of these norms are expressed in codes of conduct, professional guidelines, institutional or journal policies, or books and papers on scientific methodology [ 4 , 10 , 113 , 235 ]. Others, like collegiality, might not be codified but are implicit in the practice of science. Some norms, such as testability, rigor, and reproducibility, are primarily epistemic, while others, such as fair attribution of credit, protection of research subjects, and social responsibility, are primarily moral (when enshrined in law, as in the case of fraud, these norms also become legal norms, but here we focus only on ethical norms). There are also some, like honesty, openness, and transparency, that have both epistemic and moral dimensions [ 191 , 192 ].

Scholars from different fields, including philosophy, sociology, history, logic, decision theory, and statistics, have studied ethical norms of science [ 84 , 89 , 104 , 125 , 128 , 137 , 147 , 180 , 208 , 209 , 237 ]. Sociologists, such as Merton [ 147 ] and Shapin [ 208 ], tend to view ethical norms as generalizations that accurately describe the practice of science, while philosophers, such as Kitcher [ 125 ] and Haack [ 89 ], conceive of these norms as prescriptive standards that scientists ought to follow. These approaches need not be mutually exclusive, and both can offer useful insights about ethical norms of science. Clearly, the study of norms must take the practice of science as its starting point, otherwise our understanding of norms would have no factual basis. However, one cannot simply infer the ethical norms of science from the practice of science because scientists may endorse and defend norms without always following them. For example, most scientists would agree that they should report data honestly, disclose significant conflicting interests, and keep good research records, but evidence indicates that they sometimes fail to do so [ 140 ].

One way of bridging the gap between descriptive and prescriptive accounts of ethical norms of science is to reflect on the social and epistemological foundations (or justifications) of these norms. Ethical norms of science can be justified in at least three ways [ 191 ].

First, these norms help the scientific community achieve its epistemic and practical goals, such as understanding, predicting, and controlling nature. It is nearly impossible to understand how a natural or social process works or make accurate predictions about it without standards pertaining to honesty, logical consistency, empirical support, and reproducibility of data and results. These and other epistemic standards distinguish science from superstition, pseudoscience, and sophistry [ 89 ].

Second, ethical norms promote trust among scientists, which is essential for collaboration, peer review, publication, sharing of data and resources, mentoring, education, and other scientific activities. Scientists need to be able to trust that the data and results reported in papers have not been fabricated, falsified, or manipulated; that reviewers for journals and funding agencies will maintain confidentiality; that colleagues or mentors will not steal their ideas and other forms of intellectual property; and that credit for collaborative work will be distributed fairly [ 26 , 233 ].

Third, ethical norms are important for fostering public support for science. The public is not likely to financially, legally, or socially support research that is perceived as corrupt, incompetent, untrustworthy, or unethical [ 191 ]. Taken together, these three modes of justification link ethical norms to science’s social foundations; that is, ethical norms are standards that govern the scientific community, which itself operates within and interacts with a larger community, namely society [ 137 , 187 , 209 ].

Although vital for conducting science, ethical norms are not rigid rules. Norms sometimes conflict, and when they do, scientists must make decisions concerning epistemic or moral priorities [ 191 ]. For example, model-building in science may involve tradeoffs among various epistemic norms, including generality, precision, realism, simplicity, and explanatory power [ 143 ]. Research with human subjects often involves tradeoffs between rigor and protection of participants. For example, placebo control groups are not used in clinical trials when receiving a placebo instead of an effective treatment would cause serious harm to the participant [ 207 ].

Although the norms can be understood as guidelines, some have higher priority than others. For example, honesty is the hallmark of good science, and there are very few situations in which scientists are justified in deviating from this norm. Footnote 16 Openness, on the other hand, can be deemphasized to protect research participants’ privacy, intellectual property, classified information, or unpublished research [ 207 ].

Finally, science’s ethical norms have changed over time, and they are likely to continue to evolve [ 80 , 128 , 147 , 237 ]. While norms such as empiricism, objectivity, and consistency originated in ancient Greek science, others, such as reproducibility and openness, developed during the 1500s; and many, such as protection of research subjects and social responsibility, did not emerge as formalized norms until the twentieth century. This evolution is in response to changes in science’s social, institutional, economic, and political environment and advancements in scientific instruments, tools, and methods [ 100 ]. For example, the funding of science by private companies and their requirements concerning data access and release policies have led to changes in norms related to open sharing of data and materials [ 188 ]. The increased presence of women and racial and ethnic minorities in science has led to the development of policies for preventing sexual and other forms of harassment [ 185 ]. The use of computer software to analyze large sets of complex data has challenged traditional views about norms related to hypothesis testing [ 193 , 194 ].

7 AI and the ethical norms of science

We will divide our discussion of AI and the ethics of science into seven topics: six corresponding to the problems and issues previously identified in this paper, and a seventh related to research ethics education. While these topics may seem somewhat disconnected, they all involve ethical issues that scientists who use AI in research are currently dealing with.

7.1 AI biases and the ethical norms of science

Bias can undermine the quality and trustworthiness of science and its social impacts [ 207 ]. While reducing and managing bias are widely recognized as essential to good scientific methodology and practice [ 79 , 89 ], they become crucial when AI is employed in research because AI can reproduce and amplify biases inherent in the data and generate results that lend support to policies that are discriminatory, unfair, harmful, or ineffective [ 16 , 202 ]. Moreover, by taking machines’ disinterestedness in findings as a necessary and sufficient condition of objectivity, users of AI in research may overestimate the objectivity of their findings. AI biases in medical research have generated considerable concern, since biases related to race, ethnicity, gender, sexuality, age, nationality, and socioeconomic status in health-related datasets can perpetuate health disparities by supporting biased hypotheses, models, theories, and policies [ 177 , 198 , 211 ]. Biases also negatively impact areas of science outside the health sphere, including ecology, forestry, urban planning, economics, wildlife management, geography, and agriculture [ 142 , 164 , 165 ].

OpenAI, Google, and other generative AI developers have been using filters that prevent their systems from generating text that is outright racist, sexist, homophobic, pornographic, offensive, or dangerous [ 93 ]. While bias reduction is a necessary step to make AI safe for human use, there are reasons to be skeptical of the idea that AI can be appropriately sanitized. First, the biases inherent in data are so pervasive that no amount of filtering can remove all of them [ 44 , 69 ]. Second, AI systems may also have political and social biases that are difficult to identify or control [ 19 ]. Even in the case of generative AI models where some filtering has happened, changing the inputted prompt may simply confuse and push a system to generate biased content anyway [ 98 ].

Third, by removing, reducing, and controlling some biases, AI developers may create other biases, which are difficult to anticipate, identify, or describe at this point. For example, LLMs have been trained using data gleaned from the Internet, scholarly articles, and Wikipedia [ 90 ], all of which reflect the broad spectrum of human behavior and experience, from good to bad and virtuous to sinister. If we try to weed out undesirable features of these data, we will eliminate parts of our language and culture, and ultimately, parts of us. Footnote 17 If we want to use LLMs to make sound moral and political judgments, sanitizing their data processing and output may hinder their ability to excel at this task, because the ability to make sound moral judgements or anticipate harm may depend, in part, on some familiarity with immoral choices and the darker side of humanity. It is only by understanding evil that we can freely and rationally choose the good [ 40 ]. We admit this last point is highly speculative, but it is worth considering. Clearly, the effects of LLM bias-management bear watching.

While the problem of AI bias does not require a radical revision of scientific norms, it does imply that scientists who use AI systems in research have special obligations to identify, describe, reduce, and control bias [ 132 ]. To fulfill these obligations, scientists must not only attend to matters of research design, data analysis, and interpretation, but also address issues related to data diversity, sampling, and representativeness [ 70 ]. They must also realize that they are ultimately accountable for AI biases, both to other scientists and to members of the public. As such, they should only use AI in contexts where their expertise and judgement are sufficient to identify and remove biases [ 97 ]. This is important because, given the accessibility of AI systems and their capacity to exploit our cognitive shortcomings, these tools can create an illusion of understanding [ 148 ].

Furthermore, to build public trust in AI and promote transparency and accountability, scientists who use AI should engage with impacted populations, communities, and other stakeholders to address their needs and concerns and seek their assistance in identifying and reducing potential biases [ 132 , 181 , 202 ]. Footnote 18 During the engagement process, researchers should help populations and communities understand how their AI system works, why they are using it, and how it may produce bias. To address the problem of AI bias, the Biden Administration recently issued an executive order that directs federal agencies to identify and reduce bias and protect the public from algorithmic discrimination [ 217 ].

7.2 AI random errors and the ethical norms of science

Like bias, random errors can undermine the validity and reliability of scientific knowledge and have disastrous consequences for public health, safety, and social policy [ 207 ]. For example, random errors in the processing of radiologic images in a clinical trial of a new cancer drug could harm patients in the trial and future patients who take an approved drug, and errors related to the modeling of the transmission of an infectious disease could undermine efforts to control an epidemic. Although some random errors are unavoidable in science, an excessive number of errors when using AI could be considered carelessness or recklessness (see discussion of misconduct in Sect.  7.3 ).

Reduction of random errors, like reduction of bias, is widely recognized as essential to good scientific methodology and practice [ 207 ]. Although some random errors are unavoidable in research, scientists have obligations to identify, describe, reduce, and correct them because they are ultimately accountable for both human and AI errors. Scientists who use AI in their research should disclose and discuss potential limitations and (known) AI-related errors. Transparency about these is important for making research trustworthy and reproducible [ 16 ].

Strategies for reducing errors in science include time-honored quality assurance and quality improvement techniques, such as auditing data, instruments, and systems; validating and testing instruments that analyze or process data; and investigating and analyzing errors [ 1 ]. Replication of results by independent researchers, journal peer review, and post-publication peer review also play a major role in error reduction [ 207 ]. However, given that content generated by AI systems is not always reproducible [ 98 ], identifying and adopting measures to reduce errors is extremely complicated. Either way, accountability requires that scientists take responsibility for errors produced by AI/ML systems, that they can explain why errors have occurred, and that they transparently share the limitations of their knowledge related to these errors.
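For model-based analyses, one time-honored validation step is to estimate error on data the model has not seen. The sketch below is a generic illustration of our own, not specific to any study cited here: it uses 5-fold cross-validation to check that reported performance is not an artifact of the training data.

```python
# Minimal sketch of a routine validation step: estimating a model's accuracy on
# held-out data with k-fold cross-validation rather than trusting performance
# on the data used for training.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)

print("per-fold accuracy:", scores.round(3))
print("mean accuracy:", scores.mean().round(3))
# Large variation across folds, or a drop on truly independent data, signals
# errors or overfitting that should be investigated and reported.
```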

7.3 AI and research misconduct

Failure to appropriately control AI-related errors could make scientists liable for research misconduct, if they intentionally, knowingly, or recklessly disseminate false data or plagiarize [ 207 ]. Footnote 19 Although most misconduct regulations and policies distinguish between misconduct and honest error, scientists may still be liable for misconduct due to recklessness [ 42 , 193 , 194 ], which may have consequences for using AI. Footnote 20 For example, a person who uses ChatGPT to write a paper without carefully checking its output for errors or plagiarism could be liable for research misconduct for reckless use of AI. Potential liability for misconduct is yet another reason why using AI in research requires taking appropriate steps to minimize and control errors.

It is also possible that some scientists will use AI to fabricate data or images presented in scientific papers, grant proposals, or other documents. This unethical use of AI is becoming increasingly likely since generative models can be used to create synthetic datasets from scratch or make alternative versions of existing datasets [ 50 , 155 , 200 , 214 ]. Synthetic data are playing an increasingly important role in some areas of science. For example, researchers can use synthetic data to develop and validate models and enhance statistical analysis. Also, because synthetic data are similar to but not the same as real data, they can be used to eliminate or mask personal identifiers and protect the confidentiality of human participants [ 31 , 81 , 200 ].

Although we do not know of any cases where scientists have been charged with research misconduct for presenting synthetic data as real data, it is only a matter of time until this happens, given the pressures to produce results, publish, and obtain grants, and the temptations to cheat or cut corners. Footnote 21 This speculation is further corroborated by the fact that a small proportion of scientists deliberately fabricate or falsify data at some point in their careers [ 73 , 140 ]. Also, using synthetic data in research, even appropriately, may blur the line between real and fake data and undermine data integrity. Researchers who use synthetic data should (1) indicate which parts of the data are synthetic; (2) describe how the data were generated; and (3) explain how and why they were used [ 221 ].
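A minimal sketch of how these three recommendations might be operationalized is shown below (our own illustration; the "real" data are simulated stand-ins). A synthetic dataset is drawn from simple distributions fitted to the original data and shipped with machine-readable provenance metadata covering what is synthetic, how it was generated, and why.

```python
# Minimal sketch (illustrative only): generating a synthetic version of a dataset
# by resampling from distributions fitted to the original data, and attaching
# provenance metadata per the three recommendations above.
import json
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
# Stand-in for a real dataset (simulated here for illustration).
real = pd.DataFrame({"age": rng.normal(50, 12, 200).round(), "sbp": rng.normal(130, 15, 200)})

# Fit simple marginal distributions to the original data and sample new records.
synthetic = pd.DataFrame({
    col: rng.normal(real[col].mean(), real[col].std(), len(real)) for col in real.columns
})

provenance = {
    "synthetic_columns": list(synthetic.columns),              # (1) which parts are synthetic
    "method": "independent Gaussian fit to real marginals",    # (2) how the data were generated
    "purpose": "protect participant confidentiality in shared dataset",  # (3) why they were used
}
print(json.dumps(provenance, indent=2))
```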

7.4 The black box problem and the ethical norms of science

The black box problem presents significant challenges to the trustworthiness and transparency of research that uses AI, because some of the steps in the scientific process will not be fully open and understandable to humans, including AI experts. An important implication of the black box problem is that scientists who use AI are obligated to make their use of the technology explainable to their peers and the public. While precise details concerning what makes an AI system explainable may vary across disciplines and contexts, some baseline requirements for transparency may include:

The type, name, and version of AI system used.

What task(s) the system was used for.

How, when and by which contributor a system was used.

Why a certain system was used instead of alternatives (if available).

What aspects of a system are not explainable (e.g., weightings).

Technical details related to the model’s architecture, training data, and optimization procedures; influential features involved in the model’s decisions; and the reliability and accuracy of the system (if known).

Whether inferences drawn by the AI system are supported by currently accepted scientific theories, principles, or concepts.

This information should be expressed in plain language to allow non-experts to understand the whos, whats, hows, and whys related to the AI system. Ideally, this information would become a standard part of reported research that used AI. The information could be reported in the materials and methods section or in supplemental material, in much the same way that information about statistical methods and software is currently reported.
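To show how such a report might look in practice, the sketch below (our own illustration; the system name and details are hypothetical) captures the items above as a structured, machine-readable record that could accompany the methods section or supplemental material.

```python
# Minimal sketch (hypothetical example) of a structured AI-use disclosure
# covering the baseline transparency items listed above.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseDisclosure:
    system: str                 # type, name, and version of the AI system
    tasks: list                 # what the system was used for
    used_by: str                # which contributor used it, and when/how
    rationale: str              # why this system rather than alternatives
    unexplainable_aspects: str  # e.g., internal weightings
    technical_details: str      # architecture, training data, accuracy (if known)
    theory_supported: bool      # whether inferences fit accepted scientific theory

disclosure = AIUseDisclosure(
    system="ExampleGPT v4 (hypothetical)",
    tasks=["language editing of draft", "summarizing related work"],
    used_by="First author, March 2024, via web interface",
    rationale="Institutional license; no locally hosted alternative available",
    unexplainable_aspects="Model weights and training data are proprietary",
    technical_details="Transformer LLM; vendor-reported benchmarks only",
    theory_supported=True,
)
print(json.dumps(asdict(disclosure), indent=2))
```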

As mentioned previously, making AI explainable does not completely solve the black box problem but it can play a key role in promoting transparency, accountability, and trust [ 7 , 9 ]. While there seems to be an emerging consensus on the utility and importance of making AI explainable, there is very little agreement about what explainability means in practice, because what makes AI explainable depends on the context of its use [ 58 ]. Clearly, this is a topic where more empirical research and ethical/policy analysis is needed.

7.5 AI and confidentiality

Using AI in research, especially generative AI models, raises ethical issues related to data privacy and confidentiality. ChatGPT, for example, stores the information submitted by users, including data submitted in initial prompts and subsequent interactions. Unless users opt out, this information could be used for training and other purposes. The data could potentially include personal and confidential information, such as information contained in drafts of scientific papers, grant proposals, experimental protocols, or institutional policies; computer code; legal strategies; business plans; and private information about human research participants [ 67 , 85 ]. Due to concerns about breaches of confidentiality, the National Institutes of Health (NIH) recently prohibited the use of generative AI technologies, such as LLMs, in grant peer review [ 159 ]. Footnote 22 Some US courts now require lawyers to disclose their use of generative AI in preparing legal documents and make assurances that they have taken appropriate steps to protect confidentiality [ 146 ].

While we are not suggesting that concerns about confidentiality justify prohibiting generative AI use in science, we think that considerable caution is warranted. Researchers who use generative AI to edit or review a document should assume that the material contained in it will not be kept confidential, and therefore, should not use these systems to edit or review anything containing confidential or personal information.
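Where some use of generative AI is unavoidable, a simple precaution is to strip obvious identifiers before text leaves the researcher's machine. The sketch below (our own illustration, with invented data) redacts e-mail addresses, phone numbers, and US Social Security numbers by pattern matching; such filters are necessarily incomplete and do not make it safe to submit genuinely confidential material.

```python
# Minimal sketch (illustrative only): redacting obvious personal identifiers
# before text is submitted to an external generative AI service. Simple patterns
# like these miss many identifiers, so redaction supplements, rather than
# replaces, the caution recommended above.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

draft = "Participant P-07 (jane.doe@example.org, 919-555-0101) reported side effects."
print(redact(draft))  # Participant P-07 ([EMAIL], [PHONE]) reported side effects.
```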

It is worth noting that technological solutions to the confidentiality problem may be developed in due course. For example, if an organization operates a local application of an LLM and places the technology behind a secure firewall, its members can use the technology safely. Electronic medical records, for example, have this type of security [ 127 ]. Some universities have already begun experimenting with operating their own AI systems for use by students, faculty, and administrators [ 225 ]. Also, as mentioned in Sect.  7.3 , the use of synthetic data may help to protect confidentiality.

7.6 AI and moral agency

The next issue we will discuss is whether AI can be considered a moral agent that participates in an epistemic community, that is, as a partner in knowledge generation. This became a major issue for the ethical norms of science in the winter of 2022–2023, when some researchers listed ChatGPT as an author on papers [ 102 ]. These publications initiated a vigorous debate in the research community, and journals scrambled to develop policies to deal with LLMs’ use in research. On one end of the spectrum, Jenkins and Lin [ 116 ] argued that AI systems can be authors if they make a substantial contribution to the research, and on the other end, Thorp [ 218 ] argued that AI systems cannot be named as authors and should not be used at all in preparing manuscripts. Currently, there seems to be an emerging consensus that falls between these two extreme positions, namely, that AI systems can be used in preparing manuscripts but that their use should be appropriately disclosed and discussed [ 4 , 102 ]. In 2023, the International Committee of Medical Journal Editors (ICMJE), a highly influential organization with over 4,500 member journals, released the following statement about AI and authorship:

At submission, the journal should require authors to disclose whether they used artificial intelligence (AI)-assisted technologies (such as Large Language Models [LLMs], chatbots, or image creators) in the production of submitted work. Authors who use such technology should describe, in both the cover letter and the submitted work, how they used it. Chatbots (such as ChatGPT) should not be listed as authors because they cannot be responsible for the accuracy, integrity, and originality of the work, and these responsibilities are required for authorship (see Section II.A.1). Therefore, humans are responsible for any submitted material that included the use of AI-assisted technologies. Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased. Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI. Humans must ensure there is appropriate attribution of all quoted material, including full citations [ 113 ].

We agree with the ICMJE’s position, which mirrors views we defended in print before the ICMJE released its guidance [ 101 , 102 ].

Authorship on scientific papers is based not only on making a substantial contribution, but also on being accountable for the work [ 207 ]. Because authorship implies significant epistemic and ethical responsibilities, one should not be named as an author on a work if one cannot be accountable for one’s contribution to the work. If questions arise about the work after publication, one needs to be able to answer those questions intelligibly and if deemed liable, face possible legal, financial, or social consequences for one’s actions.

AI systems cannot be held accountable for their actions for three reasons: (1) they cannot provide intelligible explanations for what they did; (2) they cannot be held morally responsible for their actions; and (3) they cannot suffer consequences or be sanctioned. The first reason has to do with the previously discussed black box problem. Although current proposals for making AI explainable may help to deal with this issue, they still fall far short of humanlike accountability, because these proposals do not require that the AI system, itself , should provide an explanation. Regarding the second reason, when we hold humans accountable, we expect them to explain their behavior in clear and intelligible language. Footnote 23 If a principal investigator wonders why a graduate student did not report all the data related to an experiment, the investigator expects the student to explain why they did what they did. Current AI systems cannot do this. In some cases, someone else may be able to provide an explanation of how they work and what they do, but this is not the same as the AI providing the explanation, which is a prerequisite for accountability. The third reason has to do with the link between accountability and sanctions. If an AI system makes a mistake that harms others, it cannot be sanctioned. These systems do not have interests, values, reputations, and feelings in the same way that humans do and cannot be punished by law enforcement.

Even if an AI can intelligibly explain itself in the future, this does not imply that it can be morally responsible. While the concept of moral agency, like the concept of consciousness, is controversial, there is general agreement that moral agency requires the capacity to perform intentional (or purposeful) actions, understand moral norms, and make decisions based on moral norms. These capacities also presuppose additional capacities, such as consciousness, self-awareness, personal memory, perception, general intelligence, and emotions [ 46 , 95 , 213 ]. While computer scientists are making some progress on developing AI systems that have quasi-moral agency, that is, AI systems that can make decisions based on moral norms [ 71 , 196 , 203 ], they are still a long way from developing AGI or AC (see definitions of these terms in Sect.  2 ), which would seem to be required for genuine moral agency.

Moreover, other important implications follow from current AI’s lack of moral agency. First, AI systems cannot be named as inventors on patents, because inventorship also implies moral agency [ 62 ]. Patents are granted to individuals, i.e., persons, but since AI systems lack moral agency, they do not qualify as persons under the patent laws adopted by most countries. Second, AI systems cannot be copyright holders, because to own a copyright, one must be a person [ 49 ]. Copyrights, under US law, are granted only to people [ 224 ].

Although AI systems should not be named as authors or inventors, it is still important to appropriately recognize their contributions. Recognition should be granted not only to promote honesty and transparency in research but also to prevent human authors from receiving undue credit. For example, although many scientists and engineers deserve considerable accolades for solving the protein folding problem [ 118 , 176 ], failing to mention the role of AlphaFold in this discovery would be giving human contributors more credit than they deserve.

7.7 AI and research ethics education

The last topic we will address in this section has to do with education and mentoring in the responsible conduct of research (RCR), which is widely recognized as essential to promoting ethical judgment, reasoning, and behavior in science [ 207 ]. In the US, the NIH and National Science Foundation (NSF) require RCR education for funded students and trainees, and many academic institutions require some form of RCR training for all research faculty [ 190 ]. Topics typically covered in RCR courses, seminars, workshops, or training sessions include data fabrication and falsification, plagiarism, investigation of misconduct, scientific record keeping, data management, rigor and reproducibility, authorship, peer review, publication, conflict of interest, mentoring, safe research environments, protection of human and animal subjects, and social responsibility [ 207 ]. As demonstrated in this paper, the use of AI in research has a direct bearing on most of these topics, but especially on authorship, rigor and reproducibility, peer review, and social responsibility. We recommend, therefore, that RCR education and training incorporate discussion of the use of AI in research, wherever relevant.

8 Conclusion

Using AI in research benefits science and society but also creates some novel and complex ethical issues that affect accountability, responsibility, transparency, trustworthiness, reproducibility, fairness, objectivity, and other important values in research. Although scientists do not need to radically revise their ethical norms to deal with these issues, they do need new guidance for the appropriate use of AI in research. Table 2 provides a summary of our recommendations for this guidance. Since AI continues to advance rapidly, scientists, academic institutions, funding agencies, and publishers should continue to discuss AI’s impact on research and update their knowledge, ethical guidelines, and policies accordingly. Guidance should be periodically revised as AI becomes woven into the fabric of scientific practice (or normalized) and researchers learn about it, adapt to it, and use it in novel ways. Since science has significant impacts on society, public engagement in these discussions is crucial for the responsible use and development of AI in research [ 234 ].

In closing, we will observe that many scholars, including ourselves, assume that today’s AI systems lack the capacities necessary for moral agency. This assumption has played a key role in our analysis of ethical uses of AI in research and has informed our recommendations. We realize that a day may arrive, possibly sooner than many would like to believe, when AI will advance to the point that this assumption will need to be revised, and that society will need to come to terms with the moral rights and responsibilities of some types of AI systems. Perhaps AI systems will one day participate in science as full partners in discovery and innovation [ 33 , 126 ]. Although we do not view this as a matter that now demands immediate attention, we remain open to further discussion of this issue in the future.

There is not sufficient space in this paper to conduct a thorough review of all the ways that AI is being used in scientific research. For reviews, see Wang et al. [ 231 ] and Krenn et al. [ 126 ].

However, the National Institutes of Health has prohibited the use of AI to review grants (see Sect.  7.5 ).

This is a simplified taxonomy of AI that we have found useful for framing the research ethics issues. For a more detailed taxonomy, see Graziani et al. [ 86 ].

See Krenn et al. [ 126 ] for a thoughtful discussion of the possible role of AGI in scientific research.

We will use the term ‘input’ in a very general sense to refer to data which are routed into the system, such as numbers, text, or image pixels.

It is important to note that the [ 167 ] paper was corrected to remove ChatGPT as an author because the tool did not meet the journal’s authorship criteria. See O’Connor [ 166 ].

There are important philosophical issues at stake here concerning whether AI users should regard an output as ‘acceptable’ or ‘true’, but these questions are beyond the scope of our paper.

The question of whether true randomness exists in nature is metaphysically controversial because some physicists and philosophers argue that nothing happens by pure chance [ 64 ]. We do not need to delve into this issue here, since most people agree that the distinction can be viewed as epistemic and not metaphysical, that is, an error is systemic or random relative to our knowledge about the generation of the error.

Some of the most well-known cases of bias involved the use of AI systems by private companies. For example, Amazon stopped using an AI hiring tool in 2018 after it discovered that the tool was biased against women [ 57 ]. In 2021, Facebook faced public ridicule and shame for using image recognition software that labelled images of African American men as non-human primates [ 117 ]. In 2021, Zillow lost hundreds of millions of dollars because its algorithm systematically overestimated the market value of homes the company purchased [ 170 ].

Fake citations and factual errors made by LLMs are often referred to as ‘hallucinations.’ We prefer not to use this term because it ascribes mental states to AI.

An additional, and perhaps more concerning, issue is that using chatbots to review the literature contributes to the deskilling of humanity because it involves trusting an AI’s interpretation and synthesis of the literature instead of reading it and thinking about it for oneself. Since deskilling is a problem with many different applications of AI, we will not explore it in depth here. See Vallor [ 226 ].

We are assuming here that the engineer or scientist has access to the computer code and training data, which private companies may be loath to provide. For example, developers at OpenAI and Google have not provided the public with access to their training data and code [ 130 ].

Although our discussion of the black box problem focuses on ML, in theory this problem could arise in any type of AI in which its workings cannot be understood by human beings.

Galileo had to convince his critics that his telescope could be trusted to convey reliable information about heavenly bodies, such as the moon and Jupiter. Explaining how the telescope works and comparing it to the human eye played an important role in his defense of the instrument [ 36 ].

This response may also conflate trust with verification. According to some theories of trust, if you trust something, you do not need to continually verify it. If I trust someone to tell me the truth, I do not need to continually verify that they are telling the truth. Indeed, it seems that we verify because we do not trust. For further discussion, see McLeod [ 145 ].

One could argue that deviation from honesty might be justified to protect human research subjects in some situations. For example, pseudonyms are often used in qualitative social/behavioral research to refer to participants or communities in order to protect their privacy [ 92 ].

Sanitizing LLMs is a form of censorship, which may be necessary in some cases, but also carries significant risks for freedom of expression [ 236 ].

Public, community, and stakeholder engagement is widely accepted as important for promoting trust in science and technology, but it can be difficult to implement, especially since publics, communities, and stakeholders can be difficult to identify and may have conflicting interests [ 157 ].

US federal policy defines research misconduct as data fabrication or falsification or plagiarism [ 168 ].

While the difference between recklessness and negligence can be difficult to ascertain, one way of thinking of recklessness is that it involves an indifference to or disregard for the veracity or integrity of research. Although almost all misconduct findings claim that the accused person (or respondent) acted intentionally, knowingly, or recklessly, there have been a few cases in which the respondent was found only to have acted recklessly [ 42 , 193 , 194 ].

The distinction between synthetic and real data raises some interesting and important philosophical and policy issues that we will examine in more depth in future work.

Some editors and publishers have been using AI to review and screen journal submissions [ 35 , 212 ]. For a discussion of issues raised by using AI in peer review, see Hosseini and Horbach [ 98 , 99 ].

This issue reminds us of the scene in 2001: A Space Odyssey in which the human astronauts ask the ship’s computer, HAL, to explain why it incorrectly diagnosed a problem with the AE-35 unit. HAL responds that HAL 9000 computers have never made an error, so the misdiagnosis must be due to human error.

Aboumatar, H., Thompson, C., Garcia-Morales, E., Gurses, A.P., Naqibuddin, M., Saunders, J., Kim, S.W., Wise, R.: Perspective on reducing errors in research. Contemp. Clin. Trials Commun. 23 , 100838 (2021)

Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K., Walters, P.: Molecular Biology of the Cell, 4th edn. Garland Science, New York and London (2002)

Ali, R., Connolly, I.D., Tang, O.Y., Mirza, F.N., Johnston, B., Abdulrazeq, H.F., Galamaga, P.F., Libby, T.J., Sodha, N.R., Groff, M.W., Gokaslan, Z.L., Telfeian, A.E., Shin, J.H., Asaad, W.F., Zou, J., Doberstein, C.E.: Bridging the literacy gap for surgical consents: an AI-human expert collaborative approach. NPJ Digit. Med. 7 (1), 63 (2024)

All European Academies.: The European Code of Conduct for Research Integrity, Revised Edition 2023 (2023). https://allea.org/code-of-conduct/

Allyn, B.: The Google engineer who sees company's AI as 'sentient' thinks a chatbot has a soul. NPR (2022). https://www.npr.org/2022/06/16/1105552435/google-ai-sentient

Alvarado, R.: Should we replace radiologists with deep learning? Bioethics 36 (2), 121–133 (2022)

Alvarado, R.: What kind of trust does AI deserve, if any? AI Ethics (2022). https://doi.org/10.1007/s43681-022-00224-x

Alvarado, R.: Computer simulations as scientific instruments. Found. Sci. 27 (3), 1183–1205 (2022)

Alvarado, R.: AI as an epistemic technology. Sci. Eng. Ethics 29 , 32 (2023)

American Society of Microbiology.: Code of Conduct (2021). https://asm.org/Articles/Ethics/COEs/ASM-Code-of-Ethics-and-Conduct

Ankarstad, A.: What is explainable AI (XAI)? Towards Data Science (2020). https://towardsdatascience.com/what-is-explainable-ai-xai-afc56938d513

Antun, V., Renna, F., Poon, C., Adcock, B., Hansen, A.C.: On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc. Natl. Acad. Sci. U.S.A. 117 (48), 30088–30095 (2020)

Assael, Y., Sommerschield, T., Shillingford, B., Bordbar, M., Pavlopoulos, J., Chatzipanagiotou, M., Androutsopoulos, I., Prag, J., de Freitas, N.: Restoring and attributing ancient texts using deep neural networks. Nature 603 , 280–283 (2022)

Babu, N.V., Kanaga, E.G.M.: Sentiment analysis in social media data for depression detection using artificial intelligence: a review. SN Comput. Sci. 3 , 74 (2022)

Badini, S., Regondi, S., Pugliese, R.: Unleashing the power of artificial intelligence in materials design. Materials 16 (17), 5927 (2023). https://doi.org/10.3390/ma16175927

Ball, P.: Is AI leading to a reproducibility crisis in science? Nature 624 , 22–25 (2023)

Barrera, F.J., Brown, E.D.L., Rojo, A., Obeso, J., Plata, H., Lincango, E.P., Terry, N., Rodríguez-Gutiérrez, R., Hall, J.E., Shekhar, S.: Application of machine learning and artificial intelligence in the diagnosis and classification of polycystic ovarian syndrome: a systematic review. Front. Endocrinol. (2023). https://doi.org/10.3389/fendo.2023.1106625

Bartosz, B.B., Bartosz, J.: Can artificial intelligences be moral agents? New Ideas Psychol. 54 , 101–106 (2019)

Baum, J., Villasenor, J.: The politics of AI: ChatGPT and political biases. Brookings (2023). https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/

BBC News.: Alexa tells 10-year-old girl to touch live plug with penny. BBC News (2021). https://www.bbc.com/news/technology-59810383

Begus, G., Sprouse, R., Leban, A., Silva, M., Gero, S.: Vowels and diphthongs in sperm whales (2024). https://doi.org/10.31219/osf.io/285cs

Bevier, C.: ChatGPT broke the Turing test—the race is on for new ways to assess AI. Nature (2023). https://www.nature.com/articles/d41586-023-02361-7

Bevier, C.: The easy intelligence test that AI chatbots fail. Nature 619 , 686–689 (2023)

Bhattacharyya, M., Miller, V.M., Bhattacharyya, D., Miller, L.E.: High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus 15 (5), e39238 (2023)

Biddle, S.: The internet’s new favorite AI proposes torturing Iranians and surveilling mosques. The Intercept (2022). https://theintercept.com/2022/12/08/openai-chatgpt-ai-bias-ethics/

Bird, S.J., Housman, D.E.: Trust and the collection, selection, analysis and interpretation of data: a scientist’s view. Sci. Eng. Ethics 1 (4), 371–382 (1995)

Biology for Life.: n.d. https://www.biologyforlife.com/error-analysis.html

Blumauer, A.: How ChatGPT works and the problems with non-explainable AI. Pool Party (2023). https://www.poolparty.biz/blogposts/how-chat-gpt-works-non-explainable-ai#:~:text=ChatGPT%20is%20the%20antithesis%20of,and%20explainability%20are%20critical%20requirements

Bogost, I.: ChatGPT is dumber than you think. The Atlantic (2022). https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligencewriting-ethics/672386/

Bolanos, F., Salatino, A., Osborne, F., Motta, E.: Artificial intelligence for literature reviews: opportunities and challenges (2024). arXiv:2402.08565

Bordukova, M., Makarov, N., Rodriguez-Esteban, P., Schmich, F., Menden, M.P.: Generative artificial intelligence empowers digital twins in drug discovery and clinical trials. Expert Opin. Drug Discov. 19 (1), 33–42 (2024)

Borowiec, M.L., Dikow, R.B., Frandsen, P.B., McKeeken, A., Valentini, G., White, A.E.: Deep learning as a tool for ecology and evolution. Methods Ecol. Evol. 13 (8), 1640–1660 (2022)

Bostrom, N.: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford (2014)

Bothra, A., Cao, Y., Černý, J., Arora, G.: The epidemiology of infectious diseases meets AI: a match made in heaven. Pathogens 12 (2), 317 (2023)

Brainard, J.: As scientists face a flood of papers, AI developers aim to help. Science (2023). https://www.science.org/content/article/scientists-face-flood-papers-ai-developers-aim-help

Brown, H.I.: Galileo on the telescope and the eye. J. Hist. Ideas 46 (4), 487–501 (1985)

Brumfiel, G.: New proteins, better batteries: Scientists are using AI to speed up discoveries. NPR (2023). https://www.npr.org/sections/health-shots/2023/10/12/1205201928/artificial-intelligence-ai-scientific-discoveries-proteins-drugs-solar

Brunello, N.: Example of a deep neural network (2021). https://commons.wikimedia.org/wiki/File:Example_of_a_deep_neural_network.png

Burrell, J.: How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc. 3 (1), 2053951715622512 (2016)

Calder, T.: The concept of evil. Stanford Encyclopedia of Philosophy (2022). https://plato.stanford.edu/entries/concept-evil/#KanTheEvi

Callaway, A.: ‘The entire protein universe’: AI predicts shape of nearly every known protein. Nature 608 , 14–16 (2022)

Caron, M.M., Dohan, S.B., Barnes, M., Bierer, B.E.: Defining "recklessness" in research misconduct proceedings. Accountability in Research, pp. 1–23 (2023)

Castelvecchi, D.: AI chatbot shows surprising talent for predicting chemical properties and reactions. Nature (2024). https://www.nature.com/articles/d41586-024-00347-7

CBS News.: ChatGPT and large language model bias. CBS News (2023). https://www.cbsnews.com/news/chatgpt-large-language-model-bias-60-minutes-2023-03-05/

CC BY-SA 4.0 DEED.: Amino-acid chains, known as polypeptides, fold to form a protein (2020). https://en.wikipedia.org/wiki/AlphaFold#/media/File:Protein_folding_figure.png

Cervantes, J.A., López, S., Rodríguez, L.F., Cervantes, S., Cervantes, F., Ramos, F.: Artificial moral agents: a survey of the current status. Sci. Eng. Ethics 26 (2), 501–532 (2020)

Chan, B.: Black-box assisted medical decisions: AI power vs. ethical physician care. Med. Health Care Philos. 26 , 285–292 (2023)

ChatGPT, Zhavoronkov, A.: Rapamycin in the context of Pascal’s Wager: generative pre-trained transformer perspective. Oncoscience 9 , 82–84 (2022)

Chatterjee, M.: AI cannot hold copyright, federal judge rules. Politico (2023). https://www.politico.com/news/2023/08/21/ai-cannot-hold-copyright-federal-judge-rules-00111865#:~:text=Friday's%20ruling%20will%20be%20a%20critical%20component%20in%20future%20legal%20fights.&text=Artificial%20intelligence%20cannot%20hold%20a,a%20federal%20judge%20ruled%20Friday

Chen, R.J., Lu, M.Y., Chen, T.Y., Williamson, D.F., Mahmood, F.: Synthetic data in machine learning for medicine and healthcare. Nat. Biomed. Eng. 5 , 493–497 (2021)

Chen, S., Kann, B.H., Foote, M.B., Aerts, H.J.W.L., Savova, G.K., Mak, R.H., Bitterman, D.S.: Use of artificial intelligence chatbots for cancer treatment information. JAMA Oncol. 9 (10), 1459–1462 (2023)

Cyrus, L.: How to fold graciously. In: Mossbauer Spectroscopy in Biological Systems: Proceedings of a Meeting Held at Allerton House, Monticello, Illinois, pp. 22–24 (1969)

Conroy, G.: Scientists used ChatGPT to generate an entire paper from scratch—but is it any good? Nature 619 , 443–444 (2023)

Conroy, G.: How ChatGPT and other AI tools could disrupt scientific publishing. Nature (2023). https://www.nature.com/articles/d41586-023-03144-w

Dai, B., Xu, Z., Li, H., Wang, B., Cai, J., Liu, X.: Racial bias can confuse AI for genomic studies. Oncologie 24 (1), 113–130 (2022)

Daneshjou, R., Smith, M.P., Sun, M.D., Rotemberg, V., Zou, J.: Lack of transparency and potential bias in artificial intelligence data sets and algorithms: a scoping review. JAMA Dermatol. 157 (11), 1362–1369 (2021)

Dastin, J.: Amazon scraps secret AI recruiting tool that showed bias against women. Reuters (2018). https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

de Bruijn, H., Warnier, M., Janssen, M.: The perils and pitfalls of explainable AI: strategies for explaining algorithmic decision-making. Gov. Inf. Q. 39 (2), 101666 (2022)

Delua, J.: Supervised vs. unsupervised learning: What’s the difference? IBM (2021). https://www.ibm.com/blog/supervised-vs-unsupervised-learning/

Dhinakaran, A.: Overcoming AI’s transparency paradox. Forbes (2021). https://www.forbes.com/sites/aparnadhinakaran/2021/09/10/overcoming-ais-transparency-paradox/?sh=6c6b18834b77

Dickson, B.: LLMs can’t self-correct in reasoning tasks, DeepMind study finds. Tech Talks (2023). https://bdtechtalks.com/2023/10/09/llm-self-correction-reasoning-failures

Dunlap, T.: Artificial intelligence (AI) as an inventor? Dunlap, Bennett and Ludwig (2023). https://www.dbllawyers.com/artificial-intelligence-as-an-inventor/

Durán, J.M., Jongsma, K.R.: Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J. Med. Ethics 47 (5), 329–335 (2021)

Einstein, A.: Letter to Max Born. Walker and Company, New York (1926). Published in: Irene Born (translator), The Born-Einstein Letters (1971)

Eisenstein, M.: Teasing images apart, cell by cell. Nature 623 , 1095–1097 (2023)

Eliot, L.: Nobody can explain for sure why ChatGPT is so good at what it does, troubling AI ethics and AI Law. Forbes (2023). https://www.forbes.com/sites/lanceeliot/2023/04/17/nobody-can-explain-for-sure-why-chatgpt-is-so-good-at-what-it-does-troubling-ai-ethics-and-ai-law/?sh=334c95685041

Eliot, L.: Generative AI ChatGPT can disturbingly gobble up your private and confidential data, forewarns AI ethics and AI law. Forbes (2023). https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=592b16547fdb

Elliott, K.C., Resnik, D.B.: Making open science work for science and society. Environ. Health Perspect. 127 (7), 75002 (2019)

Euro News.: Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change. Euro News (2023). https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate

European Agency for Fundamental Rights.: Data quality and Artificial Intelligence—Mitigating Bias and Error to Protect Fundamental Rights (2019). https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-data-quality-and-ai_en.pdf

Evans, K., de Moura, N., Chauvier, S., Chatila, R., Dogan, E.: Ethical decision making in autonomous vehicles: the AV ethics project. Sci. Eng. Ethics 26 , 3285–3312 (2020)

Extance, A.: How AI technology can tame the scientific literature. Nature (2018). https://www.nature.com/articles/d41586-018-06617-5

Fanelli, D.: How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE 4 (5), e5738 (2009)

Food and Drug Administration.: Artificial intelligence (AI) and machine learning (ML) in medical devices (2020). https://www.fda.gov/media/142998/download

Food and Drug Administration.: Development and approval process: drugs (2023). https://www.fda.gov/drugs/development-approval-process-drugs

Fraenkel, A.S.: Complexity of protein folding. Bull. Math. Biol. 55 (6), 1199–1210 (1993)

Fuhrman, J.D., Gorre, N., Hu, Q., Li, H., El Naqa, I., Giger, M.L.: A review of explainable and interpretable AI with applications in COVID-19 imaging. Med. Phys. 49 (1), 1–14 (2022)

Garin, S.P., Parekh, V.S., Sulam, J., Yi, P.H.: Medical imaging data science competitions should report dataset demographics and evaluate for bias. Nat. Med. 29 (5), 1038–1039 (2023)

Giere, R., Bickle, J., Maudlin, R.F.: Understanding Scientific Reasoning, 5th edn. Wadsworth, Belmont (2005)

Gillispie, C.C.: The Edge of Objectivity. Princeton University Press, Princeton (1960)

Giuffrè, M., Shung, D.L.: Harnessing the power of synthetic data in healthcare: innovation, application, and privacy. NPJ Digit. Med. 6 , 186 (2023)

Godwin, R.C., Bryant, A.S., Wagener, B.M., Ness, T.J., DeBerryJJ, H.L.L., Graves, S.H., Archer, A.C., Melvin, R.L.: IRB-draft-generator: a generative AI tool to streamline the creation of institutional review board applications. SoftwareX 25 , 101601 (2024)

Google.: Responsible AI practices (2023). https://ai.google/responsibility/responsible-ai-practices/

Goldman, A.I.: Liaisons: philosophy meets the cognitive and social sciences. MIT Press, Cambridge (2003)

Grad, P.: Trick prompts ChatGPT to leak private data. TechXplore (2023). https://techxplore.com/news/2023-12-prompts-chatgpt-leak-private.html

Graziani, M., Dutkiewicz, L., Calvaresi, D., Amorim, J.P., Yordanova, K., Vered, M., Nair, R., Abreu, P.H., Blanke, T., Pulignano, V., Prior, J.O., Lauwaert, L., Reijers, W., Depeursinge, A., Andrearczyk, V., Müller, H.: A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences. Artif. Intell. Rev. 56 , 3473–3504 (2023)

Guinness, H.: The best AI image generators in 2023. Zappier (2023). https://zapier.com/blog/best-ai-image-generator/

Gulshan, V., Peng, L., Coram, M., Stumpe, M.C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J., Kim, R., Raman, R., Nelson, P.C., Mega, J.L., Webster, D.R.: Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316 (22), 2402–2410 (2016)

Haack, S.: Defending Science within Reason. Prometheus Books, New York (2007)

Hackernoon.: (2024). https://hackernoon.com/the-times-v-microsoftopenai-unauthorized-reproduction-of-times-works-in-gpt-model-training-10

Hagendorff, T., Fabi, S., Kosinski, M.: Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT. Nat. Comput. Sci. (2023). https://doi.org/10.1038/s43588-023-00527-x

Heaton, J.: “*Pseudonyms are used throughout”: a footnote, unpacked. Qual. Inq. 1 , 123–132 (2022)

Heikkilä, M.: How OpenAI is trying to make ChatGPT safer and less biased. The Atlantic (2023). https://www.technologyreview.com/2023/02/21/1068893/how-openai-is-trying-to-make-chatgpt-safer-and-less-biased/

Helmenstine, A.: Systematic vs random error—differences and examples. Science Notes (2021). https://sciencenotes.org/systematic-vs-random-error-differences-and-examples/

Himma, K.E.: Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent? Ethics Inf. Technol. 11 , 19–29 (2009)

Holzinger, A., Langs, G., Denk, H., Zatloukal, K., Müller, H.: Causability and explainability of artificial intelligence in medicine. Wires (2019). https://doi.org/10.1002/widm.1312

Hosseini, M., Holmes, K.: Is it ethical to use generative AI if you can’t tell whether it is right or wrong? [Blog Post]. Impact of Social Sciences(2024). https://blogs.lse.ac.uk/impactofsocialsciences/2024/03/15/is-it-ethical-to-use-generative-ai-if-you-cant-tell-whether-it-is-right-or-wrong/

Hosseini, M., Horbach, S.P.J.M.: Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Res. Integr. Peer Rev. 8 (1), 4 (2023)

Hosseini, M., Horbach, S.P.J.M.: Can generative AI add anything to academic peer review? [Blog Post] Impact of Social Sciences(2023). https://blogs.lse.ac.uk/impactofsocialsciences/2023/09/26/can-generative-ai-add-anything-to-academic-peer-review/

Hosseini, M., Senabre Hidalgo, E., Horbach, S.P.J.M., Güttinger, S., Penders, B.: Messing with Merton: the intersection between open science practices and Mertonian values. Accountability in Research, pp. 1–28 (2022)

Hosseini, M., Rasmussen, L.M., Resnik, D.B.: Using AI to write scholarly publications. Accountability in Research, pp. 1–9 (2023)

Hosseini, M., Resnik, D.B., Holmes, K.: The ethics of disclosing the use of artificial intelligence in tools writing scholarly manuscripts. Res. Ethics (2023). https://doi.org/10.1177/17470161231180449

Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L.H., Aerts, H.J.W.L.: Artificial intelligence in radiology. Nat. Rev. Cancer 18 (8), 500–510 (2018)

Howson, C., Urbach, P.: Scientific Reasoning: A Bayesian Approach, 3rd edn. Open Court, New York (2005)

Humphreys, P.: Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford University Press, New York (2004)

Huo, T., Li, L., Chen, X., Wang, Z., Zhang, X., Liu, S., Huang, J., Zhang, J., Yang, Q., Wu, W., Xie, Y., Wang, H., Ye, Z., Deng, K.: Artificial intelligence-aided method to detect uterine fibroids in ultrasound images: a retrospective study. Sci. Rep. 13 (1), 3714 (2023)

Hutson, M.: Hypotheses devised by AI could find ‘blind spots’ in research. Nature (2023). https://www.nature.com/articles/d41586-023-03596

IBM.: What is AI? (2023). https://www.ibm.com/topics/artificial-intelligence

IBM.: What is a Captcha? (2023). https://www.ibm.com/topics/captcha

IBM.: Explainable AI (2023). https://www.ibm.com/topics/explainable-ai

IBM.: What is generative AI? (2023). https://research.ibm.com/blog/what-is-generative-AI

IBM.: What is ML? (2024). https://www.ibm.com/topics/machine-learning

International Committee of Medical Journal Editors.: Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly work in Medical Journals (2023). https://www.icmje.org/icmje-recommendations.pdf

International Organization for Standardization.: What is AI? (2024). https://www.iso.org/artificial-intelligence/what-is-ai#:~:text=Artificial%20intelligence%20is%20%E2%80%9Ca%20technical,%2FIEC%2022989%3A2022%5D

Janowicz, K., Gao, S., McKenzie, G., Hu, Y., Bhaduri, B.: GeoAI: spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. Int. J. Geogr. Inf. Sci. 34 (4), 625–636 (2020)

Jenkins, R., Lin, P.:. AI-assisted authorship: How to assign credit in synthetic scholarship. SSRN Scholarly Paper No. 4342909 (2023). https://doi.org/10.2139/ssrn.4342909

Jones, D.: Facebook apologizes after its AI labels black men as 'primates'. NPR (2021). https://www.npr.org/2021/09/04/1034368231/facebook-apologizes-ai-labels-black-men-primates-racial-bias

Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S.A.A., Ballard, A.J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., Back, T., Petersen, S., Reiman, D., Clancy, E., Zielinski, M., Steinegger, M., Pacholska, M., Berghammer, T., Bodenstein, S., Silver, D., Vinyals, O., Senior, A.W., Kavukcuoglu, K., Kohli, P., Hassabis, D.: Highly accurate protein structure prediction with AlphaFold. Nature 596 (7873), 583–589 (2021)

Junction AI.: What is ChatGPT not good at? Junction AI (2023). https://junction.ai/what-is-chatgpt-not-good-at/

Kahn, J.: What wrong with “explainable A.I.” Fortune (2022). https://fortune.com/2022/03/22/ai-explainable-radiology-medicine-crisis-eye-on-ai/

Kahneman, D.: Thinking, Fast and Slow. Farrar, Straus, Giroux, New York (2011)

Kembhavi, A., Pattnaik, R.: Machine learning in astronomy. J. Astrophys. Astron. 43 , 76 (2022)

Kennedy, B., Tyson, A., Funk, C.: Americans’ trust in scientists, other groups declines. Pew Research Center (2022). https://www.pewresearch.org/science/2022/02/15/americans-trust-in-scientists-other-groups-declines/

Kim, I., Kang, K., Song, Y., Kim, T.J.: Application of artificial intelligence in pathology: trends and challenges. Diagnostics (Basel) 12 (11), 2794 (2022)

Kitcher, P.: The Advancement of Knowledge. Oxford University Press, New York (1993)

Krenn, M., Pollice, R., Guo, S.Y., Aldeghi, M., Cervera-Lierta, A., Friederich, P., Gomes, G.P., Häse, F., Jinich, A., Nigam, A., Yao, Z., Aspuru-Guzik, A.: On scientific understanding with artificial intelligence. Nat. Rev. Phys. 4 , 761–769 (2022)

Kruse, C.S., Smith, B., Vanderlinden, H., Nealand, A.: Security techniques for the electronic health records. J. Med. Syst. 41 (8), 127 (2017)

Kuhn, T.S.: The Essential Tension. University of Chicago Press, Chicago (1977)

Lal, A., Pinevich, Y., Gajic, O., Herasevich, V., Pickering, B.: Artificial intelligence and computer simulation models in critical illness. World Journal of Critical Care Medicine 9 (2), 13–19 (2020)

La Malfa, E., Petrov, A., Frieder, S., Weinhuber, C., Burnell, R., Cohn, A.G., Shadbolt, N., Woolridge, M.: The ARRT of language-models-as-a-service: overview of a new paradigm and its challenges (2023). arXiv: 2309.16573

Larkin, Z.: AI bias—what Is it and how to avoid it? Levity (2022). https://levity.ai/blog/ai-bias-how-to-avoid

Lee, N.T., Resnick, P., Barton, G.: Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms. Brookings Institute, Washington, DC (2019)

Leswing, K.: OpenAI announces GPT-4, claims it can beat 90% of humans on the SAT. CNBC (2023). https://www.cnbc.com/2023/03/14/openai-announces-gpt-4-says-beats-90percent-of-humans-on-sat.html

Licht, K., Licht, J.: Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy. AI Soc. 35 , 917–926 (2020)

Lipenkova, J.: Overcoming the limitations of large language models: how to enhance LLMs with human-like cognitive skills. Towards Data Science (2023). https://towardsdatascience.com/overcoming-the-limitations-of-large-language-models-9d4e92ad9823

London, A.J.: Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent. Rep. 49 (1), 15–21 (2019)

Longino, H.: Science as Social Knowledge. Princeton University Press, Princeton (1990)

Lubell, J.: ChatGPT passed the USMLE. What does it mean for med ed? AMA (2023). https://www.ama-assn.org/practice-management/digital/chatgpt-passed-usmle-what-does-it-mean-med-ed

Martinho, A., Poulsen, A., Kroesen, M., Chorus, C.: Perspectives about artificial moral agents. AI Ethics 1 , 477–490 (2021)

Martinson, B.C., Anderson, M.S., de Vries, R.: Scientists behaving badly. Nature 435 (7043), 737–738 (2005)

Martins, C., Padovan, P., Reed, C.: The role of explainable AI (XAI) in addressing AI liability. SSRN (2020). https://ssrn.com/abstract=3751740

Matta, V., Bansal, G., Akakpo, F., Christian, S., Jain, S., Poggemann, D., Rousseau, J., Ward, E.: Diverse perspectives on bias in AI. J. Inf. Technol. Case Appl. Res. 24 (2), 135–143 (2022)

Matthewson, J.: Trade-offs in model-building: a more target-oriented approach. Stud. Hist. Philos. Sci. Part A 42 (2), 324–333 (2011)

McCarthy, J.: What is artificial intelligence? (2007). https://www-formal.stanford.edu/jmc/whatisai.pdf

McLeod, C.: Trust. Stanford Encyclopedia of Philosophy (2020). https://plato.stanford.edu/entries/trust/

Merken, S.: Another US judge says lawyers must disclose AI use. Reuters (2023). https://www.reuters.com/legal/transactional/another-us-judge-says-lawyers-must-disclose-ai-use-2023-06-08/

Merton, R.: The Sociology of Science. University of Chicago Press, Chicago (1973)

Messeri, L., Crockett, M.J.: Artificial intelligence and illusions of understanding in scientific research. Nature (2024). https://doi.org/10.1038/s41586-024-07146-0

Mieth, B., Rozier, A., Rodriguez, J.A., Höhne, M.M., Görnitz, N., Müller, R.K.: DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies. NAR Genom. Bioinform. 3 (3), lqab065 (2021)

Milmo, D.: Two US lawyers fined for submitting fake court citations from ChatGPT. The Guardian (2023). https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt

Mitchell, M.: Artificial Intelligence. Picador, New York (2019)

Mitchell, M.: What does it mean for AI to understand? Quanta Magazine (2021). https://www.quantamagazine.org/what-does-it-mean-for-ai-to-understand-20211216/

Mitchell, M.: AI’s challenge of understanding the world. Science 382 (6671), eadm8175 (2023)

Mittermaier, M., Raza, M.M., Kvedar, J.C.: Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit. Med. 6 , 113 (2023)

Naddaf, M.: ChatGPT generates fake data set to support scientific hypothesis. Nature (2023). https://www.nature.com/articles/d41586-023-03635-w#:~:text=Researchers%20say%20that%20the%20model,doesn't%20pass%20for%20authentic

Nahas, K.: Now AI can be used to generate proteins. The Scientist (2023). https://www.the-scientist.com/news-opinion/now-ai-can-be-used-to-design-new-proteins-70997

National Academies of Sciences, Engineering, and Medicine: Gene Drives on the Horizon: Advancing Science, Navigating Uncertainty, and Aligning Research with Public Values. National Academies Press, Washington, DC (2016)

National Institutes of Health.: Guidelines for the Conduct of Research in the Intramural Program of the NIH (2023). https://oir.nih.gov/system/files/media/file/2023-11/guidelines-conduct_research.pdf

National Institutes of Health.: The use of generative artificial intelligence technologies is prohibited for the NIH peer review process. NOT-OD-23-149 (2023). https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html

National Transportation and Safety Board.: Investigations (2023). https://www.ntsb.gov/investigations/Pages/Investigations.aspx

Nawaz, M.S., Fournier-Viger, P., Shojaee, A., Fujita, H.: Using artificial intelligence techniques for COVID-19 genome analysis. Appl. Intell. (Dordrecht) 51 (5), 3086–3103 (2021)

Ng, G.W., Leung, W.C.: Strong artificial intelligence and consciousness. J. Artif. Intell. Conscious. 7 (1), 63–72 (2020)

Nordling, L.: How ChatGPT is transforming the postdoc experience. Nature 622 , 655–657 (2023)

Nost, E., Colven, E.: Earth for AI: a political ecology of data-driven climate initiatives. Geoforum 130 , 23–34 (2022)

Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M.E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., Broelemann, K., Kasneci, K., Tiropanis, T., Staab, S.: Bias in data-driven artificial intelligence systems—an introductory survey. Wires (2020). https://doi.org/10.1002/widm

O’Connor, S.: Corrigendum to “Open artificial intelligence platforms in nursing education: tools for academic progress or abuse?” [Nurse Educ. Pract. 66 (2023) 103537]. Nurse Educ. Pract. 67 , 103572 (2023)

O’Connor, S., ChatGPT: Open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Educ. Pract. 66 , 103537 (2023)

Office of Science and Technology Policy: Federal research misconduct policy. Fed. Reg. 65 (235), 76260–76264 (2000)

Office and Science and Technology Policy.: Blueprint for an AI Bill of Rights (2022). https://www.whitehouse.gov/ostp/ai-bill-of-rights/

Olavsrud, T.: 9 famous analytics and AI disasters. CIO (2023). https://www.cio.com/article/190888/5-famous-analytics-and-ai-disasters.html

Omiye, J.A., Lester, J.C., Spichak, S., Rotemberg, V., Daneshjou, R.: Large language models propagate race-based medicine. NPJ Digit. Med. 6 , 195 (2023)

Oncology Medical Physics.: Accuracy, precision, and error (2024). https://oncologymedicalphysics.com/quantifying-accuracy-precision-and-error/

OpenAI.: (2023). https://openai.com/chatgpt

Osoba, O., Welser, W.: An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. Rand Corporation (2017). https://www.rand.org/content/dam/rand/pubs/research_reports/RR1700/RR1744/RAND_RR1744.pdf

Othman, K.: Public acceptance and perception of autonomous vehicles: a comprehensive review. AI Ethics 1 , 355–387 (2021)

Ovchinnikov, S., Park, H., Varghese, N., Huang, P.S., Pavlopoulos, G.A., Kim, D.E., Kamisetty, H., Kyrpides, N.C., Baker, D.: Protein structure determination using metagenome sequence data. Science 355 (6322), 294–298 (2017)

Parikh, R.B., Teeple, S., Navathe, A.S.: Addressing bias in artificial intelligence in health care. J. Am. Med. Assoc. 322 (24), 2377–2378 (2019)

Parrilla, J.M.: ChatGPT use shows that the grant-application system is broken. Nature (2023). https://www.nature.com/articles/d41586-023-03238-5

Pearson, J.: Scientific Journal Publishes AI-Generated Rat with Gigantic Penis In Worrying Incident [Internet]. Vice (2024). https://www.vice.com/en/article/dy3jbz/scientific-journal-frontiers-publishes-ai-generated-rat-with-gigantic-penis-in-worrying-incident

Pennock, R.T.: An Instinct for Truth: Curiosity and the Moral Character of Science. MIT Press, Cambridge (2019)

Perni, S., Lehmann, L.S., Bitterman, D.S.: Patients should be informed when AI systems are used in clinical trials. Nat. Med. 29 (8), 1890–1891 (2023)

Perrigo, B.: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time Magazine (2023). https://time.com/6247678/openai-chatgpt-kenya-workers/

Pew Charitable Trust.: How FDA regulates artificial intelligence in medical products. Issue brief (2021). https://www.pewtrusts.org/en/research-and-analysis/issue-briefs/2021/08/how-fda-regulates-artificial-intelligence-in-medical-products

Raeburn, A.: What’s the difference between accuracy and precision? Asana (2023). https://asana.com/resources/accuracy-vs-precision

Rasmussen, L.: Why and how to incorporate issues of race/ethnicity and gender in research integrity education. Accountability in Research (2023)

Ratti, E., Graves, M.: Explainable machine learning practices: opening another black box for reliable medical AI. AI Ethics 2 , 801–814 (2022)

Resnik, D.B.: Social epistemology and the ethics of research. Stud. Hist. Philos. Sci. 27 , 566–586 (1996)

Resnik, D.B.: The Price of Truth: How Money Affects the Norms of Science. Oxford University Press, New York (2007)

Resnik, D.B.: Playing Politics with Science: Balancing Scientific Independence and Government Oversight. Oxford University Press, New York (2009)

Resnik, D.B., Dinse, G.E.: Do U.S. research institutions meet or exceed federal mandates for instruction in responsible conduct of research? A national survey. Acad. Med. 87 , 1237–1242 (2012)

Resnik, D.B., Elliott, K.C.: Value-entanglement and the integrity of scientific research. Stud. Hist. Philos. Sci. 75 , 1–11 (2019)

Resnik, D.B., Elliott, K.C.: Science, values, and the new demarcation problem. J. Gen. Philos. Sci. 54 , 259–286 (2023)

Resnik, D.B., Elliott, K.C., Soranno, P.A., Smith, E.M.: Data-intensive science and research integrity. Account. Res. 24 (6), 344–358 (2017)

Resnik, D.B., Smith, E.M., Chen, S.H., Goller, C.: What is recklessness in scientific research? The Frank Sauer case. Account. Res. 24 (8), 497–502 (2017)

Roberts, M., Driggs, D., Thorpe, M., Gilbey, J., Yeung, M., Ursprung, S., Aviles-Rivero, A.I., Etmann, C., McCague, C., Beer, L., Weir-McCall, J.R., Teng, Z., Gkrania-Klotsas, E., AIX-COVNET, Rudd, J.H.F., Sala, E., Schönlieb, C.B.: Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat. Mach. Intell. 3 , 199–217 (2021)

Rodgers, W., Murray, J.M., Stefanidis, A., Degbey, W.Y., Tarba, S.: An artificial intelligence algorithmic approach to ethical decision-making in human resource management processes. Hum. Resour. Manag. Rev. 33 (1), 100925 (2023)

Romero, A.: AI won’t master human language anytime soon. Towards Data Science (2021). https://towardsdatascience.com/ai-wont-master-human-language-anytime-soon-3e7e3561f943

Röösli, E., Rice, B., Hernandez-Boussard, T.: Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19. J. Am. Med. Inform. Assoc. 28 (1), 190–192 (2021)

Savage, N.: Breaking into the black box of artificial intelligence. Nature (2022). https://www.nature.com/articles/d41586-022-00858-1

Savage, N.: Synthetic data could be better than real data. Nature (2023). https://www.nature.com/articles/d41586-023-01445-8

Schmidt, E.: This is how AI will transform the way science gets done. MIT Technology Review (2023). https://www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science/#:~:text=AI%20can%20also%20spread%20the,promising%20candidates%20for%20new%20drugs

Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., Hal, P.: Towards a standard for identifying and managing bias in artificial intelligence. National Institute of Standards and Technology (2022). https://view.ckcest.cn/AllFiles/ZKBG/Pages/264/c914336ac0e68a6e3e34187adf9dd83bb3b7c09f.pdf

Semler, J.: Artificial quasi moral agency. In: AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (2022). https://doi.org/10.1145/3514094.3539549

Service, R.F.: The game has changed. AI triumphs at protein folding. Science 370 (6521), 1144–1145 (2022)

Service, R.: Materials-predicting AI from DeepMind could revolutionize electronics, batteries, and solar cells. Science (2023). https://www.science.org/content/article/materials-predicting-ai-deepmind-could-revolutionize-electronics-batteries-and-solar

Seth, A.: Being You: A New Science of Consciousness. Faber and Faber, London (2021)

Shamoo, A.E., Resnik, D.B.: Responsible Conduct of Research, 4th edn. Oxford University Press, New York (2022)

Shapin, S.: Here and everywhere: sociology of scientific knowledge. Ann. Rev. Sociol. 21 , 289–321 (1995)

Solomon, M.: Social Empiricism. MIT Press, Cambridge (2007)

Southern, M.G.: ChatGPT update: Improved math capabilities. Search Engine Journal (2023). https://www.searchenginejournal.com/chatgpt-update-improved-math-capabilities/478057/

Straw, I., Callison-Burch, C.: Artificial Intelligence in mental health and the biases of language based models. PLoS ONE 15 (12), e0240376 (2020)

Swaak, T.: ‘We’re all using it’: Publishing decisions are increasingly aided by AI. That’s not always obvious. The Chronicle of Higher Education (2023). https://deal.town/the-chronicle-of-higher-education/academe-today-publishing-decisions-are-increasingly-aided-by-ai-but-thats-not-always-obvious-PK2J5KUC4

Talbert, M.: Moral responsibility. Stanford Encyclopedia of Philosophy (2019). https://plato.stanford.edu/entries/moral-responsibility/

Taloni, A., Scorcia, V., Giannaccre, G.: Large language model advanced data analysis abuse to create a fake data set in medical research. JAMA Ophthalmol. (2023). https://jamanetwork.com/journals/jamaophthalmology/fullarticle/2811505

Tambornino, L., Lanzerath, D., Rodrigues, R., Wright, D.: SIENNA D4.3: survey of REC approaches and codes for Artificial Intelligence & Robotics (2019). https://zenodo.org/records/4067990

Terwilliger, T.C., Liebschner, D., Croll, T.I., Williams, C.J., McCoy, A.J., Poon, B.K., Afonine, P.V., Oeffner, R.D., Richardson, J.S., Read, R.J., Adams, P.D.: AlphaFold predictions are valuable hypotheses and accelerate but do not replace experimental structure determination. Nat. Methods (2023). https://doi.org/10.1038/s41592-023-02087-4

The White House.: Biden-⁠Harris administration secures voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI (2023). https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/#:~:text=President%20Biden%20signed%20an%20Executive,the%20public%20from%20algorithmic%20discrimination

Thorp, H.H.: ChatGPT is fun, but not an author. Science 379 (6630), 313 (2023)

Turing.: Complete analysis of artificial intelligence vs artificial consciousness (2023). https://www.turing.com/kb/complete-analysis-of-artificial-intelligence-vs-artificial-consciousness

Turing, A.: Computing machinery and intelligence. Mind 59 (236), 433–460 (1950)

UK Statistic Authority.: Ethical considerations relating to the creation and use of synthetic data (2022). https://uksa.statisticsauthority.gov.uk/publication/ethical-considerations-relating-to-the-creation-and-use-of-synthetic-data/pages/2/

Unbabel.: Why AI fails in the wild. Unbabel (2019). https://resources.unbabel.com/blog/artificial-intelligence-fails

UNESCO.: Ethics of Artificial Intelligence (2024). https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

US Copyright Office: Copyright registration guidance: works containing material generated by artificial intelligence. Fed. Reg. 88 (51), 16190–16194 (2023)

University of Michigan.: Generative artificial intelligence (2023). https://genai.umich.edu/

Vallor, S.: Moral deskilling and upskilling in a new machine age: reflections on the ambiguous future of character. Philos. Technol. 28 , 107–124 (2015)

Van Gulick, R.: Consciousness. Stanford Encyclopedia of Philosophy (2018). https://plato.stanford.edu/entries/consciousness/

Varoquaux, G., Cheplygina, V.: Machine learning for medical imaging: methodological failures and recommendations for the future. NPJ Digit. Med. 5 , 48 (2022)

Vanian, J., Leswing, K.: ChatGPT and generative AI are booming, but the costs can be extraordinary. CNBC (2023). https://www.cnbc.com/2023/03/13/chatgpt-and-generative-ai-are-booming-but-at-a-very-expensive-price.html

Walters, W.H., Wilder, E.I.: Fabrication and errors in the bibliographic citations generated by ChatGPT. Sci. Rep. 13 , 14045 (2023)

Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., Chandak, P., Liu, S., Van Katwyk, P., Deac, A., Anandkumar, A., Bergen, K., Gomes, C.P., Ho, S., Kohli, P., Lasenby, J., Leskovec, J., Liu, T.Y., Manrai, A., Marks, D., Ramsundar, B., Song, L., Sun, J., Tang, J., Veličković, P., Welling, M., Zhang, L., Coley, C.W., Bengio, Y., Zitnik, M.: Scientific discovery in the age of artificial intelligence. Nature 620 (7972), 47–60 (2023)

Weiss, D.C.: Latest version of ChatGPT aces bar exam with score nearing 90th percentile. ABA J. (2023). https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile

Whitbeck, C.: Truth and trustworthiness in research. Sci. Eng. Ethics 1 (4), 403–416 (1995)

Wilson, C.: Public engagement and AI: a values analysis of national strategies. Gov. Inf. Q. 39 (1), 101652 (2022)

World Conference on Research Integrity.: Singapore Statement (2010). http://www.singaporestatement.org/statement.html

Zheng, S.: China’s answers to ChatGPT have a censorship problem. Bloomberg (2023). https://www.bloomberg.com/news/newsletters/2023-05-02/china-s-chatgpt-answers-raise-questions-about-censoring-generative-ai

Ziman, J.: Real Science. Cambridge University Press, Cambridge (2000)

Funding

Open access funding provided by the National Institutes of Health. Funding was provided by Foundation for the National Institutes of Health (Grant number: ziaes102646-10).

Author information

Authors and affiliations

National Institute of Environmental Health Sciences, Durham, USA

David B. Resnik

Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL, USA

Mohammad Hosseini

Galter Health Sciences Library and Learning Center, Northwestern University Feinberg School of Medicine, Chicago, IL, USA

Corresponding author

Correspondence to David B. Resnik.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Resnik, D.B., Hosseini, M. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00493-8

Received: 14 December 2023

Accepted: 07 May 2024

Published: 27 May 2024

DOI: https://doi.org/10.1007/s43681-024-00493-8

Keywords

  • Artificial intelligence
  • Transparency
  • Accountability
  • Explainability
  • Social responsibility