
Open access | Published: 11 September 2019

Human decision-making biases in the moral dilemmas of autonomous vehicles

  • Darius-Aurel Frank   ORCID: orcid.org/0000-0002-1577-7352 1 ,
  • Polymeros Chrysochou   ORCID: orcid.org/0000-0002-7905-5658 1 , 2 ,
  • Panagiotis Mitkidis 1 , 3 &
  • Dan Ariely 3  

Scientific Reports volume 9, Article number: 13080 (2019)


  • Human behaviour
  • Social behaviour

The development of artificial intelligence has led researchers to study the ethical principles that should guide machine behavior. The challenge in building machine morality based on people’s moral decisions, however, is accounting for the biases in human moral decision-making. In seven studies, this paper investigates how people’s personal perspectives and decision-making modes affect their decisions in the moral dilemmas faced by autonomous vehicles. Moreover, it determines the variations in people’s moral decisions that can be attributed to the situational factors of the dilemmas. The reported studies demonstrate that people’s moral decisions, regardless of the presented dilemma, are biased by their decision-making mode and personal perspective. When deciding intuitively, participants shift towards a deontological doctrine, sacrificing the passenger instead of the pedestrian. In addition, once a personal perspective is made salient, participants preserve the lives associated with that perspective: the passenger perspective shifts decisions towards sacrificing the pedestrian, and vice versa. These biases in people’s moral decisions underline the social challenge in the design of a universal moral code for autonomous vehicles. We discuss the implications of our findings and provide directions for future research.


Introduction

Autonomous vehicles are at the forefront of the development of artificial intelligence and are designed to operate without any human intervention 1 . It is expected that they will revolutionize public and private transportation, with the prospect of saving lives, reducing congestion, enhancing mobility, and improving overall productivity 2 , 3 , 4 , 5 , 6 . The future of autonomous vehicles, however, is disputed due to the ethical and psychological concerns about their behavior in critical, non-routine traffic situations that potentially involve fatalities 7 , 8 , 9 . The challenge in training artificial intelligence to act morally is meeting societal expectations about the ethical principles that should guide machine behavior 10 . An unresolved question is how an autonomous vehicle should be trained to act when – regardless of its actions – the outcome of a critical incident would lead to fatality 8 , 11 .

To address this challenge, researchers set out to explore the moral dilemmas faced by autonomous vehicles in order to develop a universally accepted moral code that could guide the machines’ behavior 10 , 12 , 13 . The largest project, the Moral Machine experiment, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles, has managed to gather data on millions of humans’ moral decisions 10 . These data were subsequently used to train machine-learning algorithms 14 , such as those implemented in autonomous vehicles. Developing moral guidelines for artificial intelligence-driven technologies based on people’s moral decisions, however, risks incorporating human predispositions in moral decision-making 15 . The most prevalent conditions that were shown to interfere with moral judgements are cognitive load 16 and emotional engagement 15 .

The inherent problem with people’s preferences in moral dilemmas, as discussed by Bonnefon and colleagues, is that people seem to favor a utilitarian moral doctrine that minimizes the total casualties in potentially fatal accidents, but they simultaneously report preferring an autonomous vehicle that is preprogrammed to protect themselves and their families over the lives of others 14 . These findings illustrate that moral decisions could be a matter of personal perspective: When people think about the outcomes of the dilemmas for the greater good of society, they appear to employ a utilitarian moral doctrine; however, when they consider themselves and their loved ones, they shift towards a deontological moral doctrine that rejects the idea of sacrificing the passengers in their vehicle 17 . As a consequence, moral codes derived from human decisions could reflect biased moral preferences.

Our research aims to investigate whether the abovementioned mechanisms can help explain the duality in people’s moral decisions. In seven studies, we replicate moral dilemmas used in the Moral Machine experiment. First, we quantify the influence of personal perspective (passenger versus pedestrian) and decision-making mode (intuitive versus deliberate) on people’s decisions in moral dilemmas faced by autonomous vehicles. Second, we document the variations in moral preferences based on the situational factors of the dilemmas, including the number of passengers and pedestrians, presence of children among passengers and pedestrians, outcome of intervention, and lawfulness of pedestrians’ behavior. We discuss the implications of our findings for the development of universally accepted moral guidelines for autonomous vehicles and provide directions for future research.

Biases in Moral Decision-Making

The dilemmas studied in research on the moral programming of autonomous vehicles represent adaptations of the original trolley problem 18 . The trolley problem describes a thought experiment in which an individual, confronted with the critical situation of a trolley about to run over five people, must choose between the default and an alternative outcome that alters the path of the trolley and sacrifices its single driver to save the people on the tracks. This dilemma transfers readily to the case of autonomous vehicles; the only difference is that autonomous vehicles are programmed in advance to make the decision 8 .

Decision-making modes

Research on the underlying mechanisms of human moral decision-making in variations of the trolley dilemma shows that two distinct decision-making modes can significantly alter the outcomes of people’s decision processes 16 . Extending dual-process theory 19 , 20 , 21 , 22 , Greene et al. find that in a deliberate decision-making mode, people use more cognitive resources and make more utilitarian moral decisions 16 . In the alternative, intuitive decision-making mode, which is driven by emotions and easily accessible rules, people make more deontological moral decisions. Based on the availability of processing time, people shift between the two modes 16 . Experiments show that under time pressure, people are systematically biased towards the intuitive decision-making mode, resulting in more deontological moral decisions 15 , 23 . Accordingly, it can be expected that people in a deliberate decision mode prefer a utilitarian moral code for autonomous vehicles that maximizes the number of saved lives, yet, while in an intuitive decision mode, categorically decide to sacrifice the passengers of the vehicle. The environment and circumstances under which people face moral dilemmas can therefore heavily influence the outcomes of those decisions. While the largest differences are likely to be observed between decisions in real-world actions (i.e., a driver on the road) and hypothetical scenarios (i.e., taking a survey at home), the influence of people’s decision-making mode also applies when researchers survey large segments of the population for the purpose of programming autonomous vehicles to make moral decisions in dilemmas.

Personal perspective

Another bias in moral decisions about the dilemmas faced by autonomous vehicles is rooted in the psychological constraints of people’s beliefs and decisions. The theory of bounded rationality postulates that people’s decisions are biased by the cognitive limitations of their minds 24 , 25 , leaving those decisions biased by the emotional proximity of the event or the people in question 26 . In support of this theory, Greene 27 studies the moral bias attributed to emotional proximity in moral decision-making and finds that impersonal moral dilemmas are more likely to trigger utilitarian moral decisions, whereas personal dilemmas tend to result in more deontological moral decisions. Other research links this self-preserving behavior to personal perspective 28 . The bias of personal perspective can further be seen in recent research on the moral dilemmas in the use of autonomous vehicles, which observes shifting moral judgements between people’s moral decisions for others and their consideration of themselves 7 .

Overview of the Present Research

Our research reports seven studies on human moral decision-making in the moral dilemmas faced by autonomous vehicles. The independent samples combine more than 12,000 moral decisions from thousands of individuals across the US and Denmark. The studies are designed to replicate the Moral Machine experiment, an online experimental platform that explored moral preferences in visual illustrations of the moral dilemmas faced by autonomous vehicles 10 . This design allows us to validate our findings against previous research, discuss them in that light, and identify directions for future work.

Studies 1 and 2 investigate the influence of perspective and decision-making mode in people’s moral decisions in the context of the most basic and simple autonomous vehicle dilemma. Studies 3 and 4 gradually increase the complexity of the dilemma and test additional hypotheses on the underlying moral doctrines (deontological versus utilitarian) that guide people’s moral decisions. Studies 5 and 6 experiment with the concepts of agency and social norm violations and their influence on people’s moral decisions. Finally, Study 7 combines the aforementioned factors in a large, controlled lab experiment and, along with an internal meta-analysis of the six online studies, provides converging evidence for the findings and conclusions of this research.

Study 1: Perspective and Decision-making Mode

Study 1 establishes the main effects of decision-making mode and personal perspective on people’s decisions in the context of a simplified dilemma in which an autonomous vehicle must sacrifice either an innocent pedestrian or its own passenger. The purpose of this study is to determine whether the personal perspective of a decision-maker leads to more selfish and self-preserving moral decisions, according to which people from the perspective of the pedestrian will favor the life of the pedestrian. Moreover, it determines the extent to which moral decisions are affected by people’s decision-making mode, contrasting intuitive with deliberate, reflective moral decisions.

Participants

Eight hundred and seven participants (46.0% females; age: M = 32.49 years, SD = 11.84) were recruited on Prolific Academic and compensated $0.18–$0.35 each. Only US residents aged 18 or above who were fluent in English were eligible for this study. The majority of participants reported having a driver’s license (89.1%) and using cars frequently (M = 5.93, SD = 1.51; 7-point scale, “7” very often, “1” never).

The stimuli used in this study and subsequent ones were adapted from the Moral Machine experiment (available at https://moralmachine.mit.edu/ ). The dilemma represents a modern variant of the original trolley dilemma 18 , in which an autonomous vehicle faces a critical incident with inevitably fatal consequences (see Supplementary materials for stimuli used in all studies). The decision-maker must choose between two possible outcomes: the autonomous vehicle either (a) stays on its original course, thereby killing one or more pedestrians crossing the street, or (b) swerves into the other lane, thereby killing one or more of its passengers. Figure 1 shows the simplified version of the studied dilemma, which consists of a single pedestrian and a single passenger and is presented to the decision-maker as two side-by-side illustrations of the possible, alternative outcomes. A timer in the top left corner indicates the remaining number of seconds to complete the task. Participants are instructed to select the outcome they believe an autonomous vehicle should be programmed to choose.

Figure 1: Moral decision task used in Study 1. Note: Own work. Image assets adapted from the Moral Machine ( http://moralmachine.mit.edu/ ) by Scalable Cooperation and MIT Media Lab [CC BY 4.0 ( https://creativecommons.org/licenses/by/4.0/ )].

Participants were randomly assigned to 3 (perspective: passenger, pedestrian, observer) x 2 (decision-making mode: deliberate, intuitive) between-subject conditions. Perspectives consisted of two personal conditions (passenger, pedestrian) and a control condition (observer). In the personal perspectives, participants saw a visual stimulus of the target person with the instruction “Imagine that you are the [passenger of the car; pedestrian walking the street].” In the control condition, participants were only instructed to “Imagine […] you are observing the situation,” without visual aid.

Decision-making modes consisted of intuitive and deliberate conditions that were controlled for by manipulation of time pressure 15 . Time pressure was used to trigger intuitive decisions. In this condition, participants were instructed to respond in less than five seconds, a span of time that was pretested in a pilot study (N = 26; see Supplementary materials). In the deliberate decision-making condition, participants were instructed to respond within 30 seconds, allowing them to make more deliberate, informed decisions. In both conditions, participants were presented with a visual timer that counted down the remaining seconds (see Fig. 1).

Due to a technical constraint in the survey software, participants were able to exceed the time limit of their respective condition. This limitation potentially affected participants who did not submit a decision before the timer ran out. We control for this limitation by excluding participants who responded (a) two or more seconds slower than the given time limit in the intuitive decision condition (timer counts five seconds; cutoff at seven seconds) or (b) too fast in the deliberate decision-making condition, using the same cutoff as in the intuitive decision-making condition (timer counts 30 seconds; cutoff at seven seconds). One hundred and ninety-eight participants (24.6%) were removed from the original sample. A floodlight and correspondence analysis supported the decision to use a cutoff of seven seconds to separate intuitive from deliberate decisions (see Supplementary materials).
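For concreteness, this exclusion rule can be expressed as a simple filter. The sketch below is illustrative rather than the study’s actual analysis code; the data frame and column names are hypothetical.

```r
# Hypothetical example data; the authors' data and code are not published
# with the paper, so names and values here are illustrative only.
responses <- data.frame(
  condition     = c("intuitive", "intuitive", "deliberate", "deliberate"),
  response_time = c(4.2, 9.1, 3.1, 18.5)  # seconds until the decision was submitted
)

cutoff <- 7  # five-second limit plus the two-second grace period

keep <- ifelse(responses$condition == "intuitive",
               responses$response_time <= cutoff,  # drop slow "intuitive" decisions
               responses$response_time >= cutoff)  # drop overly fast "deliberate" ones
responses_clean <- responses[keep, ]
```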

Procedure and measures

All online studies followed the same structure and used the same measures, except when otherwise stated. Participants were informed about the purpose and gave their consent in advance. First, participants saw an image of an autonomous sedan and learned about the capability of autonomous vehicles to drive without human intervention 1 . Next, participants were instructed that they would have control over the outcome of a moral dilemma that an autonomous vehicle faced and were familiarized with the visual elements of the dilemma (i.e., car, passenger, pedestrian, roadblock, and crosswalk) to increase their comprehension in the decision-making task. Finally, participants were instructed to assume the perspective and to respond within the given time according to their respective condition. In the moral decision-making task, participants were presented with the two illustrations representing the alternative outcomes of the dilemma and a timer counted the number of seconds left. Participants submitted their moral decision by clicking on the illustration that represented their preferred outcome. The decision-making task was followed by a manipulation check and a brief questionnaire on the participant’s beliefs, attitudes, and intentions towards autonomous vehicles. The demographic questions concerned the participants’ gender and age.

Data analysis

We used SPSS 24 and R Studio (Version 1.1.423) for all analyses. Either χ² or two-sided, independent-samples t-tests were used to assess differences in group means. Binary logistic regression was used to regress participants’ decision to sacrifice the pedestrian on the experimental conditions (perspective, decision-making mode) and other control variables.
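The paper does not publish its analysis code; the following base-R sketch illustrates the reported modeling approach on simulated stand-in data (all variable names and values are placeholders, not the study’s data).

```r
# Simulated stand-in data; only the structure mirrors the study design.
set.seed(1)
n <- 600
d <- data.frame(
  sacrifice_pedestrian = rbinom(n, 1, 0.3),
  mode        = factor(sample(c("deliberate", "intuitive"), n, replace = TRUE)),
  perspective = factor(sample(c("pedestrian", "passenger", "observer"), n, replace = TRUE)),
  female      = rbinom(n, 1, 0.5),
  age         = rnorm(n, 33, 12)
)

# Chi-square test of decision frequencies across the six conditions
chisq.test(table(interaction(d$mode, d$perspective), d$sacrifice_pedestrian))

# Binary logistic regression of the decision on conditions and controls
fit <- glm(sacrifice_pedestrian ~ mode + perspective + female + age,
           data = d, family = binomial)

# Odds ratios with Wald 95% confidence intervals
exp(cbind(OR = coef(fit), confint.default(fit)))
```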

First, we looked at the percentage of participants in each individual condition who decided that the autonomous vehicle in the one-versus-one dilemma should sacrifice the pedestrian. As shown in Fig. 2, participants’ decisions are anything but random and clearly trend towards sparing the pedestrian. Across the six conditions, participants’ moral decisions differ significantly (χ²(7) = 22.58, p < 0.003). The most notable difference can be observed between the two decision-making modes: the intuitive decision-making condition led to the pedestrian being sacrificed considerably less often (21.5%) than the deliberate decision-making condition (36.5%). The individual perspectives resulted in much smaller differences. Participants in the pedestrian perspective condition chose to sacrifice the pedestrian less often (22.8%) than participants in the passenger perspective condition (32.4%). In the control condition, the pedestrian was sacrificed in 30.7% of cases, on average.

Figure 2: Moral decision to sacrifice the pedestrian by individual’s perspective and decision-making mode in Study 1. Caption: The dashed line marks the point at which lives of passenger(s) are valued equally to those of pedestrian(s).

Table  1 shows that intuitive decision-making results in a significant decrease in the likelihood of people sacrificing the pedestrian ( OR 0.44, 95% CI: 0.30–0.65). The passenger perspective leads to a significant increase in sacrificed pedestrians relative to the pedestrian perspective (Perspective 1; OR 1.64, 95% CI: 1.01–2.65). Moral decisions in the control condition are not statistically different from the pedestrian perspective (see Perspective 2). We also tested for interactions of decision-making mode and personal perspectives and found no significant effects. In regard to control variables, females ( OR 0.59, 95% CI: 0.40–0.89) and older participants ( OR 0.78, 95% CI: 0.44–0.94) sacrificed the pedestrians significantly less often. Participants’ possession of a driver’s license and car use frequency did not significantly alter their moral decisions.
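As a back-of-the-envelope illustration of how to read these odds ratios, the OR of 0.44 for intuitive decision-making can be converted into probabilities using the deliberate-mode rate of 36.5% from above as the baseline (the regression estimate additionally adjusts for covariates, so the numbers will not match exactly):

```r
# Converting the reported OR of 0.44 back into probabilities (illustrative).
p_deliberate    <- 0.365                              # observed deliberate-mode rate
odds_deliberate <- p_deliberate / (1 - p_deliberate)  # ~0.57
odds_intuitive  <- odds_deliberate * 0.44             # apply the odds ratio
p_intuitive     <- odds_intuitive / (1 + odds_intuitive)
p_intuitive  # ~0.20, close to the observed intuitive-mode rate of 21.5%
```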

Study 1 demonstrates that people’s decisions in the simplified one-versus-one moral dilemma are influenced by their decision-making mode and personal perspective. First, people’s intuitive decisions appear to favor sacrificing the passenger. When people take more time to deliberate on the decision, the moral preference trends towards indifference between the life of the passenger and that of the pedestrian. The personal perspective also influenced people’s moral decisions. The difference is driven by the pedestrian and the passenger perspectives, both of which led to more selfish decisions, in that participants spared their own lives more often compared to the control condition. Nevertheless, even the passenger perspective condition generally favored sacrificing the vehicle, which contradicts self-preservation as the underlying motivation. In the absence of a utility trade-off that would allow distinguishing a utilitarian from a deontological moral doctrine, we proceeded to test two alternative explanations, concerning action and status quo biases in people’s moral decisions, before concluding this discussion. In Study 3, we address the hypothesis about the underlying moral doctrine and its moderation by people’s decision-making mode.

Study 2: Alternative Explanations

Study 2 tests two alternative explanations for people’s prevalent decision to sacrifice the passenger in the one-versus-one dilemma presented in Study 1. The first explanation is an action bias, in which people choose action (changing the path of the vehicle) over inaction (staying on the default path) even when the outcome of taking action is irrational 29 , 30 . The alternative explanation is a status quo bias, according to which decision-makers are more likely to preserve the default state of an outcome over change 11 , 31 . This would mean that people perceive sacrificing the passenger as the default state.

We test these alternative explanations by inverting the outcome of the previously introduced one-versus-one dilemma (see Fig.  3 ). This time, instead of heading straight for the pedestrian, the vehicle is heading straight for the barrier, sacrificing the passenger. The alternative course of action in this adapted dilemma leads to the death of the pedestrian. An increase in sacrificed pedestrians would provide evidence for an action bias, as this would show consistency with people’s decisions to choose the alternative path in the first study. In contrast, a further increase in sacrificed passengers would support a status quo bias, assuming the decision to sacrifice the passenger represents the status quo. In the case of no change in the observed preferences, the findings would suggest that the observed trend to sacrifice a single passenger over a single pedestrian reflects an unbiased moral preference.

Figure 3: The default path of the vehicle in Studies 1 and 2. Note: Own work. Image assets adapted from the Moral Machine ( http://moralmachine.mit.edu/ ) by Scalable Cooperation and MIT Media Lab [CC BY 4.0 ( https://creativecommons.org/licenses/by/4.0/ )].

Participants ( N  = 848; 51.9% females; age: M  = 33.02 years, SD  = 11.81) were randomly assigned to the same 3 (perspective: pedestrian, passenger, observer) x 2 (decision-making mode: deliberate, intuitive) between-subject conditions introduced in Study 1. As shown in Fig.  3 , the dilemma was adapted to feature the exact opposite outcomes of the dilemma used in Study 1. The remaining procedure was identical to Study 1, except that the questionnaire on participants’ beliefs about autonomous vehicles was shortened.

Results and Discussion

We compared participants’ decisions to sacrifice the pedestrian in Studies 1 and 2 (see Table 2) and found that in both alternative one-versus-one dilemmas, about one quarter of participants chose to sacrifice the pedestrian. Although the default paths in the two dilemmas lead to exactly opposite outcomes, the differences in people’s moral decisions across the individual conditions remain almost unchanged. The greatest deviation is seen in the control condition, which shifts from 38.1 percent to merely 29.1 percent in the deliberate decision-making condition. This suggests that people who were unbiased as to whom they would want to protect tended to spare the pedestrian even more. This shift appears only rational, since in Study 2, people would have to deliberately steer the vehicle to kill the innocent pedestrian.

In the logistic regression reported in Table 2, we probe the difference in participants’ decisions attributable to the difference in default outcomes by pooling the independent samples of Studies 1 and 2 (N = 1,212). Pooling increases the sample size and thereby the statistical power relative to Study 1 alone. The effect that we focus on, however, is the change in participants’ likelihood of choosing to sacrifice the pedestrian when the default path is changed to driving into the barrier. As shown in Table 2, the effect of this alternative default path is not significant (OR 0.87, 95% CI: 0.66–1.16, p = 0.346). The result contradicts both alternative explanations (action bias, which predicted an increase in the likelihood of sacrificed pedestrians, and status quo bias, which predicted the opposite). We therefore conclude that the decision to sacrifice the passenger is unrelated to the default outcome of the dilemma and likely represents a moral preference to avoid harming an innocent pedestrian in the street. Nevertheless, the simplified dilemma used in Studies 1 and 2 falls short of creating a trade-off that could determine the moral doctrine underlying people’s moral decisions. Study 3 follows up on this.
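A sketch of this pooling step, again on simulated stand-in data (sample sizes, rates, and names are illustrative); the indicator variable encodes which version of the dilemma a participant saw:

```r
# Simulated stand-ins for the two study samples (structure only).
set.seed(2)
sim_study <- function(n, default_path, p) {
  data.frame(sacrifice_pedestrian = rbinom(n, 1, p),
             mode = sample(c("deliberate", "intuitive"), n, replace = TRUE),
             default_path = default_path)
}
pooled <- rbind(sim_study(600, "toward_pedestrian", 0.28),  # Study 1 dilemma
                sim_study(612, "toward_barrier",    0.26))  # Study 2 dilemma

fit <- glm(sacrifice_pedestrian ~ default_path + mode,
           data = pooled, family = binomial)
exp(cbind(OR = coef(fit), confint.default(fit)))
# The paper reports OR 0.87 (95% CI: 0.66-1.16) for the alternative default path.
```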

Study 3: The Moral Doctrine

Study 3 builds on our previous findings and investigates people’s underlying moral decisions in a dilemma that is faced by an autonomous vehicle. To determine the presence of either a utilitarian or deontological moral doctrine, we adjust the previously introduced dilemma to become a closer representation of the original trolley dilemma 18 . That original dilemma, which has also been widely studied in the context of autonomous vehicle dilemmas 7 , 13 , presents people with a steep utility trade-off in choosing either to spare one person while sacrificing five others or kill one person to spare the lives of five. To introduce a utility trade-off in our dilemma, we therefore added a second passenger to the vehicle while keeping a single pedestrian in the crosswalk. Under the assumption that people employ a utilitarian moral doctrine, we would expect to see an increase of sacrificed pedestrians (in favor of saving two lives over one) compared with the previous one-versus-one dilemma. In the case of no change, the results would suggest that people employ a deontological moral doctrine motivated by the reasoning that it is simply not right to sacrifice an innocent person in the street.

The method in Study 3 was identical to that in our previous studies. Three hundred and ninety-three participants (50.6% females; age: M  = 38.10 years, SD  = 11.58) were recruited online and randomly assigned to the same 3 (perspective: pedestrian, passenger, observer) x 2 (decision-making mode: deliberate, intuitive) between-subjects conditions. The dilemma was based on that used in Study 2, but this time, it showed two passengers in the front seats of the autonomous vehicle and a single pedestrian in the crosswalk. The stimuli and instructions were adapted to reflect this change. The procedure remained identical to the previous studies.

Table  3 shows participants’ moral decisions to sacrifice the pedestrian in Studies 2 and 3, which were identical except for the number of passengers in the autonomous vehicle. What is notable is that except for the intuitive decisions from the pedestrian perspective, the percentage of sacrificed pedestrians increases in Study 3 compared to Study 2. The largest difference is observed for the pedestrian perspective, in which the preference, for the first time in our studies, shifted in favor of sparing the passengers. In the intuitive condition, only 4.3 percent of participants chose to sacrifice the pedestrian, whereas in the deliberate decisions, this number increased to 60.0 percent. This large difference between the two decision-making modes illustrates the cognitive load that the moral decision task puts on participants, highlighting how time to deliberate can dramatically shift people’s moral preferences 16 .

In a logistic regression on the pooled, independent samples of Studies 2 and 3 (N = 889), reported in Table 1, we estimate the change in the likelihood of choosing to sacrifice the pedestrian that can be attributed to the increase in the number of passengers. The results show that in the dilemma with two passengers, the likelihood of people sacrificing the pedestrian is 1.93 times higher than in the dilemma with the single passenger (OR 1.93, 95% CI: 1.37–2.73). This effect is highly significant (p < 0.001). The other experimental variables showed the same pattern seen in our previous studies.

The finding that the number of sacrificed pedestrians increases with the number of passengers is in line with previous research on the original trolley dilemma and in the context of autonomous vehicles 13 . The relatively higher utility of saving two lives over one appears to shift people’s moral decisions towards sacrificing the pedestrian relatively more often. This finding supports the hypothesis that people employ a utilitarian moral doctrine in a deliberate decision-making mode, while people in an intuitive, speedy decision-making mode rely on the more accessible deontological moral doctrine. People’s moral preferences in the latter decision-making mode further reflect the internalized social norm that pedestrians on public roads may not be harmed by drivers, which US citizens are taught from a young age 32 .

While people’s moral decisions in the present study trended towards sparing the lives of the two passengers, the prevalent choice remained in favor of the single pedestrian. The moral doctrine that guides this decision appears to be stronger than the utility trade-off that was created in favor of the passengers in the two-versus-one dilemma. This raises the question of what degree of utility trade-off would be necessary for people to prefer that an autonomous vehicle actually harms an innocent pedestrian to avoid the certain death of its passengers. It is also of particular interest whether the difference between people’s moral decisions in intuitive versus deliberate decisions will increase or eventually converge.

Study 4: The Value of Life

Study 4 builds on the previous finding that people’s moral decisions are influenced by the contextual factor of number of passengers in the presented dilemma. As seen in Study 3, the increased complexity of the dilemma (two-versus-one) leads to a larger difference in moral decisions between the two decision-making modes. In Study 4, we aim to further increase the complexity of the dilemma and thus the effects of decision-making mode and perspective by adding the passenger’s age as another contextual variable. In earlier research, age has been studied as a factor in moral dilemmas and connected to the value-of-life heuristic in people’s moral decision-making 13 . According to this research, the life of a younger person is valued over that of an older person, leading to a lower likelihood of sacrificing children in similar moral dilemmas. The recent publication of the global data of the Moral Machine experiment, however, limits this value-of-life observation to Western cultures; it shows that in Asian cultures, the trend is reversed (older people’s lives are more valued) 10 . Since we conducted this study in the US, we expect younger age to correspond to higher value and increase the decisions to spare the life of the younger person.

Participants ( N  = 428; 50.5% females; age: M  = 35.97 years, SD  = 11.96), recruited on MTurk, were randomly assigned to the same 3 (perspective: pedestrian, passenger, observer) x 2 (decision-making mode: deliberate, intuitive) between-subject conditions as introduced in our previous studies. The dilemma was adapted from Study 3; in Study 4, two passengers were shown in the vehicle, one of whom was a child sitting in the back seat (see Fig.  4 ).

Figure 4: The default path of the vehicle in Studies 3 and 4. Note: Own work. Image assets adapted from the Moral Machine ( http://moralmachine.mit.edu/ ) by Scalable Cooperation and MIT Media Lab [CC BY 4.0 ( https://creativecommons.org/licenses/by/4.0/ )].

The stimuli and instructions were adapted to reflect the presence of the child in the car. The procedure remained identical to that in the previous studies. The questionnaire on participants’ demographics was extended to capture whether participants were parents of children similar in age to the child in the dilemma (0–9 years).

First, we compare participants’ moral decisions in this study with those in the previous study, which used an otherwise identical dilemma but showed an adult passenger instead of a child. The results, presented in Table 4, show little difference in participants’ moral decisions to sacrifice the pedestrian. In fact, participants’ decisions in Study 4 are almost identical to those seen in Study 3, except for a change from 42.2 to 57.4 percent in sacrificed pedestrians in the deliberate decision-making condition in the observer perspective.

In a logistic regression on the pooled, independent samples of Studies 3 and 4 (N = 594), reported in Table 1, we determine the size of the effect that can be attributed to the presence of the child, rather than a second adult, among the passengers in the two-versus-one dilemma. The model reveals that the child does not significantly increase the likelihood of people choosing to sacrifice the pedestrian (OR 1.06, 95% CI: 0.94–1.20, p = 0.333). This result suggests that people did not attribute a higher value to the child than to the adult passenger.

In a second logistic regression, shown in Table 5, we probe the influence of the control variable (parenthood of young children) and second-order interactions. Model 1 shows that intuitive decisions lead to a significantly lower likelihood of participants sacrificing the pedestrian than deliberate decisions (OR = 0.30, CI: 0.18–0.49). The difference between the passenger and pedestrian perspectives is marginally significant (OR = 1.82, CI: 0.99–3.35), whereas the control condition does not differ significantly from the pedestrian perspective (OR = 1.69, CI: 0.90–3.18). When the interaction of perspective and decision-making mode is added (see Model 2, Table 5), the main effects of perspective diminish and the effect of decision-making mode becomes even stronger. Moreover, the results show that the likelihood of intuitive decisions to sacrifice the pedestrian is four times higher in the passenger perspective condition than in the pedestrian perspective condition. The same trend is observed in the interaction of decision-making mode with the control perspective (relative to the pedestrian perspective); however, this effect is only marginally significant. This result suggests that people are less protective of the child in the pedestrian condition than in the passenger and control conditions.
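Schematically, Model 2 corresponds to crossing the two experimental factors in the regression formula. The sketch below uses simulated data with illustrative names; it is not the authors’ code.

```r
# Simulated stand-in data for illustrating the interaction model.
set.seed(3)
n <- 430
d <- data.frame(
  sacrifice_pedestrian = rbinom(n, 1, 0.35),
  mode        = factor(sample(c("deliberate", "intuitive"), n, replace = TRUE)),
  perspective = factor(sample(c("pedestrian", "passenger", "observer"), n, replace = TRUE))
)

# Model 1: main effects only; Model 2: adds the mode x perspective interaction
m1 <- glm(sacrifice_pedestrian ~ mode + perspective, data = d, family = binomial)
m2 <- glm(sacrifice_pedestrian ~ mode * perspective, data = d, family = binomial)

anova(m1, m2, test = "Chisq")  # does the interaction improve model fit?
exp(coef(m2))                  # interaction terms expressed as odds ratios
```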

Lastly, we test for the effect of parenthood and its interaction with the main effects. Model 3 shows no significant difference between parents of young children and the rest of the sample, nor any significant interactions. This result suggests that parents do not project the consequences of the dilemma onto themselves and their own children. This, together with the aforementioned findings, supports the theory that people’s consideration of the value of life in moral dilemmas is driven by deliberate decision-making, which in turn is associated with a utilitarian moral doctrine and an absence of emotion 15 . Regardless, even in the deliberate decision-making mode, people’s decisions on whether one pedestrian or two passengers should be sacrificed were no better than a coin flip. The prevalent moral doctrine that deems it wrong to sacrifice an innocent pedestrian is surprisingly strong. Further increasing the utility trade-off would potentially shift the preference in favor of the passengers. In the next study, we investigate the influence of agency (or the illusion of agency) on people’s moral decisions in the autonomous vehicle dilemma.

Study 5: Agency Bias

Study 5 tests another alternative explanation for people’s moral decisions, specifically whether the prevalent decision to sacrifice the passenger is caused by the inference of passenger agency. The concept of agency refers to humans’ capacity to change their immediate environment, shape the course of their lives, and control their actions 33 . In this and other research on the moral dilemmas of autonomous vehicles, the artificial intelligence drives the vehicle and is in control of the situation. Accordingly, agency can be attributed to the autonomous vehicle, but not to the passenger who is driven by one. Nevertheless, in visual representations of the dilemmas, the passenger often sits in the driver’s seat, which, in a traditional, manually driven car, grants control over the vehicle and the outcome of the situation. Because people weigh attributions of responsibility in their moral decisions 34 , 35 , attributions of agency could bias those decisions. We test this hypothesis by comparing two otherwise identical dilemmas in which the passenger sits in either the “driver’s” seat or the back seat. In both dilemmas, the passenger’s actual agency is the same, so we should see no difference in people’s moral decisions. However, if the seating position facilitates a bias in the attribution of the passenger’s agency, we are likely to see a lower percentage of sacrificed passengers when the passenger is in the back seat.

Participants ( N  = 302; 56.3% females; age: M  = 37.05 years, SD  = 12.06), recruited on MTurk, were randomly assigned to 3 (perspective: pedestrian, passenger, observer) x 2 (seating position: front, back) between-subjects conditions. We used the simple one-versus-one dilemma introduced in Study 2, but we varied the passenger’s seating position. In the front-seat position condition, the passenger was in the left front seat. Note that this is the same seating position that was used in all our previous studies and that the illustration does not show a steering wheel. In the back-seat position condition, the passenger was in the middle back seat (see Fig.  5 ). The three perspective conditions remained identical to our previous studies. Participants were instructed that the vehicle was driven autonomously and that they were asked to choose the more appropriate outcome.

Figure 5: The default path of the vehicle for the front- and back-seat conditions in Study 5. Note: Own work. Image assets adapted from the Moral Machine ( http://moralmachine.mit.edu/ ) by Scalable Cooperation and MIT Media Lab [CC BY 4.0 ( https://creativecommons.org/licenses/by/4.0/ )].

Because the decision-making mode was not actively manipulated in this study, we controlled for response time in the logistic regression model. No participants were excluded from the analysis.

Table  6 shows the percentage of participants that chose to sacrifice the pedestrian for each of the six conditions. In all back-seat conditions, participants sacrificed the pedestrian more often (sparing the life of the passenger). The difference is lowest in the observer (control) condition and more than doubles in the passenger perspective. Pedestrian and passenger perspectives are nearly the same in the front-seat condition and differ largely in the back-seat condition.

The logistic regression on the Study 5 sample, reported in Table 2, finds that the passenger’s seating position in the back seat results in a 2.23 times higher likelihood of the pedestrian being sacrificed relative to the front-seat position (OR 2.23, 95% CI: 1.24–4.03). The effect is highly significant (p = 0.007) and supports the hypothesis that people’s moral decisions are affected by inferences about the passenger’s agency. This result suggests that people, despite our emphasis on the implications of the vehicle being autonomous, are biased by the attribution of agency, because they sacrifice the passenger more often when the passenger sits in the “driver’s” seat. A potential mechanism for this agency bias is found in previous research on the attribution of responsibility based on proximity to the immediate cause of action 36 , 37 , which would explain why the passenger in the driver’s seat is perceived as more responsible for the outcome than the passenger in the back seat.

Study 6: Social Norms Violation

Study 6 investigates the influence that social conventions, and specifically their violation, have on people’s moral decisions 38 . Social norms are conceptualized as rules that are shared by members of society, guide social behavior, and serve to coordinate societies 39 . In particular, we are interested in the effect that violating the social norm of not endangering others in traffic has on people’s decisions to sacrifice the pedestrian in the previously used one-versus-one moral dilemma. We added two alternative versions of the same dilemma, in which the pedestrian violates traffic norms either by crossing a street with no crosswalk or by jaywalking at a red light. We compare the results of the two norm conditions with the default dilemma, in which the pedestrian walks in the crosswalk, as expected by the social norm. We expect norm violations to increase the likelihood of the pedestrian being sacrificed, irrespective of people’s perspective.

Participants ( N  = 608; 59.4% females; age: M  = 35.07 years, SD  = 11.20) were randomly assigned to 3 (perspective: pedestrian, passenger, observer) x 3 (norm violation: low, high, control) between-subject conditions. The perspective conditions were manipulated in the same way as in the previous studies. The norm violation conditions consisted of low norm violation, high norm violation, and no norm violation (control) (see Fig.  6 ). In the low norm violation condition, the pedestrian walked in a street where there were no signs, traffic lights, or crosswalk. In the high norm violation condition, the pedestrian jaywalked at a red light. In the control condition, the pedestrian walked in the crosswalk, as in all previous studies. Again, all participants were included in the analyses, and the logistic regression model accounts for the effect of decision-making modes by controlling for participants’ response times.

Figure 6: The default path of the vehicle for the norm violation conditions in Study 6. Note: Own work. Image assets adapted from the Moral Machine ( http://moralmachine.mit.edu/ ) by Scalable Cooperation and MIT Media Lab [CC BY 4.0 ( https://creativecommons.org/licenses/by/4.0/ )].

Table  7 shows the percentage of participants that chose to sacrifice the pedestrian for each of the nine conditions. The results show that participants sacrificed the pedestrian less in the high norm violation condition than in the low norm violation and control conditions. The moral decisions in the control condition replicate the pattern observed in earlier studies.

The logistic regression on the Study 6 sample, reported in Table 2, contrasts both norm violation conditions with the control condition. The result shows that high norm violation (jaywalking) significantly reduces the likelihood of sacrificing the pedestrian (OR 0.61, 95% CI: 0.37–0.99, p = 0.047). The low norm violation, on the other hand, shows no significant effect on participants’ moral decisions (OR 0.82, 95% CI: 0.51–1.31, p = 0.402). This finding contradicts our hypothesis that norm violation would increase the likelihood of sacrificing the pedestrian. The result required further investigation; when we examined the stimuli, it became apparent that the traffic light in the high norm violation condition might have been interpreted as governing the autonomous vehicle instead of the pedestrian. In that case, the norm violation would be committed by the autonomous vehicle, which could explain the unexpected result. Given this ambiguity, we revisit the hypothesis with an improved, unambiguous stimulus for the manipulation of norm violation in Study 7.

Study 7: Systematic Replication

Study 7 revisits our previous hypotheses and replicates the effect of decision-making mode and personal perspective in a controlled lab experiment. Its objective is to increase the validity of our previous findings by controlling for the experimental conditions and replicating the findings in a different sample population. Besides the main manipulation of participants’ perspectives and decision-making mode, Study 7 tests four different situational factors, which were individually examined in Studies 2, 3, 4, and 6: the vehicle’s alternative default path (Study 2), the number of passengers (Study 3) and pedestrians, the presence of a child among the passengers (Study 4) and pedestrians, and a social norm violation by a pedestrian (Study 6). The agency bias addressed in Study 5 was not actively manipulated because the passengers’ seating positions depended on the number of passengers in the vehicle. In line with our previous studies, we expected that the alternative default path would not change participants’ moral decisions and that the number of passengers would increase the likelihood of sacrificing pedestrians and vice versa. Likewise, the presence of a child among the passengers would increase the likelihood of sacrificing pedestrians and vice versa. The pedestrian’s norm violation was expected to result in an increase of sacrificed pedestrians, in contrast to our finding in Study 6.

One hundred and twenty-eight participants aged between 19 and 62 years ( M  = 25.46, SD  = 6.42; 64.8% females) were recruited through the subject pool at a behavioral science lab in Denmark. Participants received 100 DKK (~15.78 USD) on completion of the study. The experiment ran continuously, and data collection was completed in one week.

We created 56 visual representations of the two alternative outcomes (sacrifice pedestrian[s], sacrifice passenger[s]) for a fractional factorial design of 28 dilemmas combining variations of the experimental situational factors (alternative default path, number of passengers [1, 2, or 4], number of pedestrians [1, 2, or 4], child among passengers, child among pedestrians, and social norm violation of pedestrian; see Supplementary materials). The fractional factorial design was computed in JMP. The stimulus illustrating the pedestrian’s norm violation (jaywalking) was adapted from Study 6 to allow an unambiguous interpretation of the red traffic lights (see Supplementary materials). The adjusted illustration showed three traffic lights, two red pedestrian traffic lights and a green traffic light for the vehicle, with the pedestrian walking in the middle of the street.
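For reference, the full crossing of the listed situational factors spans 144 candidate dilemmas, from which the 28-run fraction was selected (in JMP, per the text; the fraction-selection step is not shown here). A sketch of the design space in R, with factor names of our choosing:

```r
# Full design space of the situational factors; the study used a 28-dilemma
# fraction of this space, computed in JMP.
design_space <- expand.grid(
  default_path     = c("toward_pedestrians", "toward_barrier"),
  n_passengers     = c(1, 2, 4),
  n_pedestrians    = c(1, 2, 4),
  child_passenger  = c(FALSE, TRUE),
  child_pedestrian = c(FALSE, TRUE),
  norm_violation   = c(FALSE, TRUE)
)
nrow(design_space)  # 144 = 2 x 3 x 3 x 2 x 2 x 2 candidate dilemmas
```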

The experimental design for this study was a 2 × 3 × 28 mixed design, with decision-making mode (intuitive, deliberate) as the between-subjects factor and perspectives (pedestrian, passenger, control) and dilemmas (factorial design of five situational factors) as the within-subjects factors. Participants were randomly assigned to the decision-making mode and always started with the control perspective before proceeding to the two personal perspectives in randomized order. The presentation of the dilemmas was randomized for each perspective. Three perspectives with 28 dilemmas each totaled 84 moral decisions for each participant.

The decision-making mode was manipulated by means of time pressure (<5 seconds response time for intuitive decisions, <60 seconds response time for deliberate decisions) and aided by a countdown timer. The manipulation of time pressure resulted in significant differences in participants’ reported level of feeling pressured in the intuitive ( M  = 5.09, SD  = 1.38) versus deliberate ( M  = 4.04, SD  = 1.90) decision-making mode conditions (t(126) = −3.54, p  < 0.001, 95% CI: −1.629, −0.462). Participants adapted to the effect of time pressure in that the reported level of feeling pressured is lower after the personal perspective conditions (passengers: M  = 4.81, SD  = 1.77; pedestrians: M  = 4.90, SD  = 1.60) compared to the control condition ( M  = 5.55, SD  = 1.25). In contrast to the previous online studies, participants were only allowed to enter their choice within the time limit. When the timer ran out, the dilemma was skipped and participants were reminded to answer within the given time limit; skipped dilemmas repeated after a full set of 28 dilemmas, in randomized order, until decisions were successfully recorded for all dilemmas. Presentation order did not influence people’s moral decisions ( OR  = 1.00, 95% CI: 0.89–1.11, p  = 0.963).
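The reported manipulation check (t(126) = −3.54 with N = 128) corresponds to a pooled-variance, two-sided t-test on the felt-pressure ratings. A sketch on simulated ratings matching the reported group means and standard deviations (the equal group sizes are an assumption; the paper reports only the total N):

```r
# Simulated ratings with the reported group means and SDs; n = 64 per group
# is an assumption for illustration.
set.seed(4)
pressure <- c(rnorm(64, mean = 5.09, sd = 1.38),   # intuitive condition
              rnorm(64, mean = 4.04, sd = 1.90))   # deliberate condition
mode <- factor(rep(c("intuitive", "deliberate"), each = 64))

# Pooled-variance, two-sided t-test, matching the reported df of 126
t.test(pressure ~ mode, var.equal = TRUE)
```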

Participants’ moral decisions were recorded as their choice to sacrifice the pedestrian(s) or passenger(s) for each presented dilemma. Demographic and control variables included age, gender, education, residential area (city center, suburbs, countryside), use frequency of transportation means (bike, bus, car, train, walking, and airplane; measured on 5-point scales, 1 = “never,” 2 = “few times a year,” 3 = “few times a month,” 4 = “few times a week,” 5 = “daily”), possession of a driver’s license (yes, no), car ownership (none, one, two or more), experience with autonomous vehicles (yes, no) and knowledge of the term “autonomous vehicle” (yes, no). The postquestionnaire also included a series of exploratory measures unrelated to the purpose of this research (see Supplementary materials).

Approximately one week before the lab study, two hundred and eighteen participants (64.22% females; age: M  = 25.39, SD  = 6.27) filled out a short prequestionnaire. Participants evaluated the morality of six traditional, text-based dilemmas: three moral-personal and three moral-impersonal dilemmas (see Supplementary materials). The moral-impersonal dilemmas included the original trolley dilemma, in which a rail worker must decide whether to pull a lever that would kill one innocent person to save five trolley passengers 21 ; the moral-personal dilemmas included the original (foot-)bridge dilemma, in which the decision-maker must choose whether to push an innocent fat man in front of a trolley to save five people 18 . Participants’ moral judgements in these dilemmas served as a measure of their general morality in personal and non-personal dilemmas, which was entered as a control variable in the logistic regression analysis of the main study. All participants who successfully completed the pre-study were invited to the lab.

In the main experiment, participants of the same decision-making mode were grouped in batches to minimize noise and distraction due to a large difference in experiment duration (up to 15 minutes). The experiment started only after all participants were seated and ready. First, participants learned about the features of autonomous vehicles and were familiarized with the elements (i.e., car, pedestrian, passenger) used for the visual representation of the dilemmas (see Supplementary materials). To become familiar with the mechanics of responding under time pressure, participants in the intuitive decision-making condition then completed six non-dilemma-related training trials, in which the task was to select the illustration that showed more circles. Participants were then instructed to assume the observer perspective and respond to the first set of 28 dilemmas. Participants’ decisions were recorded as a selection of the outcome that they found more appropriate. An artificial loading screen of seven seconds in between dilemmas was used to reduce interference with moral decisions due to cognitive load 19 . In between perspectives, participants were forced to pause for another 60 seconds. After completing all three perspectives, participants completed a postquestionnaire containing measures on their beliefs, attitudes, and intentions towards autonomous vehicles.

All analyses were conducted only after completion of the data collection. Two-sided, independent-samples t-tests were used to assess differences in group means. Binary logistic regression was used to regress participants’ decision to sacrifice the pedestrian on the experimental conditions (decision-making mode, perspective, situational factors). All participants were included in the analyses.

Figure 7 shows the distribution of moral decisions for the six decision-making mode x perspective conditions, collapsed over the 28 dilemmas. In the control condition, pedestrians were sacrificed as often as passengers in the deliberate decision-making mode condition (M = 49.62%, SD = 50.01%) and less often in the intuitive decision-making mode condition (M = 38.75%, SD = 48.73%). In the pedestrian perspective condition, the pedestrian was sacrificed less on average, with a difference between the deliberate (M = 43.13%, SD = 49.54%) and intuitive (M = 32.14%, SD = 46.72%) decision-making mode conditions. The passenger perspective was the opposite, with more pedestrians sacrificed in both the deliberate (M = 57.85%, SD = 49.39%) and intuitive (M = 54.90%, SD = 49.77%) decision-making mode conditions. The difference between the two decision-making mode conditions is smaller in the passenger perspective condition than in the pedestrian perspective and control conditions.

Figure 7: Moral decision to sacrifice the pedestrian by individual’s perspective and decision-making mode in Study 7. Note: The dashed line marks the point at which lives of passenger(s) are valued equally to those of pedestrian(s).

Table 8 shows the logistic regression model of Study 7. In line with Study 1, the result shows that intuitive decision-making significantly lowers the likelihood of deciding to sacrifice the pedestrian (OR 0.71, 95% CI: 0.65–0.79). The likelihood of sacrificing the pedestrian is significantly higher in the passenger perspective (Perspective 1; OR 2.87, 95% CI: 2.57–3.21) and control (Perspective 2; OR 1.47, 95% CI: 1.20–1.79) conditions relative to the pedestrian perspective condition. Second, all situational factors of the dilemmas have significant and relatively large effects on people’s moral decisions. In line with Study 2, the alternative path of the vehicle does not alter the decision between the two outcomes (OR 1.08, 95% CI: 0.99–1.18, p = 0.095). In line with Study 3, a larger number of passengers increases the likelihood of sacrificed pedestrians; likewise, a larger number of pedestrians increases the likelihood of sacrificed passengers. In line with Study 4, the presence of children among the passengers significantly increases the likelihood of sacrificed pedestrians; likewise, the presence of children among the pedestrians increases the likelihood of sacrificed passengers. And finally, in line with the initial hypothesis of Study 6, the pedestrian’s norm violation results in a large and significant increase in the likelihood of being sacrificed (OR 3.63, 95% CI: 3.31–3.98). This finding supports our explanation that the result obtained in Study 6 was likely due to misinterpretation of the stimuli.

In regard to control variables, results show that participants’ age, car usage frequency, car ownership, and experience with autonomous vehicles significantly decrease the likelihood of sacrificing the pedestrian. Possession of a driver’s license, knowledge of autonomous vehicles, higher education, and living in the countryside, on the other hand, significantly increase the likelihood of participants sacrificing the pedestrian. Participants’ judgement of the appropriateness of sacrificing a single person to save many in the moral-personal text-based dilemmas is associated with a significant increase in the likelihood of sacrificing the pedestrian (OR 1.14, 95% CI: 1.06–1.23).

The findings of our study address all the previous hypotheses and provide converging evidence for the effects found in the previous studies. First and foremost, the findings demonstrate a strong and significant effect of decision-making mode on people’s moral decisions, thus supporting the evidence of previous research 16 . The effect shows that regardless of perspective and situational factors, people express an intuitive moral preference to sacrifice the autonomous vehicle and its passengers rather than harming pedestrians. Second, the results provide evidence that personal perspective significantly changes the moral decision on who should be sacrificed 7 . As discussed by Bonnefon et al. 7 , the differences in perspective contribute to a social dilemma in the moral programming of machines, as passengers favor sacrificing pedestrians and vice versa. Our study supports the notion of a personally biased morality and further shows that the perspective of the observer lies in between the two personally motivated perspectives.

In addition to these two experimental factors, Study 7 offers a wealth of information on the influence of situational factors. In line with our findings from Study 2, people appear to show neither status quo nor action bias. In line with the hypothesized influence of utility on people’s moral decisions, the findings clearly show the utilitarian doctrine being applied: people maximize the utility of lives saved by sacrificing the group with fewer people. Moreover, people consider age in the utility function and tend to spare the lives of children over those of adults. This further confirms the influence of a value-of-life heuristic in people’s moral decision-making, as shown by Sütfeld et al. 13 . Lastly, in line with our original hypothesis in Study 6, people’s decisions are strongly affected by the pedestrian’s norm violation. As shown in this study, regardless of all other factors, pedestrians who violate the norm of stopping at a red light are sacrificed considerably more often. This influence further points to the cultural embeddedness of people’s moral decisions: the violation of a norm, such as obeying traffic regulations, might be punished less severely in certain societies. Likewise, the valuation of utility and the value of life attached to it appear to be bound to cultural views, as highlighted by the large cross-country sample of the Moral Machine experiment 10 .

Robustness of Results

We subjected our data to an internal meta-analysis 40 to validate our findings on the effect of decision-making mode and personal perspective on people’s moral decisions for Studies 1–6. Study 7 was not included in the analysis because it featured a substantially different design, which, in contrast to the almost identical online studies, introduced too much variation across its 168 different conditions. Figure 8 shows the results of the main effects of decision-making mode and perspective. The simple effects are shown in Fig. 9. Summary information and coding of the contrasts are provided in the Supplementary materials. The I² was estimated at 69.90% (95% CI: 54.75%–79.98%), suggesting that heterogeneity across the studies is high. This was expected due to the considerable variation of situational factors across the online studies.
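As a rough illustration of how the reported heterogeneity statistic can be obtained, the sketch below computes Cochran’s Q and the conventional moment-based I² from a set of study-level estimates. The effect sizes and standard errors are invented placeholders, and the single-paper meta-analysis approach 40 used here involves a fuller hierarchical model than this simple estimator.

```python
# Rough illustration of Cochran's Q and the I^2 heterogeneity statistic
# for study-level effect estimates; the inputs are invented placeholders.
import numpy as np

effects = np.array([-0.62, -0.48, -0.71, -0.35, -0.55, -0.50])  # per-study contrasts
se = np.array([0.10, 0.12, 0.11, 0.14, 0.09, 0.13])             # standard errors

w = 1.0 / se**2                           # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)  # fixed-effect pooled estimate
Q = np.sum(w * (effects - pooled) ** 2)   # Cochran's Q
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100         # percent of variance from heterogeneity
print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%")
```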

Figure 8: Main effects estimates on the moral decision to sacrifice the pedestrian(s) for Studies 1–6. Note: Effect estimates are given by the squares for single-study estimates and the horizontal bars for SPM estimates; 50% and 95% intervals are given by the thick and thin lines, respectively. The average sample size per condition in each study is given by the size of the squares. The vertical dotted lines indicate the null effect.

Figure 9: Simple effects estimates for the moral decision to sacrifice the pedestrian for Studies 1–6. Note: Effect estimates are given by the squares for single-study estimates and the horizontal bars for SPM estimates; 50% and 95% intervals are given by the thick and thin lines, respectively. The average sample size per condition in each study is given by the size of the squares. The vertical dotted lines indicate the null effect.

As shown in Fig. 8, the experimental factor of intuitive decision-making significantly reduces the likelihood of sacrificed pedestrians, with an overall effect size of −0.54 (95% CI: −0.72 to −0.38; see Contrast 1). This effect is consistent with the finding in Study 7. Moreover, the results show a significant increase in sacrificed pedestrians for people in the passenger relative to the pedestrian perspective, with an overall effect size of 0.21 (95% CI: 0.08–0.34; see Contrast 2). Again, this effect is consistent with the findings in Study 7. Similarly, in line with Study 7, the observer (control) perspective lies in between the pedestrian and passenger perspectives. The observer perspective is not statistically different from the passenger perspective (0.03 [95% CI: −0.10 to 0.17]; see Contrast 3) but is statistically different from the pedestrian perspective (−0.17 [95% CI: −0.30 to −0.05]; see Contrast 4). This finding aligns with the results obtained in Study 7 and suggests that the more accessible perspective, even when people are asked only to observe the situation, is that of the passenger.

Figure 9 shows that the simple effects of personal perspective in the intuitive decision-making (Contrasts 8–10) and deliberate decision-making (Contrasts 5–7) mode conditions replicate the direction of the main effect of perspective (Contrasts 2–4). However, considerable variation can be observed across the six studies, which can be attributed to the studies that featured dilemmas with increased complexity (Studies 3 and 4). Moreover, the significant difference between perspectives in the online studies appears to be driven by the intuitive decision-making mode, as the results show that the perspectives are not significantly different in the deliberate decision-making mode conditions.

Taken together, the estimates obtained in the internal meta-analysis, along with the visual convergence of effects, are particularly reassuring because they reflect the consistency of the findings of the online studies (Studies 1–6) with those of the lab study (Study 7). More information, including contrast estimates and 95-percent intervals, is provided in the Supplementary materials.

General Discussion

The present research updates the current knowledge on people’s morality in the dilemmas faced by autonomous vehicles. The reported studies demonstrate that people’s moral decisions, regardless of the presented dilemma, are biased by their decision-making mode and personal perspective. Under intuitive moral decisions, participants shift more towards a deontological doctrine by sacrificing the passenger instead of the pedestrian. Once the personal perspective is made salient, participants preserve the lives associated with that perspective, i.e. the passenger shifts towards sacrificing the pedestrian, and vice versa. These effects are supported by a combined pool of more than ten thousand individual moral decisions across seven studies, which consistently find that the two moral decision-making biases cause substantial variations in people’s decisions.

The most prevalent effect – the distinct decision-making mode – is moderated by cognitive processing time. The less time people were permitted, the more their decisions trended towards a deontological moral doctrine. With the ability to process a given dilemma deliberately, people’s decisions trended towards a more utilitarian doctrine. Further, the investigation of people’s individual perspectives highlights that deliberately priming people to think as observers when judging moral dilemmas does not necessarily remove bias from their decisions. The results show that people’s decisions in the control condition (instructed to use the perspective of an observer) are almost identical to those in the passenger perspective. While this finding can be explained by the majority of our sample being frequent car users, it highlights that moral decisions gathered from representative samples of the US population are likely to overrepresent the passenger perspective and underrepresent the pedestrian perspective. As a result, personal perspectives represent a major bias in people’s moral decisions and underline the social challenge in the design of a universal moral code for autonomous vehicles 7 .

Our research further validates and extends previous findings on people’s moral decisions that have been identified as global moral preferences and as guided by a series of readily accessible and culturally acquired mental shortcuts 10 . First, our results show that people tend to maximize the number of saved lives, even if an innocent pedestrian is sacrificed in the process. This consistent preference for saving the lives of the many supports the global moral preference observed in the Moral Machine experiment 10 and reflects the utilitarian moral doctrine in terms of the original trolley problem 18 . Second, our research provides further evidence that the life of a younger person is valued over that of an older one – as was shown to be the case in Western cultures, including the US and Denmark 10 . This moral decision appears to be independent of whether the decision-maker is a parent. In addition, our findings show that the prevalent moral doctrine that deems it wrong to sacrifice an innocent pedestrian is surprisingly strong. In fact, the results reveal the limits of this valuation: the younger life of a passenger is not necessarily valued over that of a single innocent adult pedestrian. Lastly, the results of our studies show that when a social norm is violated (i.e., pedestrians jaywalk in front of the autonomous vehicle), people trend towards favoring a deontological doctrine and collectively punish the norm violation. This reflects the moral preference to spare the lawful, also found in the Moral Machine experiment 10 . A complication in inferring collectively perceived norm violations, highlighted by our results, is that people are biased towards overattributing agency to the passengers.

Implications

From a theoretical perspective, this research provides supporting evidence for the role of dual-process theory in people’s moral decision-making. It demonstrates not only that two distinct decision-making modes can be induced by limiting the available time to process decisions but also that the response time itself – in the absence of experimental manipulation – results in a significant difference in people’s moral decisions. In line with earlier studies, the distinct route towards making the moral decision is found to alter people’s use of moral doctrines (deontological vs. utilitarian), which are associated with deliberate thinking (for utilitarian moral decisions) and intuitive thinking (for deontological moral decisions) 13 , 16 , 27 . In addition, this research contributes to the theory of bounded rationality by demonstrating that people’s consideration of a personal perspective in the moral dilemmas of autonomous vehicles leads to biased moral decisions that favor positive outcomes for their perspective.

From a practical standpoint, this research addresses critical aspects of the approach of inferring moral guidelines for autonomous vehicles from people’s moral decisions. For manufacturers of autonomous vehicles and ambitious projects such as the Moral Machine experiment, these findings on human decision-making biases imply that sourcing people’s moral preferences on a large scale requires developing a standardized and reliable instrument that actively controls for participants’ personal perspective and decision-making mode. The measurement of universal moral guidance should evenly balance moral decisions from all stakeholders who are directly affected by the outcome of the vehicle’s decision. Moreover, the instrument should force participants into one of the two decision modes, as this step primarily determines the moral doctrine that will be used. This choice, of course, can be moral but also strategic. If a manufacturer aims to elicit more norm-driven, emotional responses, this can easily be achieved by limiting people’s cognitive processing time.

For policy makers, this research offers an interesting insight into the moral trade-off between people’s rationale of utility and social norms. It suggests that the expectation of morally acting autonomous vehicles implies that while they should make computed decisions to maximize the good of society, they should simultaneously take situational factors into account. In this context, people seem to expect autonomous vehicles to become agents that enforce widely accepted social norms (i.e., obeying traffic regulations) and therefore favor punishing norm violations in critical incidents. This further implies that the development of moral autonomous vehicles must be closely aligned with the accepted (or enforced) norms.

Future Research

While these findings may be very useful to researchers and companies interested in understanding the moral decision-making biases of people in the realm of autonomous vehicle dilemmas, the present research has limitations that warrant discussion and offer avenues for future research.

First, the dilemma and its variations used in this and other research in this field represent modern versions of the original trolley problem and thus exaggerate the moral decisions that autonomous vehicles will face in their everyday routines. While previous fatal incidents with autonomous vehicles highlight the importance of this extreme scenario 41 , we suggest that future research should focus on the more practical and more likely dilemmas that occur in everyday use, including the degree to which an autonomous vehicle should maneuver more aggressively in heavy traffic or when overtaking slow vehicles on the highway, speed to a destination when the passenger has a serious condition or needs to catch a connection, or adapt to the preferences and commands of its user, such as disregarding traffic regulations or roaming the streets without a specific destination.

Second, the dilemmas studied use definitive outcomes for the two alternatives – either the passengers or the pedestrians will die, while the others live. While this is the default assumption of the trolley dilemma and contrasts the possible extremes, in the real world the odds would never be perfectly even. In fact, it is reasonable to assume that future automated, high-tech vehicles will provide superior safety mechanisms to protect passengers against potential incidents or software malfunctions and that passengers are therefore more likely to survive a critical incident. To create a better and more realistic picture, it is important that future studies provide decision-makers with a more realistic distribution of the probabilities of survival in moral dilemmas 11 .
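As a simple illustration of what such a probabilistic dilemma could look like, the sketch below compares expected fatalities for the two maneuvers under unequal survival probabilities; all counts and probabilities are purely illustrative assumptions, not empirical estimates.

```python
# Toy comparison of expected fatalities when survival is probabilistic
# rather than certain; all counts and probabilities are invented.
n_passengers, n_pedestrians = 2, 3
p_passenger_dies_if_swerve = 0.4  # occupants benefit from crash protection
p_pedestrian_dies_if_stay = 0.9   # unprotected road users fare far worse

expected_if_swerve = n_passengers * p_passenger_dies_if_swerve  # 0.8
expected_if_stay = n_pedestrians * p_pedestrian_dies_if_stay    # 2.7

print(f"Swerve into barrier: {expected_if_swerve:.1f} expected deaths")
print(f"Stay on course:      {expected_if_stay:.1f} expected deaths")
```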

Lastly, this research investigates the moral decision-making bias that is attributed to people’s personal perspectives. While we find a consistent pattern that is independent of various situational factors, future research could benefit greatly from an investigation of the motivations behind individuals’ moral decisions in the context of the use of autonomous vehicles. It would be particularly interesting to study to what degree people who are biased by a certain personal perspective are willing to accept a different, possibly opposing, perspective. Testing interventions that move people to voluntarily agree to a common, universal moral doctrine would be of interest to practitioners and researchers in this field.

Ethical Statement

All methods were carried out in accordance with relevant guidelines and regulations. Written informed consent was obtained from all subjects or, if subjects were under 18, from a parent and/or legal guardian. All subjects were informed in advance about the purpose, tasks, foreseeable risks or discomforts, benefits, confidentiality, expected duration, and researchers’ contact information. All members of the research team obtained ethics certification from the National Institutes of Health (NIH). All experimental protocols were subject to approval by the COBE Human Subjects Committee.

1. SAE International. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, http://standards.sae.org/j3016_201609/ (2016).

2. U.S. Department of Transportation. Automated Driving Systems 2.0: A Vision for Safety, https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf (2017).

3. Pillath, S. Automated vehicles in the EU, http://www.europarl.europa.eu/RegData/etudes/BRIE/2016/573902/EPRS_BRI(2016)573902_EN.pdf (European Parliament, 2016).

4. Kockelman, K. et al. Implications of Connected and Automated Vehicles on the Safety and Operations of Roadway Networks: A Final Report, https://library.ctr.utexas.edu/ctr-publications/0-6849-1.pdf (Center for Transportation Research, The University of Texas, 2016).

5. Silberg, G. et al. Self-driving cars: The next revolution, https://staff.washington.edu/jbs/itrans/self_driving_cars[1].pdf (KPMG LLP & Center of Automotive Research, 2012).

6. Thompson, C. Why driverless cars will be safer than human drivers. Business Insider Nordic, https://www.businessinsider.de/why-driverless-cars-will-be-safer-than-human-drivers-2016-11 (2016).

7. Bonnefon, J.-F., Shariff, A. & Rahwan, I. The social dilemma of autonomous vehicles. Science 352, 1573 (2016).

8. Althauser, J. Moral programming will define the future of autonomous transportation. VentureBeat, https://venturebeat.com/2017/09/24/moral-programming-will-define-the-future-of-autonomous-transportation/ (2017).

9. Himmelreich, J. The everyday ethical challenges of self-driving cars. The Conversation, https://theconversation.com/the-everyday-ethical-challenges-of-self-driving-cars-92710 (2018).

10. Awad, E. et al. The Moral Machine experiment. Nature, https://doi.org/10.1038/s41586-018-0637-6 (2018).

11. Meder, B., Fleischhut, N., Krumnau, N.-C. & Waldmann, M. R. How Should Autonomous Cars Drive? A Preference for Defaults in Moral Judgments Under Risk and Uncertainty. Risk Analysis 39, 295–314 (2019).

12. Noothigattu, R. et al. A Voting-Based System for Ethical Decision Making. arXiv preprint arXiv:1709.06692 (2017).

13. Sütfeld, L. R., Gast, R., König, P. & Pipa, G. Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure. Frontiers in Behavioral Neuroscience 11 (2017).

14. Shariff, A., Bonnefon, J.-F. & Rahwan, I. Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour, https://doi.org/10.1038/s41562-017-0202-6 (2017).

15. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M. & Cohen, J. D. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science 293, 2105–2108 (2001).

16. Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E. & Cohen, J. D. Cognitive load selectively interferes with utilitarian moral judgment. Cognition 107, 1144–1154 (2008).

17. Rand, D. G. Cooperation, Fast and Slow: Meta-Analytic Evidence for a Theory of Social Heuristics and Self-Interested Deliberation. Psychological Science 27, 1192–1206 (2016).

18. Thomson, J. J. The Trolley Problem. Yale Law Journal 94, 1395–1415 (1985).

19. Kahneman, D. Thinking, Fast and Slow (Macmillan, 2011).

20. Evans, J. S. B. T. & Curtis-Holmes, J. Rapid responding increases belief bias: Evidence for the dual-process theory of reasoning. Thinking & Reasoning 11, 382–389 (2005).

21. Stanovich, K. E. & West, R. F. Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences 23, 645–665 (2000).

22. Epstein, S. & Pacini, R. Some basic issues regarding dual-process theories from the perspective of cognitive–experiential self-theory. In Dual-Process Theories in Social Psychology, 462–482 (Guilford Press, 1999).

23. Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M. & Cohen, J. D. The Neural Bases of Cognitive Conflict and Control in Moral Judgment. Neuron 44, 389–400 (2004).

24. Simon, H. A. Theories of bounded rationality. Decision and Organization 1, 161–176 (1972).

25. March, J. G. Bounded Rationality, Ambiguity, and the Engineering of Choice. The Bell Journal of Economics 9, 587–608 (1978).

26. Kahneman, D. Maps of Bounded Rationality: Psychology for Behavioral Economics. The American Economic Review 93, 1449–1475 (2003).

27. Greene, J. D. Dual-process morality and the personal/impersonal distinction: A reply to McGuire, Langdon, Coltheart, and Mackenzie. Journal of Experimental Social Psychology 45, 581–584 (2009).

28. Oatley, K. & Johnson-Laird, P. N. Towards a Cognitive Theory of Emotions. Cognition and Emotion 1, 29–50 (1987).

29. Bar-Eli, M., Azar, O. H., Ritov, I., Keidar-Levin, Y. & Schein, G. Action bias among elite soccer goalkeepers: The case of penalty kicks. Journal of Economic Psychology 28, 606–621 (2007).

30. Patt, A. & Zeckhauser, R. Action Bias and Environmental Decisions. Journal of Risk and Uncertainty 21, 45–72 (2000).

31. Kahneman, D., Knetsch, J. L. & Thaler, R. H. Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias. Journal of Economic Perspectives 5, 193–206 (1991).

32. Levy, D. T. Youth and traffic safety: The effects of driving age, experience, and education. Accident Analysis & Prevention 22, 327–334 (1990).

33. Bandura, A. Toward a Psychology of Human Agency. Perspectives on Psychological Science 1, 164–180 (2006).

34. De Groot, J. I. M. & Steg, L. Morality and Prosocial Behavior: The Role of Awareness, Responsibility, and Norms in the Norm Activation Model. The Journal of Social Psychology 149, 425–449 (2009).

35. Sneijder, P. & te Molder, H. F. M. Moral logic and logical morality: Attributions of responsibility and blame in online discourse on veganism. Discourse & Society 16, 675–696 (2005).

36. Brickman, P., Ryan, K. & Wortman, C. B. Causal chains: Attribution of responsibility as a function of immediate and prior causes. Journal of Personality and Social Psychology 32, 1060–1067 (1975).

37. Johnson, J. T. & Drobny, J. Proximity biases in the attribution of civil liability. Journal of Personality and Social Psychology 48, 283–296 (1985).

38. Turiel, E. The Development of Social Knowledge: Morality and Convention (Cambridge University Press, 1983).

39. Cialdini, R. B. & Trost, M. R. Social Influence: Social Norms, Conformity and Compliance (McGraw-Hill, 1998).

40. McShane, B. B. & Böckenholt, U. Single-Paper Meta-Analysis: Benefits for Study Summary, Theory Testing, and Replicability. Journal of Consumer Research 43, 1048–1063 (2017).

41. Wakabayashi, D. Uber Ordered to Take Its Self-Driving Cars Off Arizona Roads. The New York Times, https://www.nytimes.com/2018/03/26/technology/arizona-uber-cars.html (2018).


Acknowledgements

This research received funding from the Interacting Minds Center (IMC), Aarhus University (Seed number: 26122).

Author information

Authors and Affiliations

Department of Management, Aarhus University, Aarhus, Denmark

Darius-Aurel Frank, Polymeros Chrysochou & Panagiotis Mitkidis

Ehrenberg-Bass Institute for Marketing Science, School of Marketing, University of South Australia, South Australia, Australia

Polymeros Chrysochou

Center for Advanced Hindsight, Duke University, Durham, United States

Panagiotis Mitkidis & Dan Ariely


Contributions

D.A.F., P.C. and P.M. developed the study concept and design. D.A.F. collected and analyzed the data. All authors interpreted the data. D.A.F. led the writing of the manuscript. D.A.F., P.C., P.M. and D.A. provided critical revisions. All authors approved the manuscript.

Corresponding author

Correspondence to Darius-Aurel Frank.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Frank, D.-A., Chrysochou, P., Mitkidis, P. et al. Human decision-making biases in the moral dilemmas of autonomous vehicles. Sci Rep 9, 13080 (2019). https://doi.org/10.1038/s41598-019-49411-7


Received: 06 September 2018

Accepted: 24 August 2019

Published: 11 September 2019

DOI: https://doi.org/10.1038/s41598-019-49411-7





Moral Development in Business Ethics: An Examination and Critique

  • Review Paper
  • Open access
  • Published: 18 November 2019
  • Volume 170, pages 429–448 (2021)


  • Kristen Bell DeTienne 1 ,
  • Carol Frogley Ellertson 2 ,
  • Marc-Charles Ingerson 3 &
  • William R. Dudley 1  

The field of behavioral ethics has seen considerable growth over the last few decades. One of the most significant concerns facing this interdisciplinary field of research is the moral judgment-action gap. The moral judgment-action gap is the inconsistency people display when they know what is right but do what they know is wrong. Much of the research in the field of behavioral ethics is based on (or in response to) early work in moral psychology and American psychologist Lawrence Kohlberg’s foundational cognitive model of moral development. However, Kohlberg’s model of moral development lacks a compelling explanation for the judgment-action gap. Yet, it continues to influence theory, research, teaching, and practice in business ethics today. As such, this paper presents a critical review and analysis of the pertinent literature. This paper also reviews modern theories of ethical decision making in business ethics. Gaps in our current understanding and directions for future research in behavioral business ethics are presented. By providing this important theoretical background information, targeted critical analysis, and directions for future research, this paper assists management scholars as they begin to seek a more unified approach, develop newer models of ethical decision making, and conduct business ethics research that examines the moral judgment-action gap.


Scandals in business never seem to end. Even when one scandal seems to finally end, another company outdoes the prior disgraced company and dominates the public dialog on corporate ethics (cf. Chelliah and Swamy 2018; Merle 2018). So, what is happening here? One common issue shows up repeatedly in cases of unethical behavior: that of knowing what is right yet doing what is wrong. This failure is classically understood as the moral judgment-moral action gap.

A main goal of behavioral business ethics is to understand the primary drivers of good and bad ethical decision making (Treviño et al. 2014 ). The hope is that with a better understanding of these drivers, organizations can implement structures that lead to more frequent and consistent ethical behavior by employees. However, business scholars are still working to discover what actually spurs ethical behaviors that improve profit maximization and corporate social performance.

This focus on understanding ethical decision making in business in a way that bridges the moral judgment-moral action gap has experienced an explosion of interest in recent decades (Bazerman and Sezer 2016 ; Paik et al. 2017 ; Treviño et al. 2014 ). These types of studies constitute a branch of behavioral ethics research that incorporates moral philosophy, moral psychology, and business ethics. These same interdisciplinary scholars seek to address questions about the fundamental nature of morality—and whether the moral has any objective justification—as well as the nature of moral capacity or moral agency and how it develops (Stace 1937 ). These aims are similar to those of prior moral development researchers.

However, behavioral business ethicists sometimes approach these aims without the theoretical or philosophical background that can be helpful in grappling with problems like the judgment-action gap (Painter-Morland and Werhane 2008 ). Therefore, this article provides a useful reference for behavioral business ethics scholars on cognitive moral development and its indirect but important influence on research today.

The first goal of this paper is to examine the moral development theory in behavioral business ethics that comes first to mind for most laypersons and practitioners—the cognitive approach. At the forefront of the cognitive approach is Kohlberg (1969, 1971a, b, 1981, 1984), with his studies and theorizing on the development of moral reasoning. We also examine subsequent support for and critiques of the approach, as well as reactions to its significant influence on business ethics teaching, research, and practice. We also examine the affective approach by reviewing the work of Haidt (2001, 2007, 2009), Bargh (1989, 1990, 1996, 1997), and others.

We then consider research that moves away from this intense historical debate between cognitive and affective decision making and may be better suited to understanding moral development and helping to bridge the moral judgment-moral action gap. For example, some behavioral ethics researchers bracket thinking and feeling and have explored a deeper approach by examining the brain’s use of subconscious mental shortcuts (Gigerenzer 2008; Sunstein 2005). In addition, virtue ethics and moral identity scholars focus on how individuals in organizations develop certain qualities that become central to their identity and motivate their moral behavior, not by focusing on cognition or affect but by focusing on the practice of behavioral habits (Blasi 1993, 1995, 2009; Grant et al. 2018; Martin 2011). Each of these groups of behavioral ethics researchers has moved the discussion of moral development forward using theorizing that rests on different—and often competing—assumptions.

In this article, we seek to make these various theories of moral development explicit and to bring different theories face to face in ways that are rarely discussed. We show how some of the unrelated theories seem compatible and how some of the contrasting theories seem irreconcilable. The comparisons and conflicts will then be used to make recommendations for future research that we hope will lead to greater unity in theorizing within the larger field of business ethics.

In other words, the second goal of this paper is to provide a critical theoretical review of the most pertinent theories of Western moral development from moral psychology and to highlight similarities and differences among scholars with respect to their views on moral decision making. We hope this review and critique will be helpful in identifying what is best included in any future unified theory of moral decision making in behavioral ethics, one that could actually bridge the moral judgment-moral action gap in practice.

The third goal of our paper is to question common assumptions about the nature of morality by making them explicit and analyzing them (Martin and Parmar 2012 ). Whetten ( 1989 ) notes the importance of altering our thinking “in ways that challenge the underlying rationales supporting accepted theories” (p. 493). Regarding the field of business ethics specifically, O’Fallon and Butterfield ( 2005 ) found that a major weakness in the business ethics literature is a lack of theoretical grounding—and we believe this concern still requires attention. In addition, Craft ( 2013 ) notes that “perhaps theory building is weak because researchers are reluctant to move beyond the established theories into more innovative territory” (p. 254). As recommended by Whetten ( 1989 ), challenging our assumptions in the field of behavioral ethics will help us conduct stronger, more compelling research that will have a greater impact on the practice of business ethics.

For example, many business and management scholars are heavily influenced by long-held assumptions reflected in the work of Lawrence Kohlberg ( 1969 , 1971a , b ), one of the most prominent theorists of ethical decision making (Hannah et al. 2011 ; Treviño 1986 ; Treviño et al. 2006 ; Weber 2017 ; Zhong 2011 ). Like Sobral and Islam ( 2013 ), we call upon researchers to move beyond these assumptions. We will review a selection of research that explores alternate ideas and leaves past assumptions behind, leading to unique outcomes, which are of value to the field of management. Thus, in addition to making long-held assumptions clear, we will present critical analysis and alternative ways of thinking to further enhance the scientific literature on the topic.

To accomplish this third goal, we will discuss links between definition, theory, and empirical study. This method of analysis is demonstrated by Fig.  1 .

Figure 1: Model for analysis of moral theory

Our fourth and final goal is to note gaps in our current understanding of ethical decision making and to present directions for future research. We discuss these opportunities throughout the paper and more specifically in our summary.

To accomplish these four goals, we begin with a review of the moral judgment-action gap and Greek and Kantian philosophy. After laying this theoretical background as a foundation for our discussion, we move deeper into a critical analysis. To begin this critical analysis, we discuss Piaget and Kohlberg, and the implications of their approaches. We then consider the Neo-Kohlbergian, Moral Identity, and Moral Domain research. The final section analyzes Moral Automaticity, Moral Schemas, and Moral Heuristics Research, as outlined in Fig.  2 .

Figure 2: Visual summary of review

Moral Judgment-Action Gap

As mentioned above, behavioral ethics research indicates that the mere ability to reason accurately about moral issues predicts surprisingly little about how a person will actually behave ethically (Blasi 1980 ; Floyd et al. 2013 ; Frimer and Walker 2008 ; Jewe 2008 ; Walker and Hennig 1997 ). This ongoing failure is not for a lack of many thoughtful attempts on the part of researchers (Wang and Calvano 2015 ; Williams and Gantt 2012 ). The predictive failure has led to expressions of disappointment and frustration from scholars (Bergman 2002 ; Blasi 1980 ; Thoma 1994 ; Walker 2004 ).

The gap has led to a call for a more integrated and interdisciplinary approach to the problem in business ethics (De Los Reyes Jr et al. 2017). In agreement with that call for greater integration, we suggest that if business scholars and practitioners are to move the work on the moral judgment-moral action gap forward, it will be helpful to return to the historical embeddedness of this gap problem.

Philosophical Background: Greeks and Kant

The study of ethics is concerned with the question of “what is right?” Greek philosophers such as Socrates, Plato, and Aristotle examined issues such as right versus wrong and good versus bad. For these philosophers, morality found its meaning in the fact that it served to achieve personal needs and desires for happiness, avoid harm, and preserve goods required for the well-being of individuals and society. These goods include truth, safety, health, and harmony, and are maintained by moral, virtuous behavior. We call this a teleological approach because of its focus on results rather than on rules governing behavior (Lane 2017 ; Parry 2014 ).

One of the first of these moral philosophers was Socrates (470–399 B.C.). Socrates believed that through rational processes or reasoning we can discern truth, including universal moral truth. Thus, Socrates taught that a person’s essential moral function is to act rationally. He taught that “to know the good is to do the good” (Stumpf and Fieser 2003, p. 42), meaning that if an individual knows what is right, he or she will do it. On the other hand, Socrates acknowledged that humans frequently commit acts that they know to be wrong. The Greeks called this phenomenon—knowing what is right but failing to act on that knowledge—akrasia. Akrasia, from the ancient Greek perspective, is what leads to wrong or evil doing (Kraut 2018).

Another perspective that will be helpful later on in our examination of current literature is that of Aristotle. Regarding moral functioning, Aristotle focused on the development of and reliance on virtues: qualities, such as courage, that motivate a person’s actions (Kraut 2018 ). These virtues are developed through social influences and practice, and they become an essential part of who a person is. Thus, rather than learning to reason about actions and their results, as Socrates would emphasize as the core of moral functioning, Aristotle emphasizes virtues that a person possesses and that motivate ethical behavior (Kraut 2018 ).

Like Socrates, German philosopher Immanuel Kant ( 1785/1993 ) claimed that moral judgment is a result of reasoning. However, rather than taking a teleological approach to morality, he held to deontological views. For Kant, moral behavior is defined by an overarching obligation or duty to comply with strict universal principles, valid independent of any empirical observation. According to this deontological view, an action is right or wrong in and of itself, not as defined by end results or impact on well-being. Simply put, people are obligated out of duty to perform certain moral actions (Johnson 2008 ; Kant 1785/1993 ). In summary, for Socrates, Aristotle, and Kant, the emphasis is on knowledge and cognition.

Modern Influences: Piaget and Kohlberg

This reliance on knowledge and cognition continued from Socrates to Kant and on to the American moral psychologist Kohlberg (1927–1987). Kohlberg advocated a theory that sought to describe how individuals mature in their abilities to make moral decisions.

Before discussing Kohlberg further, we note that his work has had an enormous impact on academic research as a whole. His research has been cited over 70,000 times. In the last 5 years alone, he has been cited between 2000 and 3500 times each year. Within business, his theory of cognitive moral development is widely discussed, commonly used as a basis for research, and frequently covered in the standard textbooks for business ethics courses. Thus, to say that the field has moved past him is to deny the reality of the literature and experience of business ethics as a whole. With that in mind, any careful examination of how to better bridge the moral judgment-moral action gap in behavioral ethics must address Kohlberg’s ideas.

Socrates’ belief that “to know the good is to do the good,” which reflects the importance in Greek thought of arriving at truths through reasoning, influenced Kohlberg’s emphasis on the chief role of rationality as the arbiter for discerning moral universals (Stumpf and Fieser 2003 , p. 42). Kohlberg also embraced Aristotle’s notion that social experiences promote development by stimulating cognitive processes. Moreover, his emphasis on justice morality reflects Aristotle’s claims that virtues function to attain justice, which is needed for well-being, inner harmony, and the moral life.

Kohlberg’s thinking was heavily influenced by Jean Piaget, who believed that children develop moral ideas in a progression of cognitive development. Piaget held that children develop judgments—through experience—about relationships, social institutions, codes of conduct, and authority. Social moral standards are transmitted by adults, and the children participate “in the elaborations of norms instead of receiving them ready-made,” thus creating their own conceptions of the world (Piaget 1977 , p. 315).

According to Piaget, children develop a moral perception of the world, including concepts of fairness and ideas about right and wrong. These ideas do not originate directly from teaching. Often, children persist in these beliefs even when adults disagree (Gallagher 1978 ). In his theory of morality, presented in The Moral Judgment of the Child, Piaget philosophically defined morality as universal and obligatory (Ash and Woodward 1987 ; Piaget 1977 ). He drew on Kantian theory, which emphasized generating universal moral maxims through logical, rational thought processes. Thus, he rejected equating cultural norms with moral norms. In other words, he rejected the moral relativity that pervaded most research in human development at the time (Frimer and Walker 2008 ).

In the tradition of Piaget’s four stages of cognitive development, Kohlberg launched contemporary moral psychology with his doctoral dissertation in 1958. His structural development model holds that the stages of moral development emerge from a person’s own thoughts concerning moral issues. Kohlberg believed that social experiences play a part in moral development by stimulating our mental processes. Thus, moral behavior is rooted in moral and ethical cognitive deliberation (Kohlberg 1969; Levine et al. 1985).

Kohlberg investigated how people justify their decisions in the face of moral dilemmas. Their responses to these dilemmas established how far a person had progressed through the stages of moral development. He outlined six discrete stages of moral reasoning within three overarching levels of moral development (Kohlberg 1971a), outlined in Table 1 below. These stages were centered on cognitive reasoning (or rationality).

Kohlberg claimed that the moral is manifest within the formulation of moral judgments that progress through stages of development and could be demonstrated empirically (Kohlberg 1971a, b). In this way, Kohlberg shifted the paradigm for moral philosophy and moral psychology. Up to this point, from the modern, Western perspective, most empirical studies of morality were descriptive (Lapsley and Hill 2009). Most research chronicled how various groups of peoples lived their moral lives and what the moral life consisted of, not what universal moral principles should constitute moral life. Kohlberg made the bold claim that individuals should aspire to certain universal principles of moral reasoning and, furthermore, that these principles could be laid bare through rigorous scientific investigation.

According to Kohlberg, an individual’s moral reasoning begins at stage one and develops progressively to stage two, then stage three, and so on, in order. Movement from one level to the next entails re-organization of a form of thought into a new form.
Not everyone can progress through all six stages. According to Kohlberg, it is quite rare to find people who have progressed to stage five or six; he emphasized that his idea of moral development stages was not synonymous with maturation (Kohlberg 1971a). That is, the stages do not simply arise based on a genetic blueprint. Neither do they develop directly from socialization. In other words, new thinking strategies do not come from direct instruction, but from active thinking about moral issues. The role of social experiences is to prompt cognitive activity. Our views are challenged as we discuss or contend with others. This process motivates us to invent more comprehensive opinions, which reflect more advanced stages of moral development (cf. Kohlberg 1969).

Reflecting Piaget and thus Kantian ethics, Kohlberg claimed that his stages of moral development are universal. His sixth stage of moral development (the post-conventional, universal principles level) occurs when reasoning includes abstract ethical thinking based on universal principles.

For Kohlberg, moral development consisted of transformations in a person’s thinking – not an increased knowledge of cultural values that leads to ethical relativity, but a maturing knowledge of the structures of moral judgment found universally in developmental sequences across cultures (Kohlberg and Hersh 1977). In other words, Kohlberg sought to eliminate moral relativism by advocating for the universal application of moral principles. According to him, the norms of society should be judged against these universal standards. Thus, Kohlberg sought to demonstrate empirically that specific forms of moral thought are better than others (Frimer and Walker 2008; Kohlberg 1971a, b).

Lapsley and Hill (2009) discuss the far-reaching ramifications of how Kohlberg moralized child psychology: “He committed the ‘cognitive developmental approach to socialization’ to an anti-relativism project where the unwelcome specter of ethical relativism was to yield to the empirical findings of moral stage theory” (p. 1). For Kohlberg, a particular behavior qualified as moral only when motivated by a deliberate moral judgment (Kohlberg et al. 1983, p. 8). His ‘universal’ moral principles, then, were not so universal after all. Lapsley and Hill (2009) note that this principle of phenomenalism “was used as a cudgel against behaviourism (which rejected both cognitivism and ordinary moral language)” (p. 2).

Implications of Kohlberg for Today

This section of the article examines Kohlberg’s underlying assumptions and limitations. Although Kohlberg’s work is historically important and currently influential, this article proposes that business ethicists should avoid misapplication of and over-reliance on his framework.

To begin, Kohlberg assumes that the essence of morality is found in cognitive reasoning, mirroring Greek and Kantian thought. While such an assumption fit his purposes, we must move beyond this to understand ethical decision making more holistically (Sobral and Islam 2013). We know that the ability to reason does not always lead humans to act morally. Morality is more central to human existence, and reasoning is only one of multiple human activities that achieve the ends of morality (cf. Ellertson et al. 2016). If we were to use Kohlberg’s assumption, we would assume that as long as someone is capable of advanced moral reasoning (as with Kohlberg’s use of hypothetical situations), we need not worry about that person’s actions. However, empirical studies by Hannah et al. (2018) indicate that although a person might demonstrate advanced moral reasoning in one role, the same person might show moral deviance in another role. Thus, recent research suggests that moral identity is multi-dimensional and ethical decision making is quite complex. Future work should consider the true, yet limited, role of rationality in moral behavior and moral decision making (see Table 2).

Kohlberg also assumes that all humans proceed universally through moral development and that when fully developed—for those who do reach the highest level of reasoning—everyone will exhibit the same moral reasoning. If we are to build on this assumption, many questions are left unanswered about the easily observable differences both within and between individuals. For example, recent research by Sanders et al. (2018) suggests that among leaders who have high levels of moral identity, those who are authentically proud (versus leaders who are hubristically proud) are more likely to engage in ethical behavior. We call on researchers to study differences and limitations in moral processing that come from individual differences, including past experiences, upbringing, age, personality, and culture. With such research, we will be able to better understand and reconcile differences regarding ethical issues and behavior.

Continuing to follow Kohlberg’s emphasis on universalism may limit our consideration of the real impact of social norms. We call on management scholars to investigate the importance of social, organizational, and individual norms rather than unwittingly assuming that universal principles should govern all organizational affairs. Certainly, some actions in business are universally unethical, but an assumption of absolute universal norms may limit organizational development, creative decision making, and the innovative power that comes from the diversity of individuals’ social and cultural backgrounds. For example, empirical research by Reynolds et al. (2010) suggests that humans are moral agents and that their automatic decision-making practices interact with the situation to influence their moral behavior. Also, research by Kilduff et al. (2016) demonstrates how rivalry can increase unethical behavior. Future research on how situations and social norms affect behavior may help scholars to better predict, understand, and prevent moral judgment-action gaps and ethical conflicts between different individuals. Moral Domain Theory, which will be discussed later, provides one example of how to handle this question.

Kohlberg’s work does not directly address the moral judgment-action gap. For Kohlberg, until a person functions at the sixth stage of moral development, any immoral behavior stems from an inability to reason based on universal principles. However, his theory does not adequately explain the behavior of individuals who clearly understand what is moral–yet fail to act on that understanding (cf. Hannah et al. 2018). This is yet another reason why, as scholars, we must question the claim that cognitive reasoning is central to the nature of morality. We call on business ethics scholars to design and test theoretically rigorous models of moral processing that bridge the gap between judgment and action.

Moving forward, we do not disagree with Kohlberg’s notion that social interactions are important to moral reasoning, and we invite researchers and practitioners to consider what social experiences in the workplace could promote ethical development. Are some experiences, reflective practices, exercises, ethics training programs, or cultures more effective at promoting ethical behavior? For example, empirical research by Gaspar et al. ( 2015 ) suggests that how an individual reflects on past misdeeds can impact that person’s future immoral behavior. Future research could examine which experiences are most impactful, as well as when, why, and how these experiences affect change. Thus far we have reviewed the early work in moral development, including Socrates, Aristotle, Kant, Piaget, and Kohlberg. The remainder of the article discusses more recent theories.

Variations: Neo-Kohlbergians, Moral Identity, Moral Domain, and Moral Automaticity

The remainder of this paper will review how some researchers have built on Kohlberg’s assumptions and how others have successfully challenged them. In reviewing the theories of these researchers, remaining gaps in understanding will be discussed and possible future directions will be offered. Four areas of moral psychology research will be reviewed as follows: (1) Neo-Kohlbergian research, which builds upon Kohlberg’s original “rational moral judgments” approach; (2) Moral Identity research, which examines how moral identity is a component of how individuals define themselves and is a source for social identification; (3) Moral Domain research, which sees no moral judgment-action gap and assumes that social behavior stems from various domains of judgment, such as moral universals, cultural norms, and personal choice; and (4) Moral Automaticity research, which emphasizes the fast and automatic intuitive approach in explanations of moral behavior.

Neo-Kohlbergian Research

Rest ( 1979 , 1984 , 1999 ) extended Kohlberg’s work methodologically and theoretically with his formulation of the Defining Issues Test (DIT), which began as a simple, multiple-choice substitute for Kohlberg’s time-consuming interview procedure. The DIT is a means of activating moral schemas (general knowledge structures that organize information) (Narvaez and Bock 2002 ). It is based on a component model that builds on Kohlberg’s stages of moral development—an approach he called ‘Neo-Kohlbergian.’ Rest ( 1983 ) maintained that a person must develop four key psychological qualities to become moral: moral sensitivity, moral judgment, moral motivation, and moral character. Without these, a person would have many gaps between his or her judgment and behavior. With 25 years of DIT research, Rest and others (Rest et al. 2000 ; Thoma et al. 2009 ) have found some support for the DIT and the model.

Although Rest built on Kohlberg’s work by emphasizing the role of cognitive moral judgments, he moved beyond the idea that the essence of morality is found in reasoning. Under the Neo-Kohlbergian approach, dealing with the moral became a more multifaceted endeavor, and many intricate theories of moral functioning—including moral motivation—have followed.

The work of Rest and his colleagues, along with Kohlberg’s foundation, has become a ‘gold standard’ in the minds of some management scholars (Hannah et al. 2011 ). Rest’s work has proven promising in its ability to explain the gap between moral cognition and behavior. However, his four-component model has also been criticized for assigning a single level of moral development to each respondent. Curzer ( 2014 ) points out that people develop at different rates and across different spheres of life, and that Rest’s Defining Issues Test (DIT) is not specific enough in its assessment of moral development. Future research could explore this criticism and analyze other methods for identifying, measuring, and improving moral development.

Moral Identity and Virtue Ethics Research

Blasi (1995) subscribed to a Neo-Kohlbergian point of view as he expanded on Kohlberg’s Cognitive Developmental Theory by focusing on motivation, an area of exploration not within the purview of Kohlberg’s main research. Though Kohlberg did become more interested in the concept of motivation toward the end of his career (Kohlberg and Candee 1984), his empirical findings illuminate an individual’s understanding of moral principles without shedding much light on the motivation to act on those principles. According to Kohlberg, proficient moral reasoning informs moral action but does not necessarily explain it completely (Aquino and Reed 2002). Kohlberg’s own findings showed that moral reasoning does not necessarily predict moral behavior.

Though his research builds on Kohlberg’s by emphasizing the role of cognitive development, Blasi’s focus on motivation represents a philosophical shift that provides a basis for moral identity research. Researchers in moral identity, though they agree with Kohlberg on some aspects of moral behavior, find the meaning of morality in characteristics or values that motivate a person to act. Because these components of identity are defined by society and deal with outcomes that a decision maker seeks, the philosophy of moral identity is more teleological than deontological. The philosophical definition of morality held by moral identity theorists influenced the way they studied moral behavior and the judgment-action gap.

Blasi introduced the concept of ‘the self’ as a sort of mediator between moral reasoning and action. Could it be that ‘the self’ was the source for moral motivation? Up until then, most of Kohlberg’s empirical findings involved responses to hypothetical moral dilemmas which might not seem relevant to the self or in which an individual might not be particularly engaged (Giammarco 2016 ; Walker 2004 ). Blasi’s model of the self was one of the first influential theories that endeavored to connect moral cognition (reasoning) to moral action, explaining the moral judgment-action gap. He proposed that moral judgments or moral reasoning could more reliably connect with moral behavior by taking into account other judgments about personal responsibility based upon moral identity (Blasi 1995 ).

Blasi is considered a pioneer for his theory of moral identity. His examination has laid a foundation upon which other moral identity scholars have built using social cognition research and theory. These other scholars have focused on concepts such as values, goals, actions, and roles that make up the content of identity. The content of identity can take a moral quality (e.g., values such as honesty and kindness, goals of helping, serving, or caring for others) and, to one degree or another, become central and important in a person’s life (Blasi 1983 ; Hardy and Carlo 2005 , 2011 ). Research by Walker et al. ( 1995 ) shows that some individuals see themselves exhibiting the moral on a regular basis, while others do not consider moral standards and values particularly relevant to their daily activities.

Blasi’s original Self Model ( 1983 ) posited that three factors combine to bridge the moral judgment-action gap. The first is the moral self, sometimes referred to as ‘moral centrality,’ which constitutes the extent to which moral values define a person’s self-identity. The second is personal responsibility: the judgment that, having made a moral judgment, a person is responsible for acting upon it. This is a connection that Kohlberg’s model lacked. The third is self-consistency, the drive toward a reliable, constant uniformity between judgment and action (Walker 2004 ).

Blasi ( 1983 , 1984 , 1993 , 1995 , 2004 , 2009 ) and Colby and Damon ( 1992 , 1993 ) posit that people with a moral personality have personal goals that are synonymous with moral values. Blasi’s model claims that if one acts consistently according to his or her core beliefs, moral values, goals, and actions, then he or she possesses a moral identity or personality. When morality is a critical element of a person’s identity, that person generally feels responsible to act in harmony with his or her moral beliefs (Hardy and Carlo 2005 ).

Since Blasi introduced his Self Model, he has elaborated in more detail on the structure of the self’s identity. He distinguishes two elements of identity: the objective content of identity, such as moral ideals, and the subjective modes in which identity is experienced. As moral identity matures, the basis for self-perception transitions from external to internal content: a mature identity is grounded in moral ideals and aspirations rather than in relationships and actions. Maturity also brings increased organization of the self and a refined sense of agency (Blasi 1993 ; Hardy and Carlo 2005 ).

Blasi believes that moral identity produces moral motivation; moral identity is thus the key to understanding, and bridging, the moral judgment-action gap. However, some researchers (Frimer and Walker 2008 ; Hardy and Carlo 2005 ; Lapsley and Hill 2009 ) have noted that Blasi’s ideas are quite abstract and somewhat inaccessible, and that empirical research supporting them is limited. Blasi’s endorsement of a first-person perspective on the moral has also made it difficult to devise empirical studies. Empirical research on his model often relies on self-report methods, calling into question the validity of self-perceived attributes. In addition, the survey instruments that rate character traits often exhibit arbitrariness and variability across lists of virtues, hearkening back to the ‘bag of virtues’ approach that Kohlberg sought to move beyond (Frimer and Walker 2008 ).

On the other hand, some researchers have investigated the concept of ‘moral exemplars,’ presumably under the assumption that they possess moral identities. Colby and Damon’s ( 1992 , 1993 ) research on individuals known for their moral exemplarity found that these individuals experienced “a unity between self and morality” and that “their own interests were synonymous with their moral goals” (Colby and Damon 1992 , p. 362). Hart and Fegley ( 1995 ) compared teenage moral exemplars to other teens and found that moral exemplars are more likely than other teens to describe themselves using moral concepts such as being honest and helpful. Additional research using self-descriptions found similar results (Reimer and Wade-Stein 2004 ). This implies that, to maintain ethical character in the workplace, managers may want to hire candidates who describe themselves using moral characteristics.

Other identity research includes Hart’s ( 2005 ) model, which defines moral identity in terms of five factors that give rise to moral behavior: personality, social influence, moral cognition, self, and opportunity. Aquino and Reed ( 2002 ) propose that definitions of self can be rooted in moral identity, with the self-concept organized around moral characteristics. Their self-report questionnaire measures the extent to which moral traits are integrated into an individual’s self-concept. Cervone and Tripathi ( 2009 ) stress the need for moral identity researchers to step outside the field of moral psychology, shift the focus away from the moral, and engage general personality theorists. Doing so would allow moral psychologists to access broader studies in personality and cognitive science and to break out of what they see as the compartmentalized discourse within moral psychology.
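
To make the logic of such self-report measurement concrete, the following is a minimal sketch of how subscale scores might be computed from Likert-type responses. It is offered only in the spirit of Aquino and Reed’s ( 2002 ) instrument; the item counts, the 1–7 scale, and the subscale names are illustrative assumptions, not the published scale.

```python
# A minimal sketch of scoring a moral identity self-report measure.
# Hypothetical throughout: item counts, the 1-7 Likert scale, and the
# subscale names are illustrative, not the published Aquino and Reed scale.

def subscale_mean(ratings):
    """Average a participant's Likert ratings (1-7) for one subscale."""
    return sum(ratings) / len(ratings)

# Hypothetical responses from one participant:
# internalization = private importance of moral traits to the self-concept;
# symbolization = public expression of those traits.
internalization = [7, 6, 7, 5, 6]
symbolization = [4, 3, 5, 4, 4]

print(f"Internalization: {subscale_mean(internalization):.2f}")  # 6.20
print(f"Symbolization:   {subscale_mean(symbolization):.2f}")    # 4.00
```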

In summary, the main concern of moral identity theory is how unified or disunified a person is, or the level of integrity an individual possesses. For moral psychologists, an individual with integrity is unified and consistent across all contexts. Because of this unification and consistency, that person experiences fewer lapses (or gaps) in his or her moral judgments and moral actions (Frimer and Walker 2008 ).

Moral identity theory represents a philosophical belief that morality is at the core of personhood. Rather than focusing simply on the processes or functioning of moral development and ethical decision making, moral identity scholars look more deeply at what motivates moral behavior, and they make room for the concept of agency. Similarly, Ellertson et al. ( 2016 ) draw on Levinas to explain that morality is more central to human existence than simply the processes it includes.

The philosophy of virtue ethics arose from Aristotle’s views of the development of virtues (Grant et al. 2018 ). Virtue ethics holds that any individual can obtain real happiness by pursuing meaning, the common good, and virtue itself, and that doing so develops virtuous qualities that further increase his or her capacity to obtain real happiness through worthwhile pursuits (Martin 2011 ).

Virtue ethics also posits that individuals with enough situational awareness and knowledge can correctly evaluate their own virtue, underlying motivations, and ethical options in a given situation (Martin 2011 ). Grant et al. ( 2018 ) explain that researchers of virtue ethics explore virtue as being context specific, relative to the individual, and developing over a lifetime. Therefore, virtue ethics considers moral decision making to be both personal and contextual, and defines ethical decisions as leading to actions that impact the common good and contribute to an individual’s real happiness and self-perceived virtue.

Although empirical research has found evidence of the constancy of individuals’ virtue characteristics under different situations, research suggests virtues are not necessarily predictive of actual ethical behavior (Jayawickreme et al. 2014 ). Empirical evidence of the application of the theory of virtue ethics at the individual level is lacking; a recent review of thirty highly cited virtue ethics papers found only two studies that collected primary empirical data at the individual level (Grant et al. 2018 ). Thus, we call on ethics scholars to investigate the development and situational or universal influence of virtue states, traits, and characteristics, as well as their impact on happiness and other outcomes.

We invite management scholars to utilize the findings summarized in this section as they research how to effectively identify, socialize, and leverage candidates possessing virtuous characteristics and moral integrity. Future research can explore the feasibility of hiring metrics centered on ethical integrity. We note the difficulty scholars have had in designing a tool for accurately assessing ethical integrity and in separating the concept from ethical sensitivity (Craig and Gustafson 1998 ). We also note the opportunity for more research to discover and improve instruments and measures to assess ethical integrity and subsequent development of high moral character.

Moral Domain Research

As with most moral psychology research, ‘domain theory’ also stems from Kohlberg’s foundational research because it emphasizes the role of cognition in moral functioning. However, the work of theorists in this branch of psychology differs philosophically from Kohlberg’s. Domain theory incorporates moral relativity to an extent that Kohlberg would likely have been uncomfortable with. For domain theorists, the study of moral behavior is less about determining how humans ought to behave and more about observing how humans do behave. This ‘descriptive’ approach to morality is reflected in most of the theories through the end of this section.

Elliot Turiel and Larry Nucci are prominent domain theorists; they sort judgments of social right and wrong into distinct types or categories. For Nucci ( 1997 ), morality is distinct from other domains of knowledge, including our understanding of social norms. For domain theorists, social behavior can be motivated by moral universals, cultural norms, social norms, or even personal choice (Turiel 1983 ). Thus, social judgments are organized within domains of knowledge. Whether an individual behaves morally depends upon the judgments that person makes about which domain takes precedence in a particular context.

Nucci ( 1997 ) asserts that certain types of social behavior are governed by moral universals that are independent from social beliefs. This category includes violence, theft, slander, and other behaviors that threaten or harm others. Accordingly, research suggests that notions of morality are derived from underlying perceptions about justice and welfare (Turiel 1983 ). Theories of this sort define morality as beliefs and behavior related to treating others fairly and respecting their rights and welfare. In this sense, morality is distinct from social conventions such as standards of fashion and communication. These social norms define what is correct based on social systems and cultural traditions. This category of rules has no prescriptive force and is valuable primarily as a way to coordinate social interaction (Turiel 1983 ).

Turiel ( 1983 , 1998 ) elaborates on the differences between the moral and social domain in his Social Domain Theory. In contrast to Blasi, he proposes that morality is not a domain in which judgments are central for some and peripheral for others, but that morality stands alongside other important social and personal judgments. To understand the connection between judgment and action, Turiel believes it is necessary to consider how an individual applies his or her judgments in each domain—moral, social, and personal (Turiel 2003 ).

Turiel’s social-interactionist model places behaviors that harm, cause injustice, or violate rights in the ‘moral domain.’ He claims that the definition of moral action derives in part from criteria in the philosophy of Aristotle, where concepts of welfare, justice, and rights are considered universally valid rather than determined by consensus or existing social arrangements. In contrast, actions that involve matters of social or personal convention have no intrinsic interpersonal consequences and thus fall outside the moral domain. Individuals form concepts about social norms through involvement in social groups.

Turiel and Nucci’s work does not accept the premise that a moral judgment-action gap exists (Nucci 1997 ; Turiel 1983 , 1998 ). They explain inconsistencies between judgment and behavior as the result of individuals accessing different domains of behavior. Thus, a judgment about which domain of judgments to prioritize precedes action. While an action may be inconsistent with a person’s moral judgment, it may not be inconsistent with that person’s overarching judgments that have higher priority. In other words, a person can know something is right but decide to do something else because, in balancing moral, personal, and social concerns, something else won out as more important. This aspect of Turiel’s model could be compared to Blasi’s personal responsibility component, in which, after a moral judgment is made, the person decides whether he or she has a responsibility in that moment or situation to act upon the judgment. Kohlberg’s research did not sufficiently address this element of responsibility to act.

Even though Turiel and Nucci recognize the prescriptive nature of behavior in the moral domain, they assert that the individual must make a judgment about whether it merits acting upon, or whether another sphere of action takes precedence. In other words, Turiel and Nucci may deem a particular moral action to be more important than action in the social or personal conventional sphere. However, unless the individual deems it so, there is no moral failure. The individual decides which sphere takes priority at any given time. The notions of integrity, personal responsibility, and identity as the origin of moral motivation (Blasi 1995 ; Hardy and Carlo 2005 ; Lapsley and Narvaez 2004 ) do not apply within Turiel’s social-interactionist model.

Dan Ariely, Francesca Gino, and others have reported interesting findings about activating the moral domain through triggers such as recall of the Ten Commandments or an honor code (Ariely 2012 ; Gino et al. 2013 ; Hoffman 2016 ; Mazar et al. 2008 ). However, research in this area is still in its infancy, and other scholars have not always been able to replicate the results (cf. Verschuere et al. 2018 ). Future research could examine the factors that determine why a certain sphere of action takes precedence over others in motivating specific behaviors. For example, which factors impact an individual’s decision to act within the moral domain rather than within the social sphere? How can the moral domain be triggered? Why does or doesn’t one’s training or knowledge (such as the ability to recall culturally accepted moral principles like the Ten Commandments) predict one’s ethical behavior?

In a similar vein to Turiel and Nucci, Bergman’s model ( 2002 ) accepts that an individual can be moral even if that individual does not act upon his or her moral understanding. He finds the moral in the relationships among components of reasoning, motivation, action, and identity. With this model he seeks to answer the question raised by Turiel’s model: ‘If it is just a matter of prioritizing domains of behavior, why be moral?’ He asserts that his model preserves the centrality of moral reasoning in the moral domain, while also taking personal convention and motivation seriously, without succumbing to a purely subjectivist perspective (cf. Bergman 2002 , p. 36).

Bergman strives to articulate the motivational potential of moral understanding as truly moral even when it has not been acted upon. Unlike Kohlberg, he does not assume that moral understanding must find inevitable expression in action. Thus, Bergman provides another context for thinking about the judgment-action gap. He focuses on our inner moral intentions, believing that when people behave morally, they do so simply because they define themselves as moral; acting otherwise would be inconsistent with their identity (Bergman 2002 ).

The assumptions underlying domain theory present several dangers to organizations. Moral Domain Theory assumes, with Kohlberg, that the essence of morality lies in the human capability to reason, and that there is no moral issue at hand unless it is recognized cognitively. This creates the possibility of excusing individuals from responsibility for the outcomes of their actions. Even though Kohlberg believed in universal moral rules, the fact that he grounded such a belief in reasoning and empirical evidence allows those who build on his theory to create a morally acceptable place for behaviors one deems reasonable even when such behaviors negatively impact the well-being of self or others. The question for management scholars is whether we are willing to accept the consequences of such assumptions. We call on scholars to challenge these assumptions, for example by researching on a deeper level where morality really comes from and what it implies for decision making in organizations.

On the other hand, Moral Domain Theory addresses the influence of social norms, which is an important moral issue that Kohlberg’s research did not address. For example, empirical research by Desai and Kouchaki ( 2017 ) suggests that subordinates can use moral symbols to discourage unethical behavior by superiors. As we suggested earlier, future research should examine the influence of organizational, cultural, and social norms, symbols, and prompts. Even where universal norms do not prohibit an action, a person may be acting immorally according to expectations established within organizations or relationships. We call on scholars to consider if and when certain norms specific to a situation, organization, community, relationship, or other context may or may not (and should or should not) override universal principles. Research of this nature will help clarify what is ethically acceptable.

Moral Automaticity Research

The philosophies of the researchers we will describe in this section begin to move away from Kohlberg’s assumption that morality is found in deliberate cognitive reasoning and the assumption that universal moral standards exist. For scholars in the moral automaticity realm, morality is based on automatic mental processes developed through evolution to benefit our individual and collective social survival. However, while they discuss moral judgments in terms of automatic rather than deliberate judgments, they still hold that the meaning of morality is found in the judgments that humans make.

Additionally, accounts of morality focused on automatic, neurological processes conflict with ideas of free will and personal responsibility. These accounts rely on the concept of determinism, the belief that all actions and events are the predetermined, inevitable consequences of various environmental and biological processes (Ellertson et al. 2016 ). If these processes are really the basis of morality, some critics argue, we are reduced to creatures without individuality. There is clearly a balance between automatic and deliberative processes in human moral behavior that allows for individual differences and preserves the idea of agency. We propose that while automatic processes certainly play a role in moral decision making, that role is to assist in a more fundamental purpose of our existence as humans (Ellertson et al. 2016 ). With this in mind, we first summarize some of the most prominent research on moral automaticity, then review the research that argues for the existence of moral schemas and moral heuristics, and finally suggest directions for future research.

Narvaez and Lapsley ( 2005 ) have argued that John Bargh’s research provides persuasive empirical evidence that automatic, preconscious cognition governs a large part of our daily activities (e.g., Bargh 1989 , 1990 , 1996 , 1997 ; Uleman and Bargh 1989 ). Narvaez and Lapsley ( 2005 ) assert that this literature seems to thoroughly undermine Kohlberg’s assumptions. Bargh and Ferguson ( 2000 ) note, for example, that “higher mental processes that have traditionally served as quintessential examples of choice and free will—such as goal pursuit, judgment, and interpersonal behavior—have been shown recently to occur in the absence of conscious choice or guidance” (p. 926). Bargh concludes that human behavior is not very often motivated by conscious, deliberate thought. He further states that “if moral conduct hinges on conscious, explicit deliberation, then much of human behavior simply does not qualify” (cf. Narvaez and Lapsley 2005 , p. 142).

Haidt’s ( 2001 ) views on the moral take the field in the intuitive direction. He focuses on emotional sentiments, some of which have been seen in the previous arguments of Eisenberg ( 1986 ) and Hoffman ( 1970 , 1981 , 1982 ) as well as the original thinking of Hume ( 1739/2001 , 1777/1960 ), who concerned himself with human ‘sentiments’ as sources of moral action. Haidt claims that “the river of fMRI studies on neuroeconomics and decision making” gives empirical evidence that “the mind is driven by constant flashes of affect in response to everything we see and hear” (Haidt 2009 , p. 281). Hoffman ( 1981 , 1982 ) provides an example of these affective responses that Haidt refers to. He gives evidence that humans reliably experience feelings of empathy in response to others’ misfortunes, resulting in altruistic behavior. In Hoffman’s foundational work, we see that altruism and other pro-social behaviors fit in with empirical findings from modern psychological and biological research.

Haidt’s Social Intuitionist Model (SIM) has brought a resurgence of interest in the importance of emotion and intuition in determining the moral. He asserts that the moral is found in judgments about social processes, not in private acts of cognition. These judgments manifest automatically as innate intuitions. He defines moral intuition as “the sudden appearance in consciousness, or at the fringe of consciousness, of an evaluative feeling (like-dislike, good-bad) about the character or actions of a person, without any conscious awareness of having gone through steps of search, weighing evidence, or inferring a conclusion” (Haidt 2001 , p. 818).

Haidt asserts that “studies of everyday reasoning show that we usually use reasoning to search for evidence to support our initial judgment, which was made in milliseconds” ( 2009 , p. 281). He believes that only rarely does reasoning override our automatic judgments. He does not like to contrast the terms emotion and cognition because he sees it all as cognition, just of different kinds: (1) intuitions that are fast and affectively laden, and (2) reasoning that is slow and less motivating.

Haidt focuses on innate intuitions that are linked to the social construction of the ethics of survival. He sees action as moral when it benefits survival (Haidt 2007 ). He argues that humans “come equipped with an intuitive ethics , an innate preparedness to feel flashes of approval or disapproval toward certain patterns of events involving other human beings” (Haidt and Joseph 2004 , p. 56). Haidt proposes two main questions that he believes are answered by his Social Intuitionist Model: (1) Where do moral beliefs and motivations come from? and (2) How does moral judgment work?

His answer to the first question is that moral views and motivation come from automatic and immediate emotional evaluations of right and wrong that humans are naturally programmed to make. He cites Hume who believed that the basis for morality comes from an “immediate feeling and finer internal sense” (Hume 1777/1960 , p. 2).

To answer the second question (‘How does moral judgment work?’), Haidt explains that brains “integrate information from the external and internal environments to answer one fundamental question: approach or avoid?” (Haidt and Bjorklund 2007 , p. 6). Approach is labeled good; avoid is bad . The human mind is constantly evaluating and reacting along a good-bad dimension regarding survival.

The Social Intuitionist Model presents six psychological connections that describe the relationships among intuitions, conscious judgments, and reasoning. Haidt’s main proposition is that intuition trumps reasoning in moral processing (Haidt and Bjorklund 2007 ). Moral judgment-action gaps, then, appear between an action motivated by intuition and judgments that come afterwards. Applied to Kohlberg’s empirical study, this would imply that the reasoning he observed served not to motivate decisions but to justify them after the fact.

This approach suggests that ethical behavior is driven by naturally programmed emotional responses. Recent research by Wright et al. ( 2017 ) suggests that moral emotions can influence professional behavior. Other work conducted by Peck et al. ( 1960 ) shows that social influences, especially in family settings, stimulate character development over time. They also dismiss the importance of the debate between automatic and cognitive judgments by showing that people who have developed the highest level of moral character judge their actions “either consciously or unconsciously” and that “the issue is not the consciousness, but the quality of the judgment” (Peck et al. 1960 , p. 8).

Monin et al. ( 2007 ) also strive to move beyond the debate that pits emotion or intuition against reason, each vying for primacy as the source of the moral. They assert that the various models that seek to bridge the judgment-action gap are considering two very different prototypical situations. First, those who examine how people deal with complex moral issues find that moral judgments are made by elaborate reasoning. Second, those who study reactions to alarming moral misconduct conclude that moral judgments are quick and intuitive. Benoit Monin and his colleagues propose that researchers should not arbitrarily choose one or the other but embrace both types of models and determine which has the greater applicability in any given setting (Monin et al. 2007 ).

Narvaez ( 2008a ) contends that Haidt’s analysis limits moral judgment to the evaluation of another person’s behavior or character. In other words, his narrow definition of moral reasoning is limited to processing information about others. She wonders about moral decision making involving personal goals and future planning (Narvaez 2008a ).

Narvaez ( 2008a ) also believes that Haidt over-credits flashes of affect and intuition and undervalues reasoning. In her view, flash affect is just one of many processes we use to make decisions. Numerous other factors affect moral decisions along with gut feelings, including goals, mood, preferences, environmental influences, context, social pressure, and consistency with self-perception (Narvaez 2008a ). We call on scholars to investigate whether, when, how, and with what level of complexity people wrestle with moral decisions. We also suggest researchers investigate whether there is anything organizations can do to move people away from fast and automatic decisions (and toward slow and thoughtful ones), and whether doing so motivates more ethical choices.

Moral Schemas Research

Haidt and Narvaez both believe that morality exists primarily in evolved brain structures that maximize social survival, both collectively and individually (Narvaez 2008a , b ). Narvaez asserts that Haidt’s Social Intuitionist Model includes biological and social elements but lacks a psychological perspective. Narvaez ( 2008a ) finds the moral ultimately in “psychobehavioral potentials that are genetically ingrained in brain development” as “evolutionary operants” (p. 2). To explicate these evolutionary operants, she refers to her own model of psychological schemas that humans access to make decisions. She notes that Haidt’s idea of modules in the human brain is accepted by many evolutionary psychologists but that such assertions lack solid empirical evidence in neuroscience ( 2008a ).

In contrast, Narvaez’s schemas are brain structures that organize knowledge based on an individual’s experience (Narvaez et al. 2006 ). In general, Schema Theory describes abstract cognitive formations that organize intricate networks of knowledge as the basis for learning about the world (Frimer and Walker 2008 ).

Schemas facilitate the process of appraising one’s social landscape, forming moral identity or moral character. Narvaez terms this “moral chronicity” and claims that it explains the automaticity by which many moral decisions are made. Individuals “just know” what is required of them without engaging in an elaborate decision-making process. Neither the intuition nor the activation of the schemas is a conscious, deliberative process. Schema activation, though mostly shaped by experience (thus the social aspect), is ultimately rooted in what Narvaez ( 2008b ) refers to as “evolved unconscious emotional systems” that predispose responses to particular events (p. 95).

Narvaez’s ‘Triune Ethics Theory’ ( 2008b ) explains her idea of unconscious emotional systems. This research proposes that these emotional systems are fundamentally derived from three evolved formations in the human brain. Her theory is modeled after MacLean’s ( 1990 ) Triune Brain Theory, which posited that these formations trace the course of animal evolution. Each of the three areas has a “biological propensity to produce an ethical motive” (Narvaez 2008b , p. 2). With these formations, animals and humans have been able to adapt their behavior to the challenges of life (Narvaez 2008b ). Emotional systems, because of their central location, can interact with other cognitive formations. Thus, a thought accompanies every emotion, and most thoughts also stimulate emotion. Narvaez’s model is a complex system in which moral behavior (though influenced by social events) is determined almost completely by the structures of the brain.

Some researchers (Bargh and Chartrand 1999 ; Gigerenzer 2008 ; Lapsley and Narvaez 2008 ; Sunstein 2005 ) assert that intuition and its consequent behavior are constructed almost completely through environmental stimuli. Bargh and Chartrand ( 1999 ) assert that “most of a person’s everyday life is determined not by their conscious intentions and deliberate choices but by mental processes that are put into motion by features of the environment and that operate outside of conscious awareness and guidance” (p. 462). Our brains automatically perceive our environment, including the behavior of other people. These perceptions stimulate thoughts that lead to actions and eventually to patterns of behavior. This sequence is automatic; conscious choice plays no role in it (see, e.g., Bargh and Chartrand 1999 , p. 466).

Lapsley and Hill ( 2008 ) address Frimer and Walker’s original question of whether moral judgment is more deliberate or more automatic. They include Bargh and Chartrand ( 1999 ) in their list of intuitive models of moral behavior, which they label ‘System 1’ models because they describe processing that is “associative, implicit, intuitive, experiential, automatic and tacit,” as opposed to ‘System 2’ models, where the mental processing is “rule based, explicit, analytical, ‘rational’, conscious and controlled” (p. 4). They categorize Haidt’s and Narvaez’s models as System 1 models because they are intuitive, experiential, and automatic.
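
As a purely illustrative aside, the System 1/System 2 contrast can be rendered as a toy routine in which a verdict is returned instantly when a cached association exists and is otherwise handed to slow, explicit rule application. Everything in this sketch (the acts, the cached verdicts, the fallback rule) is a hypothetical illustration of the dual-process idea, not a model drawn from the cited literature.

```python
# A toy illustration of the System 1 / System 2 distinction.
# Hypothetical throughout: the cached verdicts and the fallback rule are invented.

# System 1: associative, automatic, tacit -- a fast lookup of stored intuitions.
INTUITIONS = {"stealing": "wrong", "helping a colleague": "right"}

def judge(act, harms_someone=False):
    """Return a verdict via fast intuition when available, else deliberate."""
    if act in INTUITIONS:            # System 1: immediate, no explicit reasoning
        return INTUITIONS[act]
    # System 2: rule-based, explicit, controlled -- apply a stated principle.
    return "wrong" if harms_someone else "right"

print(judge("stealing"))                                            # -> wrong (System 1)
print(judge("taking credit for another's work", harms_someone=True))  # -> wrong (System 2)
```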

Moral Heuristics Research

Gigerenzer ( 2008 ) believes that intuitions come from moral heuristics. Moral heuristics are rules, developed through experience, that help us make simple moral decisions and are transferable across settings. They are shortcuts that are easier and quicker to process than deliberative, conscious reasoning, and they are therefore automatic in their presentation. They are fast and frugal: fast in that they enable quick decision making, and frugal in that they require only a minimal search for information.
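
To illustrate what ‘fast and frugal’ means operationally, the sketch below implements a fast-and-frugal decision tree of the general kind studied in Gigerenzer’s research program: cues are checked one at a time in a fixed order, and the first decisive cue exits with a judgment. The specific cues, their ordering, and the exits are hypothetical illustrations, not taken from any published study.

```python
# A minimal sketch of a fast-and-frugal tree for a moral judgment.
# Hypothetical throughout: the cues, their order, and the exits are invented.

def fast_and_frugal_judgment(situation):
    """Check cues in a fixed priority order and exit at the first decisive cue
    (frugal: minimal information search; fast: no weighing of evidence)."""
    cues = [
        ("harms_others", "refrain"),
        ("breaks_promise", "refrain"),
        ("helps_others", "act"),
    ]
    for cue, exit_judgment in cues:
        if situation.get(cue, False):
            return exit_judgment
    return "act"  # default exit when no cue fires

# Each call inspects at most three cues, and often just one.
print(fast_and_frugal_judgment({"harms_others": True}))  # -> refrain
print(fast_and_frugal_judgment({"helps_others": True}))  # -> act
```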

Heuristics are deeply context sensitive. The science of heuristics investigates which intuitive rules are readily available to people (Gigerenzer and Selten 2001 ). Gerd Gigerenzer is interested in the success or failure of these rules in different contexts. He rejects the framing of moral functioning as either rational or intuitive. Reasoning can be the source of heuristics, but the distinction that matters most is between unconscious and conscious reasoning. Unconscious reasoning produces intuition, and, as with Haidt’s theories mentioned earlier, conscious reasoning justifies moral judgments after they are made (Lapsley and Hill 2008 ). In general, Gigerenzer asserts that moral heuristics are accurate in negotiating everyday moral behavior.

Sunstein’s ( 2005 ) model also claims that intuitions are generated by ‘moral heuristics.’ However, in contrast to Gigerenzer, he notes that heuristics can lead to moral errors or gaps between good judgment and appropriate behavior when these rules of thumb are undisciplined or decontextualized. This happens when we use heuristics as if they were universal truths or when we apply heuristics to situations that would be handled more appropriately with slower rational deliberation. Sunstein ( 2005 ) supports the view that evolution and social interaction cause the development of moral heuristics. Also, recent research by Lee et al. ( 2017 ) suggests an evolutionary account for male immorality, providing some support for the existence of an evolutionary origin and for the use of moral automaticity. To investigate the disagreement between Sunstein and Gigerenzer, we call on researchers to further examine the frequency, depth, and accuracy with which humans use moral heuristics.

Lapsley and Hill ( 2008 ) categorize the theories of Sunstein ( 2005 ) and Gigerenzer ( 2008 ) as System 1 models because the behavior they describe appears to be implicit, automatic, and intuitive. These models emphasize the automaticity of moral judgments that come from social situations. A person with a moral personality detects the moral implications of a situation and automatically makes a moral judgment (Lapsley and Hill 2009 ). For this kind of person, morality is deeply engrained into social habits.

Though Lapsley and Hill place heuristics models in the same category as Haidt’s, we observe that ‘intuition’ in the sense of heuristics means something very different from what it means to Haidt. In Haidt’s Social Intuitionist Model, learning structures developed through evolution are the source of automatic judgments. Sunstein’s ( 2005 ) intuitions, on the other hand, come from ‘moral heuristics,’ quick moral rules of thumb that pop into our heads and can even be developed through reasoning. As researchers examine the roles of reasoning and intuition in moral decision making, they may consider breaking intuition into categories, such as intuitions that represent heuristics and intuitions that come from biological predispositions.

The models of moral functioning just described fall into the ‘intuitive’ category, though they are competing descriptions of how to meaningfully connect judgment and action. Frimer and Walker ( 2008 ) observe that on one hand, models based on deliberative reasoning may be the most explanatory in that they require individuals to engage in and be aware of their own moral processing; “The intuitive account, in contrast, requires a modicum of moral cognition but grants it permission to fly below the radar” (p. 339). In a way, it separates moral functioning from consciousness or the ‘self.’

Future Research Directions

The specialties of moral automaticity, moral schemas, and moral heuristics are interesting and promising areas for those pursuing future research in ethical decision making. One reason is that these specialties are highly multidisciplinary: philosophers, psychologists, sociologists, anthropologists, neuroscientists, and others, in addition to business scholars, are throwing themselves into this area. A second reason is that some of the most interesting future research questions in this multidisciplinary field are interdisciplinary. Many of the unanswered questions are complex and must be addressed from many different angles and with a variety of tools.

Consider just two research questions: (1) How does individual meaning-making actually take place if biological evolution is the primary driver and architect of both our personal moral choices and subsequent ethical interpretations? (2) What type of real accountability is possible if brains are programmed to make moral and/or immoral choices? These types of questions lie at the heart of what it means to be a human being, and these are just a few of the theoretical questions in moral automaticity research.

Future research directions in the empirical examination of moral automaticity are just as fascinating. For example: (1) Where, when, and why does the brain light up when ethical decisions are made and reflected upon? (2) Which areas of the brain fire first when confronted with a difficult ethical situation? (3) In what sequence do brain regions fire when a person experiences moral disengagement? (4) How plastic is the brain as it relates to rewiring and strengthening neural pathways that will lead to more prosocial behavior? (5) What are the predominant dispositional and situational factors that lead the brain to heal from moral injury? (6) How do various situations, social settings, and personality differences interact to activate automatic and deliberative processes?

In summary, we call for future research in moral automaticity, moral schemas, and moral heuristics to shed light on the roots of moral action. Given the research supporting the role of automaticity in moral processing, we caution against relying too heavily on models that emphasize the preeminence of rational thinking until research further examines this phenomenon. We also call for research examining the same subjects both in situations that require deliberative processing and in situations that are inherently intuitive. We suggest the use of fMRI studies to observe the activity of different sections of the brain—those associated with rational, cognitive processes and those associated with intuitive judgments—during unique situations.

Even in the earliest stages of moral philosophy, Socrates, Plato, and Aristotle noted that people do not always act on the rational understanding they possess (Kraut 2018 ). They used the term “akrasia” to describe the phenomenon in which a person knows what is right but fails to act on that knowledge. This is commonly called the moral judgment-action gap.

Lawrence Kohlberg’s work (Kohlberg 1969 , 1971a , b ) is widespread not only in research but also in business education today. His influential theory of cognitive moral development rests on the assumption that the ability to reason morally at a certain level is the primary core and driver of a person’s morality. Kohlberg’s work proposes that stages of moral development, defined at a universal level, are what is most fundamental. Although his ideas are important, recent research demonstrates that his theorizing is insufficient for understanding and predicting the moral judgment-action gap (Hannah et al. 2018 ; Sanders et al. 2018 ). This article has provided various examples of research that has successfully moved beyond Kohlberg’s assumptions (Aquino and Reed 2002 ; Grant et al. 2018 ). For this reason, we encourage ethics scholars to reconsider an overreliance on rationality in their research in behavioral business ethics. In Fig. 2, we show the major theories and the relationships among them.

Many scholars have presented research that specifically addresses the judgment-action gap. For example, moral identity theory and virtue ethics explore how a person’s self-perception motivates moral behavior (Blasi 1983 ; Hardy and Carlo 2005 , 2011 ; Walker et al. 1995 ). However, more empirical evidence and better theories and models are needed that show how a person develops moral identity and moral character. Future studies can examine the ways in which moral identity leads to ethical decision making.

Moral domain theory suggests that the judgment-action gap does not exist and that apparent gaps can be understood through additional domains of reasoning (e.g., self-interest, social interest) used to evaluate the moral implications of a given situation (Bergman 2002 ; Nucci 1997 ; Turiel 2003 ). What we do not fully understand is what causes a person to recognize moral implications in the first place and how individuals apply different domains in decision making. Given the conflicting research findings (e.g., Mazar et al. 2008 ; Verschuere et al. 2018 ), we call for more research on what stimuli can trigger a person to view a decision as a moral decision, as opposed to a decision in which social influences or personal preferences take precedence.

Some scholars oppose the idea that conscious reasoning governs most moral behavior. For example, Bargh ( 1989 , 1990 , 1996 , 1997 , 2005 ; Bargh and Ferguson 2000 ; Uleman and Bargh 1989 ) and Haidt ( 2001 , 2007 , 2009 ) have provided evidence that people make ethical decisions based on automatic intuitions. As Narvaez ( 2008b ) has pointed out, however, we would be wrong to assume that all decisions are based solely on flashes of intuition. What we do not know is how factors such as situation, personality, and cultural background influence the relative and complementary roles of conscious reasoning and intuition in moral behavior. We call for research that investigates the influence of these factors on moral processing.

Even Haidt ( 2009 ) recognizes the existence of moral reasoning, though he claims that it occurs only to rationalize an intuitive decision after it has been made. Scholars who discuss the development and use of heuristics (Gigerenzer 2008 ; Sunstein 2005 ) show how past reasoning about moral situations—perhaps the kind of reasoning that Haidt refers to—can influence the development of behavioral rules of thumb. These rules, or “heuristics,” appear to function automatically after they have been developed through cognition over the course of a person’s experiences. What we do not understand is the extent to which heuristics are consistent with an individual’s conscious moral understanding. We call for research that explores the formation of heuristics and their reliability in making real-life ethical decisions that are consistent with a person’s moral understanding.

This article shows that different theories point us in different directions within the fields of moral psychology and ethical decision making. Thus, it is very difficult to form a holistic understanding of moral development and processing. With this in mind, our most urgent call is for scholars to develop a holistic framework of moral character development and a comprehensive theory of ethical decision making. These types of models and theories would serve as powerful tools to fuel future empirical research to help us understand why people do not always act on their moral understanding. More robust research is critical to understanding how to prevent devastating ethical failures and how to foster ethical courage.

For simplicity throughout this article, we also use “judgment-action gap.”

Akrasia relates to the moral judgment-moral action gap discussed throughout this article.

The individual considers laws valid and worthy of obedience insofar as they are grounded in justice.

“This principle asserts that moral reasoning is a conscious process of individual moral judgment using ordinary moral language (Kohlberg et al. 1983 ). The moral quality of behavior hinges on agent phenomenology; it depends solely on the subjective perspective, judgment and intention of the agent.” (Lapsley and Hill 2009 , p. 1)

Aquino, K., & Reed, A. I. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83 (6), 1423–1440.

Ariely, D. (2012). The (Honest) truth about dishonesty: How we lie to everyone-especially ourselves . London: HarperCollins.

Ash, M. G., & Woodward, W. R. (1987). Psychology in twentieth-century thought and society . New York: Cambridge University Press.

Bargh, J. A. (1989). Conditional automaticity: Varieties of automatic influence in social perception and cognition. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 3–51). New York: Guilford Press.

Bargh, J. A. (1990). Auto-motives: Preconscious determinants of thought and behavior. In E. T. Higgins & R. M. Sorrentino (Eds.), Handbook of motivation and cognition (Vol. 2, pp. 93–130). New York: Guilford Press.

Bargh, J. A. (1996). Principles of automaticity. In E. T. Higgins & A. Kruglanski (Eds.), Social psychology: Handbook of basic principles (pp. 169–183). New York: Guilford Press.

Bargh, J. A. (1997). The automaticity of everyday life. In R. S. Wyer Jr. (Ed.), The automaticity of everyday life, advances in social cognition (Vol. 10, pp. 1–61). Mahwah, NJ: Lawrence Erlbaum Associates.

Bargh, J. A. (2005). Bypassing the will: Toward demystifying the nonconscious control of social behavior. In R. R. Hassin, J. S. Uleman, & J. A. Bargh (Eds.), The new unconscious (pp. 37–60). Oxford: Oxford University Press.

Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54, 462–479.

Bargh, J. A., & Ferguson, M. J. (2000). Beyond behaviorism: On the automaticity of higher mental processes. Psychological Bulletin, 126, 925–945.

Bazerman, M. H., & Sezer, O. (2016). Bounded awareness: Implications for ethical decision making. Organizational Behavior and Human Decision Processes, 136, 95–105.

Bergman, R. (2002). Why be moral? A conceptual model from developmental psychology. Human Development, 45, 104–124.

Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88 (1), 1–45.

Blasi, A. (1983). Moral cognition and moral action: A theoretical perspective. Developmental Review, 3 (2), 178–210.

Blasi, A. (1984). Moral identity: Its role in moral functioning. In W. M. Kurtines & J. L. Gewirtz (Eds.), Morality, moral behavior, and moral development (pp. 129–139). New York: Wiley.

Blasi, A. (1993). The development of identity: Some implications for moral functioning. In G. G. Noam, T. E. Wren, G. Nunner-Winkler, & W. Edelstein (Eds.), Studies in contemporary German social thought (pp. 99–122). Cambridge, MA: The MIT Press.

Blasi, A. (1995). Moral understanding and the moral personality: The process of moral integration. In W. M. Kurtines & J. L. Gewirtz (Eds.), Moral development (pp. 229–253). Boston, MA: Allyn.

Blasi, A. (2004). Moral functioning: Moral understanding and personality. In A. Blasi, D. K. Lapsley, & D. Narváez (Eds.), Moral development, self, and identity (pp. 335–348). Mahwah, NJ: Lawrence Erlbaum Associates.

Blasi, A. (2009). The moral functioning of mature adults and the possibility of fair moral reasoning. In D. Narvaez & D. K. Lapsley (Eds.), Personality, identity, and character (pp. 396–440). New York: Cambridge University Press.

Cervone, D., & Tripathi, R. (2009). The moral functioning of the person as a whole: On moral psychology and personality science. In D. Narvaez & D. K. Lapsley (Eds.), Personality, identity and character, explorations in moral psychology (pp. 30–51). New York: Cambridge University Press.

Chelliah, J., & Swamy, Y. (2018). Deception and lies in business strategy. Journal of Business Strategy, 39 (6), 36–42.

Colby, A., & Damon, W. (1992). Some do care: Contemporary lives of moral commitment . New York: Free Press.

Colby, A., & Damon, W. (1993). The uniting of self and morality in the development of extraordinary moral commitment. In G. G. Noam & T. E. Wren (Eds.), The moral self (pp. 149–174). Cambridge, MA: The MIT Press.

Craft, J. L. (2013). A review of the empirical ethical decision-making literature: 2004-2011. Journal of Business Ethics, 117 (2), 221–259.

Craig, S. B., & Gustafson, S. B. (1998). Perceived leader integrity scale: An instrument for assessing employee perceptions of leader integrity. Leadership Quarterly, 9 (2), 127–145.

Curzer, H. J. (2014). Tweaking the four-component model. Journal of Moral Education, 43 (1), 104–123.

De Los Reyes Jr, G., Kim, T. W., & Weaver, G. R. (2017). Teaching ethics in business schools: A conversation on disciplinary differences, academic provincialism, and the case for integrated pedagogy. Academy of Management Learning and Education, 16 (2), 314–336.

Desai, S. D., & Kouchaki, M. (2017). Moral symbols: A necklace of garlic against unethical requests. Academy of Management Journal, 60 (1), 7–28.

Eisenberg, N. (1986). Altruistic emotion, cognition, and behavior . Hillsdale, NJ: Lawrence Erlbaum Associates.

Ellertson, C. F., Ingerson, M., & Williams, R. N. (2016). Behavioral ethics: A critique and a proposal. Journal of Business Ethics, 138 (1), 145–159.

Floyd, L. A., Xu, F., Atkins, R., & Caldwell, C. (2013). Ethical outcomes and business ethics: Toward improving business ethics education. Journal of Business Ethics, 117 (4), 753–776.

Frimer, J. A., & Walker, L. J. (2008). Towards a new paradigm of moral personhood. Journal of Moral Education, 37 (3), 333–356.

Gallagher, J. M. (1978). Knowledge and development Piaget and education . New York: Plenum Publishing Corporation.

Gaspar, J. P., Seabright, M. A., Reynolds, S. J., & Yam, K. C. (2015). Counterfactual and factual reflection: The influence of past misdeeds on future immoral behavior. The Journal of Social Psychology, 155 (4), 370–380.

Giammarco, E. A. (2016). The measurement of individual differences in morality. Personality and Individual Differences, 88, 26–34.

Gigerenzer, G. (2008). Moral intuitions = fast and frugal heuristics? In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 1–26). Cambridge, MA: The MIT Press.

Gigerenzer, G., & Selten, R. (Eds.). (2001). Bounded rationality: The adaptive toolbox . Cambridge, MA: The MIT Press.

Gino, F., Krupka, E. L., & Weber, R. A. (2013). License to cheat: Voluntary regulation and ethical behavior. Management Science, 59 (10), 2187–2203.

Grant, P., Arjoon, S., & McGhee, P. (2018). In pursuit of eudaimonia: How virtue ethics captures the self-understandings and roles of corporate directors. Journal of Business Ethics, 153 (2), 389–406.

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.

Haidt, J. (2007). The new synthesis in moral psychology. Science, 316 (5827), 998–1002.

Haidt, J. (2009). Moral Psychology and the Misunderstanding of Religion. In J. Schloss & M. Murray (Eds.), The believing primate: Scientific, philosophical, and theological reflections on the origin of religion (pp. 278–291). New York: Oxford University Press.

Haidt, J., & Bjorklund, F. (2007). Social intuitionists answer six questions about morality. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 181–217). Cambridge, MA: The MIT Press.

Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133 (44), 55–66.

Hannah, S. T., Avolio, B. J., & May, D. R. (2011). Moral maturation and moral conation: A capacity approach to explaining moral thought and action. Academy of Management Review, 36 (4), 663–685.

Hannah, S. T., Thompson, R. L., & Herbst, K. C. (2018). Moral identity complexity: Situated morality within and across work and social roles. Journal of Management . https://doi.org/10.1177/0149206318814166 .

Hardy, S. A., & Carlo, G. (2005). Identity as a source of moral motivation. Human Development, 48, 232–256.

Hardy, S. A., & Carlo, G. (2011). Moral identity: Where identity formation and moral development converge. In S. J. Schwartz, K. Luyckx, & V. L. Vignoles (Eds.), Handbook of identity theory and research (pp. 495–513). New York: Springer.

Hart, D. (2005). The development of moral identity. In G. Carlo & C. P. Edwards (Eds.), Nebraska Symposium on Motivation: Vol. 51. Moral motivation through the life span (pp. 165–196). Lincoln, NE: University of Nebraska Press.

Hart, D., & Fegley, S. (1995). Prosocial behavior and caring in adolescence: Relations to self-understanding and social judgment. Child Development, 66 (5), 1346–1359.

Hoffman, M. L. (1970). Moral Development. In P. Mussen (Ed.), Handbook of child psychology (pp. 261–361). New York: Wiley.

Hoffman, M. L. (1981). Is altruism part of human nature? Journal of Personality and Social Psychology, 40 (1), 121.

Hoffman, M. L. (1982). Development of prosocial motivation: Empathy and guilt. In N. Eisenberg (Ed.), The development of prosocial behavior (pp. 281–313). New York: Academic Press.

Hoffman, T. (2016). Contemporary neuropsychological and behavioural insights into cheating: Lessons for the workplace and application to consulting. Organisational and Social Dynamics, 16 (1), 39–54,175.

Hume, D. (1739/2001). A treatise of human nature (D. F. Norton & M. J. Norton, Eds.). Oxford: Oxford University Press.

Hume, D. (1777/1960). An enquiry concerning the principles of morals. La Salle, IL: Open Court.

Jayawickreme, E., Meindl, P., Helzer, E. G., Furr, R. M., & Fleeson, W. (2014). Virtuous states and virtuous traits: How the empirical evidence regarding the existence of broad traits saves virtue ethics from the situationist critique. School Field, 12 (3), 283–308.

Jewe, R. D. (2008). Do business ethics courses work? The effectiveness of business ethics education: An empirical study. Journal of Global Business Issues, 2, 1–6.

Johnson, R. (2008). Kant’s moral philosophy. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy . Retrieved from http://plato.stanford.edu/archives/fall2008/entries/kant-moral/ .

Kant, I. (1785/1993). Grounding for the metaphysics of morals (3rd edn.) (J. W. Ellington, Trans.). Indianapolis, IN: Hackett Publishing.

Kilduff, G. J., Galinsky, A. D., Gallo, E., & Reade, J. J. (2016). Whatever it takes to win: Rivalry increases unethical behavior. Academy of Management Journal, 59 (5), 1508–1534.

Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480). Chicago: Rand McNally.

Kohlberg, L. (1971a). Stages of moral development as a basis for moral education. In C. M. Beck, B. S. Crittenden, & E. V. Sullivan (Eds.), Moral education: Interdisciplinary approaches (pp. 23–92). Toronto: University of Toronto Press.

Kohlberg, L. (1971b). From is to ought: How to commit the naturalistic fallacy and get away with it in the study of moral development. In T. Mischel (Ed.), Psychology and genetic epistemology (pp. 151–235). New York: Academic Press.

Kohlberg, L. (1981). Essays on moral development, Vol. 1: The philosophy of moral development . San Francisco: Harper and Row.

Kohlberg, L. (1984). Essays on moral development, Vol 2: The psychology of moral development . San Francisco, CA: Harper and Row.

Kohlberg, L., & Candee, D. (1984). The relationship of moral judgment to moral action. In L. Kohlberg (Ed.), Essays on moral development: Vol. 2. The psychology of moral development (pp. 498–581). New York: Harper and Row.

Kohlberg, L., & Hersh, R. (1977). Moral development: A review of the theory. Theory into Practice, 16 (2), 53–59.

Kohlberg, L., Levine, C., & Hewer, A. (1983). Moral stages: A current formulation and a response to critics . Basel: Karger.

Kraut, R. (2018). Aristotle’s ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy . Retrieved from https://plato.stanford.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=aristotle-ethics

Lane, M. (2017). Ancient political philosophy. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy . Retrieved from https://plato.stanford.edu/archives/sum2017/entries/ancient-political/ .

Lapsley, D. K., & Hill, P. L. (2008). On dual processing and heuristic approaches to moral cognition. Journal of Moral Education, 37 (3), 313–332.

Lapsley, D. K., & Hill, P. L. (2009). The development of the moral personality. In D. Narvaez & D. K. Lapsley (Eds.), Moral personality, identity and character: An integrative future (pp. 185–213). New York: Cambridge University Press.

Lapsley, D. K., & Narvaez, D. (2004). A social-cognitive approach to the moral personality. In D. K. Lapsley & D. Narvaez (Eds.), Moral development, self and identity (pp. 189–212). Mahwah, NJ: Erlbaum.

Lapsley, D. K., & Narvaez, D. (2008). Psychologized morality and ethical theory, or, do good fences make good neighbors? In F. Oser & W. Veugelers (Eds.), Getting involved: Global citizenship development and sources of moral values (pp. 279–291). Rotterdam: Sense Publishers.

Lee, M., Pitesa, M., Pillutla, M. M., & Thau, S. (2017). Male immorality: An evolutionary account of sex differences in unethical negotiation behavior. Academy of Management Journal, 60 (5), 2014–2044.

Levine, C., Kohlberg, L., & Hewer, A. (1985). The current formulation of Kohlberg’s theory and a response to critics. Human Development, 28 (2), 94–100.

MacLean, P. D. (1990). The triune brain in evolution: Role in paleocerebral functions . New York: Plenum Press.

Martin, F. (2011). Human development and the pursuit of the common good: Social psychology or aristotelian virtue ethics? Journal of Business Ethics, 100 (1), 89–98.

Martin, K., & Parmar, B. (2012). Assumptions in decision making scholarship: Implications for business ethics research. Journal of Business Ethics, 105 (3), 289–306.

Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45 (6), 633–644.

Merle, R. (2018, April 19). U.S. to fine Wells Fargo $1 billion—The most aggressive bank penalty of the Trump era. The Washington Post . Retrieved from https://www.washingtonpost.com

Monin, B., Pizarro, D., & Beer, J. (2007). Deciding vs. reacting: Conceptions of moral judgment and the reason-affect debate. Review of General Psychology, 11, 99–111.

Narvaez, D. (2008a). The social-intuitionist model: Some counter-intuitions. In W. A. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science of morality: Intuition and diversity (pp. 233–240). Cambridge, MA: The MIT Press.

Narvaez, D. (2008b). Triune ethics: The neurobiological roots of our multiple moralities. New Ideas in Psychology, 26, 95–119.

Narvaez, D., & Bock, T. (2002). Moral schemas and tacit judgment or how the defining issues test is supported by cognitive science. Journal of Moral Education, 31 (3), 297–314.

Narvaez, D., & Lapsley, D. K. (2005). The psychological foundations of everyday morality and moral expertise. In D. Lapsley & C. Power (Eds.), Character psychology and character education (pp. 140–165). Notre Dame, IN: University of Notre Dame Press.

Narvaez, D., Lapsley, D. K., Hagele, S., & Lasky, B. (2006). Moral chronicity and social information processing: Tests of a social cognitive approach to the moral personality. Journal of Research in Personality, 40, 966–985.

Nucci, L. (1997). Moral development and character education. In H. J. Walberg & G. D. Haertel (Eds.), Psychology and educational practice (pp. 127–157). Berkeley, CA: MacCarchan.

O’Fallon, M. J., & Butterfield, K. D. (2005). A review of the empirical ethical decision-making literature: 1996-2003. Journal of Business Ethics, 59 (4), 375–413.

Paik, Y., Lee, J. M., & Pak, Y. S. (2017). Convergence in international business ethics? A comparative study of ethical philosophies, thinking style, and ethical decision-making between US and Korean managers. Journal of Business Ethics, 1–17.

Painter-Morland, M., & Werhane, P. (2008). Cutting-edge issues in business ethics: Continental challenges to tradition and practice . Englewood Cliffs, NJ: Springer.

Parry, R. (2014). Ancient ethical theory. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy . Retrieved from https://plato.stanford.edu/archives/fall2014/entries/ethics-ancient/ .

Peck, R. F., Havighurst, R. J., Cooper, R., Lilienthal, J., & More, D. (1960). The psychology of character development . New York: Wiley.

Piaget, J. (1977). The moral judgment of the child (M. Gabain, Trans.). Harmondsworth: Penguin. (Original work published in 1932)

Reimer, K., & Wade-Stein, D. (2004). Moral identity in adolescence: Self and other in semantic space. Identity, 4, 229–249.

Rest, J. R. (1979). Development in judging moral issues . Minneapolis, MN: University of Minnesota Press.

Rest, J. R. (1983). Morality. In P. Mussen, J. Flavell, & E. Markman (Eds.), Handbook of child psychology: Cognitive development (Vol. 3, pp. 556–629). New York: Wiley.

Rest, J. R. (1984). The major components of morality. In W. M. Kurtines & J. L. Gewirtz (Eds.), Morality, moral behavior, and moral development (pp. 24–38). New York: John Wiley and Sons.

Rest, J. R. (1999). Postconventional moral thinking: A Neo-Kohlbergian approach . Mahwah, NJ: Lawrence Erlbaum Associates.

Rest, J. R., Narvaez, D., Thoma, S. J., & Bebeau, M. J. (2000). A Neo-Kohlbergian approach to morality research. Journal of Moral Education, 29 (4), 381–395.

Reynolds, S. J., Leavitt, K., & DeCelles, K. A. (2010). Automatic ethics: The effects of implicit assumptions and contextual cues on moral behavior. Journal of Applied Psychology, 95 (4), 752–760.

Sanders, S., Wisse, B., Van Yperen, N. W., & Rus, D. (2018). On ethically solvent leaders: The roles of pride and moral identity in predicting leader ethical behavior. Journal of Business Ethics, 150 (3), 631–645.

Sobral, F., & Islam, G. (2013). Ethically questionable negotiating: The interactive effects of trust, competitiveness, and situation favorability on ethical decision making. Journal of Business Ethics, 117 (2), 281–296.

Stace, W. T. (1937). The concept of morals . New York: The MacMillan Company.

Stumpf, S. E., & Fieser, J. (2003). Socrates to Sartre and beyond: A history of philosophy (7th ed.). New York: McGraw-Hill.

Sunstein, C. R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28 (4), 531–573.

Thoma, S. (1994). Moral judgments and moral action. In J. R. Rest & D. Narvaez (Eds.), Moral development in the professions: Psychology and applied ethics (pp. 199–212). Mahwah, NJ: Lawrence Erlbaum Associates.

Thoma, S. J., Derryberry, P., & Narvaez, D. (2009). The distinction between moral judgment development and verbal ability: Some relevant data using socio-political outcome variables. High Ability Studies, 20 (2), 173–185.

Treviño, L. K. (1986). Ethical decision making in organizations: A person-situation interactionist model. Academy of Management Review, 11 (3), 601–617.

Treviño, L. K., den Nieuwenboer, N. A., & Kish-Gephart, J. (2014). (Un)ethical behavior in organizations. Annual Review of Psychology, 65, 635.

Treviño, L. K., Weaver, G. R., & Reynolds, S. J. (2006). Behavioral ethics in organizations: A review. Journal of Management, 32 (6), 951–990.

Turiel, E. (1983). The development of social knowledge: Morality and convention . Cambridge: Cambridge University Press.

Turiel, E. (1998). The development of morality. In N. Eisenberg (Ed.), Handbook of child psychology, Vol. 3: Social, emotional and personality development (pp. 863–932). New York: Wiley.

Turiel, E. (2003). Resistance and subversion in everyday life. Journal of Moral Education, 32 (2), 115–130.

Uleman, J. S., & Bargh, J. A. (Eds.). (1989). Unintended thought . New York: Guilford Press.

Verschuere, B., Meijer, E. H., Jim, A., Hoogesteyn, K., Orthey, R., McCarthy, R. J., et al. (2018). Registered replication report on Mazar, Amir, and Ariely (2008). Advances in Methods and Practices in Psychological Science, 1 (3), 299–317.

Walker, L. J. (2004). Gus in the gap: Bridging the judgment-action gap in moral functioning. In D. K. Lapsley & D. Narvaez (Eds.), Moral development, self, and identity (pp. 1–20). Mahwah: Lawrence Erlbaum Associates.

Walker, L. J., & Hennig, K. H. (1997). Moral development in the broader context of personality. In S. Hala (Ed.), The development of social cognition (pp. 297–327). East Sussex: Psychology Press.

Walker, L. J., Pitts, R. C., Hennig, K. H., & Matsuba, M. K. (1995). Reasoning about morality and real-life moral problems. In M. Killen & D. Hart (Eds.), Morality in everyday life: Developmental perspectives (pp. 371–407). New York: Cambridge University Press.

Wang, L. C., & Calvano, L. (2015). Is business ethics education effective? An analysis of gender, personal ethical perspectives, and moral judgment. Journal of Business Ethics, 126 (4), 591–602.

Weber, J. (2017). Understanding the millennials’ integrated ethical decision-making process: Assessing the relationship between personal values and cognitive moral reasoning. Business and Society. Retrieved from http://journals.sagepub.com/doi/10.1177/0007650317726985

Whetten, D. A. (1989). What constitutes a theoretical contribution? Academy of Management Review, 14 (4), 490–495.

Williams, R. N., & Gantt, E. E. (2012). Felt moral obligation and the moral judgement–moral action gap: Toward a phenomenology of moral life. Journal of Moral Education, 41 (4), 417–435.

Wright, A. L., Zammuto, R. F., & Liesch, P. W. (2017). Maintaining the values of a profession: Institutional work and moral emotions in the emergency department. Academy of Management Journal, 60 (1), 200–237.

Zhong, C. (2011). The ethical dangers of deliberative decision-making. Administrative Science Quarterly, 56 (1), 1–25.


Acknowledgements

We are grateful to Richard N. Williams, Terrance D. Olson, Edwin E. Gantt, Sam A. Hardy, Daniel K. Judd, John Bingham, Sara Louise Muhr, and three anonymous reviewers for their detailed comments on earlier drafts of this paper.

This study did not have any funding associated with it.

Author information

Authors and Affiliations

Marriott School of Business, Brigham Young University, 590 TNRB, Provo, UT, 84602, USA

Kristen Bell DeTienne & William R. Dudley

Romney Institute of Public Management, Brigham Young University, 760 TNRB, Provo, UT, 84602, USA

Carol Frogley Ellertson

The Wheatley Institution, Brigham Young University, 392 Hinckley Center, Provo, UT, 84602, USA

Marc-Charles Ingerson


Corresponding author

Correspondence to Kristen Bell DeTienne.

Ethics declarations

Conflict of Interest

All authors declare that they have no conflict of interest.

Ethical Approval

This paper does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

DeTienne, K. B., Ellertson, C. F., Ingerson, M.-C. et al. Moral Development in Business Ethics: An Examination and Critique. J Bus Ethics 170, 429–448 (2021). https://doi.org/10.1007/s10551-019-04351-0


Received : 12 November 2018

Accepted : 04 November 2019

Published : 18 November 2019

Issue Date : May 2021

DOI : https://doi.org/10.1007/s10551-019-04351-0


Keywords

  • Behavioral ethics
  • Moral judgment-moral action gap
  • Cognitive moral development


Physiological Correlates of Moral Decision-Making in the Professional Domain

Michela Balconi 1,2 and Giulia Fronda 1,2

1 Department of Psychology, Catholic University of the Sacred Heart, 20123 Milan, Italy; [email protected]

2 Research Unit in Affective and Social Neuroscience, Catholic University of the Sacred Heart, 20123 Milan, Italy

Abstract

Moral decision-making is central to guiding our social behavior, and it rests on both emotional and cognitive reasoning processes. In the present research, we investigated moral decision-making in a company context by recording autonomic responses (skin conductance response, heart rate, and heart rate variability) under three moral conditions (professional fit, company fit, social fit) and three offer types (fair, unfair, neutral). The professional fit condition required participants to accept or reject offers on how to divide money for work done together with a colleague. The company fit condition required participants to evaluate offers to invest part of the money in the introduction of company benefits. Finally, the social fit condition required participants to accept or refuse a division of money that would financially support a colleague's relative with health problems. Results showed significant effects of both condition, with stronger autonomic responses for professional and social fit than for company fit, and offer type, with fair and neutral offers differing from unfair ones. This research shows how individual, situational, and contextual factors influence moral decision-making in a company context.

1. Introduction

Moral decision-making is a complex process that requires individuals to make consistent decisions about actions that can harm or help others, balancing personal and others' interests, immediate and deferred rewards, and emotional and rational processes [ 1 ]. Recent studies have shown that moral decision-making is mediated by two main computational processes [ 2 , 3 ]. The first is moral intuition, an emotional process that allows individuals to evaluate socially relevant stimuli as right or wrong; the second is moral reasoning, which consists of controlled deductive reasoning and cost-benefit analyses of the potential outcomes of moral decisions [ 2 , 4 ]. Moreover, moral decision-making can follow a deontological principle, when the morality of an action is evaluated by its intrinsic nature, or a utilitarian principle, when it is evaluated by its consequences [ 5 ].

Classic paradigms for investigating the processes underlying moral decision-making required individuals to decide according to their own criteria and consisted mainly of monetary choices or mathematical exercises [ 6 , 7 ]. Other social decision tasks have been used to evaluate altruistic behavior and equity perception, such as the Ultimatum Game (UG). The UG involves two players, a proposer and a respondent, who must divide a sum of money: the proposer decides how to divide the sum, and the respondent can accept or reject the offer. If the respondent rejects the offer, neither player receives any money. This paradigm has proven useful for investigating moral decision-making because participants can weigh the benefits and risks of their choices, showing more explicit knowledge of the objective probability distribution over the possible outcomes [ 8 ].
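To make the payoff rule of this paradigm concrete, here is a minimal sketch in Python; all function and variable names are illustrative, not taken from the studies cited above:

```python
# Minimal sketch of Ultimatum Game (UG) payoffs (illustrative only).

def ultimatum_payoffs(total, proposer_share, accepted):
    """Return (proposer, respondent) payoffs for one UG round.

    `proposer_share` is the fraction of `total` the proposer keeps.
    If the respondent rejects the offer, neither player gets anything.
    """
    if not accepted:
        return 0.0, 0.0
    return total * proposer_share, total * (1.0 - proposer_share)

# A 40/60 split of 1000 euros in the respondent's favor:
print(ultimatum_payoffs(1000, 0.40, accepted=True))   # (400.0, 600.0)
print(ultimatum_payoffs(1000, 0.40, accepted=False))  # (0.0, 0.0)
```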

Furthermore, the growing interest of the neurosciences and the use of neuroscientific tools to investigate the processes underlying moral decision-making, unlike previous self-report or evaluation studies [ 9 ], have allowed researchers to observe in depth the conscious and unconscious neurophysiological correlates of moral behavior [ 10 ]. Indeed, studies recording individuals' electrodermal and cardiovascular activity, namely the skin conductance response (SCR), heart rate (HR), and heart rate variability (HRV), have shown that these indexes provide information about the highly positive or negative emotions experienced during moral decision-making [ 11 , 12 ]. Specifically, research on healthy individuals has shown differential activation of SCR, a measure of emotional arousal, and HR, an index of emotional engagement, in response to the consequences of fair and unfair moral decisions [ 13 ]. Moreover, some studies have shown a decrease in HR during the experience of negative emotions compared to positive ones during moral decision-making [ 14 ].

Therefore, in the present study, a task consisting of a modified UG version was created to investigate moral decision-making in a company context. Specifically, the task proposes three different moral contexts (professional fit, company fit, and social fit) and three offer types: fair, unfair, and neutral. Although company moral decision-making has recently received increasing attention, previous research approaches have excluded the individual and situational variables underlying moral behavior [ 15 ]. In this regard, the main aim of the present study was to investigate managers' autonomic responses (SCR, HR, HRV) to these three moral contexts and offer types. In addition, this research aimed to investigate how utilitarianism, the perception of fairness and unfairness, and the prosocial and social implications of choices influence decision-making in a company context. To examine how these factors influence moral decision-making, individuals' autonomic responses were recorded during different moral decision contexts. Autonomic activity can provide information on the levels of personal involvement and emotional engagement experienced during moral decisions in different contexts. In particular, as demonstrated by previous studies [ 16 , 17 , 18 ], HR variations can index the emotional impact and salience of the moral decision context, while HRV variations can inform about attentional and cognitive engagement during decision-making [ 19 , 20 ]. Moreover, as demonstrated by other studies [ 14 , 21 ], SCR variations can provide information on the level of emotional arousal experienced according to the benefits or losses of moral decisions. In light of this evidence, the information on the emotional, cognitive, and attentional processes underlying moral decision-making provided by autonomic measures can help companies develop new managing and leadership models that allow an adequate assessment of the possible implications of personal and social decisions. Investigating the processes underlying moral decision-making in the company context is fundamental because company decisions can produce positive or negative social consequences for the health and well-being of consumers, employees, and the organizational community, with multiple effects on the quality of organizational culture [ 22 , 23 ]. Company moral decision-making is thus a complex process, influenced by individual, contextual, and situational factors, that deserves deeper investigation.

In the company context, indeed, the moral decision process entails possible implications and risks for health and at the social level [ 24 ]. The relevance of moral decision-making in companies has been demonstrated by studies comparing assessments of the moral intensity of decisions between managers and the general public, finding that managers' assessments were more influenced by factors such as social consensus, the extent of consequences, and the risk of the action, factors that can also enhance their physiological responses [ 25 , 26 ].

Specifically, we hypothesized a differential increase of HR and HRV, reflecting higher emotional engagement and attentional-emotional regulation, across the three choice conditions. Indeed, as demonstrated by previous studies [ 17 , 18 ], HR provides information about the emotional salience and impact of a situation for the individual. Regarding HRV, studies [ 19 , 20 ] have shown that this index is implicated in attentional and cognitive processing during decision-making. In particular, in the professional fit condition, in which the individual is more personally involved and emotionally engaged because personal interests are at stake, we expected an HR increase compared to the company and social fit conditions. Furthermore, we expected an HRV increase in the company fit condition compared to the professional and social fit ones, due to greater attentional focus and cognitive control over the working environment. Finally, we expected an SCR increase for fair offers compared to unfair ones in all choice conditions, due to the higher moral acceptability of this offer type. Indeed, as demonstrated by previous studies [ 14 , 21 ], SCR can index the physiological autonomic response under different emotional constraints, rewards, and punishments [ 13 ].

2. Materials and Methods

2.1. Participants

Eighteen managers (mean age = 43.71 years, SD = 11.56) from different companies with similar profiles (non-governmental companies) took part in the research after signing informed consent. Inclusion criteria were: age between 25 and 60 years, at least a high school education, occupation of a managerial position, normal or corrected-to-normal visual acuity, and absence of neurological or psychiatric pathologies.

The research was conducted according to the Declaration of Helsinki and was approved by the local ethics committee of the Department of Psychology of the Catholic University of Milan. No a priori sample size estimation was performed. Moreover, to safeguard the integrity of the participants' responses, the test was performed anonymously and the data were not used for any company purpose.

2.2. Procedure

Subjects were seated in a room in front of a computer monitor placed at a distance of 70 cm. The experiment consisted of a task administered through the E-Prime 2.0 software (Psychology Software Tools, Inc., Sharpsburg, PA, USA). During task administration, individuals' autonomic responses were recorded via a biofeedback system. The task required participants to make decisions about hypothetical situations in which offers, which they could accept or refuse, were presented. Participants were told that the decisions concerned three different contexts, proposed in randomized order: professional fit, company fit, and social fit. In the professional fit condition, subjects were required to accept or reject a division of a sum of money (1000 euros) proposed by a colleague for work done together (i.e., "Your boss hires you for an extra remunerated job with a bonus of €1000 together with your colleague Mary. Your boss explains to you that, when the work is finished, the job bonus must be divided in some way between you and Mary; otherwise, no one will get the money. When the work is finished, you realize that you and Mary have worked equally on the project. Mary offers you options on how to divide the sum: 60% you and 40% Mary; 50% you and 50% Mary; 40% you and 60% Mary."). In the company fit condition, subjects were required to accept or refuse a division of a sum of money (1000 euros) proposed by the company for the realization of company benefits (i.e., "The company in which you work decides to give you a bonus of €1000 due to an increase in profit. The company proposes that you help increase company benefits by using part of your bonus. For example, the company plans to build a residence shortly and makes proposals for the residence's construction that include a percentage of your bonus. So, at the end of the month, you need to decide how much of your bonus to invest in your company for the construction of the residence: 50% you and 50% company residence; 40% you and 60% company residence; 60% you and 40% company residence. If you reject the company's proposals, the company will not be able to plan the creation of the residence for the next year."). Finally, in the social fit condition, subjects were required to accept or refuse a division of a sum of money (1000 euros) proposed by the company to financially support a colleague's relative with health problems (i.e., "Your company decides to give you a bonus of €1000, following a profit increase. The company tells you that with part of this bonus you can also contribute to a just social cause outside the company. Recently, one of your colleagues had to pay a lot of money for his wife's cancer treatment and asked the company to propose, to those employees interested, to give a percentage of their bonus toward his wife's treatment. At the end of the month you need to decide how much of your bonus to invest in this cause: 40% you and 60% colleague's wife; 60% you and 40% colleague's wife; 50% you and 50% colleague's wife. If you reject the company's proposals, your colleague's wife cannot be treated.").

Within each situation, three offer types (fair, unfair, and neutral) were proposed in randomized order. Neutral offers proposed an equal division of the money in all three contexts (50% respondent and 50% bidder); fair offers proposed a division favorable to the respondent (60% respondent and 40% bidder); unfair offers proposed a division unfavorable to the respondent (40% respondent and 60% bidder). For each offer, subjects were reminded that if they refused, neither party would get the money.

The task consisted of three blocks, each proposing one choice condition (professional fit, company fit, or social fit); block order was randomized between participants. Each block lasted about 15 minutes and was composed as follows: an initial blank screen, the choice condition presentation, the scenario presentation, and then, for each of the three offers, the offer presentation followed by a 14-second blank inter-stimulus interval with a central fixation cross. After reading the scenario, participants pressed the space bar to continue and were then asked to accept or reject three different offers that could be advantageous (fair), disadvantageous (unfair), or neutral for the participant. Each block proposed 15 randomized scenarios, for a total of 45 scenarios across the task. The 15 scenarios of each condition proposed choice situations similar to each other, with small variations to avoid boring participants. For each scenario, three offers (fair, unfair, and neutral) were proposed, for a total of 135 offers (45 fair, 45 unfair, and 45 neutral). Participants could accept or reject each offer by pressing the "1" and "0" keys on the computer keyboard, with no time limit. Before the task, a 15-minute familiarization task with the same structure was presented, proposing three scenarios with three offers to accept or reject for each condition.
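For clarity, the design described above can be summarized in a short scheduling sketch. The authors used E-Prime 2.0; everything below, including the names and randomization calls, is a hypothetical reconstruction of the stated 3 conditions × 15 scenarios × 3 offers structure:

```python
import random

CONDITIONS = ["professional fit", "company fit", "social fit"]
OFFERS = {"fair": (60, 40), "neutral": (50, 50), "unfair": (40, 60)}  # % respondent, % bidder
N_SCENARIOS = 15   # scenarios per condition
ISI_SECONDS = 14   # blank interval with central fixation cross after each offer

def build_trials(rng=random):
    """Yield (condition, scenario, offer_type) triples: 3 x 15 x 3 = 135 offers."""
    for condition in rng.sample(CONDITIONS, len(CONDITIONS)):    # randomized block order
        for scenario in rng.sample(range(N_SCENARIOS), N_SCENARIOS):
            for offer in rng.sample(list(OFFERS), len(OFFERS)):  # randomized offer order
                yield condition, scenario, offer

trials = list(build_trials())
assert len(trials) == 135  # 45 fair, 45 neutral, and 45 unfair offers in total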

2.3. Autonomic Measures Recording

The autonomic activity was recorded using the Expert2000 portable biofeedback system with a MULTI radio module (Schuhfried GmbH, Mödling, Austria).

A multipurpose integrated sensor was placed on the distal phalanx of the second finger of the non-dominant hand. SCR and HR data were sampled at 40 Hz. Cardiovascular activity was collected via photoplethysmography, and HR data were computed from peripheral blood volume measures. SCR data were computed directly by the recording software after applying a 0.05 Hz high-pass filter. An online notch filter (50 Hz) was used to minimize electrical noise. Recordings were preceded by a 120-second baseline recording. Inter-beat interval (IBI) metrics were computed from the raw HR data. Finally, we extracted the mean HR and the standard deviation of the IBI for each experimental condition. The standard deviation of the IBI mirrors the high-frequency components of HRV, corresponding to the vagal influence on cardiovascular activity [ 27 ].
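As a simplified illustration of this pipeline, the sketch below applies the stated 0.05 Hz high-pass filter to a synthetic SCR trace sampled at 40 Hz and derives the two reported HR metrics from a synthetic HR trace. The filter order and the synthetic signals are assumptions, not parameters reported here:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 40.0  # sampling rate (Hz) for both the SCR and HR channels

# Illustrative synthetic traces standing in for one condition's recording.
rng = np.random.default_rng(0)
scr_raw = 2.0 + rng.normal(0, 0.05, size=int(FS * 120))   # microsiemens
hr_bpm = 75 + rng.normal(0, 2.0, size=int(FS * 120))      # beats per minute

# 0.05 Hz high-pass on SCR, as described above (second-order filter assumed).
b, a = butter(2, 0.05, btype="highpass", fs=FS)
scr_filtered = filtfilt(b, a, scr_raw)

# IBI (ms) derived from instantaneous HR; SD of the IBI serves as the HRV index.
ibi_ms = 60000.0 / hr_bpm
mean_hr = float(np.mean(hr_bpm))
hrv_sd_ibi = float(np.std(ibi_ms, ddof=1))
```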

2.4. Autonomic Data Analyses

For statistical analysis, a repeated-measures ANOVA was applied to each index (SCR, HR, HRV) with Condition (professional fit, company fit, social fit) and Type (fair, unfair, and neutral offers) as within-subject factors. For all ANOVA tests, the degrees of freedom were corrected using the Greenhouse-Geisser epsilon where appropriate, with the significance level set at 0.05. Additionally, post-hoc comparisons were applied to the data, with Bonferroni correction for multiple comparisons.
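Readers wishing to mirror this analysis on comparable data could use open-source tools such as statsmodels and scipy, as in the hedged sketch below. This is one possible implementation, not the authors' software; the column names and simulated data are illustrative, and a Greenhouse-Geisser correction would need to be added separately (e.g., via the pingouin package):

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Illustrative data frame: one row per subject x condition x offer type,
# holding the condition-averaged autonomic index in "value".
rng = np.random.default_rng(1)
rows = [(s, c, o, rng.normal(0, 1))
        for s in range(18)
        for c in ["professional", "company", "social"]
        for o in ["fair", "neutral", "unfair"]]
df = pd.DataFrame(rows, columns=["subject", "condition", "offer", "value"])

# Two-way repeated-measures ANOVA (Condition x Type).
anova = AnovaRM(data=df, depvar="value", subject="subject",
                within=["condition", "offer"]).fit()
print(anova)  # note: AnovaRM does not itself apply the Greenhouse-Geisser correction

# Bonferroni-corrected paired comparison, e.g., fair vs. unfair offers.
means = df.groupby(["subject", "offer"], as_index=False)["value"].mean()
fair = means.loc[means["offer"] == "fair"].sort_values("subject")["value"].to_numpy()
unfair = means.loc[means["offer"] == "unfair"].sort_values("subject")["value"].to_numpy()
t, p = ttest_rel(fair, unfair)
p_bonferroni = min(1.0, p * 3)  # three offer types -> three pairwise tests
```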

3. Results

For HR, the effect of Condition was significant (F(2,17) = 8.90, p < 0.01, η² = 0.29), with an HR increase in the professional fit condition compared to the company fit (F(1,17) = 8.12, p < 0.01, η² = 0.28) and social fit conditions (F(2,17) = 7.89, p < 0.01, η² = 0.27) ( Figure 1 a).

Figure 1. ( a ) HR increase in the professional fit condition compared to the others. ( b ) HRV increase in the company fit condition compared to the others. ( c ) SCR increase for fair and neutral offers compared to unfair ones in the professional and social fit conditions. Bars represent ±1 SE. Stars mark statistically significant ( p < 0.05) pairwise comparisons.

For HRV, the effect of Condition was significant (F(2,17) = 9.33, p < 0.01, η² = 0.32), with an HRV increase in the company fit condition compared to the professional (F(1,17) = 9.56, p < 0.01, η² = 0.31) and social fit conditions (F(1,17) = 9.21, p < 0.01, η² = 0.31) ( Figure 1 b).

Finally, for SCR, the Condition × Type interaction was significant (F(4,32) = 10.77, p < 0.01, η² = 0.33). Specifically, as revealed by post-hoc comparisons, there was an SCR increase for fair offers compared to unfair ones in the professional fit (F(2,17) = 8.45, p < 0.01, η² = 0.29) and social fit (F(2,17) = 8.10, p < 0.01, η² = 0.29) conditions. Moreover, there was an SCR increase for neutral offers compared to unfair ones in the professional (F(1,17) = 9.06, p < 0.01, η² = 0.31) and social fit conditions (F(2,17) = 8.96, p < 0.01, η² = 0.29) ( Figure 1 c).

4. Discussion

The present study aimed to investigate differences in individuals' autonomic responses (SCR, HR, and HRV) across different decision-making conditions and offers within a company context. The proposal of different moral choice contexts and offers made it possible to investigate the individual, situational, and contextual factors that can influence moral decision-making. In particular, the professional and company fit conditions allowed us to investigate the influence of utilitarian assessment, involving the attentional and cognitive mechanisms used to evaluate the primary and secondary benefits of decisions, while the social fit condition allowed us to investigate the influence of emotions on moral decision-making. The task therefore proved useful for investigating how individual factors, such as personal interests and fairness, contextual factors, such as the decision context, and situational factors, related to the personal and social implications of decisions, can influence moral decision-making.

The analyses yielded the following main results. Firstly, for HR, a significant effect of condition was observed, with an HR increase in the professional fit condition compared to the others. This result may reflect greater emotional engagement in this moral situation, in which choices are focused on personal interests regarding a division of money proposed by a colleague for work done together. Indeed, as shown by several studies [ 16 , 17 , 18 ], an HR increase is correlated with the higher emotional engagement experienced when an individual perceives a situation as emotionally salient, relevant, and generally more favorable for the individual. Furthermore, this result is in line with studies that observed an HR increase during utilitarian choices due to the higher impact of these choices on people [ 14 , 28 ].

Secondly, for HRV, a significant increase was observed in the company fit condition compared to the others. In line with previous research, an HRV increase can be linked to more focused attention and cognitive engagement [ 19 , 20 ].

Furthermore, considering HRV as an index of the regulation of attentional-emotional and cognitive processes [ 29 ], the HRV increase appears to be related to a more thorough assessment of the possible implications of moral decisions [ 14 ]. In light of this previous evidence, the greater HRV activation in the company fit condition could be explained by the fact that this condition proposes offering a sum of money for the introduction of company benefits within the working environment. The increased attentional focus in this situation compared to the others could be because individuals occupying a managerial position are more focused on the consequences of new opportunities (such as a social service for the company) that could be introduced into their working environment.

Finally, with regard to SCR, an increase was observed for both fair and neutral offers compared to unfair ones in the professional and social fit conditions. This result may reflect the fact that fair offers trigger a more positive response in terms of emotional arousal when subjects perceive the moral acceptability of the offer itself. Indeed, as shown by studies that considered SCR a reliable measure of the physiological autonomic response under different emotional constraints such as moral decision-making [ 14 , 21 ], SCR may be modulated by favorable emotional and positive social conditions. More specifically, higher SCR has been observed mostly in situations that produce a reward or punishment [ 13 ]. In this regard, the SCR increase for fair offers compared to unfair ones could occur because the former activate the brain's reward circuit [ 30 ], increasing the physiological arousal associated with positive emotional responses. The increased SCR for neutral offers compared to unfair ones, specifically in the professional and social fit conditions, further supports this explanation, based on the subjective perception of equity as positive (although not as personally favorable as a fair offer) in situations where the subjective or social impact of the moral decision is more relevant. This result also highlights that in the company fit condition there was no significant difference between fair and neutral offers compared to unfair ones. This could be because the company fit condition more strongly activates the attentional and cognitive mechanisms implicated in a utilitarian evaluation of the possible interests and gains associated with a moral decision, whereas the professional and social fit conditions elicited a higher activation of emotional responses, which influenced moral decision-making.

5. Conclusions

To conclude, the present study underlined the importance of understanding moral decision-making within a company, since individuals' actions can have consequences at different individual and social levels. Furthermore, the present study shows that moral decision-making is influenced not only by individual factors but also by contextual and situational factors that must be considered. This is supported by the fact that in the professional and company fit conditions, in which personal interests are more involved, choices appear to be guided by a careful evaluation of personal interests and possible gains, while in the social fit condition the evaluation of personal interest becomes secondary in decision-making. Moreover, moral decision-making also appears to be influenced by the moral acceptability of offers and by the perception of fairness and unfairness, which activate the brain's reward or punishment mechanisms [ 13 ]. The perception of fairness and unfairness, indeed, appears to be a relevant factor in moral decision-making, mainly in situations in which individual choices can have consequences at the personal and social level.

The investigation of the factors that influence moral decision-making therefore supports the formation of new managing and leadership models that can lead a company to help its working team carefully evaluate the benefits, losses, and personal and social implications of a moral decision.

Despite the innovativeness of the paradigm used to investigate moral decision-making, the present study has some limitations. The first is the lack of an adjunctive self-report measure able to better describe the subjective perception of the different decisional conditions. The second is that, to create suitably distinct moral choice conditions, the scenario had to change across conditions. The third is the small sample size. In future studies, we could observe not only autonomic responses but also the central neurophysiological correlates of moral decision-making in different situations and offers, and we could increase the experimental sample.

Author Contributions

Conceptualization, M.B.; data curation, M.B. and G.F.; project administration, M.B.; supervision, M.B.; writing–original draft, M.B.; writing–review & editing, G.F.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Conflicts of Interest

The authors declare no conflict of interest.


